entry_id: http://arxiv.org/abs/2409.03353v1
published: 2024-09-05
title: Modelling the age distribution of longevity leaders
authors: Csaba Kiss, László Németh, Bálint Vető
primary_category: q-bio.PE
categories: q-bio.PE, math.PR, 60J25, 60K20
Modelling the age distribution of longevity leaders

Csaba Kiss, László Németh, Bálint Vető

September 9, 2024
§ INTRODUCTION
Exceptionally long human lifespans are one of the cornerstones of demography and mortality research.
Studying the group of record holders may reveal not only the underlying mortality mechanism of a population but also potentially shed some light on the future developments of human longevity.
As life expectancy increases <cit.> and deaths shift to older ages, the distribution of deaths at the oldest-old ages <cit.> attracts increasing interest from demographers, actuaries and decision makers in numerous disciplines.
The pattern of adult human mortality has already been described <cit.> but there is still a debate about the exact distribution of deaths at adult ages. More details on the necessity of a correctly specified model for the underlying mortality process and its impact on further research are discussed in <cit.>. Numerous publications review the ongoing discussion on the existence of a mortality plateau <cit.>, and the levelling-off of adult human death rates at the oldest ages is supported by the findings in <cit.> while others cast some doubt on this observation <cit.>. If there is a mortality plateau then the distribution of deaths at the oldest-old ages must be gamma-Gompertz and human lifespan can increase further without any maximum <cit.>. Further studies discuss the existence of a limit to human lifespan with more focus on the extreme value distribution aspect of the deaths at the oldest-old ages for various populations and different lifetime distributions <cit.>. These models can be helpful in determining the plausibility of longevity leaders as well.
We contribute to this discussion by proposing a stochastic model to describe the evolution of the age of the world's oldest person. Based on our estimates the model provides a good fit to the titleholder data since 1955, collected by the Gerontology Research Group <cit.>.
With the model results, it is possible to predict the age of the oldest person in the world in the future.
When should we expect to see the next Jeanne Calment, the supercentenarian with the longest human lifespan ever documented?
Will her record ever be surpassed?
Our results provide a prediction for the age distribution of the record holder in the coming decades to answer these questions.
§ RESULTS
Our model describes the evolution of the age of the oldest living person under the following assumptions.
We assume that the births of individuals follow a Poisson process with time-dependent intensity <cit.>.
The lifespans of individuals in the population are independent and their distribution may depend on the date of birth.
Then the age of the record holder in the population evolves in time as a Markov process with explicit transition probabilities.
As the first main result of this paper, we explicitly compute the distribution of the age of the record holder for any given birth rate parameter and lifespan distribution.
The detailed mathematical description and the properties of the general model are described in Section <ref>.
We apply our general result to a setting that approximates the worldwide human birth rate and the human lifespan distribution.
We specify the intensity function of the Poisson process of births to have an exponential growth in time.
The underlying force of mortality is chosen so that it follows an extension of the Gompertz mortality model <cit.>
and the lifespan distribution of individuals is given by the gamma–Gompertz distribution with time-dependent parameters.
This distribution adequately captures the slowing down of senescence mortality at the oldest old ages.
Given the growth parameters of the birth rate, we fit the model parameters to the statistics of the oldest person titleholder data using maximum likelihood method.
The model with the optimal parameters fits the data well.
In particular, it shows that the age of the oldest person alive increases over time, and it will most likely increase further in the future.
We compute the expected value and a confidence interval for the age of the world's oldest person using the fitted model parameters for each year between 1955 and 2019 shown by the green curves on Figure <ref>.
The detailed discussion of the model specification, likelihood calculations as well as the parameter fitting are given in Section <ref>.
Section <ref> contains calculations related to the gamma–Gompertz–Makeham generalization of the gamma–Gompertz distribution.
Our results enable us to predict the age distribution of the world's oldest person at future time points.
We compute the probability density of the age of the world's oldest person in different years not only in the past but also in the future.
These densities are shown in Figure <ref>.
When comparing the age distribution of the oldest person in the world in different years to the age of Jeanne Calment at her death, we find that on Jan 1st 2060 we can expect that the age of the world's oldest person will exceed her age with probability around 0.5.
This also means that with high probability her age record will already be broken by that time.
In Figure <ref> two extreme outliers with unexpectedly long lifetimes can be observed. Jeanne Calment died at the age of 122.45 years in 1997, and Sarah Knauss died at the age 119.27 in 2000.
In our model, the probability of observing an age greater than or equal to their actual age at the time of their death is 0.000286 for Calment and 0.0116 for Knauss.
See details in Subsection <ref>.
The fact that Calment and Knauss are outliers among the oldest old in the world became even more evident when we performed a backtesting of our model. We estimated the parameters based on the data on the world's oldest person between 1955 and 1988 where the ending date is the time when Calment became the world's oldest person. The model-estimated mean and confidence interval of the world's oldest person using the full data and the partial dataset (before Calment) are shown in Figure <ref> by green and red, respectively.
The estimate using the data until 1988 is less reliable after 2000 which is shown by the fact that the observed data is out of the confidence interval in the majority of the time after 2000.
When we compare the two confidence intervals, we can conclude that, based on the data before 1988, Calment and Knauss already had extremely high ages at their deaths.
Adding the remaining data set, the estimated mean age of the world's oldest person becomes lower.
Hence, in hindsight, the ages at death of the two outliers between 1988 and 2000 appear even more extreme than they did based on the information available in 1988.
The other important observable is the reign length of a record holder.
The numerical value of the expected reign length with our estimated model parameters is 1.195 years in 1955 and 1.188 years in 2019.
The empirical value of the reign length is 1.008 years, which is not much less than the model-based estimate.
Our approach to studying the age of the oldest old is completely novel because it takes into account jointly the age and the time of birth of individuals.
Although the age and the reign length of the world's oldest person depend in a complex non-linear way on the total lifespan and the time of birth of supercentenarians, we compute explicitly the probability distribution of the age of the oldest person.
Hence, the performance of our predictions cannot be directly compared to previous results in the literature because in the usual approach, the oldest person in each cohort is considered separately, and it is not relevant whether this person was ever the oldest in the population, see e.g. the extreme value method in <cit.>.
Our model contributes to the mathematical understanding of the evolution of the oldest individual, which is the extra benefit compared to a prediction using the trend in the data, e.g. in linear regression.
In this way, we not only observe but also prove mathematically that the dynamics of the birth process and that of the lifespan distribution which we consider in this paper necessarily imply the increase of the expected age of the world's oldest person.
§ MATHEMATICAL MODEL FOR THE AGE OF THE OLDEST PERSON
In this section, we provide the mathematical definition of a general model for the age of the world's oldest person, where the births of individuals follow a Poisson process and their lifespans are independent.
Under the assumptions of the general model, the age of the record holder in the population evolves in time as a Markov process with explicit transition probabilities.
In Subsections <ref>–<ref>, we describe the exact distribution of the age of the record holder in this generality
for any given birth rate parameter and lifespan distribution using the two-dimensional representation of the age process of the oldest person.
In the time-homogeneous case with constant birth rate and identical lifespan distributions the reign length distribution of a record holder is computed in Subsections <ref>–<ref>.
We explain the role of the entry age parameter in Subsection <ref>.
§.§ Model description and two-dimensional representation
The model is formally defined as follows.
Let λ(t) be the birth rate parameter which depends on time and let F_t and f_t be a family of cumulative distribution functions and density functions corresponding to non-negative random variables which are also time-dependent.
We assume that individuals are born according to a Poisson point process at rate λ(t) and that the lifespan of an individual born at time t is given by F_t so that lifespans are independent for different individuals.
Let Y_t denote the age of the oldest person in the population at time t.
The process (Y_t : t∈ℝ) is Markovian.
The Markov property holds because at any time t the history of the process (Y_s:s≤ t) provides information about the lifetime of individuals born before the current record holder
while any transition of (Y_s:s≥ t) depends only on the lifetime of the current record holder and of those born after them.
The evolution of the Markov process Y_t is the following.
It has a deterministic linear growth with slope 1 due to the ageing of the current record holder.
This happens until the death of the record holder.
Additionally, given that Y_t-=lim_s↑ tY_s=y for some t with y>0,
the process has a downward jump at time t at rate f_t(y)/(1-F_t(y)) which is the hazard rate of the distribution F_t at y.
This corresponds to the possibility that the record holder dies at time t which happens at rate f_t(y)/(1-F_t(y)).
The conditional distribution of the jump is given by
ℙ(Y_t<x | Y_t-=y,Y_t<y)
=exp(-∫_x^yλ(t-u)(1-F_t-u(u)) du)
for all x>0.
The jump distribution in (<ref>) has an absolutely continuous part supported on [0,y] with density
j_y,t(x)=exp(-∫_x^yλ(t-u)(1-F_t-u(u)) du) λ(t-x)(1-F_t-x(x))
and a point mass at 0 with probability
a_y,t=exp(-∫_0^yλ(t-u)(1-F_t-u(u)) du).
As we shall see, in the relevant parameter regime the probability a_y,t of the point mass at 0 is negligible.
The transition formula in (<ref>) can be proven using the description below.
We introduce a two-dimensional representation of the process Y_t as follows.
Let Λ={(t_i,x_i):i∈ I} be a marked Poisson process in ℝ×ℝ_+
where {t_i:i∈ I} forms a Poisson point process on ℝ with intensity λ(t)
and x_i≥0 is sampled independently for each i∈ I according to the distribution F_t_i.
The point (t_i,x_i) represents an individual born at time t_i with lifespan x_i for all i∈ I, that is, the individual i is alive in the time interval [t_i,t_i+x_i) and their age at time t is t-t_i if t∈[t_i,t_i+x_i).
Hence the marked Poisson process Λ contains all relevant information about the age statistics of the population at any time.
In particular the age of the oldest person Y_t can be expressed in terms of Λ as
Y_t=max{(t-t_i)𝕀_{t∈[t_i,t_i+x_i)}:i∈ I}.
where the indicator 𝕀_{t∈[t_i,t_i+x_i)} is 1 exactly if the ith person is alive at time t.
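As a simple illustration of this representation (not part of the estimation procedure of the paper), the marked Poisson process can be simulated directly and the age of the oldest living person read off at a fixed observation time; the constant birth rate and exponential lifespans below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
lam = 5.0          # constant birth rate (illustrative choice)
mean_life = 2.0    # mean of the exponential lifespan (illustrative choice)
t_obs = 50.0       # observation time
t_start = -100.0   # sample births far enough in the past

# birth times of a homogeneous Poisson process on [t_start, t_obs]
n_births = rng.poisson(lam * (t_obs - t_start))
births = rng.uniform(t_start, t_obs, size=n_births)
# independent lifespans attached to each birth time (the marks x_i)
lifespans = rng.exponential(mean_life, size=n_births)

# individual i is alive at t_obs iff t_obs lies in [t_i, t_i + x_i)
alive = (births <= t_obs) & (t_obs < births + lifespans)
Y = (t_obs - births[alive]).max() if alive.any() else 0.0
print(f"age of the oldest living person at t = {t_obs}: {Y:.3f}")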
The transition distribution formula (<ref>) can be seen using the two-dimensional representation as follows.
Given that the current record holder dies at time t at age y the event {Y_t<x} means that nobody with age between x and y can be alive at time t.
This event can be equivalently characterized in terms of the Poisson process of birth at rate λ(·) thinned by the probability that the person is still alive at time t.
Indeed, the event {Y_t<x} can be expressed as the event that a Poisson process, whose intensity at time t-u is λ(t-u)(1-F_t-u(u)) for u∈[x,y], has no point in the time interval [t-y,t-x].
This probability appears exactly on the right-hand side of (<ref>).
In other words, for any u∈[x,y] people are born at time t-u at rate λ(t-u).
On the other hand, the probability for a person born at time t-u to be alive at time t (that is, at age u) is 1-F_t-u(u).
See Figure <ref> for illustration.
§.§ Exact distribution of the oldest person's age process
We assume that all birth events are already sampled on (-∞,t] together with the corresponding lifespans.
Then the distribution of Y_t can be computed explicitly for all t∈ℝ using the two-dimensional representation.
For all t∈ℝ the density
h_t(x)=exp(-∫_x^∞λ(t-u)(1-F_t-u(u)) du) λ(t-x)(1-F_t-x(x))
for all x>0 and the point mass at 0
m_t=exp(-∫_0^∞λ(t-u)(1-F_t-u(u)) du)
characterize the distribution of Y_t which can be seen as follows.
We mention that the point mass at 0 is negligible in the application.
Similarly to the proof of the transition formula in (<ref>) the event {Y_t<x} for any x>0 is the same as the event that nobody with age at least x is alive at time t.
We express this event in terms of the Poisson process of birth at rate λ(·) thinned by the probability that the person is still alive at time t.
The event {Y_t<x} means that a Poisson process of intensity at time t-u given by λ(t-u)(1-F_t-u(u)) for u≥ x does not have any point in (-∞,t-x] yielding
ℙ(Y_t<x)=exp(-∫_x^∞λ(t-u)(1-F_t-u(u)) du).
In other words for any u≥ x individuals are born at time t-u at rate λ(t-u).
A person born at time t-u is alive at time t at age u with probability 1-F_t-u(u).
(<ref>)–(<ref>) follow by differentiation in (<ref>) and by taking the x→0 limit.
See Figure <ref> for illustration.
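For a quick numerical illustration of this formula, the distribution and density of Y_t can be evaluated by quadrature once λ and F are specified; the sketch below uses the same toy constant birth rate and exponential lifespan as in the simulation snippet above, not the model fitted in the paper.

import numpy as np
from scipy.integrate import quad

lam = 5.0        # constant birth rate (illustrative)
mean_life = 2.0  # exponential lifespan (illustrative)

def survival(u):
    # 1 - F(u) for the exponential lifespan
    return np.exp(-u / mean_life)

def cdf_Y(x):
    # P(Y_t < x) = exp(-int_x^inf lam (1 - F(u)) du) in the time-homogeneous case
    return np.exp(-quad(lambda u: lam * survival(u), x, np.inf)[0])

def density_Y(x):
    # h(x) = exp(-int_x^inf lam (1 - F(u)) du) lam (1 - F(x))
    return cdf_Y(x) * lam * survival(x)

for x in [0.0, 3.0, 6.0, 9.0, 12.0]:
    print(f"x = {x:5.1f}   P(Y < x) = {cdf_Y(x):.4f}   h(x) = {density_Y(x):.4f}")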
§.§ Homogeneous model
The exact computation of the reign length distribution (see Subsection <ref>) can only be performed in a special case of our general model described in Subsection <ref>.
We introduce this special case as the homogeneous model where individuals are born at the times of a Poisson process of constant rate λ=λ(t)>0 for all t.
The lifespan of individuals are independent and identically distributed with a fixed density f=f_t and cumulative distribution function F=F_t for all t which does not depend on time.
In the homogeneous model the jump distribution given in (<ref>)–(<ref>) simplifies to
j_y(x) =j_y,t(x)=exp(-λ∫_x^y(1-F(u)) du)λ(1-F(x)),
a_y =a_y,t=exp(-λ∫_0^y(1-F(u)) du).
The distribution of Y_t does not depend on time in this case hence it is a stationary distribution as well.
The formulas for the density of Y_t and point mass at 0 reduce in the homogeneous case to
h(x) =exp(-λ∫_x^∞(1-F(u)) du)λ(1-F(x)),
m =exp(-λ∫_0^∞(1-F(u)) du)
where the integral ∫_0^∞(1-F(u)) du is equal to the expected lifespan.
The equilibrium condition for the homogeneous density h can be written as
h'(x)+h(x)f(x)/(1-F(x))-∫_x^∞ h(y)f(y)/(1-F(y)) j_y(x) dy=0.
After differentiation and using the fact that
d/dx j_y(x)=j_y(x)(λ(1-F(x))-f(x)/(1-F(x)))
one can derive from (<ref>) the second order differential equation
h''(x)+(2f(x)/(1-F(x))-λ(1-F(x)))h'(x)
+(2f(x)^2/(1-F(x))^2+f'(x)/(1-F(x)))h(x)=0.
The point mass m at 0 satisfies
mλ=∫_0^∞ h(y)f(y)/(1-F(y)) j_y(0) dy.
§.§ The peaks process
In the homogeneous model the sequence of peaks in Y_t forms a discrete time Markov chain.
By peak we mean a local maximum of Y_t with value being equal to the lifespan of the last record holder.
Each time the oldest person dies the process Y_t has a peak with a downward jump following it.
Let Z_n denote the age of record holders at which they die which are the values of the peaks of the process Y_t.
The sequence Z_n forms a discrete time Markov chain.
The Markov property follows by the fact that ages at death of previous record holders only give information on people born before the current record holder but transitions depend on the lifespan of the current record holder and that of people born after them.
The stationary density of Z_n is given by
z(x)=f(x)exp(-λ∫_x^∞(1-F(u)) du) / ∫_0^∞ f(y)exp(-λ∫_y^∞(1-F(u)) du) dy.
The formula can be seen as follows.
To have a record holder who dies at age x there has to be a person who has lifespan x which gives the factor f(x) in the numerator on the right-hand side of (<ref>).
The exponential factor is by the two-dimensional representation equal to the probability that no people born before the record holder who just died can be alive at the time the record holder dies.
The denominator on the right-hand side of (<ref>) makes z(x) a probability density function.
The density of Z_n can also be characterized by the following description.
It satisfies the integral equation
z(x)=∫_0^∞ z(w)∫_0^min(x,w) j_w(y)f(x)/(1-F(y)) dy dw+∫_0^∞ z(w)a_w f(x) dw
which comes from the possible transitions of the peak process as follows.
If the previous record holder had a total lifetime w∈[0,∞) then at the death the process Y_t jumps down to some value y at rate j_w(y) or to 0 with probability a_w.
The density of the age at which a person dies who becomes a record holder at age y is f(x)/(1-F(y)).
From the integral equation in (<ref>) one can derive the second order differential equation for the function g(x)=z(x)/f(x) given by
g''(x)-λ(1-F(x))g'(x)+λ f(x)g(x)=0
which is satisfied by g(x)=c exp(-λ∫_x^∞(1-F(u)) du) in accordance with (<ref>).
§.§ Reign length distribution
In the homogeneous model, let W_n denote the reign length of the nth record holder, that is, the time length for which this person is the oldest person of the population.
The density of the random reign length is given by
r(w)=(∫_0^∞ h(y)f(y+w) dy+m∫_0^w f(z)λ e^-λ(w-z) dz) / (∫_0^∞ h(y)(1-F(y)) dy+m).
The density formula in (<ref>) can be derived based on the stationary density of the peaks process given by (<ref>) as follows.
It holds for the density of the reign length that
r(w)=∫_0^∞ z(x)(∫_0^x j_x(y)f(y+w)/(1-F(y)) dy
+a_x∫_0^w f(z)λ e^-λ(w-z) dz) dx
based on the decomposition with respect to the previous value of the peaks process Z_n.
The integral ∫_0^w f(z)λ e^-λ(w-z) dz is the density of the convolution of the density f with an independent exponential distribution of parameter λ.
On the right-hand side of (<ref>), one can use the definitions of the density z given by (<ref>) and the homogeneous jump distribution j_x and a_x given by (<ref>).
Then in the numerator after the exchange of the order of integrations in the first term and by using the formula for the stationary distribution given by (<ref>) one gets the numerator of (<ref>).
In the denominator one can use the equality
∫_0^∞ f(y)exp(-λ∫_y^∞(1-F(u)) du) dy
=∫_0^∞ h(y)(1-F(y)) dy+m
which follows by integration by parts.
Note also that the density r is not equal to the density of the remaining reign length of Y_t under the stationary distribution, because the latter would involve the integral ∫_0^∞ h(y)f(y+w)/(1-F(y)) dy in place of the first term in the numerator on the right-hand side of (<ref>).
§.§ Entry age parameter
Next we introduce another parameter which we call the entry age and we denote it by E.
As opposed to our original model we consider individuals as being born at age E at a modified birth rate and with a modified lifespan distribution.
As a result we obtain a model for the age of the world's oldest person for all values of the entry age parameter E≥0, and we can fit the model parameters with different values of the entry age.
For any value E≥0 of the entry age we denote by λ_E(t) the rate at which people reach the age E at time t, that is,
λ_E(t)=λ(t-E)(1-F_t-E(E))
because the new birth process of rate λ_E(t) is obtained by an inhomogeneous thinning of the original Poisson process of the birth events.
The lifespan distribution of those born at time t with age E becomes the remaining lifetime distribution at age E.
The modified cumulative distribution function and density are given by
F_t^E(x)=(F_t-E(x+E)-F_t-E(E))/(1-F_t-E(E)),
f_t^E(x)=f_t-E(x+E)/(1-F_t-E(E)).
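For a time-independent lifespan distribution the entry-age transformation is simply a conditioning on survival to age E; a minimal sketch (the exponential example is only used because its memoryless property makes the output easy to check):

import numpy as np

def entry_age_transform(cdf, pdf, E):
    # remaining-lifetime distribution at entry age E for a time-independent lifespan:
    # F^E(x) = (F(x+E) - F(E)) / (1 - F(E)),  f^E(x) = f(x+E) / (1 - F(E))
    sE = 1.0 - cdf(E)
    return (lambda x: (cdf(x + E) - cdf(E)) / sE,
            lambda x: pdf(x + E) / sE)

# toy check with an exponential lifespan of mean 80: by memorylessness the
# transformed distribution coincides with the original one
cdf = lambda x: 1.0 - np.exp(-x / 80.0)
pdf = lambda x: np.exp(-x / 80.0) / 80.0
cdf_30, pdf_30 = entry_age_transform(cdf, pdf, E=30.0)
print(cdf(10.0), cdf_30(10.0))   # identical values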
§ MODEL SPECIFICATION AND PARAMETER FITTING
In Subsection <ref> we specify the general model introduced and discussed in Section <ref>, that is,
we assume that the birth rate parameter increases exponentially in time and that the lifespan distribution is given by the gamma–Gompertz–Makeham distribution with time-dependent parameters.
We provide the details of the computation of the likelihood as a function of the model parameters in Subsection <ref>.
We show the way to maximize the likelihood and how the optimal parameters can be found using the Nelder–Mead method in Subsection <ref>.
With these values of the parameters, the age of the world's oldest person and the reign length of the record holder can be computed as described in Subsection <ref>.
§.§ Model specification: birth rate parameter and lifespan distribution
For the rest of the paper we specifiy our general model described in Section <ref>
to the following choice of the birth rate parameter and of the lifespan distribution.
We choose the value of the entry age to be E=0,30,60 and we fit the model parameters for all three values of E separately.
First we specify the intensity function of the Poisson process of births with an exponential growth in time.
For any of the three values of the entry age E we assume that the birth rate at age E is given by
λ_E(t)=C_Ee^κ_Et
where the numerical values of the parameters C_E and κ_E are obtained
by linear regression of the logarithm of the annual numbers of newborns and of people at ages 30 and 60 published by the United Nations since 1950.
We extrapolate the linear regression backwards in time and we use the numerical values shown in Table <ref>.
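The mechanics of this step amount to an ordinary least-squares fit of log counts against calendar year; the sketch below uses synthetic counts in place of the United Nations data, so the fitted C and κ are purely illustrative (the values actually used are those in Table <ref>).

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
# synthetic annual counts of newborns (stand-in for the United Nations data)
counts = 9.0e7 * np.exp(0.011 * (years - 1950)) * np.exp(0.02 * rng.normal(size=years.size))

# fit log(counts) = log(C) + kappa * t by ordinary least squares
kappa, logC = np.polyfit(years, np.log(counts), deg=1)
C = np.exp(logC)
print(f"kappa = {kappa:.4f}, C = {C:.3e}")

def birth_rate(t):
    # lambda_E(t) = C_E exp(kappa_E t)
    return C * np.exp(kappa * t)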
We assume that the underlying force of mortality is chosen so that the lifespan distribution of individuals follows the gamma–Gompertz distribution with cumulative distribution function and density
F_a,b,γ(x) =1-(1+aγ/b(e^bx-1))^-1/γ,
f_a,b,γ(x) =ae^bx(1+aγ/b(e^bx-1))^-1-1/γ
for x≥0 where a,b,γ are positive parameters.
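A direct transcription of these two formulas into code, with placeholder parameter values used only for a normalization sanity check:

import numpy as np
from scipy.integrate import quad

def gg_cdf(x, a, b, gamma):
    # F_{a,b,gamma}(x) = 1 - (1 + (a*gamma/b)(exp(b x) - 1))^(-1/gamma)
    return 1.0 - (1.0 + a * gamma / b * (np.exp(b * x) - 1.0)) ** (-1.0 / gamma)

def gg_pdf(x, a, b, gamma):
    # f_{a,b,gamma}(x) = a exp(b x) (1 + (a*gamma/b)(exp(b x) - 1))^(-1 - 1/gamma)
    return a * np.exp(b * x) * (1.0 + a * gamma / b * (np.exp(b * x) - 1.0)) ** (-1.0 - 1.0 / gamma)

# sanity check with placeholder parameters: the density integrates to ~1
# (the density is negligible beyond age 150 for these values)
a, b, gamma = 2e-5, 0.09, 0.08
print(quad(gg_pdf, 0.0, 150.0, args=(a, b, gamma))[0])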
We mention that the gamma–Gompertz–Makeham distribution differs from the gamma–Gompertz distribution by the presence of a non-negative extrinsic mortality parameter c which appears as an additive term in the force of mortality.
See (<ref>) for the definition of the distribution.
In our model, we exclude the extrinsic mortality for the following two reasons.
Since the extrinsic mortality becomes irrelevant at high ages and we aim to model the front-end of the death distribution at the oldest-old ages,
we do not expect to obtain a reliable estimate on the extrinsic mortality using the data about the world's oldest person.
On the other hand, as explained later in Subsection <ref>, the likelihood maximization provides unrealistic lifespan distributions even for the gamma–Gompertz model if one tries to optimize in all the parameters at the same time.
In order for the algorithm to result in a distribution close to the actual human lifespan distribution, the number of model parameters had to be decreased.
For our model we suppose that in the lifespan distribution, parameters b=b_E, the rate of aging and γ=γ_E, the magnitude of heterogeneity
are constants over time and that they only depend on the value of the entry age parameter E.
The parameter a, the initial level of mortality at the entry age for individuals born at time t, depends on time given by the exponentially decreasing function
a_E(t)=K_Ee^-α_E(t-2000)
where the exponent α_E and the constant K_E only depends on the entry age E.
The reason for subtracting 2000 in (<ref>) is purely technical: it prevents the numerical values of the parameters from becoming tiny.
In the model with entry age E, we assume that the birth rate λ_E(t) is given by (<ref>) and we fit the gamma–Gompertz distribution with parameters b_E,γ_E and a_E(t) given by (<ref>) for the modified distribution function F_t^E(x) and density f_t^E(x) in (<ref>).
This means that we search for the best fitting values of the parameters α_E,K_E,b_E,γ_E which results in an approximation of the remaining lifetime distribution at the age E.
§.§ Likelihood calculations
The aim of the maximum likelihood method is to give an estimate to the parameters α_E, K_E, b_E and γ_E for E=0,30,60 by finding those values for which the likelihood of the full sample is the largest.
The sample is obtained from the historical data on the world's oldest person available in <cit.>.
We transform this information into a list of triples of the form (t_i,y_i,z_i) for i=1,…,n
where t_i is the ith time in the sample when the oldest person dies at age y_i and the new record holder has age z_i at time t_i.
Then the data has to satisfy the consistency relation t_i-z_i=t_i+1-y_i+1 since the two sides express the date of birth of the same person.
In the model with entry age E, the likelihood of the ith data point (t_i,y_i,z_i) given the previous data point is equal to
f_t_i-y_i+E^E(y_i-E)/(1-F_t_i-y_i+E^E(z_i-1-E)) · j^E_y_i,t_i(z_i)
=f_t_i-y_i+E^E(y_i-E)/(1-F_t_i-y_i+E^E(z_i-1-E)) · exp(-∫_z_i^y_iλ_E(t_i-u+E)(1-F_t_i-u+E^E(u-E)) du)
λ_E(t_i-z_i+E)(1-F_t_i-z_i+E^E(z_i-E))
for all i=2,3,…,n except for i=1 in which case the 1-F_t_1-y_1+E^E(z_0-E) factor in the denominator is missing.
In (<ref>) above we use the transition probabilities of the model with entry age E given by
j^E_y,t(x)=exp(-∫_x^yλ_E(t-u+E)(1-F_t-u+E^E(u-E)) du)λ_E(t-x+E)(1-F_t-x+E^E(x-E))
as a generalization of (<ref>).
The explanation of the left-hand side of (<ref>) is that the person who died at time t_i at age y_i had age E at time t_i-y_i+E.
The previous data point ensures that this person has already reached age z_i-1 hence we condition their lifetime distribution on this fact.
The transition probabilities in (<ref>) are obtained similarly to (<ref>) with the difference that a person at age u with u∈[x,y] at time t had age E at time t-u+E.
Note that when computing the likelihood of the full data by multiplying the right-hand side of (<ref>) for different values of i the consistency relation of the data implies that the factor 1-F_t_i-z_i+E(z_i-E) of the ith term cancels with the factor 1-F_t_i+1-y_i+1+E(z_i-E) coming from the (i+1)st term.
Hence the log-likelihood of the full sample is given by
l(α,K,b,γ)
=∑_i=1^n(log f_t_i-y_i+E^E(y_i-E)
-∫_z_i^y_iλ_E(t_i-u+E)(1-F_t_i-u+E^E(u-E)) du+logλ_E(t_i-z_i+E))
+log(1-F_t_n-z_n+E^E(z_n-E))
=∑_i=1^n(log f_Ke^-α(t_i-y_i+E-2000),b,γ(y_i-E)
-∫_z_i^y_i Ce^κ(t_i-u+E)(1-F_Ke^-α(t_i-u+E-2000),b,γ(u-E)) du)
+log(1-F_Ke^-α(t_n-z_n+E-2000),b,γ(z_n-E))
+nlog C+∑_i=1^nκ(t_i-z_i+E)
where we suppress the dependence of the parameters α,K,b,γ on the entry age.
Note that the last two terms do not depend on the parameters α,K,b,γ hence we can omit these terms in the maximization of the log-likelihood.
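A condensed sketch of how the simplified log-likelihood can be assembled from these ingredients is given below; the data format (t_i, y_i, z_i) follows the text, the integral is evaluated by numerical quadrature as in Subsection <ref>, and all parameter values are placeholders rather than the fitted ones.

import numpy as np
from scipy.integrate import quad

C, kappa, E = 0.5, 0.01, 0.0    # placeholder birth-rate parameters, entry age E = 0

def gg_cdf(x, a, b, gamma):
    return 1.0 - (1.0 + a * gamma / b * (np.exp(b * x) - 1.0)) ** (-1.0 / gamma)

def gg_pdf(x, a, b, gamma):
    return a * np.exp(b * x) * (1.0 + a * gamma / b * (np.exp(b * x) - 1.0)) ** (-1.0 - 1.0 / gamma)

def log_likelihood(params, data):
    # data: list of (t_i, y_i, z_i); params = (alpha, K, b, gamma).
    # The constant terms n log C + sum_i kappa (t_i - z_i + E) are omitted,
    # as they do not depend on the fitted parameters.
    alpha, K, b, gamma = params
    a_of = lambda t: K * np.exp(-alpha * (t - 2000.0))   # a_E(t)
    lam = lambda t: C * np.exp(kappa * t)                # lambda_E(t)
    ll = 0.0
    for (t, y, z) in data:
        # log-density of the lifetime of the record holder who died at age y
        ll += np.log(gg_pdf(y - E, a_of(t - y + E), b, gamma))
        # integral of the survival-thinned birth intensity over ages u in [z, y]
        integrand = lambda u: lam(t - u + E) * (1.0 - gg_cdf(u - E, a_of(t - u + E), b, gamma))
        ll -= quad(integrand, z, y)[0]
    # survival of the last record holder, who has age z_n at time t_n
    t_n, _, z_n = data[-1]
    ll += np.log(1.0 - gg_cdf(z_n - E, a_of(t_n - z_n + E), b, gamma))
    return ll

# tiny toy sample obeying the consistency relation t_1 - z_1 = t_2 - y_2
example = [(1956.3, 110.2, 108.7), (1958.1, 110.5, 109.0)]
print(log_likelihood((0.015, 2e-5, 0.09, 0.08), example))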
§.§ Likelihood maximization
We implemented the calculation of the log-likelihood function l(α,K,b,γ) given by (<ref>) in Python.
We used numerical integration to obtain the integrals on the right-hand side of (<ref>).
We mention that the general integral formula in (<ref>) could not be used because the parameter a of the gamma–Gompertz distribution in the integrand depends on the integration variable on the right-hand side of (<ref>).
In order to maximize the value of the log-likelihood function l(α,K,b,γ) we applied the Nelder–Mead method
<cit.> which is already implemented in Python.
We mention that initially we used the gamma–Gompertz–Makeham distribution as lifespan distribution, see (<ref>) for the definition, which contains the extra parameter c to be fitted but it turned out that the number of model parameters has to be reduced.
The behaviour of the optimization algorithm in the five parameters α,K,b,c,γ using the gamma–Gompertz–Makeham model was very similar to the case of the four parameters α,K,b,γ in the gamma–Gompertz model.
Running the optimization in the full set of parameters (α,K,b,c,γ in the gamma–Gompertz–Makeham model or α,K,b,γ in the gamma–Gompertz model), it turned out that after a few rounds the parameter K started to decrease dramatically and reached values below 10^-10.
The resulting lifespan distribution seemed very unrealistic with almost no mortality before the age of 100.
This happened for all values of the entry age E=0,30,60.
We explain this phenomenon by the fact that historical data about the oldest person in the world only gives information about the behaviour of the lifespan distribution between the ages 107 and 123.
The simple optimization in the four parameters α,K,b,γ simultaneously yields an excellent fit for the tail decay of the lifespan distribution with the historical data but the result may be very far from the actual human lifespan.
This would limit the practical relevance of our results.
The mathematical reason for the fact that the four-parameter optimization does not result in a satisfactory approximation to the human lifespan distribution is the following.
In these cases, the optimization procedure diverges to those regimes of the parameter space ℝ_+^4 where the corresponding gamma–Gompertz distribution is degenerate.
One can prevent reaching these unrealistic combinations of parameters by reducing the amount of freedom in the optimization.
Hence we specify some of the parameters a priori and we perform the optimization in the remaining ones so that it provides a good fit to the data on the age of the oldest old as well as a realistic lifespan distribution.
We believe that the most robust of the four parameters of the model is b which is the exponent in the time dependence of the mortality rate.
By setting the rate of aging b=0.09 the algorithm gives the optimal triple α,K,γ with the best likelihood which is very stable under changing the initial values of these parameters.
The running time is also very short.
The Nelder–Mead algorithm, being a numerical maximization method, heavily relies on the tolerance parameter, which determines the minimal improvement required for the algorithm to continue running. If this parameter is set too high, the algorithm might stop before reaching the optimum. Conversely, if set too low, the algorithm might take excessively long to converge. To address this, we drew inspiration from dynamic learning rate algorithms used in neural network training and developed the following meta-algorithm.
First, we run the Nelder–Mead optimization. Based on the improvement from the starting point, we dynamically adjust the tolerance factor, similar to how learning rates are modified during neural network training. We then run the optimization again, recalibrating the tolerance factor based on the observed improvement, and repeat the process. This iterative adjustment allows us to get closer to the optimum, a hypothesis supported by our practical experience with this meta-algorithm.
Following this meta-algorithm, only a few calls of the Nelder–Mead method are enough to reach the optimum.
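A schematic version of this meta-algorithm based on SciPy's Nelder–Mead implementation is sketched below; the specific rule used here for shrinking the tolerance is an illustrative choice, not necessarily the one used to produce the reported fits.

import numpy as np
from scipy.optimize import minimize

def nelder_mead_meta(objective, x0, tol0=1e-2, n_rounds=5):
    # Repeatedly restart Nelder-Mead; when a round yields little improvement,
    # tighten the tolerance before the next round (illustrative update rule).
    x = np.asarray(x0, dtype=float)
    tol = tol0
    f_prev = objective(x)
    for _ in range(n_rounds):
        res = minimize(objective, x, method="Nelder-Mead",
                       options={"fatol": tol, "xatol": tol, "maxiter": 2000})
        x = res.x
        improvement = f_prev - res.fun
        f_prev = res.fun
        if improvement < 10.0 * tol:
            tol /= 10.0
    return x, f_prev

# toy usage on a smooth test function standing in for the negative log-likelihood
rosen = lambda p: (1.0 - p[0]) ** 2 + 100.0 * (p[1] - p[0] ** 2) ** 2
print(nelder_mead_meta(rosen, [0.0, 0.0]))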
The Python codes for the likelihood calculations as well as the Nelder–Mead optimization implemented to this problem are available in <cit.>.
The numerical values of the resulting parameters for the three choices of the entry age are shown in Table <ref>.
The survival probability functions with the parameters given in Table <ref> for individuals born in 2000 corresponding to the entry age E=0,30,60 are shown on Figure <ref> as a function of the age.
We also computed the optimal values of the parameters α,K,γ for other values of the rate of aging b as a sensitivity analysis.
The resulting parameter values for the choices b=0.11, b=0.13 and b=0.15 are shown in Table <ref>.
We mention as an alternative approach that scaling the parameters could enhance the optimization process, but this requires prior knowledge of the range within which the parameters vary. This range could be determined through our iterative application of the Nelder–Mead algorithm.
§.§ Computation of the oldest person's age and of the reign length
We observe that the model with the parameters in Table <ref> fits well to the titleholder data.
We focus on two statistics of the process in order to support this observation about the comparison:
the age of the world's oldest person and the reign length of the record holder.
In the case of both statistics exact formulas are only available for the homogeneous model introduced in Subsection <ref> where the birth rate is constant as well as the lifespan distribution does not depend on time.
Hence we apply an approximation where the error is negligible compared to the difference from the statistics computed using the data.
In the general model the distribution of the age of the world's oldest person at time t is given by the density h_t(x) in (<ref>) and by the point mass m_t at 0 in (<ref>).
For the numerical computations, we ignore the point mass m_t which is below the round-off error in the numerical results.
The difficulty in computing the mean age of the oldest person at time t is that parameter a of the gamma–Gompertz–Makeham distribution function F_t-u in the exponent of (<ref>) also depends on the integration variable u.
In our approximation we fix the value of the parameter a of the distribution in h_t(x) in (<ref>) to a value which is equal to a_0(t-d) in (<ref>) with some delay d.
The delay d is chosen so that the mean age of the oldest person computed using a_0(t-d) as parameter a for all times in the distribution function in (<ref>) is equal to the same value d.
For a given t, this value of d can be obtained as the fixed point of the contraction map
d↦∫_0^∞ x e^-∫_x^∞λ(t-u)(1-F_t-d(u)) du λ(t-x)(1-F_t-d(x)) dx
which provides a reasonable approximation for the mean age of the world's oldest person.
For the comparison with the data and for the prediction, we use the model with entry age E=0.
Hence in (<ref>), the function λ is given in (<ref>) with E=0 and F_t-d is the distribution function with parameters given by the E=0 values in Table <ref> and with a=a_0(t-d) in (<ref>).
This approximation is reasonable because the distribution of the age of the oldest person is highly concentrated.
The fixed point of the map in (<ref>) as the expected age of the world's oldest person can be found in a few steps of iterations.
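Spelled out in code, the iteration freezes the gamma–Gompertz level parameter at a_0(t-d), evaluates the mean of the resulting age distribution on a fine age grid, and feeds the result back as the new delay; all parameter values below are placeholders standing in for the fitted values of Tables <ref> and <ref>.

import numpy as np

# placeholder parameter values for entry age E = 0 (the fitted values are in the Tables)
C, kappa = 0.5, 0.01
K, alpha, b, gamma = 2e-5, 0.015, 0.09, 0.08

def gg_sf(x, a):
    # gamma-Gompertz survival function 1 - F(x)
    return (1.0 + a * gamma / b * (np.exp(b * x) - 1.0)) ** (-1.0 / gamma)

def mean_oldest_age(t, d0=110.0, n_iter=10):
    # fixed-point iteration: freeze the level parameter at a_0(t - d) and
    # recompute the mean of the oldest person's age distribution
    x = np.linspace(0.0, 200.0, 20001)      # age grid; the distribution is concentrated near 100-130
    dx = x[1] - x[0]
    d = d0
    for _ in range(n_iter):
        a = K * np.exp(-alpha * (t - d - 2000.0))
        thinned = C * np.exp(kappa * (t - x)) * gg_sf(x, a)   # lambda(t-u)(1 - F_{t-d}(u))
        cum = np.concatenate(([0.0], np.cumsum(0.5 * (thinned[1:] + thinned[:-1]) * dx)))
        tail = cum[-1] - cum                                  # ~ integral of the thinned intensity from x to 200
        h = np.exp(-tail) * thinned                           # (unnormalized) density of the oldest person's age
        d = np.sum(x * h) / np.sum(h)
    return d

print(mean_oldest_age(2019.0))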
We show the result on Figure <ref>.
We applied 10 iterations using the fitted model parameters for each year between 1955 and 2019.
By computing the standard deviation of the age of the oldest person as well we obtain the mean and a confidence interval for the age.
The predictions for the age distribution of the world's oldest person in the future are shown on Figure <ref>.
We obtained them by using the exact age distribution formula in Subsection <ref> along with the numerical values of the parameters C,κ,α,K,γ given in Tables <ref> and <ref> for entry age E=0.
In our model, the distribution function of the age of the world's oldest person is given in (<ref>) where the distribution function can be substituted with the estimated parameter values at any time.
In this way, the probability of observing an age greater than or equal to Calment's or Knauss' actual age at the time of their death can be computed exactly.
The numerical values are 0.000286 for Calment and 0.0116 for Knauss.
The backtesting mentioned in the Results section is performed as follows.
We estimated the best parameter values with entry age 0 based on the reduced data on the world's oldest person between 1955 and 1988 where the ending date is the time when Calment became the world's oldest person.
The resulting parameters α=0.01516,K=0.00002064,γ=0.08413 are numerically not very far from the optimal parameters in Table <ref> but the difference is more visible on Figure <ref>.
The figure shows the model-based mean age and confidence interval for the age of the world's oldest person computed using the full data as well as the data until 1988.
For the reign length of record holders, we again used the expected age at a given time obtained as the fixed point of the iteration in (<ref>).
The numerical value of the expected reign length obtained from the iteration is 1.195 in 1955 and it is 1.188 in 2019.
The empirical value of the reign length is 1.008 computed from the data by dividing the total length of the time interval between 1955 and 2019 by the number of record holders.
§ METHODS
In this section, we provide supplementary information related to the main result of this paper.
We perform explicit computations with the gamma–Gompertz–Makeham model
and we express the integral of the survival function in terms of a hypergeometric function.
The cumulative distribution function and the density of the gamma–Gompertz–Makeham distribution are given by
F_a,b,c,γ(x) =1-e^-cx(1+aγ/b(e^bx-1))^-1/γ,
f_a,b,c,γ(x) =e^-cx(1+aγ/b(e^bx-1))^-1-1/γ (c(b-aγ)+a(b+cγ)e^bx)/b
for x≥0 where a,b,c,γ are positive parameters.
The positivity of parameters implies the finiteness of all moments and, in particular, the convergence of the integral of the survival function
∫_x^∞(1-F_a,b,c,γ(u)) du.
In the homogeneous model, the integral of the survival function appears in the density of the distribution of Y_t in (<ref>) and in the stationary density of the peaks process in (<ref>).
We show below that in the gamma–Gompertz–Makeham model the integral of the survival function can be computed explicitly and it is given by
∫_x^∞(1-F_a,b,c,γ(u)) du
=(b/(aγ))^1/γ e^-(c+b/γ)x/(b/γ+c) · _2F_1(1/γ,1/γ+c/b;1+1/γ+c/b;(aγ-b)/(aγ) e^-bx)
where _2F_1(a,b;c;z) is the hypergeometric function.
See 15.1.1 in <cit.> for the definition and properties.
We prove (<ref>) based on the following integral representation 15.3.1 in <cit.> of the hypergeometric function
_2F_1(a,b;c;z)=Γ(c)/(Γ(b)Γ(c-b))∫_0^1 t^b-1(1-t)^c-b-1(1-tz)^-a dt
which holds whenever Re(c)>Re(b)>0.
First we prove an identity for complex parameters α,β,δ which satisfy Re(α+β)>0 and we compute
∫_x^∞ e^-β u/(1+δ e^u)^α du =(1/δ^α)∫_x^∞ e^-(α+β)u/(1+e^-u/δ)^α du
=(e^-(α+β)x/δ^α)∫_0^1 y^α+β-1(1+e^-x y/δ)^-α dy
=e^-(α+β)x/((α+β)δ^α) · _2F_1(α,α+β;1+α+β;-e^-x/δ)
where we applied a change of variables y=e^x-u in the second equality above and we applied the hypergeometric identity (<ref>) in the last equality with a=α, b=α+β, c=1+α+β, z=-e^-x/δ together with the observation that with these values of the parameters the prefactor of the integral on the right-hand side of (<ref>) simplifies to α+β.
Note that the condition Re(c)>Re(b)>0 for (<ref>) to hold is satisfied by our assumption Re(α+β)>0 which also makes the integrals in (<ref>) convergent.
Next we show (<ref>) using (<ref>) as follows.
We write
∫_x^∞(1-F_a,b,c,γ(u)) du =(1-aγ/b)^-1/γ∫_x^∞ e^-cu/(1+aγ/(b-aγ) e^bu)^1/γ du
=(1-aγ/b)^-1/γ (1/b)∫_bx^∞ e^-cv/b/(1+aγ/(b-aγ) e^v)^1/γ dv
=(1-aγ/b)^-1/γ (1/b) e^-(1/γ+c/b)bx/((1/γ+c/b)(aγ/(b-aγ))^1/γ) · _2F_1(1/γ,1/γ+c/b;1+1/γ+c/b;(aγ-b)/(aγ) e^-bx)
where we applied the change of variables v=bu in the second equality above and we used (<ref>) with α=1/γ, β=c/b, δ=aγ/(b-aγ) and with x replaced by bx.
The right-hand side of (<ref>) simplifies to that of (<ref>).
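The identity can also be checked numerically in a few lines by comparing direct quadrature of the survival function with the closed form evaluated via SciPy's hypergeometric function; the parameter values below are arbitrary.

import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b, c, gamma, x = 2e-5, 0.09, 0.001, 0.1, 80.0

def sf(u):
    # gamma-Gompertz-Makeham survival function
    return np.exp(-c * u) * (1.0 + a * gamma / b * (np.exp(b * u) - 1.0)) ** (-1.0 / gamma)

# direct quadrature (the survival function is negligible beyond age 300)
direct = quad(sf, x, 300.0)[0]

# closed form in terms of the Gauss hypergeometric function
closed = ((b / (a * gamma)) ** (1.0 / gamma)
          * np.exp(-(c + b / gamma) * x) / (b / gamma + c)
          * hyp2f1(1.0 / gamma, 1.0 / gamma + c / b,
                   1.0 + 1.0 / gamma + c / b,
                   (a * gamma - b) / (a * gamma) * np.exp(-b * x)))

print(direct, closed)   # the two numbers should agree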
§ REFERENCES

OepVau02: Oeppen, J. & Vaupel, J. W. Broken limits to life expectancy. Science 296, 1029–1031 (2002).
Can10: Canudas-Romo, V. Three measures of longevity: Time trends and record values. Demography 47, 299–312 (2010).
Vauetal21: Vaupel, J. W., Villavicencio, F. & Bergeron-Boucher, M.-P. Demographic perspectives on the rise of longevity. Proceedings of the National Academy of Sciences 118, e2019536118 (2021).
Gom25: Gompertz, B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London 115, 513–583 (1825).
NemMis18: Németh, L. & Missov, T. I. Adequate life-expectancy reconstruction for adult human mortality data. PLoS ONE 13, e0198485 (2018).
Misetal16: Missov, T. I., Németh, L. & Dańko, M. J. How much can we trust life tables? Sensitivity of mortality measures to right-censoring treatment. Palgrave Communications 2, 1–10 (2016).
Maietal21: Maier, H., Jeune, B. & Vaupel, J. W. Exceptional Lifespans (Springer Nature, 2021).
Danetal23: Dang, L. H. K. et al. The question of the human mortality plateau. Demographic Research 48, 321–338 (2023).
VijLeb17: Vijg, J. & Le Bourg, E. Aging and the inevitable limit to human life span. Gerontology 63, 432–434 (2017).
Alvetal21: Alvarez, J.-A., Villavicencio, F., Strozza, C. & Camarda, C. G. Regularities in human mortality after age 105. PLoS ONE 16, e0253940 (2021).
Beletal22: Belzile, L. R., Davison, A. C., Gampe, J., Rootzén, H. & Zholud, D. Is there a cap on longevity? A statistical review. Annual Review of Statistics and Its Application 9, 21–45 (2022).
RooZho17: Rootzén, H. & Zholud, D. Human life is unlimited – but short. Extremes 20, 713–728 (2017).
Baretal18: Barbi, E., Lagona, F., Marsili, M., Vaupel, J. W. & Wachter, K. W. The plateau of human mortality: Demography of longevity pioneers. Science 360, 1459–1461 (2018).
Modetal17: Modig, K., Andersson, T., Vaupel, J., Rau, R. & Ahlbom, A. How long do centenarians survive? Life expectancy and maximum lifespan. Journal of Internal Medicine 282, 156–163 (2017).
WilRob03: Wilmoth, J. R. & Robine, J.-M. The world trend in maximum life span. Population and Development Review 29, 239–257 (2003).
GAvGav11: Gavrilov, L. & Gavrilova, N. Mortality measurement at advanced ages: A study of the Social Security Administration Death Master File. North American Actuarial Journal 15, 442–447 (2011).
New18: Newman, S. J. Errors as a primary cause of late-life mortality deceleration and plateaus. PLoS Biology 16, e2006776 (2018).
Cam22: Camarda, C. G. The curse of the plateau. Measuring confidence in human mortality estimates at extreme ages. Theoretical Population Biology 144, 24–36 (2022).
MisVau15: Missov, T. I. & Vaupel, J. W. Mortality implications of mortality plateaus. SIAM Review 57, 61–70 (2015).
Gbaetal17: Gbari, S., Poulain, M., Dal, L. & Denuit, M. Extreme value analysis of mortality at the oldest ages: A case study based on individual ages at death. North American Actuarial Journal 21, 397–416 (2017).
HanSib16: Hanayama, N. & Sibuya, M. Estimating the upper limit of lifetime probability distribution, based on data of Japanese centenarians. Journals of Gerontology Series A: Biomedical Sciences and Medical Sciences 71, 1014–1021 (2016).
Einetal19: Einmahl, J. J., Einmahl, J. H. & de Haan, L. Limits to human life span through extreme value theory. Journal of the American Statistical Association 114, 1075–1080 (2019).
LiLiu20: Li, J. & Liu, J. A modified extreme value perspective on best-performance life expectancy. Journal of Population Research 37, 345–375 (2020).
Mil20: Milholland, B. Jeanne Calment, actuarial paradoxography and the limit to human lifespan. Rejuvenation Research 23, 17–18 (2020).
tableurl: Gerontology Research Group. World's oldest person titleholders since 1955. https://grg.org/Adams/C.HTM (2018). [Online; accessed 14-06-2023].
Bri86: Brillinger, D. R. A biometrics invited paper with discussion: The natural variability of vital rates and associated statistics. Biometrics 42, 693–734 (1986).
NM65: Nelder, J. A. & Mead, R. A simplex method for function minimization. The Computer Journal 7, 308–313 (1965).
Kiss24: Kiss, C. Modelling the age of the oldest person in the world. https://github.com/csabi0312/modelling-the-age-of-the-oldest-person-in-the-world (2024).
AbrSte84: Abramowitz, M. & Stegun, I. A. Pocketbook of Mathematical Functions (Verlag Harri Deutsch, Thun–Frankfurt am Main, 1984).
§ ACKNOWLEDGEMENTS
We thank Katalin Kovács for some useful advice at an early stage of the project which led to this collaboration.
The work of Cs. Kiss and B. Vető was supported by the NKFI (National Research, Development and Innovation Office) grant FK142124.
B. Vető is also grateful for the support of the NKFI grant KKP144059 “Fractal geometry and applications” and for the Bolyai Research Scholarship of the Hungarian Academy of Sciences.
L. Németh was supported by MaRDI, funded by the Deutsche Forschungsgemeinschaft (DFG), project number 460135501, NFDI 29/1 “MaRDI – Mathematische Forschungsdateninitiative.
§ AUTHOR CONTRIBUTIONS
B. V. initiated the research. Cs. K. and B. V. derived the model, prepared the scripts and figures, and carried out the estimations. Cs. K., L. N. and B. V. analyzed the results and wrote the manuscript.
§ DATA AVAILABILITY
The titleholder data are freely available at https://grg.org/Adams/C.HTM
§ ADDITIONAL INFORMATION
The authors declare no competing interests.
§ TABLES
entry_id: http://arxiv.org/abs/2409.03097v1
published: 2024-09-04
title: Real-time operator evolution in two and three dimensions via sparse Pauli dynamics
authors: Tomislav Begušić, Garnet Kin-Lic Chan
primary_category: quant-ph
categories: quant-ph
[email protected]
Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125, USA
§ ABSTRACT
We study real-time operator evolution using sparse Pauli dynamics, a recently developed method for simulating expectation values of quantum circuits. On the examples of energy and charge diffusion in 1D spin chains and sudden quench dynamics in the 2D transverse-field Ising model, it is shown that this approach can compete with state-of-the-art tensor network methods. We further demonstrate the flexibility of the approach by studying quench dynamics in the 3D transverse-field Ising model which is highly challenging for tensor network methods.
For the simulation of expectation value dynamics starting in a computational basis state, we introduce an extension of sparse Pauli dynamics that truncates the growing sum of Pauli operators by discarding terms with a large number of X and Y matrices. This is validated by our 2D and 3D simulations. Finally, we argue that sparse Pauli dynamics is not only capable of converging challenging observables to high accuracy, but can also serve as a reliable approximate approach even when given only limited computational resources.
Real-time operator evolution in two and three dimensions via sparse Pauli dynamics
Garnet Kin-Lic Chan
September 9, 2024
§ INTRODUCTION
Numerical simulations of quantum dynamics are essential for our understanding of strongly correlated many-body physics. For large systems and long-time dynamics, exact state-vector or tensor network contraction <cit.> methods become computationally intractable and approximate numerical methods are required. These can be formulated in different pictures, for example, the Schrödinger picture, where the state is evolved, or the Heisenberg picture, where the state is unchanged and time evolution is applied to the observable.
Operator time evolution appears naturally in the dynamics of high-temperature systems <cit.> and the theory of operator spreading <cit.>, and has proven useful in the computation of time-correlation functions <cit.> and out-of-time-order correlations <cit.>. Compared to the Schrödinger picture, working in the Heisenberg picture has the benefit that one can take advantage of the dynamical light-cone structure, i.e., the fact that the support of some relevant observables is initially local and grows in time. Operator time evolution has been the subject of tensor network studies, e.g., those based on matrix-product operators (MPO) <cit.> or projected entangled-pair operators (PEPO) <cit.>, whose performance depends on the degree of operator entanglement in the evolved observable.
Alternative Heisenberg-picture methods have been formulated to take advantage of the sparsity of the observable in the Pauli operator representation.
Under time evolution, local observables spread to a superposition of an increasing number of Pauli operators, and
various strategies to curb the (at worst) exponential growth of the number of Pauli operators have been employed, including stochastic sampling of Pauli paths <cit.> or discarding Pauli paths based on Pauli weights <cit.>, perturbation order with respect to the nearest Clifford transformation <cit.>, Fourier order <cit.>, or combinations thereof <cit.>. These methods have mainly been developed in the context of simulating quantum circuit expectation values, where the number of Pauli operators can be low due to the use of Clifford gates or noise <cit.>.
Here, we consider one of these methods, namely the sparse Pauli dynamics (SPD) <cit.>, which was applied successfully recently to simulate the kicked transverse-field Ising model quantum simulation of Ref. <cit.>. There, we showed that it can simulate the expectation values faster than the quantum device to reach comparable accuracy and that, given more time, can produce more accurate results than the quantum processor. These simulations also demonstrated that SPD performs well compared to other classical simulation strategies based on state <cit.> and operator evolution <cit.>. In this work, we analyze its performance in simulating real-time dynamics on 1D spin chains, where we compare it to MPO dynamics including its recent dissipation assisted variant <cit.>, as well as in the transverse-field Ising model on square (2D) and cubic (3D) lattices, where we use available 2D tensor network simulation benchmarks. For the latter, we introduce a modification of the original method that further reduces the computational cost of simulating time-dependent expectation values when the initial state is a computational basis state (e.g., | 0^⊗ n⟩).
§ REAL-TIME SPARSE PAULI DYNAMICS
In SPD we write the observable operator
O = ∑_P ∈𝒫 a_P P
as a sum of Pauli operators P with complex coefficients a_P, where 𝒫 is a set of Pauli operators that contribute to O. The central part of our algorithm is the action of a Pauli rotation operator U_σ(θ) = exp(-i θσ / 2) on the observable (<ref>):
Õ = U_σ(θ)^† O U_σ(θ) = ∑_P ∈𝒫_C a_P P + ∑_P ∈𝒫_A (a_P cos(θ) P + i a_P sin(θ) σ· P),
which follows from
U_σ(θ)^† P U_σ(θ) = P if [P, σ] = 0, and U_σ(θ)^† P U_σ(θ) = cos(θ) P + i sin(θ) σ· P if {P, σ} = 0,
where [·, ·] denotes a commutator and {·, ·} an anticommutator. In Eq. (<ref>), 𝒫_C (𝒫_A) is a set of Pauli operators in 𝒫 that commute (anticommute) with σ. The right-hand side of Eq. (<ref>) can be brought into the form of Eq. (<ref>) by identifying which σ· P already exist in 𝒫 and which have to be added to represent the rotated observable Õ. In general, the number of Pauli operators that we have to store, N = |𝒫|, grows exponentially with the number of unitary Pauli rotation operators applied. To limit the growth of N, we replace the exactly rotated observable Õ by
Õ_δ = Π_δ (Õ),
where Π_δ acts by discarding all Pauli operators P with |a_P| < δ. Here, the threshold δ defines the approximation error, i.e., δ = 0 corresponds to exact dynamics. In practice, the goal is to converge the simulation with respect to this tunable parameter. Additional implementation details can be found in Appendix <ref>.
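A minimal, unoptimized sketch of this update, representing the observable as a dictionary from Pauli strings to coefficients, is given below; an efficient implementation would instead use a binary (symplectic) encoding of the Pauli operators, so the snippet is meant only to make the rotation and truncation rules concrete.

import numpy as np

# single-qubit Pauli products: (P, Q) -> (phase, PQ)
MULT = {("I", "I"): (1, "I"), ("I", "X"): (1, "X"), ("I", "Y"): (1, "Y"), ("I", "Z"): (1, "Z"),
        ("X", "I"): (1, "X"), ("X", "X"): (1, "I"), ("X", "Y"): (1j, "Z"), ("X", "Z"): (-1j, "Y"),
        ("Y", "I"): (1, "Y"), ("Y", "X"): (-1j, "Z"), ("Y", "Y"): (1, "I"), ("Y", "Z"): (1j, "X"),
        ("Z", "I"): (1, "Z"), ("Z", "X"): (1j, "Y"), ("Z", "Y"): (-1j, "X"), ("Z", "Z"): (1, "I")}

def multiply(p, q):
    # product of two Pauli strings, returned as (phase, string)
    phase, out = 1, []
    for s, r in zip(p, q):
        ph, prod = MULT[(s, r)]
        phase *= ph
        out.append(prod)
    return phase, "".join(out)

def commutes(p, q):
    # Pauli strings commute iff they differ (both non-identity) on an even number of sites
    return sum(1 for s, r in zip(p, q) if s != "I" and r != "I" and s != r) % 2 == 0

def rotate(obs, sigma, theta, delta):
    # Heisenberg update O -> exp(i theta sigma/2) O exp(-i theta sigma/2),
    # followed by discarding all Pauli terms with |coefficient| < delta
    new = {}
    for p, a in obs.items():
        if commutes(p, sigma):
            new[p] = new.get(p, 0.0) + a
        else:
            new[p] = new.get(p, 0.0) + np.cos(theta) * a
            ph, sp = multiply(sigma, p)
            new[sp] = new.get(sp, 0.0) + 1j * np.sin(theta) * ph * a
    return {p: a for p, a in new.items() if abs(a) >= delta}

# toy usage: rotate Z_0 on two qubits by the generator X_0 X_1
obs = {"ZI": 1.0}
obs = rotate(obs, "XX", theta=0.3, delta=1e-8)
print(obs)   # {'ZI': cos(0.3) ~ 0.955, 'YX': sin(0.3) ~ 0.296}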
For real-time dynamics under a Hamiltonian
H = ∑_j c_j H_j,
where c_j are real coefficients and H_j are Pauli operators, we replace the exact time-evolution operator U_Δ t = exp(- i H Δ t), corresponding to a time step Δ t, by the first-order Trotter splitting formula
U_Δ t≈∏_j exp(- i c_j Δ t H_j).
Now the real-time evolution takes the form of applying multiple Pauli rotation gates, which allows us to use the SPD method as defined above. The size of the time step determines not only the Trotter error in Eq. (<ref>) but also the truncation error in SPD. Specifically, for small values of threshold δ and time step Δ t, the error is a function of their ratio δ / Δ t. To show this, let us consider one substep of the dynamics in which we apply one Pauli rotation operator exp(- i c_j Δ t H_j) to the observable (<ref>), the result of which is shown in Eq. (<ref>) with θ = 2 c_j Δ t. For a sufficiently small time step, we can expand Eq. (<ref>) up to first order in Δ t,
Õ≈∑_P ∈𝒫_C a_P P + ∑_P ∈𝒫_A (a_P P + 2 i a_P c_j Δ t σ· P) = O + 2 i c_j Δ t ∑_P ∈𝒫_A a_P H_j · P.
For small threshold δ, we can assume that the threshold-based truncation will mainly discard Pauli operators H_j · P that do not already exist in O. The condition under which they are discarded reads |2 c_j Δ t a_P| < δ, i.e., |a_P| < (δ / Δ t) / 2|c_j|. Consequently, the truncation error and the number of Pauli operators depend only on the ratio δ / Δ t. Since the number of Pauli operators determines the computational cost per time step, it is generally preferred to use a larger time step—at fixed δ / Δ t, the cost per time step is constant, but a larger time step requires less steps to reach a given total time. In turn, the size of the time step is limited by the Trotter error, which must be validated before converging the calculation with respect to truncation threshold δ.
§ RESULTS
§.§ Spin and energy diffusion constants in 1D chains
We begin with the computation of diffusion constants of conserved densities in spin chains of length L at infinite temperature. The diffusion constant <cit.>
D = 1/2∂/∂ t d^2(t)
is defined through the time-derivative of the mean-square displacement
d^2(t) = ∑_j C_j(t) j^2 - ( ∑_j C_j(t) j )^2,
where
C_j(t) = Tr[q_j q_(L+1)/2(t)] / ∑_j Tr[q_j q_(L+1)/2(0)]
are the dynamical correlations between the operators q localized at the central site (L+1)/2 and sites j. These operators represent conserved densities in the sense that ∑_j q_j(t) = ∑_j q_j(0), which leads to ∑_j C_j(t) = ∑_j C_j(0) = 1.
This problem is naturally formulated in the Heisenberg picture and we employ SPD to numerically evolve q_(L+1)/2(t) in time. Within the sparse Pauli representation of the operator, it is also easy to evaluate the overlap with another Pauli operator (or a sum of Pauli operators) q_j (see Appendix <ref>). Since the conservation laws are not strictly obeyed by the non-unitary truncation scheme employed in SPD, we replace the denominator in Eq. (<ref>) by ∑_j Tr[q_j q_(L+1)/2(t)], i.e., we renormalize the correlations at post-processing.
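Since Tr[P Q] = 2^n δ_{P,Q} for n-qubit Pauli operators P and Q, the normalized trace overlap between the evolved operator and q_j reduces to a sum over the Pauli terms shared by the two sparse representations; a sketch building on the dictionary representation used in the snippet above:

def overlap(obs, other):
    # normalized trace overlap Tr[other^dagger obs] / 2^n for operators stored
    # as {Pauli string: coefficient} dictionaries
    return sum(c.conjugate() * obs[p] for p, c in other.items() if p in obs)

# example: the evolved operator would come from repeated calls to rotate();
# here a two-term stand-in is used
q_center_t = {"ZI": 0.9, "YX": 0.1}
q_0 = {"ZI": 1.0}
print(overlap(q_center_t, q_0))   # 0.9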
§.§.§ Models
Below, we introduce the examples of Ref. <cit.>, which were studied using dissipation assisted operator evolution (DAOE) combined with a MPO representation of the operator. There, the authors introduced an artificial dissipator that reduces the entanglement of the said MPO and demonstrated that this computational strategy is well founded for the simulation of diffusive transport at high temperature.
The first example is the one-dimensional tilted-field Ising model <cit.>
H = ∑_j=1^L-1 Z_j Z_j+1 + ∑_j=1^L (1.4 X_j + 0.9045 Z_j)
with open boundary conditions, while the conserved densities are the local energies
q_j = 1/2 Z_j-1 Z_j + 1/2 Z_j Z_j+1 + 1.4 X_j + 0.9045 Z_j.
The chain contains L=51 sites.
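Since the implementation reads its operators from Qiskit's SparsePauliOp (see Appendix <ref>), the Hamiltonian and local energy densities above can be assembled along the following lines. This is an illustrative sketch rather than the authors' actual setup script; the function names and the 0-based indexing convention are ours.

```python
from qiskit.quantum_info import SparsePauliOp

L = 51  # number of sites in the chain

def tilted_field_ising(L, hx=1.4, hz=0.9045):
    """H = sum_j Z_j Z_{j+1} + sum_j (hx X_j + hz Z_j) with open boundaries."""
    terms = [("ZZ", [j, j + 1], 1.0) for j in range(L - 1)]
    terms += [("X", [j], hx) for j in range(L)]
    terms += [("Z", [j], hz) for j in range(L)]
    return SparsePauliOp.from_sparse_list(terms, num_qubits=L)

def local_energy(j, L, hx=1.4, hz=0.9045):
    """q_j = 1/2 Z_{j-1}Z_j + 1/2 Z_j Z_{j+1} + hx X_j + hz Z_j for a bulk site j."""
    terms = [("ZZ", [j - 1, j], 0.5), ("ZZ", [j, j + 1], 0.5),
             ("X", [j], hx), ("Z", [j], hz)]
    return SparsePauliOp.from_sparse_list(terms, num_qubits=L)

H = tilted_field_ising(L)
q_center = local_energy((L + 1) // 2 - 1, L)  # central site (L+1)/2 in 0-based indexing
```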
The second model is the XX-ladder Hamiltonian <cit.>
H = 1/4∑_j=1^L-1∑_a=1,2 (X_j,a X_j+1, a + Y_j,a Y_j+1,a) + 1/4∑_j=1^L (X_j,1 X_j, 2 + Y_j,1 Y_j,2),
where the total spin is conserved, and we consider the diffusion of q_j = (Z_j,1 + Z_j,2)/2 along the chain of length L=41 (the number of sites is n=2L=82).
For these two models, we simulated the mean-square displacement (<ref>) for a total time of t = 20, with a time step of Δ t = 0.02, unless stated otherwise. The diffusion constant was computed by linear regression of d^2(t) between t=10 and t=20.
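The post-processing that turns the simulated correlations into a diffusion constant is simple enough to state explicitly; the short sketch below assumes the correlations are available as an array C[t_index, j] on a uniform time grid (the array and function names are ours).

```python
import numpy as np

def diffusion_constant(ts, C, t_min=10.0, t_max=20.0):
    """D = (1/2) d/dt d^2(t), estimated by a linear fit of d^2(t) on [t_min, t_max].

    ts: 1D array of times; C: 2D array with C[t_index, j] = C_j(t).
    """
    C = C / C.sum(axis=1, keepdims=True)            # renormalize so that sum_j C_j(t) = 1
    sites = np.arange(C.shape[1])
    d2 = (C * sites**2).sum(axis=1) - ((C * sites).sum(axis=1))**2
    mask = (ts >= t_min) & (ts <= t_max)
    slope, _ = np.polyfit(ts[mask], d2[mask], 1)    # linear regression of d^2(t)
    return 0.5 * slope
```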
§.§.§ Numerical results
Figure <ref> shows the mean-square displacements for the tilted-field Ising (panel A) and XX-ladder models (panel C), simulated using SPD with different values of the threshold δ ranging from 2^-18 to 2^-13. We can observe how d^2(t) converges onto a straight line at large t as we reduce the threshold and make the simulations more accurate. Figures <ref>B, D plot the corresponding diffusion constant as a function δ/Δ t for two sets of data, one using Δ t = 0.01 and the other using Δ t = 0.02. For this property, the results are amenable to an extrapolation in δ→ 0, and we find D ≈ 1.4 for the tilted-field Ising chain (same value was reported in Ref. <cit.>), while D ≈ 0.94 for the XX-ladder (compared to D ≈ 0.95 of Ref. <cit.> and D ≈ 0.96-0.98 reported in Ref. <cit.>). The plots of the diffusion constant also reveal the scaling relationship discussed in Sec. <ref>. Namely, as the threshold is reduced, the simulations using different δ and Δ t but the same δ / Δ t become closer to each other. In addition, we can numerically verify that using a larger time step reduces the computational cost. For example, the two points in Fig. <ref>D at fixed δ / Δ t = 2^-18/0.02 ≈ 0.00019 take around 84h (Δ t = 0.01, blue circle) and 43h (Δ t = 0.02, orange square) to simulate on 6 cores.
To illustrate that the above problems are non-trivial to simulate, we present a comparison between SPD and matrix-product operator (MPO) dynamics without dissipation assistance (Fig. <ref>, see details in Appendix <ref>), a standard method for 1D dynamics.
Using an MPO bond dimension up to χ=2^9, we find that the MPO simulations are still far from the exact results even for shorter chains (L=9-21) of the tilted-field Ising model, whereas SPD is more accurate already at rather large values of δ. Similarly, Ref. <cit.> reported that the MPO dynamics of the L=51 chain with bond dimension χ = 2^9 diverges from the exact result already at t=8. For the same system, SPD is visually well converged up to t=15 already at δ = 2^-15.
In the remainder of this Section, we analyze the operator evolution in terms of contributions from Pauli operators of different weights. Here, the Pauli weight is defined as the number of non-identity Pauli matrices in a Pauli operator. To this end, for any operator O=∑_P ∈𝒫 a_P P, we introduce F_m = ∑_P ∈𝒫_m |a_P|^2, where 𝒫_m ⊆𝒫 is a subset of Pauli operators with Pauli weight m. Then the sum F = ∑_m=1^n F_m is constant for unitary dynamics and equal to the square of the Frobenius norm, i.e., F = Tr(O^†O)/2^n. Figure <ref>A shows the breakdown of F into individual components F_m (up to m=12) for the example of tilted-field Ising model [Eqs. (<ref>) and (<ref>)], demonstrating how the dynamics evolves initially low-weight Pauli operators into a sum of operators with higher weights. After some time, F of the operator evolved with SPD deviates from its initial value because of threshold-based truncation. This truncation appears to affect high-weight Pauli operators more than the low-weight Paulis. This is confirmed in Fig. <ref>B, where we show F_m contributions for m up to 5. As shown in previous works <cit.>, Pauli operators corresponding to the local conserved quantity (here m=1 and m=2 for the local energy (<ref>)) obey the long-time scaling F_m ∼ t^-1/2, whereas other Pauli operators (here m>2) obey F_m ∼ t^-3/2 in the long-time limit.
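For reference, the weight decomposition used in this analysis reduces to a few lines once the operator is available as a map from Pauli strings to coefficients; the sketch below uses our own illustrative data layout rather than the bit-packed one of the implementation.

```python
from collections import defaultdict

def weight_distribution(obs):
    """Return {m: F_m} with F_m = sum of |a_P|^2 over Pauli strings of weight m."""
    F = defaultdict(float)
    for p, a in obs.items():
        m = sum(s != 'I' for s in p)   # Pauli weight: number of non-identity letters
        F[m] += abs(a) ** 2
    return dict(F)
```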
This observation motivates methods that truncate the time-evolved operator based on Pauli weights, including the dissipation assisted operator evolution (DAOE) approach <cit.>, the low-weight simulation algorithm (LOWESA) <cit.> and observable's back-propagation on Pauli paths (OBPPP) <cit.> (two approaches originally developed for the simulation of noisy quantum circuits), and the restricted state space approximation <cit.>, introduced in the context of simulating nuclear magnetic resonance experiments. Our results indicate that SPD can take advantage of such behavior without explicitly truncating the sum based on Pauli weights, because high-weight Pauli operators appear with small coefficients that do not meet the threshold criterion.
§.§ Time-dependent expectation values
§.§.§ X-truncated sparse Pauli dynamics
In the examples above, we focused on infinite-temperature time-correlation functions of a time-evolved operator with a local, low-weight Pauli operator. Here, we consider expectation values of the form ⟨ O ⟩_t = ⟨ 0^⊗ n | O(t) | 0^⊗ n⟩, where high-weight Pauli operators can contribute as long as they are composed only of identity and Z matrices (i.e., if they are diagonal in the computational basis). Default Heisenberg evolution does not account for the information about the state over which we take the expectation value but rather treats all Pauli operators equally. Within the framework of SPD, this means that we keep a large number of Pauli operators, of which only a fraction contribute to the observable.
To further truncate the number of Pauli operators without introducing large errors in the expectation value ⟨ O ⟩_t, we propose to discard Pauli operators that contain more than M Pauli matrices of X or Y type. We refer to the number of X/Y matrices as the X-weight of a Pauli operator and call this additional truncation scheme X-truncated SPD (xSPD). The truncation introduces the additional assumption that, for certain (short) times, there is limited operator backflow from high X-weight Paulis to the manifold of Z-type Pauli operators. For each calculation, we test the value of M to ensure that the error introduced by the X-truncation scheme is sufficiently small (for example, smaller than the target convergence criterion). The X-truncation is applied only every T steps of the dynamics to limit the impact on the accuracy. In our calculations, we fixed T to 5 steps. Finally, we note here that alternatives to our hard M cutoff could also be considered, such as introducing an artificial dissipator based on the Pauli's X-weight, which would be similar in spirit to DAOE <cit.>.
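In the dictionary-based toy representation used in the sketches above, the X-truncation amounts to the following filter (a hypothetical helper, not the production code):

```python
def x_weight(p):
    """X-weight: number of X or Y letters in a Pauli string."""
    return sum(s in ('X', 'Y') for s in p)

def x_truncate(obs, M):
    """Drop Pauli operators whose X-weight exceeds M (applied every T steps in xSPD)."""
    return {p: a for p, a in obs.items() if x_weight(p) <= M}
```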
In the following, we apply this modification of the original SPD method to dynamics in the 2D (square lattice) and 3D (cubic lattice) transverse-field Ising models described by the Hamiltonian
H = -∑_⟨ j k ⟩ X_j X_k - h ∑_j=1^n Z_j,
where the first sum runs over nearest neighbors on the lattice with open boundary conditions and h controls the magnitude of the field. We consider the time-dependent magnetization ⟨ Z⟩_t = ⟨ 0^⊗ n| Z_0(t) |0^⊗ n⟩, where Z_0 denotes the Z Pauli operator on the central site. Physically this corresponds to the magnetization induced after a sudden quench from infinite h to a finite value of h.
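To make the lattice setup concrete, the sketch below builds the term list of Eq. (<ref>) for an open square lattice and locates the central site. The site-indexing convention idx = x*Ly + y is an assumption of ours, and the analogous cubic-lattice construction simply adds a third loop.

```python
import itertools

def tfim_square_lattice(Lx, Ly, h):
    """Terms of H = -sum_<jk> X_j X_k - h sum_j Z_j on an Lx x Ly open lattice."""
    idx = lambda x, y: x * Ly + y          # assumed site-indexing convention
    terms = []
    for x, y in itertools.product(range(Lx), range(Ly)):
        if x + 1 < Lx:
            terms.append(("XX", [idx(x, y), idx(x + 1, y)], -1.0))
        if y + 1 < Ly:
            terms.append(("XX", [idx(x, y), idx(x, y + 1)], -1.0))
        terms.append(("Z", [idx(x, y)], -h))
    return terms

terms = tfim_square_lattice(11, 11, h=3.04438)
center = (11 // 2) * 11 + 11 // 2          # central site of the 11 x 11 lattice
```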
In this setting, the first-order Trotter splitting
U^(1)(t) = [e^ i Δ t ∑_⟨ j k ⟩ X_j X_k e^ i h Δ t ∑_j=1^n Z_j]^K,
where K = t/Δ t is the number of time steps, is equivalent to the second-order splitting
U^(2)(t) = [e^ i 1/2 h Δ t ∑_j=1^n Z_j e^ i Δ t ∑_⟨ j k ⟩ X_j X_k e^ i 1/2 h Δ t ∑_j=1^n Z_j]^K
= e^ i 1/2 h Δ t ∑_j=1^n Z_j U^(1)(t) e^ - i 1/2 h Δ t ∑_j=1^n Z_j
because Z-Pauli rotations commute with the observable and apply only a phase to the initial state.
§.§.§ 2D transverse-field Ising model
The quantum quench dynamics of magnetization in the 2D transverse-field Ising model has been studied by means of infinite projected entangled pair state (iPEPS) <cit.> and neural network quantum state <cit.> calculations. While the iPEPS simulations correspond to dynamics in the thermodynamic limit, neural network simulations were performed on a 10 × 10 lattice, which was shown to be sufficiently large to exhibit negligible finite-size effects <cit.>. In our simulations, we used an 11 × 11 square lattice. We set h=h_c, where h_c = 3.04438(2) corresponds to the quantum critical point <cit.>, and simulate the magnetization up to t = 0.92, where we can compare our results to different update schemes used in iPEPS simulations, namely the full update (FU) <cit.>, neighborhood tensor update (NTU) <cit.>, and gradient tensor update (GTU) <cit.>.
Figure <ref> shows the convergence of xSPD with respect to the threshold δ. As expected, the method converges quickly at short times but requires small values of δ to converge the values at longer times. Our most accurate simulation agrees well with FU and GTU iPEPS results, and shows some deviation from the NTU scheme at the end of the simulation time.
We note that although the NTU iPEPS calculation corresponds to the largest bond dimension amongst the iPEPS data, its truncation is also believed to be less accurate than that of the FU iPEPS simulation, so the relative accuracy of the different iPEPS reference data is unclear.
The disagreement between the two smallest-δ xSPD simulations at the end of the simulation time is only 0.002 (0.007 over the three smallest values of δ), which provides an estimate of the threshold error. This threshold error is comparable to the estimated Trotter and X-truncation errors (discussed below).
In this example, we employed a time step of Δ t = 0.04 and set the X-truncation parameter to M=5. To validate this choice of parameters, we analyze the associated errors in Fig. <ref>. Specifically, for the time step (Fig. <ref>A), we set δ / Δ t = 2^-19/0.01 = 1.90734 × 10^-4 and compute the observable using three different time steps. We estimate that the Trotter error is below 0.003 within the simulated time of t=0.92. Similarly, we perform SPD and xSPD simulations with varying values of M, using δ=2^-20. We estimate that the error due to M=5 X-truncation is about 0.003. In contrast, employing M=7 would lead to almost no error but also limited computational savings, while M=3 produces an error that is greater than our convergence target of less than 0.01. Note that due to symmetry, all Pauli operators appear with an even X-weight, which is why we only consider odd values of M.
Regarding the computational cost, the most accurate simulation at δ = 2^-23 generated up to 8.5 billion Pauli operators, used over 1TB of memory, and took around 36 hours on 16 CPU cores. For comparison, the least accurate simulation shown in Fig. <ref>, corresponding to δ = 2^-18, generated at most 84 million Pauli operators in 32 minutes on 16 CPU cores.
§.§.§ 3D transverse-field Ising model
As our final example, we present the quench dynamics of magnetization for spins on a simple cubic lattice with L=11 and n=L^3=1331 sites. Here we consider two values of h, h=1 (weak field) and h=5.15813(6) (critical point).
In the case of h=1, using Δ t = 0.04 and M=7, we could run the dynamics up to t=1 with thresholds as low as 2^-19 (see Fig. <ref>). The most accurate simulations (δ≤ 2^-17) agree to within ≈ 0.02, which is comparable to the estimated time step (Trotter) and X-truncation errors (see Fig. <ref>A, C in Appendix <ref>). Interestingly, even the fastest, least-accurate simulation (δ=2^-14) exhibits “qualitative accuracy”, i.e., recovers the general trend of the most accurate available result. We ascribe this to the fact that sparse Pauli representation can easily reproduce the dynamics dominated by few Pauli operators. However, the method struggles to include small contributions from many Pauli operators generated by the time evolution. For example, while only about 10^6 Pauli operators are generated at threshold δ=2^-14 after 1 time unit, around 7 × 10^8 Pauli operators are generated during the same dynamics with a threshold of δ=2^-19. Yet, the difference in the observable appears limited to around 0.04.
The system with h=5.15813(6) (Fig. <ref>B) was propagated up to t=0.6 using Δ t= 0.02 and M=5. For this choice of parameters, the Trotter and X-truncation errors are estimated to be below 0.005 (see Fig. <ref>B, D in Appendix <ref>). Because of the shorter dynamics we could use a smaller value of the X-truncation parameter M compared to the weak-field case. The results, using thresholds as low as δ=2^-20, are converged to below 0.01 for times t<0.5, after which our most accurate simulations begin to deviate from each other.
In general, these 3D calculations are expected to pose challenges for tensor network techniques, which for a fixed bond dimension show an exponential scaling with the connectivity of the lattice (assuming site tensors with a number of bonds equal to the number of neighbours). Although SPD does not show this exponential scaling, the 3D simulations for a given side-length L are still more costly than the 2D ones, primarily because the number of sites n=L^3 is a factor of L larger than in the 2D case.
For these reasons, we cannot simulate as many Pauli operators as in the 2D case. Specifically, our memory budget of about 1.5TB is reached already with fewer than 10^9 Pauli operators, an order of magnitude fewer than in our 2D square lattice simulations. With this number of Pauli operators, the computation with h=1 takes around 73 h on 16 CPU cores, about two times longer than our most accurate 2D calculation. Nonetheless, the relative feasibility of these simulations illustrates the kinds of systems that can be (approximately) studied by SPD dynamics that would otherwise be challenging to consider.
§.§.§ Computational savings from X-truncation
We now analyze the savings due to the X-truncation scheme. Figures <ref>A, B show the number of Pauli operators as a function of time for 2D and 3D simulations. While the number of Pauli operators in xSPD is suppressed by the X-truncation, the number of Z-type operators (Pauli operators composed only of Z and identity matrices) is almost the same as in SPD. Since the total cost of the computation is proportional to the number of all Pauli operators N, we can use the ratio N_ SPD/N_ xSPD to quantify the computational savings (Fig. <ref>C). We observe a factor of 6 decrease in the number of Pauli operators in the 2D case, and up to a factor of 5 for the 3D simulation. Such savings allow thresholds a factor of 2–4 lower than with the original SPD. For example, within a budget of a few days and 1.5 TB of memory, the most accurate SPD calculation we could run for the 2D system would be limited to δ=2^-21, which is not converged with respect to threshold at the longest simulated times (see Fig. <ref>). In contrast, with xSPD we could afford to run the same simulation with δ=2^-23.
§ CONCLUSION
To conclude, we have presented numerical simulations of real-time operator evolution with SPD and its modified version, xSPD. We have shown that, in systems for which reference data is available, the performance of these methods is at the level of state-of-the-art tensor network approaches, while the flexibility and simplicity of SPD allow for applications to dynamics problems where tensor network approaches have yet to make an impact, such as in 3D lattices. For the studied systems, we found that we can obtain very accurate results either by converging the observables in the case of short-time, 2D and 3D transverse field Ising model dynamics, or by extrapolating to zero threshold for the long-time diffusion coefficients in 1D chains. Apart from reaching quantitatively converged results for these challenging examples of time-dependent observables, we also found that SPD could serve as a practical method for computing qualitatively accurate results. In this respect, our work motivates further research into using sparse Pauli representations for real-time quantum dynamics.
We thank Jacek Dziarmaga for sharing the iPEPS simulation data presented in Fig. <ref> and Huanchen Zhai for help with setting up MPO simulations presented in Fig. <ref>.
The authors were supported by the US Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Basic Energy Sciences, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number DE-SC0022088. TB acknowledges financial support from the Swiss National Science Foundation through the Postdoc Mobility Fellowship (grant number P500PN-214214). GKC is a Simons Investigator in Physics. Computations presented here were conducted in the Resnick High Performance Computing Center, a facility supported by Resnick Sustainability Institute at the California Institute of Technology.
§ ADDITIONAL DETAILS OF SPD IMPLEMENTATION AND WORKING EQUATIONS
SPD is implemented as described in Ref. <cit.>. Briefly, a sum of N Pauli operators is stored in the form of three arrays: an array of N complex coefficients a, an array of N integer phases φ, and an N × 2n_64 array of 64-bit unsigned integers that stores two bitstrings x and z for each Pauli operator:
O = ∑_j=0^N-1 a_j (-i)^φ_j∏_k=0^n-1 Z_k^z_jk X_k^x_jk.
The number of unsigned integers needed to store n bits is n_64 = ⌈ n/64 ⌉. Pauli operators are sorted using lexicographic ordering on the bitstrings. In this way, we can find the position j of a given Pauli operator in the sum—or the position at which to insert a new Pauli operator so that the ordering is preserved—in 𝒪(log N) time. Similarly, deleting Pauli operators preserves the order trivially.
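A minimal Python analogue of this sorted storage, using tuples of integers as keys, is given below. It is illustrative only: the binary search itself is O(log N), while Python's list.insert additionally shifts elements in O(N).

```python
import bisect

keys, coeffs = [], []          # keys are (z, x) integer pairs, kept in lexicographic order

def find(z, x):
    """Return the position of the Pauli (z, x), or None if it is not present."""
    i = bisect.bisect_left(keys, (z, x))
    return i if i < len(keys) and keys[i] == (z, x) else None

def insert(z, x, a):
    """Insert a new Pauli operator so that the ordering is preserved."""
    i = bisect.bisect_left(keys, (z, x))
    keys.insert(i, (z, x))
    coeffs.insert(i, a)
```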
Apart from searching, inserting, and deleting Pauli operators, other key operations on this sparse representation of a sum of Pauli operators include identifying which Pauli operators in the sum anticommute with a given Pauli operator and multiplying the sum of Pauli operators by a Pauli. For the anticommutation of Pauli operators A and B, we have
A anticommutes with B = z_A · x_B - x_A · z_B.
Here the multiplications on the right-hand side correspond to the AND logical operator between bits and additions to the XOR logical operator (addition in ℤ_2). The product of two Pauli operators C = A B is given by
(z_C, x_C) = (z_A + z_B, x_A + x_B),
φ_C = φ_A + φ_B + 2 x_A · z_B,
a_C = a_A a_B.
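These bit-level rules translate directly into code; the following sketch works on single Python integers standing in for the z and x bitstrings (the implementation instead packs them into n_64 64-bit words per operator):

```python
def popcount(v):
    return bin(v).count("1")      # int.bit_count() in Python >= 3.10

def anticommute(zA, xA, zB, xB):
    """A and B anticommute iff z_A·x_B + x_A·z_B is odd (bitwise AND, sum mod 2)."""
    return (popcount(zA & xB) + popcount(xA & zB)) % 2 == 1

def pauli_product(aA, phiA, zA, xA, aB, phiB, zB, xB):
    """Product C = A·B in the (-i)^phi Z^z X^x representation."""
    zC, xC = zA ^ zB, xA ^ xB
    phiC = (phiA + phiB + 2 * popcount(xA & zB)) % 4
    return aA * aB, phiC, zC, xC
```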
In Sec. <ref>, we also introduced an overlap (inner product) between sums of Pauli operators Tr[O_1^† O_2]/2^n represented in the sparse Pauli format described above. Let us assume that N_1 < N_2, i.e., that O_1 has fewer Pauli operators than O_2. Then, we can search for all Pauli operators of O_1 in O_2 in 𝒪(N_1 log N_2) time and the overlap is
1/2^nTr[O_1^† O_2] = ∑_j (-i)^(φ_2,k[j] - φ_1,j) a_1,j^∗ a_2,k[j],
where the sum runs over Pauli operators in O_1 that were found in O_2 and k[j] is the index of the j-th found Pauli in O_2. The expectation over the all-zero state, as needed in Sec. <ref>, can be computed as
⟨ 0^⊗ n | O | 0^⊗ n⟩ = ∑_j a_j (-i)^φ_j,
where the sum runs over Z-type Pauli operators (for which all bits in x_j are 0). Finally, the X-weight of an operator is computed as the number of set bits in the corresponding x array.
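In the same spirit, the expectation value over the all-zero state and the X-weight reduce to a filter on the x bitstrings and a population count, respectively (again a schematic single-word version):

```python
def zero_state_expectation(coeffs, phis, xs):
    """<0...0|O|0...0> = sum of a_j (-i)^phi_j over Z-type terms (all x bits zero)."""
    return sum(a * (-1j) ** phi for a, phi, x in zip(coeffs, phis, xs) if x == 0)

def x_weight_bits(x):
    """X-weight of an operator: number of set bits in its x bitstring."""
    return bin(x).count("1")
```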
For convenience, our implementation interfaces to Qiskit <cit.> for setting up the calculations. Specifically, it converts Qiskit's SparsePauliOp into the representation described above, which is then used in the simulations.
§ DETAILS OF MPO SIMULATIONS
MPO simulations were performed using the time-dependent density matrix renormalization group (TD-DMRG) method, as implemented in the Block2 code <cit.>. After constructing the MPO of the observable at time zero, we convert it to an MPS |O⟩ with a doubled number of sites, i.e., ⟨ i_1 i_2 … | O | j_1 j_2 …⟩ / 2^n/2 = ⟨ i_1 j_1 i_2 j_2 … |O⟩. The sites of the MPS are ordered so that the two physical legs of a single site in the MPO appear on neighboring sites in the MPS. The Liouvillian superoperator, defined by L|O⟩≡ |[H, O]⟩, that governs the dynamics of the observable is then represented as an MPO in the extended Hilbert space with twice as many sites. Specifically, each Pauli operator in the Hamiltonian corresponds to a sum of two Pauli operators in the Liouvillian. For single-site Pauli operators σ_i ∈{I, X, Y, Z} at site i, we have σ_2i - σ_2i+1, while the nearest-neighbor interaction terms σ_i σ_i+1 correspond to σ_2iσ_2i+2 - σ_2i+1σ_2i+3 in the superoperator picture.
The initial observable MPS is propagated with the Liouvillian MPO using the time-step-targeting method <cit.> and a time step of Δ t = 0.04. The correlation functions of the form Tr[O_1^† O_2]/2^n, used in Eq. (<ref>), were evaluated as the inner product ⟨ O_1 | O_2⟩ of the two matrix product states.
§ TIME STEP AND X-TRUNCATION PARAMETERS IN 3D SIMULATIONS
§ REFERENCES
[1] J. Gray and S. Kourtis, Hyper-optimized tensor network contraction, Quantum 5, 410 (2021).
[2] C. Huang, F. Zhang, M. Newman, X. Ni, D. Ding, J. Cai, X. Gao, T. Wang, F. Wu, G. Zhang, et al., Efficient parallelization of tensor network contraction for simulating quantum computation, Nature Computational Science 1, 578 (2021).
[3] J. Gray and G. K.-L. Chan, Hyperoptimized Approximate Contraction of Tensor Networks with Arbitrary Geometry, Phys. Rev. X 14, 011009 (2024).
[4] T. Rakovszky, C. W. von Keyserlingk, and F. Pollmann, Dissipation-assisted operator evolution method for capturing hydrodynamic transport, Phys. Rev. B 105, 075131 (2022).
[5] A. Nahum, S. Vijay, and J. Haah, Operator Spreading in Random Unitary Circuits, Phys. Rev. X 8, 021014 (2018).
[6] C. W. von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Operator Hydrodynamics, OTOCs, and Entanglement Growth in Systems without Conservation Laws, Phys. Rev. X 8, 021013 (2018).
[7] V. Khemani, A. Vishwanath, and D. A. Huse, Operator Spreading and the Emergence of Dissipative Hydrodynamics under Unitary Evolution with Conservation Laws, Phys. Rev. X 8, 031057 (2018).
[8] C. von Keyserlingk, F. Pollmann, and T. Rakovszky, Operator backflow and the classical simulation of quantum transport, Phys. Rev. B 105, 245101 (2022).
[9] X. Mi et al., Information scrambling in quantum circuits, Science 374, 1479 (2021).
[10] F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Matrix Product Density Operators: Simulation of Finite-Temperature and Dissipative Systems, Phys. Rev. Lett. 93, 207204 (2004).
[11] M. Zwolak and G. Vidal, Mixed-State Dynamics in One-Dimensional Quantum Lattice Systems: A Time-Dependent Superoperator Renormalization Algorithm, Phys. Rev. Lett. 93, 207205 (2004).
[12] S. Anand, K. Temme, A. Kandala, and M. Zaletel, Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime (2023), arXiv:2306.17839.
[13] Á. M. Alhambra and J. I. Cirac, Locally Accurate Tensor Networks for Thermal States and Time Evolution, PRX Quantum 2, 040331 (2021).
[14] H.-J. Liao, K. Wang, Z.-S. Zhou, P. Zhang, and T. Xiang, Simulation of IBM's kicked Ising experiment with Projected Entangled Pair Operator (2023), arXiv:2308.03082 [quant-ph].
[15] P. Rall, D. Liang, J. Cook, and W. Kretschmer, Simulation of qubit quantum circuits via Pauli propagation, Phys. Rev. A 99, 062337 (2019).
[16] Y. Shao, F. Wei, S. Cheng, and Z. Liu, Simulating Quantum Mean Values in Noisy Variational Quantum Algorithms: A Polynomial-Scale Approach (2023), arXiv:2306.05804.
[17] T. Begušić, K. Hejazi, and G. K.-L. Chan, Simulating quantum circuit expectation values by Clifford perturbation theory (2023), arXiv:2306.04797 [quant-ph].
[18] N. A. Nemkov, E. O. Kiktenko, and A. K. Fedorov, Fourier expansion in variational quantum algorithms, Phys. Rev. A 108, 032406 (2023).
[19] E. Fontana, M. S. Rudolph, R. Duncan, I. Rungger, and C. Cîrstoiu, Classical simulations of noisy variational quantum circuits (2023), arXiv:2306.05400 [quant-ph].
[20] M. S. Rudolph, E. Fontana, Z. Holmes, and L. Cincio, Classical surrogate simulation of quantum systems with LOWESA (2023), arXiv:2308.09109 [quant-ph].
[21] D. Aharonov, X. Gao, Z. Landau, Y. Liu, and U. Vazirani, A Polynomial-Time Classical Algorithm for Noisy Random Circuit Sampling, in Proc. 55th Annu. ACM Symp. Theory Comput. (ACM, New York, NY, USA, 2023), pp. 945-957, arXiv:2211.03999.
[22] T. Schuster, C. Yin, X. Gao, and N. Y. Yao, A polynomial-time classical algorithm for noisy quantum circuits (2024), arXiv:2407.12768.
[23] T. Begušić, J. Gray, and G. K.-L. Chan, Fast and converged classical simulations of evidence for the utility of quantum computing before fault tolerance, Sci. Adv. 10, eadk4321 (2024).
[24] Y. Kim, A. Eddins, S. Anand, K. X. Wei, E. Van Den Berg, S. Rosenblatt, H. Nayfeh, Y. Wu, M. Zaletel, K. Temme, and A. Kandala, Evidence for the utility of quantum computing before fault tolerance, Nature 618, 500 (2023).
[25] K. Kechedzhi, S. Isakov, S. Mandrà, B. Villalonga, X. Mi, S. Boixo, and V. Smelyanskiy, Effective quantum volume, fidelity and computational cost of noisy quantum processing experiments, Future Gener. Comput. Syst. 153, 431 (2024).
[26] J. Tindall, M. Fishman, E. M. Stoudenmire, and D. Sels, Efficient Tensor Network Simulation of IBM's Eagle Kicked Ising Experiment, PRX Quantum 5, 010308 (2024).
[27] S. Patra, S. S. Jahromi, S. Singh, and R. Orús, Efficient tensor network simulation of IBM's largest quantum processors, Phys. Rev. Res. 6, 013326 (2024).
[28] B. Bertini, F. Heidrich-Meisner, C. Karrasch, T. Prosen, R. Steinigeweg, and M. Žnidarič, Finite-temperature transport in one-dimensional quantum lattice models, Rev. Mod. Phys. 93, 025003 (2021).
[29] H. Kim and D. A. Huse, Ballistic Spreading of Entanglement in a Diffusive Nonintegrable System, Phys. Rev. Lett. 111, 127205 (2013).
[30] R. Steinigeweg, F. Heidrich-Meisner, J. Gemmer, K. Michielsen, and H. De Raedt, Scaling of diffusion constants in the spin-1/2 XX ladder, Phys. Rev. B 90, 094417 (2014).
[31] C. Karrasch, D. M. Kennes, and F. Heidrich-Meisner, Spin and thermal conductivity of quantum spin chains and ladders, Phys. Rev. B 91, 115130 (2015).
[32] B. Kloss, Y. B. Lev, and D. Reichman, Time-dependent variational principle in matrix-product state manifolds: Pitfalls and potential, Phys. Rev. B 97, 024307 (2018).
[33] T. Rakovszky, F. Pollmann, and C. W. von Keyserlingk, Diffusive Hydrodynamics of Out-of-Time-Ordered Correlators with Charge Conservation, Phys. Rev. X 8, 031058 (2018).
[34] I. Kuprov, N. Wagner-Rundell, and P. Hore, Polynomially scaling spin dynamics simulation algorithm based on adaptive state-space restriction, J. Magn. Reson. 189, 241 (2007).
[35] I. Kuprov, Polynomially scaling spin dynamics II: Further state-space compression using Krylov subspace techniques and zero track elimination, J. Magn. Reson. 195, 45 (2008).
[36] A. Karabanov, I. Kuprov, G. T. P. Charnock, A. van der Drift, L. J. Edwards, and W. Köckenberger, On the accuracy of the state space restriction approximation for spin dynamics simulations, J. Chem. Phys. 135, 084106 (2011).
[37] H. Hogben, M. Krzystyniak, G. Charnock, P. Hore, and I. Kuprov, Spinach – A software library for simulation of spin dynamics in large spin systems, J. Magn. Reson. 208, 179 (2011).
[38] P. Czarnik, J. Dziarmaga, and P. Corboz, Time evolution of an infinite projected entangled pair state: An efficient algorithm, Phys. Rev. B 99, 035115 (2019).
[39] J. Dziarmaga, Time evolution of an infinite projected entangled pair state: Neighborhood tensor update, Phys. Rev. B 104, 094411 (2021).
[40] J. Dziarmaga, Time evolution of an infinite projected entangled pair state: A gradient tensor update in the tangent space, Phys. Rev. B 106, 014304 (2022).
[41] M. Schmitt and M. Heyl, Quantum Many-Body Dynamics in Two Dimensions with Artificial Neural Networks, Phys. Rev. Lett. 125, 100503 (2020).
[42] H. W. J. Blöte and Y. Deng, Cluster Monte Carlo simulation of the transverse Ising model, Phys. Rev. E 66, 066110 (2002).
[43] A. Javadi-Abhari, M. Treinish, K. Krsulich, C. J. Wood, J. Lishman, J. Gacon, S. Martiel, P. D. Nation, L. S. Bishop, A. W. Cross, B. R. Johnson, and J. M. Gambetta, Quantum computing with Qiskit (2024), arXiv:2405.08810.
[44] H. Zhai, H. R. Larsson, S. Lee, Z.-H. Cui, T. Zhu, C. Sun, L. Peng, R. Peng, K. Liao, J. Tölle, J. Yang, S. Li, and G. K.-L. Chan, Block2: A comprehensive open source framework to develop and apply state-of-the-art DMRG algorithms in electronic structure and beyond, J. Chem. Phys. 159, 234801 (2023).
[45] A. E. Feiguin and S. R. White, Time-step targeting methods for real-time dynamics using the density matrix renormalization group, Phys. Rev. B 72, 020404 (2005).
[46] E. Ronca, Z. Li, C. A. Jimenez-Hoyos, and G. K. L. Chan, Time-Step Targeting Time-Dependent and Dynamical Density Matrix Renormalization Group Algorithms with ab Initio Hamiltonians, J. Chem. Theory Comput. 13, 5560 (2017).
|
http://arxiv.org/abs/2409.02578v1 | 20240904095859 | Searching for the massless dark photon in $c\to uγ'$ | BESIII Collaboration (M. Ablikim et al.) | hep-ex | [ "hep-ex" ] |
§ ABSTRACT
In effective field theory, the massless dark photon γ' can couple to Standard Model particles only through operators of dimension higher than four, thereby offering high sensitivity to the new physics energy scale.
Using 7.9 fb^-1 of e^+e^- collision data collected at √(s)=3.773 GeV with the BESIII detector at the BEPCII collider, we measure the effective flavor-changing neutral current coupling of cuγ' in D^0→ωγ' and D^0→γγ' processes to search for the massless dark photon.
No significant signals are observed, and the upper limits at the 90% confidence level on the massless dark photon branching fraction are set to be 1.1×10^-5 and 2.0×10^-6 for D^0→ωγ' and D^0→γγ', respectively. These results provide the world's most stringent constraint on the new physics energy scale associated with the cuγ' coupling, with the related parameter |ℂ|^2+|ℂ_5|^2<8.2×10^-17^-2 at the 90% confidence level, and they play a unique role in dark sector searches within the charm sector.
Searching for the massless dark photon in c→ uγ'
M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^76, O. Afedulidis^3, X. C. Ai^81, R. Aliberti^35, A. Amoroso^75A,75C, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^64, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^75A,75C, E. Bianco^75A,75C, A. Bortone^75A,75C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^69, H. Cai^77, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,64, N. Cao^1,64, S. A. Cetin^62A, X. Y. Chai^46,h, J. F. Chang^1,58, G. R. Che^43, Y. Z. Che^1,58,64, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,64, H. Y. Chen^20, M. L. Chen^1,58,64, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,64, X. R. Chen^31,64, X. T. Chen^1,64, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,64, S. K. Choi^10, G. Cibinetto^29A, F. Cossio^75C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^79, A. Dbeyssi^18, R. E. de Boer^3, D. Dedovich^36, C. Q. Deng^73, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^75A,75C, F. De Mori^75A,75C, B. Ding^67,1, X. X. Ding^46,h, Y. Ding^40, Y. Ding^34, J. Dong^1,58, L. Y. Dong^1,64, M. Y. Dong^1,58,64, X. Dong^77, M. C. Du^1, S. X. Du^81, Y. Y. Duan^55, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,64, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^75B,75C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^72,58, J. H. Feng^59, Y. T. Feng^72,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^64, Y. W. Fu^1,64, H. Gao^64, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^72,58, S. Garbolino^75C, I. Garzia^29A,29B, L. Ge^81, P. T. Ge^19, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^68, A. Gilman^70, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^75A,75C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,64, A. Q. Guo^31,64, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^64, T. T. Han^1, F. Hanisch^3, X. Q. Hao^19, F. A. Harris^66, K. K. He^55, K. L. He^1,64, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,64, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,64, X. T. Hou^1,64, Y. R. Hou^64, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,64, J. F. Hu^56,j, Q. P. Hu^72,58, S. L. Hu^12,g, T. Hu^1,58,64, Y. Hu^1, G. S. Huang^72,58, K. X. Huang^59, L. Q. Huang^31,64, X. T. Huang^50, Y. P. Huang^1, Y. S. Huang^59, T. Hussain^74, F. Hölzken^3, N. Hüsken^35, N. in der Wiesche^69, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10, Q. Ji^1, Q. P. Ji^19, W. Ji^1,64, X. B. Ji^1,64, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^72,58, D. Jiang^1,64, H. B. Jiang^77, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,64, Y. Jiang^64, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^67, M. Q. Jing^1,64, X. M. Jing^64, T. Johansson^76, S. Kabana^33, N. Kalantar-Nayestanaki^65, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^65, B. C. Ke^81, V. Khachatryan^27, A. Khoukaz^69, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,64, N. Kumar^26, A. Kupsc^44,76, W. Kühn^37, L. Lavezzi^75A,75C, T. T. Lei^72,58, Z. H. Lei^72,58, M. Lellmann^35, T. Lenz^35, C. Li^47, C. Li^43, C. H. Li^39, Cheng Li^72,58, D. M. Li^81, F. Li^1,58, G. Li^1, H. B. Li^1,64, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, K. Li^1, K. L. Li^19, L. J. Li^1,64, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,k,l, Q. M. Li^1,64, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T. Li^50, W. D. Li^1,64, W. G. Li^1,a, X. Li^1,64, X. H. 
Li^72,58, X. L. Li^50, X. Y. Li^1,64, X. Z. Li^59, Y. G. Li^46,h, Z. J. Li^59, Z. Y. Li^79, C. Liang^42, H. Liang^1,64, H. Liang^72,58, Y. F. Liang^54, Y. T. Liang^31,64, G. R. Liao^14, Y. P. Liao^1,64, J. Libby^26, A. Limphirat^60, C. C. Lin^55, C. X. Lin^64, D. X. Lin^31,64, T. Lin^1, B. J. Liu^1, B. X. Liu^77, C. Liu^34, C. X. Liu^1, F. Liu^1, F. H. Liu^53, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. H. Liu^1, H. M. Liu^1,64, Huihui Liu^21, J. B. Liu^72,58, J. Y. Liu^1,64, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^72,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^64, S. B. Liu^72,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^72,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^38,k,l, Y. Liu^81, Y. B. Liu^43, Z. A. Liu^1,58,64, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,64, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,64, C. L. Luo^41, J. R. Luo^59, M. X. Luo^80, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^64, Y. F. Lyu^43, F. C. Ma^40, H. Ma^79, H. L. Ma^1, J. L. Ma^1,64, L. L. Ma^50, L. R. Ma^67, M. M. Ma^1,64, Q. M. Ma^1, R. Q. Ma^1,64, T. Ma^72,58, X. T. Ma^1,64, X. Y. Ma^1,58, Y. M. Ma^31, F. E. Maas^18, I. MacKay^70, M. Maggiora^75A,75C, S. Malde^70, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^75A,75C, Z. X. Meng^67, J. G. Messchendorp^13,65, G. Mezzadri^29A, H. Miao^1,64, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,64, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^10,64, S. L. Olsen^64, Q. Ouyang^1,58,64, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A. Pathak^34, Y. P. Pei^72,58, M. Pelizaeus^3, H. P. Peng^72,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,64, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^72,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^64, C. F. Qiao^64, X. K. Qiao^81, J. J. Qin^73, L. Q. Qin^14, L. Y. Qin^72,58, X. P. Qin^12,g, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, Z. H. Qu^73, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^75C, M. Rolo^75C, G. Rong^1,64, Ch. Rosner^18, M. Q. Ruan^1,58, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^76, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^72,58, Z. J. Shang^38,k,l, J. F. Shangguan^16, L. G. Shao^1,64, M. Shao^72,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^64, X. Y. Shen^1,64, B. A. Shi^64, H. Shi^72,58, H. C. Shi^72,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^73, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y. J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^75A,75C, S. Spataro^75A,75C, F. Stieler^35, S. S Su^40, Y. J. Su^64, G. B. Sun^77, G. X. Sun^1, H. Sun^64, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^77, S. S. Sun^1,64, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^72,58, Y. Z. Sun^1, Z. Q. Sun^1,64, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, M. Tang^72,58, Y. A. Tang^77, L. Y. Tao^73, Q. T. Tao^25,i, M. Tat^70, J. X. Teng^72,58, V. Thoren^76, W. H. Tian^59, Y. Tian^31,64, Z. F. Tian^77, I. Uman^62B, Y. Wan^55, S. J. Wang ^50, B. Wang^1, B. L. Wang^64, Bo Wang^72,58, D. Y. Wang^46,h, F. Wang^73, H. J. Wang^38,k,l, J. J. Wang^77, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, N. Y. Wang^64, S. Wang^38,k,l, S. Wang^12,g, T. Wang^12,g, T. J. Wang^43, W. Wang^73, W. Wang^59, W. P. Wang^35,58,72,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. 
Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,64, Y. H. Wang^38,k,l, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. Wang^1,58, Z. L. Wang^73, Z. Y. Wang^1,64, Ziyi Wang^64, D. H. Wei^14, F. Weidner^69, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^70, M. Wolke^76, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,64, X. Wu^12,g, X. H. Wu^34, Y. Wu^72,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^72,58, X. M. Xian^39, B. H. Xiang^1,64, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y. L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^72,58, T. Y. Xing^1,64, C. F. Xu^1,64, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^67,2, M. Xu^72,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^67, X. P. Xu^55, Y. Xu^40, Y. C. Xu^78, Z. S. Xu^64, F. Yan^12,g, L. Yan^12,g, W. B. Yan^72,58, W. C. Yan^81, X. Q. Yan^1,64, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, J. H. Yang^42, T. Yang^1, Y. Yang^12,g, Y. F. Yang^1,64, Y. F. Yang^43, Y. X. Yang^1,64, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Junhao Yin^43, Z. Y. You^59, B. X. Yu^1,58,64, C. X. Yu^43, G. Yu^1,64, J. S. Yu^25,i, M. C. Yu^40, T. Yu^73, X. D. Yu^46,h, Y. C. Yu^81, C. Z. Yuan^1,64, J. Yuan^34, J. Yuan^45, L. Yuan^2, S. C. Yuan^1,64, Y. Yuan^1,64, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^74, F. R. Zeng^50, S. H. Zeng^63A,63B,63C,63D, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, Y. J. Zeng^1,64, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,64, B. L. Zhang^1,64, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^81, H. Zhang^72,58, H. C. Zhang^1,58,64, H. H. Zhang^59, H. H. Zhang^34, H. Q. Zhang^1,58,64, H. R. Zhang^72,58, H. Y. Zhang^1,58, J. Zhang^59, J. Zhang^81, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,64, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,64, Jianyu Zhang^64, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,64, Q. Y. Zhang^34, R. Y. Zhang^38,k,l, S. H. Zhang^1,64, Shulei Zhang^25,i, X. M. Zhang^1, X. Y Zhang^40, X. Y. Zhang^50, Y. Zhang^73, Y. Zhang^1, Y. T. Zhang^81, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^72,58, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^77, Z. Y. Zhang^43, Z. Z. Zhang^45, G. Zhao^1, J. Y. Zhao^1,64, J. Z. Zhao^1,58, L. Zhao^1, Lei Zhao^72,58, M. G. Zhao^43, N. Zhao^79, R. P. Zhao^64, S. J. Zhao^81, Y. B. Zhao^1,58, Y. X. Zhao^31,64, Z. G. Zhao^72,58, A. Zhemchugov^36,b, B. Zheng^73, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,64, Y. H. Zheng^64, B. Zhong^41, X. Zhong^59, H. Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,64, S. Zhou^6, X. Zhou^77, X. K. Zhou^6, X. R. Zhou^72,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, Z. C. Zhou^20, A. N. Zhu^64, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,64, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^64, S. H. Zhu^71, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^72,58, Z. A. Zhu^1,64, J. H. Zou^1, J. Zu^72,58
(BESIII Collaboration)
^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China
^2 Beihang University, Beijing 100191, People's Republic of China
^3 Bochum Ruhr-University, D-44780 Bochum, Germany
^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia
^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
^6 Central China Normal University, Wuhan 430079, People's Republic of China
^7 Central South University, Changsha 410083, People's Republic of China
^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China
^9 China University of Geosciences, Wuhan 430074, People's Republic of China
^10 Chung-Ang University, Seoul, 06974, Republic of Korea
^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan
^12 Fudan University, Shanghai 200433, People's Republic of China
^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany
^14 Guangxi Normal University, Guilin 541004, People's Republic of China
^15 Guangxi University, Nanning 530004, People's Republic of China
^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China
^17 Hebei University, Baoding 071002, People's Republic of China
^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany
^19 Henan Normal University, Xinxiang 453007, People's Republic of China
^20 Henan University, Kaifeng 475004, People's Republic of China
^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China
^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China
^23 Huangshan College, Huangshan 245000, People's Republic of China
^24 Hunan Normal University, Changsha 410081, People's Republic of China
^25 Hunan University, Changsha 410082, People's Republic of China
^26 Indian Institute of Technology Madras, Chennai 600036, India
^27 Indiana University, Bloomington, Indiana 47405, USA
^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione di Perugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy
^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara, I-44122, Ferrara, Italy
^30 Inner Mongolia University, Hohhot 010021, People's Republic of China
^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China
^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia
^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile
^34 Jilin University, Changchun 130012, People's Republic of China
^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany
^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia
^37 Justus-Liebig-Universitaet Giessen, II. Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany
^38 Lanzhou University, Lanzhou 730000, People's Republic of China
^39 Liaoning Normal University, Dalian 116029, People's Republic of China
^40 Liaoning University, Shenyang 110036, People's Republic of China
^41 Nanjing Normal University, Nanjing 210023, People's Republic of China
^42 Nanjing University, Nanjing 210093, People's Republic of China
^43 Nankai University, Tianjin 300071, People's Republic of China
^44 National Centre for Nuclear Research, Warsaw 02-093, Poland
^45 North China Electric Power University, Beijing 102206, People's Republic of China
^46 Peking University, Beijing 100871, People's Republic of China
^47 Qufu Normal University, Qufu 273165, People's Republic of China
^48 Renmin University of China, Beijing 100872, People's Republic of China
^49 Shandong Normal University, Jinan 250014, People's Republic of China
^50 Shandong University, Jinan 250100, People's Republic of China
^51 Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
^52 Shanxi Normal University, Linfen 041004, People's Republic of China
^53 Shanxi University, Taiyuan 030006, People's Republic of China
^54 Sichuan University, Chengdu 610064, People's Republic of China
^55 Soochow University, Suzhou 215006, People's Republic of China
^56 South China Normal University, Guangzhou 510006, People's Republic of China
^57 Southeast University, Nanjing 211100, People's Republic of China
^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China
^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China
^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand
^61 Tsinghua University, Beijing 100084, People's Republic of China
^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey
^63 University of Bristol, (A)H H Wills Physics Laboratory; (B)Tyndall Avenue; (C)Bristol; (D)BS8 1TL
^64 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
^65 University of Groningen, NL-9747 AA Groningen, The Netherlands
^66 University of Hawaii, Honolulu, Hawaii 96822, USA
^67 University of Jinan, Jinan 250022, People's Republic of China
^68 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom
^69 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany
^70 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom
^71 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China
^72 University of Science and Technology of China, Hefei 230026, People's Republic of China
^73 University of South China, Hengyang 421001, People's Republic of China
^74 University of the Punjab, Lahore-54590, Pakistan
^75 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy
^76 Uppsala University, Box 516, SE-75120 Uppsala, Sweden
^77 Wuhan University, Wuhan 430072, People's Republic of China
^78 Yantai University, Yantai 264005, People's Republic of China
^79 Yunnan University, Kunming 650500, People's Republic of China
^80 Zhejiang University, Hangzhou 310027, People's Republic of China
^81 Zhengzhou University, Zhengzhou 450001, People's Republic of China
^a Deceased
^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia
^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia
^d Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia
^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany
^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China
^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China
^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China
^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China
^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China
^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China
^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China
^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan
^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany
September 9, 2024
================================================================================
Although the standard model (SM) has achieved great success in high-energy physics, questions such as dark matter, the matter–antimatter asymmetry, and the fermion mass hierarchy remain unresolved. So-called “dark" sectors have been theorized, named as such due to their assumed extremely faint interactions with the visible sector. Searching for these dark sectors offers a unique opportunity to uncover new physics (NP) beyond the SM, which would need to be extended to accommodate the new particles and their interactions with SM matter. In the SM, the Abelian gauge group U(1)_Y (with hypercharge Y) describes the electromagnetic interaction and produces the associated gauge boson, the SM photon γ.
A minimum extension of the SM with an extra Abelian gauge group, U(1)_D (D denoting dark) produces another gauge boson, the dark photon <cit.>.
These two Abelian gauge fields could have a kinetic mixing because the field strengths can be multiplied to form a dimension-four operator <cit.>.
The dark photon may be a portal between the SM matter and the dark sector matter, and it could be massive or massless. If the symmetry of U(1)_D is broken spontaneously, the dark photon will acquire mass, which will be called A'. If the symmetry of U(1)_D remains unbroken, it will produce the massless dark photon, γ' <cit.>.
This massless dark photon could provide a natural explanation for the fermion mass hierarchy puzzle <cit.>.
While the massive dark photon couples to SM matter via the coupling strength eε through the kinetic mixing, where ε is the mixing parameter <cit.>, the massless dark photon has no direct interaction with the SM particle in the dimension-four operator <cit.>.
In the massless dark photons case, the SM photon can couple with the dark sector particle, which is usually called milli-charge particle <cit.>.
The massive dark photon can be produced in many processes by replacing the SM photon, such as e^+e^-→γ A', a process that has not been observed despite extensive searches. Stringent constraints have been set on the mixing parameter ε for different m_A' <cit.>, indicating that the value of ε must be small.
Since the massless dark photon has no interaction with SM particles within the dimension-four operator, the restrictions obtained from the massive dark photon do not apply to the massless dark photon.
A dimension-six operator has been proposed to provide a connection between SM matter and the massless dark photon <cit.>:
ℒ_NP= 1/Λ^2_NP ( C^U_jkq̅_j σ^μν u_k H̃ + C^D_jkq̅_j σ^μν d_k H
+ C^L_jkl̅_j σ^μν e_k H + h.c. ) F'_μν,
where Λ_NP is the effective mass, indicating the NP energy scale, C^U_jk, C^D_jk, and C^L_jk are the up-type, down-type, and charged-lepton-type dimensionless coefficients, respectively, depending on the NP and not necessarily related to one another, j(k)=1,2,3 is the generation tag of the SM particle. More details can be found in Ref. <cit.>.
The first three terms in this equation are the couplings between the massless dark photon and the up-type quarks, down-type quarks, and charged leptons, where the flavors of the two quarks or leptons may be identical or different, in contrast to the flavor-diagonal tree-level couplings of the massive dark photon.
This effective operator may cover some new dark-sector particles with very heavy mass in the NP energy scale Λ_NP <cit.>. Up to now, no new particles have been found up to the mass of ∼ 1 TeV, but some anomalies require an energy scale above the electroweak energy scale Λ_EW (∼ 100 GeV).
Similar to β decay observed in low-energy experiments <cit.>, where the missing neutrino within the four-fermion effective coupling <cit.> pointed to the electroweak energy scale of about 100 GeV <cit.>, the massless dark photon could provide a portal for exploring the NP energy scale Λ_NP beyond the TeV scale.
In this Letter, we focus on the first item of the dimension-six operator in Eq. (<ref>), which causes the cuγ' coupling in the flavor-changing neutral current (FCNC) process of a charm quark with j=1,k=2.
In the SM, FCNC processes are strongly suppressed by the Glashow-Iliopoulos-Maiani mechanism <cit.>, which forbids them at tree level and allows them only through loop diagrams. In the SM, the branching fraction (BF) of the FCNC charm process is expected to be smaller than 10^-9 <cit.>. For the cuγ' coupling, however, the FCNC process originates from the NP coupling at the NP energy scale, which is different from the SM.
In the charm sector, the massless dark photon can be searched for in D meson or Λ^+_c baryon decays, such as D→ Vγ', D→γγ', or Λ^+_c→ p γ', where V is a vector particle like ρ or ω. The BFs of these processes are directly related to |ℂ|^2+|ℂ_5|^2 <cit.>. Here ℂ=Λ_NP^-2(C^U_12+C^U*_12)ν/√(8) and ℂ_5=Λ_NP^-2(C^U_12-C^U*_12)ν/√(8) with the Higgs vacuum expectation value ν=246.2 GeV <cit.>, which are determined by the NP energy scale Λ_NP and the up-type dimensionless coefficient C^U_12. From the constraint of the dark matter (DM) and the vacuum stability (VS) in the universe <cit.>, the allowed BF of the massless dark photon in charm FCNC processes can be enhanced to 10^-7∼10^-5 <cit.>. Previously, BESIII searched for Λ^+_c→ p γ' and set the upper limit (UL) of its decay BF to be 8×10^-5 at the 90% confidence level (CL) <cit.>. This, however, is still above the allowed region obtained from DM and VS <cit.>. In this Letter, we search for the massless dark photon and probe the NP energy scale through the FCNC processes D^0→ωγ' and D^0→γγ' for the first time, which can be mediated via Feynman diagrams shown in FIG <ref>, by analyzing e^+e^- collision data of 7.9 fb^-1 at a center-of-mass energy of √(s)=3.773 GeV with the BESIII detector.
Details about the BESIII detector design and performance are provided elsewhere <cit.>. The simulated Monte Carlo (MC) samples, also described in Ref. <cit.>, are used to determine detection efficiencies and to estimate backgrounds.
The generator of the signal MC samples is parameterized by the same helicity amplitudes as the radiative decay of the D meson <cit.>.
At √(s)=3.773 GeV, the D^0D̅^0 meson pairs are produced from ψ(3770) decays without accompanying hadrons, which provide an ideal opportunity to study invisible massless dark photon decays of D mesons using the double tag (DT) method <cit.>. The D̅^0 mesons are first tagged with the main hadronic-decay modes D̅^0→ K^+π^-, D̅^0→ K^+π^-π^0 and D̅^0→ K^+π^-π^+π^-, and the selected candidates are referred to as the single tag (ST) sample. Here and throughout this letter, charge conjugations are always implied. Then, the signal processes D^0→ωγ' and D^0→γγ' are searched for in the system recoiling against the ST D̅^0 meson, and the selected candidates are denoted as the DT sample. Here, ω is reconstructed through its decay ω→π^+π^-π^0, π^0→γγ, and γ' is missing in the detector. The BFs of D^0→ωγ' and D^0→γγ' are calculated by
ℬ(D^0→ω(γ) γ')= N_sig/ϵ̂/ℬ_int∑_i N_i^ST,
with ∑_i N_i^ST=(6306.7±2.8)×10^3 <cit.> and the effective efficiency
ϵ̂=∑_i ϵ_i^DT/ϵ_i^ST×N_i^ST/∑_i N_i^ST,
where i indicates each mode of D̅^0→hadrons, N_sig is the signal yield of the massless dark photon in data, N_i^ST is the ST yield of D̅^0 meson samples in data, ϵ_i^ST is the ST efficiency of D̅^0→hadrons, ϵ_i^DT is the DT efficiency of D̅^0→hadrons, D^0→ω(γ)γ', ℬ_int=ℬ(ω→π^+π^-π^0)×ℬ(π^0→γγ) is obtained from Particle Data Group <cit.> for D^0→ωγ' and ℬ_int=1 for D^0→γγ'.
The selection criteria of ST samples, the ST yield N_i^ST, and the ST efficiency ϵ_i^ST can be found in Ref. <cit.>. The selection criteria of D^0→ωγ' and D^0→γγ', based on the tagged D̅^0 meson samples, are described below. To select D^0→γγ', no additional charged track is allowed.
The good charged track, particle identification (PID), and photon candidates are selected with the same strategy as outlined in Ref. <cit.>.
To select D^0→ωγ', only events with exactly two selected charged tracks, both identified as pions with zero net charge, are retained for further analysis.
There should be at least one photon with energy larger than 0.5 GeV for D^0→γγ' and at least two photons for D^0→ωγ', where the two photons with the minimum χ^2 value of the kinematic fit <cit.> constraining M_γγ to the nominal π^0 mass are regarded as the correct photons from the π^0 meson. To select the ω meson in the data samples, the invariant mass of the two photons M_γγ before the kinematic fit must be in the region of [0.115, 0.150] GeV/c^2, and the invariant mass M_π^+π^-π^0 of the ω candidate is required to be in the region of [0.700, 0.850] GeV/c^2. To further reduce the non-ω background, a kinematic fit <cit.> constraining M_γγ to the nominal π^0 mass and M_π^+π^-π^0 to the nominal ω mass is performed to obtain the χ^2_2C value, which is required to be less than 44, optimized with the Punzi-optimization method <cit.>. To suppress the background with additional photons or π^0, the total energy of photon candidates other than those from the π^0 (γ) and the D̅^0 meson (E^tot_oth.γ) is required to be less than 0.1 GeV for D^0→ωγ' (D^0→γγ'). After these selections, there may still be some background particles flying toward the endcap of the detector that cannot be effectively detected <cit.>, so a requirement on the recoil angle of the D̅^0ω (D̅^0γ) system is applied to veto these associated background events. The cosine of the recoil angle is defined as cosθ^recoil_D̅ω(γ)=|p⃗_cms - p⃗_D̅^0 - p⃗_ω(γ)|_z/|p⃗_cms - p⃗_D̅^0 - p⃗_ω(γ)|, where p⃗_cms is the momentum of the e^+e^- center-of-mass system, p⃗_D̅^0 is the reconstructed momentum of the D̅^0 meson, p⃗_ω(γ) is the reconstructed momentum of the ω (γ), and the subscript z refers to the z-component. To suppress these background events, a requirement of |cosθ_D̅ω(γ)^recoil|<0.7 is applied.
With the above selection criteria,
the effective efficiency is estimated from the MC samples, which is ϵ̂=(15.98±0.02)% for D^0→ωγ' and ϵ̂=(52.18±0.05)% for D^0→γγ'.
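To make the branching-fraction formula above concrete, the short sketch below evaluates the BF corresponding to a given signal yield using the quoted effective efficiencies and total ST yield; the intermediate branching fractions are approximate world-average values and the unit signal yield is purely illustrative, not a measured input.

```python
# Quoted analysis inputs
N_ST_total = 6306.7e3        # total single-tag Dbar0 yield
eff_omega  = 0.1598          # effective efficiency, D0 -> omega gamma'
eff_gamma  = 0.5218          # effective efficiency, D0 -> gamma gamma'

# Intermediate branching fractions (approximate PDG values; assumption)
B_int_omega = 0.893 * 0.988  # B(omega -> pi+ pi- pi0) x B(pi0 -> gamma gamma)
B_int_gamma = 1.0            # no intermediate decay for D0 -> gamma gamma'

def branching_fraction(n_sig, eff, b_int):
    """BF = N_sig / (eff_hat * B_int * sum_i N_i^ST), following the equation above."""
    return n_sig / (eff * b_int * N_ST_total)

# Branching fraction corresponding to one signal event (sensitivity scale)
print(branching_fraction(1.0, eff_omega, B_int_omega))   # ~1.1e-6
print(branching_fraction(1.0, eff_gamma, B_int_gamma))   # ~3.0e-7
```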
The main background after the selections comes from the K^0_L associated background events, such as D^0→ω K^0_L for D^0→ωγ' and D^0→π^0 K^0_L for D^0→γγ'.
The signals of the massless dark photon are extracted from an unbinned extended maximum likelihood fit on the distribution of the square of the missing mass, M^2_miss, defined as
M^2_miss=|p_cms - p_D̅^0 - p_ω(γ)|^2/c^4,
where p_cms is the four-momentum of the e^+e^- center-of-mass system in the laboratory frame, p_ω(γ) is the kinematic fitted (reconstructed) four-momentum of ω(γ), p_D̅^0 is the four-momentum of the D̅^0 meson, achieved by the kinematic fit <cit.> constraining M_γγ to the nominal π^0 mass and M_K^+π^-, M_K^+π^-π^0, M_K^+π^-π^+π^- to the nominal D̅^0 meson mass. In the fit, the background is separated into the K^0_L-related and non-K^0_L backgrounds <cit.>.
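As an illustration of this definition, a minimal sketch of the missing-mass-squared computation is given below in natural units (c = 1), with four-momenta represented as (E, px, py, pz) arrays in GeV; the numerical momenta are placeholders rather than data.

```python
import numpy as np

def minkowski_mass2(p):
    """Invariant mass squared E^2 - |p_vec|^2 of a four-momentum (E, px, py, pz)."""
    return p[0] ** 2 - np.sum(p[1:] ** 2)

def m2_miss(p_cms, p_dbar0, p_rec):
    """M^2_miss = |p_cms - p_Dbar0 - p_omega(gamma)|^2 with c = 1."""
    return minkowski_mass2(p_cms - p_dbar0 - p_rec)

# Placeholder four-momenta in GeV (illustrative only)
p_cms   = np.array([3.773, 0.0, 0.0, 0.0])       # e+e- center-of-mass system
p_dbar0 = np.array([1.887, 0.10, 0.05, -0.20])   # kinematically fitted Dbar0
p_omega = np.array([1.000, -0.10, -0.05, 0.20])  # reconstructed omega candidate

print(m2_miss(p_cms, p_dbar0, p_omega))  # signal peaks near zero for a massless gamma'
```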
The background shape is derived from the inclusive MC sample <cit.>, with the number of non-K^0_L background events assumed to follow a Gaussian distribution and constrained by the MC simulation (referred to as the Gaussian constraint), while the number of K^0_L-related background events is left as a floating parameter.
The signal is derived from the simulated shape convolved with a Gaussian function G(μ,σ), where μ and σ are constrained to the values obtained from the control samples D^0→ω K^0_S (D^0→π^0 K^0_S) for D^0→ωγ' (D^0→γγ'). The fit results are shown in FIG <ref>, with the massless dark photon signal yields N_sig=-15±8 for D^0→ωγ' and N_sig=-6±4 for D^0→γγ'.
The systematic uncertainty sources for the BF measurement include ST yield, intermediate BF, signal generator, DT signal efficiency, and signal extraction. With the DT method, several systematic uncertainties associated with the ST selection can be canceled without impacting the BF measurement. The uncertainty of ST yield is assigned as 0.1% <cit.>.
The uncertainty of the generator is estimated from the efficiency difference compared with a flat angular generation of γ' (phase space model), which is 1.3% (0.6%) for D^0→ωγ' (D^0→γγ').
The uncertainty of the BF of ω→π^+π^-π^0 is 0.8% and that of π^0→γγ is negligible <cit.>.
The uncertainty of photon detection is assigned as 1.0% per photon <cit.>. The uncertainties of pion tracking and PID are studied from the control sample J/ψ→π^+π^-π^0, which is assigned as 0.9% for tracking and 1.1% for PID of the two pions.
The uncertainties of other selections are estimated from the control sample D^0→ω K^0_S (D^0→π^0 K^0_S) for D^0→ωγ' (D^0→γγ'), where the K^0_S meson is regarded as a missing particle. For D^0→ωγ', the uncertainty is 0.2% for the M_γγ selection, negligible for the M_π^+π^-π^0 selection, 3.1% for the χ^2_2C selection, 5.9% for the E^tot_oth.γ selection and 1.1% for the cosθ^recoil_D̅ω selection, respectively. For D^0→γγ', the uncertainty is 2.4% for the E^tot_oth.γ selection and is 0.6% for the cosθ^recoil_D̅γ selection, respectively. The total systematic uncertainty is calculated by summing up all sources in quadrature, yielding 7.4% for D^0→ωγ' and 2.7% for D^0→γγ'.
For the uncertainty from the signal extraction, the signal shape is convolved with a Gaussian function to describe the difference where the parameter of the Gaussian function is in a Gaussian constraint within its uncertainty, the K^0_L background yield is floating in the fit, and the non-K^0_L background yield is also floating in a Gaussian constraint within its uncertainty. The uncertainty of signal extraction is negligible.
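The quadrature combination quoted above can be reproduced with the short check below; the assumption that the 1.0% per-photon uncertainty enters as 2.0% for D^0→ωγ' (two photons from the π^0) and 1.0% for D^0→γγ' is ours.

```python
def total_in_quadrature(components):
    """Combine independent relative systematic uncertainties (in %) in quadrature."""
    return sum(c ** 2 for c in components) ** 0.5

# D0 -> omega gamma': ST yield, generator, B(omega -> 3pi), photon detection (2 x 1.0%),
# tracking, PID, M_gg window, chi2_2C, E_oth, recoil-angle selections (all in %)
omega_channel = [0.1, 1.3, 0.8, 2.0, 0.9, 1.1, 0.2, 3.1, 5.9, 1.1]
# D0 -> gamma gamma': ST yield, generator, photon detection, E_oth, recoil angle (in %)
gamma_channel = [0.1, 0.6, 1.0, 2.4, 0.6]

print(total_in_quadrature(omega_channel))  # ~7.4, as quoted
print(total_in_quadrature(gamma_channel))  # ~2.7, as quoted
```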
Since no significant excess of signal above the background is observed, ULs on the BFs are set using a Bayesian approach following Ref. <cit.>, where the BF is calculated by Eq. (<ref>) and the systematic uncertainty is incorporated with the method in Refs. <cit.>. The UL on the BF at the 90% CL is obtained by integrating the likelihood distribution over the signal hypothesis up to the 90% quantile, yielding ℬ(D^0→ωγ')<1.1×10^-5 and ℬ(D^0→γγ')<2.0×10^-6.
Note that in the D^0→ωγ' measurement, the non-ω contribution can not be fully removed, and the current UL of ℬ(D^0→ωγ') is a conservative estimation.
Since the BF of massless dark photon production is related to |ℂ|^2+|ℂ_5|^2 and directly includes the NP energy scale <cit.>, the constraint on |ℂ|^2+|ℂ_5|^2 can be performed as well.
The UL of |ℂ|^2+|ℂ_5|^2 is shown in FIG <ref>. For D^0→ωγ', one finds |ℂ|^2+|ℂ_5|^2<8.2×10^-17 GeV^-2, reaching the DM and VS allowed region <cit.> for the first time.
The channel D^0→γγ' has a better UL of the BF but a worse constraint on cuγ' coupling due to an additional electromagnetic vertex in FIG <ref>(b).
The two-dimensional constraint on the NP energy scale Λ_NP and the up-type dimensionless coefficient C^U_12 is given in FIG <ref> (b). Our results currently represent the most stringent constraint on this NP parameter space. Assuming |C^U_12|=1, our results exclude NP energy scales below 138 TeV in the dark sector, approximately ten times the energy reached at the Large Hadron Collider <cit.>, suggesting that directly producing the superheavy particles associated with the cuγ' coupling is beyond the reach of present colliders. Note that the value of |C^U_12| is model-dependent.
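The conversion of this limit into a lower bound on the NP energy scale can be checked with the short computation below, which assumes a real coefficient C^U_12 (so that ℂ_5 = 0) and |C^U_12| = 1, using the definitions of ℂ and ℂ_5 given earlier.

```python
v_higgs = 246.2     # Higgs vacuum expectation value in GeV
limit   = 8.2e-17   # 90% CL upper limit on |C|^2 + |C_5|^2 in GeV^-2
c12     = 1.0       # assumed real up-type coefficient, so C_5 = 0

# |C| = c12 * v / (sqrt(2) * Lambda^2)  =>  Lambda = ((c12 * v / sqrt(2))^2 / limit)^(1/4)
lambda_np_gev = ((c12 * v_higgs / 2 ** 0.5) ** 2 / limit) ** 0.25
print(lambda_np_gev / 1e3)  # ~138 TeV, consistent with the quoted exclusion
```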
In the dimension-six operator of the massless dark photon (Eq. (<ref>)), there are three terms with the same NP energy scale but different types of dimensionless coefficients C^U_jk, C^D_jk, C^L_jk. For each type, the massless dark photon can couple with a pair of SM particles involving identical or different flavors. In principle, these dimensionless coefficients are not necessarily related to one another.
To compare different couplings, we follow the method of Ref. <cit.> and construct a uniform variable F_NP=Λ_NP/√(|C^i_jk|ν/(√(2)m))
to perform the comparison on the first two generations, where i=U,D,L and |C^i_jk|ν/(√(2)m) indicates the strength of the massless dark photon coupling to the SM particles. Here, m is the mass of the heavier SM particle in the coupling, and F_NP represents the NP energy scale when |C^i_jk| equals the Higgs-fermion coupling √(2)m/ν <cit.>. The summary of the constraints on the different couplings is shown in FIG <ref>, depicting only the best constraint on each coupling.
The constraints on the massless dark photon coupled to same-flavor SM particles come mainly from astrophysics and cosmology <cit.>, while the constraints for different flavors come mainly from laboratory experiments <cit.>. This Letter provides the most stringent constraint on the cuγ' coupling, which plays a unique role in connecting the charm sector to the dark sector.
In summary,
we search for the massless dark photon and constrain the NP scale through the cuγ' coupling in D^0→ωγ' and D^0→γγ' for the first time. Based on 7.9 fb^-1 of e^+e^- collision data at √(s)=3.773 GeV, no significant signals are observed. Constraints on the BFs and on the NP energy scale of massless dark photon production are given. The result of D^0→ωγ' gives the most stringent constraint on the cuγ' coupling to date, probing the DM and VS allowed parameter space for the first time. The result of D^0→γγ' has a 5.5 times better UL on the BF than D^0→ωγ' but a weaker constraint on the cuγ' coupling.
Acknowledgement
The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key R&D Program of China under Contracts Nos. 2023YFA1606000, 2020YFA0406400, 2020YFA0406300; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11635010, 11735014, 11935015, 11935016, 11935018, 12025502, 12035009, 12035013, 12061131003, 12175321, 12192260, 12192261, 12192262, 12192263, 12192264, 12192265, 12221005, 12225509, 12235017, 12361141819; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contract No. U1832207, U1932101; 100 Talents Program of CAS; The Institute of Nuclear and Particle Physics (INPAC) and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. FOR5327, GRK 2149; Istituto Nazionale di Fisica Nucleare, Italy; Knut and Alice Wallenberg Foundation under Contracts Nos. 2021.0174, 2021.0299; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335; National Science and Technology fund of Mongolia; National Science Research and Innovation Fund (NSRF) via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation of Thailand under Contracts Nos. B16F640076, B50G670107; Polish National Science Centre under Contract No. 2019/35/O/ST2/02907; Swedish Research Council under Contract No. 2019.04595; The Swedish Foundation for International Cooperation in Research and Higher Education under Contract No. CH2018-7756; U. S. Department of Energy under Contract No. DE-FG02-05ER41374.
|
http://arxiv.org/abs/2409.02574v1 | 20240904094827 | Solving Video Inverse Problems Using Image Diffusion Models | [
"Taesung Kwon",
"Jong Chul Ye"
] | cs.CV | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
BMI Prediction from Handwritten English Characters Using a Convolutional Neural Network
N. T. Diba1, N. Akter2, S. A. H. Chowdhury3
Dept. of Electronics & Telecommunication Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
[email protected],
[email protected], [email protected]
J. E. Giti4
Dept. of Electrical & Electronic Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
[email protected]
September 9, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Recently, diffusion model-based inverse problem solvers (DIS) have emerged as state-of-the-art approaches for addressing inverse problems, including image super-resolution, deblurring, inpainting, etc.
However, their application to video inverse problems arising from spatio-temporal degradation remains largely unexplored due to the challenges in training video diffusion models.
To address this issue, here we introduce an innovative video inverse solver that leverages only image diffusion models.
Specifically, by
drawing inspiration from the success of the recent decomposed diffusion sampler (DDS),
our method treats the time dimension of a video as the batch dimension of image diffusion models and solves spatio-temporal optimization problems within denoised spatio-temporal batches derived from each image diffusion model.
Moreover, we introduce a batch-consistent diffusion sampling strategy that encourages consistency across batches by synchronizing the stochastic noise components in image diffusion models.
Our approach synergistically combines batch-consistent sampling with simultaneous optimization of denoised spatio-temporal batches at each reverse diffusion step, resulting in a novel and efficient diffusion sampling strategy for video inverse problems.
Experimental results demonstrate that our method effectively addresses various spatio-temporal degradations in video inverse problems, achieving state-of-the-art reconstructions.
Project page:
§ INTRODUCTION
Diffusion models <cit.> represent the state of the art in generative modeling: by learning the underlying data distribution p(x), they produce realistic and coherent data samples from the learned distribution p_θ(x).
In the context of Bayesian inference, the parameterized prior distribution p_θ(x) can be disentangled from the likelihood p(y|x), which denotes the probability of observing y given x. This separation facilitates the derivation of the posterior distribution p_θ(x|y) ∝ p_θ(x)p(y|x).
Diffusion model-based inverse problem solvers (DIS) <cit.> leverage this property, enabling unconditional diffusion models to solve a wide range of inverse problems.
They achieve this by conditional sampling from the posterior distribution p_θ(x|y), effectively integrating information from both the forward physics model and the measurement y.
This approach allows for sophisticated and precise solutions to complex inverse problems, demonstrating the power and flexibility of diffusion models in practical applications.
Despite extensive DIS research on a wide range of image inverse problems such as super-resolution, colorization, inpainting, compressed sensing, deblurring, and so on <cit.>, the application of these approaches to video inverse problems, particularly those involving spatio-temporal degradation, has received relatively less attention.
Specifically,
in time-varying data acquisition systems, various forms of motion blur often arise due to the camera or object motions <cit.>, which can be modeled as a temporal PSF convolution of motion dynamics. These are often associated with spatial degradation caused by noise, camera defocus, and other factors.
Formally, the spatio-temporal degradation process can be formulated as:
Y = 𝒜(X) + W
with
X = [ x[1] ⋯ x[N] ], Y = [ y[1] ⋯ y[N] ], W = [ w[1] ⋯ w[N] ],
where x[n], y[n] and w[n] denote the n-th frame ground-truth image,
measurement, and additive noise, respectively; N is the number of temporal frames, and
𝒜 refers to the operator that describes the spatio-temporal degradation process.
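For concreteness, a minimal sketch of one possible operator 𝒜 is given below: a temporal PSF (motion-blur) convolution across frames followed by a per-frame spatial Gaussian blur; the kernel length, blur width, and noise level are illustrative assumptions rather than choices made in the paper.

```python
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter

def spatio_temporal_degradation(x, temporal_psf, spatial_sigma=1.5):
    """Apply A(X): temporal PSF convolution along the frame axis, then a per-frame
    spatial blur. The input video volume x has shape (N, H, W)."""
    y = convolve1d(x, temporal_psf, axis=0, mode='nearest')           # temporal motion blur
    y = gaussian_filter(y, sigma=(0.0, spatial_sigma, spatial_sigma)) # spatial degradation
    return y

rng = np.random.default_rng(0)
x = rng.random((16, 64, 64))                     # N = 16 frames of 64 x 64 images
temporal_psf = np.ones(5) / 5.0                  # uniform 5-frame motion PSF
w = 0.01 * rng.standard_normal(x.shape)          # additive noise
y = spatio_temporal_degradation(x, temporal_psf) + w   # measurement Y = A(X) + W
```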
The spatio-temporal degradation introduces complexities that image diffusion priors cannot fully capture, as image diffusion priors are primarily designed to handle spatial features rather than temporal dynamics.
Employing video diffusion models <cit.> could address these issues, but
poses significant implementation challenges for video inverse problems, due to the difficulty
of training video diffusion models for various applications.
Contrary to the common belief that a pre-trained video diffusion model is necessary for solving video inverse problems, here we propose a radically different method that addresses video inverse problems using only image diffusion models.
Inspired by the success of the decomposed diffusion sampler (DDS) <cit.>, which simplifies DIS by formulating it as a Krylov subspace-based optimization problem for denoised images via Tweedie's formula at each reverse sampling step,
we treat the time dimension of a video as the batch dimension of image diffusion models
and solve spatio-temporal optimization problems using the batch of denoised temporal frames from image diffusion models.
However, treating each frame of the video as a separate sample in the batch dimension can lead to inconsistencies between temporal frames. To mitigate this, we introduce the batch-consistent sampling strategy
that controls the stochastic directional component (e.g., initial noise or additive noise) of each image diffusion model during the reverse sampling process, encouraging the temporal consistency along the batch dimension.
By synergistically combining batch-consistent sampling with the simultaneous optimization of the spatio-temporal denoised batch, our approach effectively addresses a range of spatio-temporal inverse problems, including spatial deblurring, super-resolution, and inpainting.
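As a rough illustration of this idea (a sketch, not the exact procedure developed later in the paper), the snippet below shows how the stochastic components of a reverse diffusion step can be synchronized across temporal frames by treating them as an image batch and broadcasting a single noise realization; the array shapes and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, W = 8, 64, 64                        # N video frames treated as a batch of images

# Batch-consistent initialization: every frame starts from the SAME x_T noise.
x_T = np.repeat(rng.standard_normal((1, H, W)), N, axis=0)   # shape (N, H, W)

def shared_step_noise(n_frames, frame_shape, rng):
    """Draw one noise image and broadcast it to all frames (the batch dimension)."""
    eps = rng.standard_normal((1,) + frame_shape)
    return np.repeat(eps, n_frames, axis=0)

# Inside each reverse sampling step, the additive stochastic term for every frame
# is this shared noise rather than an independent draw per frame.
eps_t = shared_step_noise(N, (H, W), rng)
assert np.allclose(eps_t[0], eps_t[-1])    # identical stochastic component in each frame
```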
Our contribution can be summarized as follows.
* We introduce an innovative video inverse problem solver using pre-trained image diffusion models by solving spatio-temporal optimization problems within the batch of denoised frames.
* We develop a batch-consistent sampling strategy to ensure temporal consistency by synchronizing stochastic noise components in image diffusion models.
* Extensive experiments confirm that our method generates state-of-the-art results for various video inverse problems.
§ BACKGROUND
Diffusion models.
Diffusion models <cit.> attempt to model the data distribution p_data(x) based on a latent variable model
p_θ(x_0) = ∫ p_θ(x_0:T) dx_1:T, where p_θ(x_0:T) := p_θ(x_T) ∏_t=1^T p_θ^(t)(x_t-1|x_t)
where the x_1:T are noisy latent variables defined by the Markov chain with Gaussian transitions
q(x_t|x_t-1) = 𝒩(x_t|√(1 - β_t) x_t-1, β_t I),
q(x_t|x_0) = 𝒩(x_t|√(α̅_t) x_0, (1 - α̅_t) I).
Here, the noise schedule β_t is an increasing sequence in t, with α̅_t := ∏_i=1^t α_i, α_i := 1 - β_i.
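The forward process above can be simulated directly; the linear β-schedule and the image size below are illustrative choices.

```python
import numpy as np

T = 1000
beta = np.linspace(1e-4, 0.02, T)   # increasing noise schedule (illustrative)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.random((64, 64))           # a clean image (placeholder data)
x_t = q_sample(x0, t=500, rng=rng)  # progressively noisier as t grows
```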
Training of diffusion models amounts to training a multi-noise level residual denoiser:
min_θ 𝔼_x_t ∼ q(x_t|x_0), x_0 ∼ p_data(x_0), ϵ∼𝒩( 0, I)[‖ϵ_θ^(t)(x_t) - ϵ‖_2^2] .
Then, sampling from (<ref>) can be implemented by ancestral sampling, which iteratively performs
x_t-1 = 1/√(α_t)(x_t - (1 - α_t)/√(1 - α̅_t) ϵ_θ^*^(t)(x_t)) + β̃_t ϵ
where β̃_t := ((1 - α̅_t-1)/(1 - α̅_t)) β_t
and
θ^* refers to the optimized parameter from (<ref>).
On the other hand, DDIM <cit.> accelerates the sampling based on a
non-Markovian assumption. Specifically, the sampling iteratively performs
x_t-1 = √(α̅_t-1) x̂_t + √(1-α̅_t-1) ϵ̂_t
where
x̂_t := 1/√(α̅_t)(x_t - √(1 - α̅_t) ϵ_θ^*^(t)(x_t)), ϵ̂_t := (√(1-α̅_t-1-η^2β̃^2_t) ϵ_θ^*^(t)(x_t) + ηβ̃_t ϵ)/√(1-α̅_t-1).
Here, x̂_t is the denoised estimate of x_t that is derived from Tweedie's formula <cit.>.
Accordingly, DDIM sampling can be expressed as a two-step manifold transition: (i) the noisy sample x_t∈ℳ_t transits to the clean manifold ℳ by deterministic estimation using Tweedie's formula, and (ii) a subsequent transition from the clean manifold to the next noisy manifold ℳ_t-1 occurs by adding the noise ϵ̂_t,
which is composed of the deterministic noise ϵ_θ^*^(t)(x_t) and the stochastic noise ϵ.
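For concreteness, a minimal sketch of one such DDIM reverse step built around Tweedie's formula is shown below; the ε-network call and the scalar schedule values ᾱ_t and β̃_t are placeholders, not the pre-trained ADM implementation used later.

```python
import math
import torch

@torch.no_grad()
def ddim_step(x_t, t, eps_net, a_bar_t, a_bar_prev, beta_tilde_t, eta=0.0):
    """One DDIM reverse step x_t -> x_{t-1} built around Tweedie's denoised estimate."""
    eps = eps_net(x_t, t)                                                # predicted noise eps_theta
    x0_hat = (x_t - math.sqrt(1 - a_bar_t) * eps) / math.sqrt(a_bar_t)   # Tweedie denoised estimate
    sigma = eta * beta_tilde_t                                           # stochastic scale (eta=0: deterministic DDIM)
    dir_xt = math.sqrt(max(1 - a_bar_prev - sigma ** 2, 0.0)) * eps      # deterministic noise direction
    noise = sigma * torch.randn_like(x_t) if eta > 0 else 0.0            # stochastic re-noising
    return math.sqrt(a_bar_prev) * x0_hat + dir_xt + noise, x0_hat
```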
Diffusion model-based inverse problem solvers.
For a given loss function ℓ(x), which often stems from the likelihood
for
measurement consistency, the goal of DIS is to address the following optimization problem
min_x∈ℳ ℓ(x)
where
ℳ represents the clean data manifold sampled from the unconditional distribution p_0(x).
Consequently, it is essential to find a way that minimizes the cost while also identifying the correct manifold.
Recently, <cit.> proposed a general technique called diffusion posterior sampling (DPS), where the updated estimate from the noisy sample _t ∈_t is constrained to stay on the same noisy manifold _t.
This is achieved by computing the manifold constrained gradient (MCG) <cit.> on a noisy sample _t∈_t.
The resulting algorithm can be stated as follows:
x_t-1 = √(α̅_t-1)(x̂_t - γ_t ∇_x_tℓ(x̂_t)) + √(1-α̅_t-1) ϵ̂_t,
where γ_t>0 denotes the step size.
Under the linear manifold assumption <cit.>, this allows precise transition to _t-1.
Unfortunately, the computation of MCG requires computationally expensive backpropagation and is often unstable.
In a subsequent work, <cit.> shows that
under the same linear manifold assumption in DPS,
the one-step update x̂_t - γ_t ∇_x̂_tℓ(x̂_t)
is guaranteed to remain within a linear subspace, thus obviating the need for explicit computation of the MCG and
leading to a simpler approximation:
x_t-1 ≃ √(α̅_t-1)(x̂_t - γ_t ∇_x̂_tℓ(x̂_t)) + √(1-α̅_t-1) ϵ̂_t.
Furthermore, instead of using a one-step gradient update, <cit.> demonstrated that multi-step update using Krylov subspace methods, such as the conjugate gradient (CG) method, guarantees that the intermediate steps lie in the linear subspace. This approach improves the convergence of the optimization problem without incurring additional neural function evaluations (NFE). This method, often referred to as decomposed diffusion sampling (DDS), bypasses the computation of the MCG and improves the convergence speed, making it stable and suitable for large-scale medical imaging inverse problems <cit.>.
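A rough sketch of this data-consistency step is given below: a few conjugate-gradient iterations on the normal equations, initialized at the Tweedie denoised estimate. The operator handles A and A^T, the iteration count, and the tolerance are illustrative choices, not the exact values from the cited works.

```python
import torch

def cg_data_consistency(x0_hat, y, A, AT, n_iter=5):
    """Few-step conjugate gradient on  A^T A x = A^T y,  started from the
    Tweedie denoised estimate x0_hat (DDS-style data-consistency update)."""
    x = x0_hat.clone()
    r = AT(y) - AT(A(x))              # residual of the normal equations
    p = r.clone()
    rs_old = (r * r).sum()
    for _ in range(n_iter):
        Ap = AT(A(p))
        alpha = rs_old / (p * Ap).sum()
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = (r * r).sum()
        if rs_new.sqrt() < 1e-8:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```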
§ VIDEO INVERSE SOLVER USING IMAGE DIFFUSION MODELS
§.§ Problem formulation
Using the forward model (<ref>) and the optimization framework in (<ref>), the video inverse problem can be formulated as
min_X∈ℳ ℓ(X) := ‖Y - 𝒜(X)‖^2
where X denotes the spatio-temporal volume of the clean image composed of N temporal frames as defined in (<ref>), and
ℳ represents the clean video manifold sampled from the unconditional distribution p_0(X).
Then, a naive application of the one-step gradient within the DDS framework can be formulated by
X_t-1 = √(α̅_t-1)(X̂_t - γ_t ∇_X̂_tℓ(X̂_t)) + √(1-α̅_t-1) Ê_t,
where X̂_t and Ê_t refer to Tweedie's formula and the noise in the spatio-temporal volume, respectively, which are defined by
X̂_t := 1/√(α̅_t)(X_t - √(1 - α̅_t) ℰ_θ^*^(t)(X_t)), Ê_t := (√(1-α̅_t-1-η^2β̃^2_t) ℰ_θ^*^(t)(X_t) + ηβ̃_t ℰ)/√(1-α̅_t-1).
Here, X_t refers to the spatio-temporal volume at the t-th reverse diffusion step and
ℰ ∼ ∏_i=1^N 𝒩(0, I).
Although the formula (<ref>) is a direct extension of the image-domain counterpart (<ref>),
the main technical challenge lies in training the video diffusion model ℰ_θ^(t), which is required for the formula (<ref>). Specifically, the video diffusion model is trained by
min_θ 𝔼_X_t ∼ q(X_t|X_0), X_0 ∼ p_data(X_0), ℰ∼∏_i=1^N𝒩(0, I)[‖ℰ_θ^(t)(X_t) - ℰ‖_2^2],
which requires large-scale video training data and computational resources beyond the scale of training image diffusion models.
Therefore, the main research motivation is to propose an innovative method that can bypass the need for computationally extensive video diffusion models.
§.§ Batch-consistent reconstruction with DDS
Consider
a batch of 2D diffusion models along the temporal direction:
E_θ^(t)(X_t) := [ ϵ_θ^*^(t)(x_t[1]) ⋯ ϵ_θ^*^(t)(x_t[N]) ]
where ϵ_θ^*^(t) represents an image diffusion model.
Suppose that E_θ^(t)(X_t) is used in (<ref>).
Since unconditional reverse diffusion is entirely determined by (<ref>), the
generated video
is then
fully controlled by the behavior of the image diffusion models.
Thus, we investigate the limitations of using a batch of image diffusion models compared to using a video diffusion model
and explore ways to mitigate these limitations.
Recall that for the reverse sampling of each image diffusion model, the stochastic transitions occur from two sources: (i) the initialization and (ii) re-noising.
Accordingly, in batch-independent sampling, where each image diffusion model is initialized with independent random noise and re-noised with independent additive noise, it is difficult to impose any temporal consistency in video generation so that
each generated temporal frame may represent different content from each other (see Fig. <ref>(a)).
Conversely, in batch-consistent sampling, where each image diffusion model is initialized with the same noise and re-noised with the same additive noise, the generated frames from the unconditional diffusion model should be trivially reduced to identical images (see Fig. <ref>(b)). This dilemma is why separate video diffusion model training using (<ref>) was considered necessary for effective video generation.
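The two sampling modes can be made concrete with a small sketch (shapes and frame count are purely illustrative):

```python
import torch

n_frames, c, h, w = 16, 3, 256, 256

# Batch-independent sampling: every frame starts from its own noise, so
# unconditional generation yields unrelated content per frame.
x_T_independent = torch.randn(n_frames, c, h, w)

# Batch-consistent sampling: a single noise sample is shared across the batch
# (time) dimension, so unconditional generation collapses to identical frames.
shared = torch.randn(1, c, h, w)
x_T_consistent = shared.repeat(n_frames, 1, 1, 1)
```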
One of the most important contributions of this paper is demonstrating that the aforementioned dilemma can be readily mitigated in conditional diffusion sampling originated from inverse problems. Specifically, inspired by the DDS formulation in (<ref>), we propose a method that employs a batch-consistent sampling scheme to ensure temporal consistency and introduces temporal diversity from the conditioning steps.
More specifically, the denoised image for each frame is computed individually using Tweedie's formula via image diffusion models:
X̂_t^b := 1/√(α̅_t)(X_t - √(1 - α̅_t) E_θ^*^(t)(X_t)),
where we use the superscript b to represent the batch-consistency and
E_θ^*^(t) is a batch of image diffusion models defined by (<ref>).
Here, the image diffusion models are initialized with the same random noises to ensure temporal consistency.
Subsequently, the denoised spatio-temporal batch is perturbed as a whole by applying the l-step
conjugate gradient (CG) to optimize
the data consistency term from the spatio-temporal degradation.
This can be formally represented by
X̄_t := argmin_X∈X̂_t^b + 𝒦_l ‖Y - 𝒜(X)‖^2
where 𝒦_l denotes the l-dimensional Krylov subspace
associated with the given inverse problem <cit.>.
The multistep CG can
diversify each temporal frame according to the condition and achieve faster convergence than a single gradient step.
The resulting solution ensures that the loss function from the spatio-temporal degradation process can be minimized with coherent but frame-by-frame distinct reconstructions.
Finally, the reconstructed spatio-temporal volume from the CG is renoised with additive noise as:
X_t-1 = √(α̅_t-1) X̄_t + √(1-α̅_t-1) Ê_t^b,
where
Ê_t^b := (√(1-α̅_t-1-η^2β̃^2_t) E_θ^*^(t)(X_t) + ηβ̃_t ℰ^b)/√(1-α̅_t-1).
Here, ℰ^b denotes the additive random noise drawn from 𝒩(0, I).
In contrast
to ℰ in (<ref>), which is composed of frame-independent random noises,
we impose batch consistency by adding the same random noises to each temporal frame to ensure temporal consistency.
In summary, the proposed batch-consistent sampling and frame-dependent perturbation through multistep CG ensure that the sampling trajectory of each frame, starting from the same noise initialization, gradually diverges from each other during reverse sampling to meet the spatio-temporal
data consistency. The
geometric illustration of the sampling path evolution is shown in Fig. <ref>(c). The detailed illustration of the intermediate sampling process of our method is shown in Fig. <ref>.
Additionally, the pseudocode implementation is given in Algorithm <ref>.
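Putting the pieces together, a condensed sketch of the full reverse-sampling loop is given below. It is an illustrative rendering of the algorithm that reuses the CG helper sketched earlier; the schedules, operator handles, and hyperparameters are placeholders rather than the released implementation.

```python
import math
import torch

@torch.no_grad()
def batch_consistent_dds(y, A, AT, eps_net, alpha_bars, betas_tilde, eta=0.15, cg_iter=5):
    """Video inverse solver built on an image diffusion model: the time axis is the
    batch axis, noise is shared across frames (batch-consistent sampling), and data
    consistency is enforced by CG on the denoised spatio-temporal volume."""
    x_shape = AT(y).shape                                         # reconstruction (video) shape
    n_frames = x_shape[0]
    X = torch.randn(1, *x_shape[1:]).repeat(n_frames, 1, 1, 1)    # same initialization per frame
    for t in reversed(range(len(alpha_bars))):
        a_bar = alpha_bars[t]
        a_prev = alpha_bars[t - 1] if t > 0 else 1.0
        eps = eps_net(X, t)                                       # image model applied frame-wise
        X0 = (X - math.sqrt(1 - a_bar) * eps) / math.sqrt(a_bar)  # Tweedie denoised batch
        Xbar = cg_data_consistency(X0, y, A, AT, n_iter=cg_iter)  # spatio-temporal CG update
        sigma = eta * betas_tilde[t] if t > 0 else 0.0
        shared_noise = torch.randn(1, *x_shape[1:]).repeat(n_frames, 1, 1, 1)
        X = (math.sqrt(a_prev) * Xbar
             + math.sqrt(max(1 - a_prev - sigma ** 2, 0.0)) * eps
             + sigma * shared_noise)                              # batch-consistent re-noising
    return X
```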
§ EXPERIMENTS
In this section, we conduct thorough comparison studies to demonstrate the efficacy of the proposed method
in addressing spatio-temporal degradations.
Specifically, we consider two types of loss functions for video inverse problems:
ℓ(X) := ‖Y - 𝒜(X)‖^2, ℓ_TV(X) := ‖Y - 𝒜(X)‖^2 + λ TV(X),
where the first loss is from (<ref>)
and TV(X) denotes the total variation loss along the temporal direction.
Then, classical optimization methods are used as the baselines for comparison to minimize each loss function. Specifically, the stand-alone Conjugate Gradient (CG) method is employed to minimize ℓ(), while the Alternating Direction Method of Multipliers (ADMM) is used to minimize ℓ_TV(). Additionally, diffusion-based methods are utilized as baselines to minimize the loss functions in
(<ref>).
Specifically, DPS <cit.> is used to minimize ℓ(). However, instead of relying on 3D diffusion models, we use 2D image diffusion models, similar to our proposed methods, to ensure that backpropagation for MCG computation can be performed through 2D diffusion models.
Second, we employ DiffusionMBIR <cit.> to minimize ℓ_TV(), also using 2D image diffusion models. Unlike the original DiffusionMBIR, which applies TV along the z-direction, we apply TV along the temporal direction.
To test various spatio-temporal degradations, we select the temporal degradation in time-varying data acquisition systems, which is represented as PSF convolution along temporal dimension <cit.>.
We select three types of PSFs: (i) uniform PSF with widths of 7, (ii) uniform PSF with widths of 13, and (iii) Gaussian PSF with a standard deviation of 1.0. Each kernel is convolved along the temporal dimension with the ground truth video to produce the measurements. Note that convolving uniform PSF with widths of 7 and 13 correspond to averaging 7 and 13 frames, respectively.
Furthermore, we combined temporal degradation and various spatial degradations to demonstrate various combinations of spatio-temporal degradations.
For spatio-temporal degradations, we fix a temporal degradation as a convolving uniform PSF with a width of 7 and add various spatial degradations to the video.
These spatial degradations include (i) deblurring using a Gaussian blur kernel with a standard deviation σ of 2.0, (ii) super-resolution through a 4× average pooling, and (iii) inpainting with random masking at a ratio r of 0.5 (For specific implementation details of degradations, see Appendix <ref>).
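As an illustration of how such a combined operator can be composed, the sketch below chains a width-7 uniform temporal blur with 4× average pooling (the super-resolution case); the replicate padding and this particular PyTorch rendering are assumptions of the sketch, not the released degradation code.

```python
import torch
import torch.nn.functional as F

def temporal_blur(x, width=7):
    """Uniform temporal PSF: average `width` neighbouring frames. x: (N, C, H, W)."""
    n, c, h, w = x.shape
    weight = torch.ones(c, 1, width, 1, 1) / width          # one 1-D kernel per channel
    x = x.permute(1, 0, 2, 3).unsqueeze(0)                   # (1, C, N, H, W)
    x = F.pad(x, (0, 0, 0, 0, width // 2, width // 2), mode="replicate")
    x = F.conv3d(x, weight, groups=c)                        # convolve along the frame axis only
    return x.squeeze(0).permute(1, 0, 2, 3)

def spatio_temporal_degradation(x):
    """A(x): temporal blur followed by 4x average pooling (super-resolution case)."""
    return F.avg_pool2d(temporal_blur(x), kernel_size=4)
```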
We conduct our experiments on the DAVIS dataset <cit.>, which includes a wide variety of videos covering multiple scenarios. The pre-trained unconditional 256×256 image diffusion model from ADM <cit.> is used directly without fine-tuning and additional networks. All videos were normalized to the range [0, 1] and split into 16-frame samples of size 256×256. A total of 338 video samples were used for evaluation. More preprocessing details are described in the Appendix <ref>.
For quantitative comparison, we focus on the following two widely used standard metrics: peak signal-to-noise-ratio (PSNR) and structural similarity index (SSIM) <cit.> with further evaluations with two perceptual metrics - Learned Perceptual Image Patch Similarity (LPIPS) <cit.> and Fréchet Video Distance (FVD) <cit.>. FVD results are displayed scaled by 10^-3 for easy comparison.
For all proposed methods, we employ l = 5, η = 0.15 for 20 NFE in temporal degradation tasks, and l = 5, η = 0.8 for 100 NFE in spatio-temporal degradation tasks unless specified otherwise.
§.§ Results
We present the quantitative results of the temporal degradation tasks in Table. <ref>. The table shows that the proposed method outperforms the baseline methods by large margins in all metrics.
The large margin improvements in FVD indicate that the proposed method successfully solves inverse problems with temporally consistent reconstruction.
Fig. <ref> shows the qualitative reconstruction results for temporal degradations. The proposed method restores much finer details compared to the baselines and demonstrates robustness across various temporal PSFs.
In contrast, as shown in Fig. <ref>, while DPS performs well in reconstructing uniform PSFs with a kernel width of 7, it fails to accurately reconstruct frame intensities as the kernel becomes wider or more complex as shown in the bottom figures, leading to significant drops in Table <ref>.
DiffusionMBIR ensures temporal consistency and performs well for static scenes, but it struggles with dynamic scenes in the video.
In the same context, ADMM-TV produces unsatisfactory results for dynamic scenes.
The results of the spatio-temporal degradations are presented in Table <ref> and Fig. <ref>.
Even with additional spatial degradations, the proposed method consistently outperforms baseline methods.
On the other hand, DPS often produces undesired details, as shown in Fig. <ref>.
DiffusionMBIR fails to restore fine details in dynamic scenes.
Specifically, in the 3^rd row of Fig. <ref>, DiffusionMBIR restores the static mural painting but fails to capture the motion of the person.
This is because
the TV regularizer often disrupts the restoration of dynamic scenes.
In this context, our method ensures temporal consistency without the need for a TV regularizer.
Furthermore, thanks to the consistent performance even at low NFE, the proposed method achieves a dramatic 10× to 50× acceleration in reconstruction time.
For handling temporal degradation with 20 NFE, the proposed diffusion model-based inverse problem solver can now achieve speeds exceeding 1 FPS.
§.§ Ablation study
Effect of CG updates. Experimental results demonstrate that the tangential CG updates in video space on the denoised manifold are a key element in solving spatio-temporal degradations.
Here, we compare the proposed method with a stand-alone CG method to demonstrate its impact within the solver.
We applied the same CG iterations as in the proposed method but excluded the diffusion updates. As shown in Fig. <ref>, while the stand-alone CG method nearly solves the video inverse problem, it leaves residual artifacts, as seen in the first row, or fails to fully resolve spatial degradation, as shown in the second row.
In contrast, the proposed method generates natural and fully resolved frames.
This indicates that the diffusion update in the proposed method refines the unnatural aspects of the CG updates.
Effect of batch-consistent sampling.
Fig. <ref> illustrates the inter-batch difference within the denoised manifold ℳ during the reverse diffusion process. The blue plot shows results from our full method, while the green and orange plots represent results without stochasticity control and with gradient descent (GD) updates instead of conjugate gradient (CG) updates, respectively. Notably, GD converges more slowly than CG.
Our method consistently achieves low inter-batch difference (i.e., high inter-batch similarity), ensuring batch-consistent reconstruction and precise reconstructions. In contrast, the absence of stochasticity control or the use of GD updates results in higher difference (i.e., lower similarity), leading to less consistent sampling. The intermediate samples in Fig. <ref> and reconstruction results in Table <ref> further confirm that our method outperforms the others in producing batch-consistent results.
Further experimental results and ablation studies are illustrated in Appendix <ref>.
§ CONCLUSION
In this work, we introduce an innovative video inverse problem solver that utilizes only image diffusion models. Our method leverages the time dimension of video as the batch dimension in image diffusion models, integrating video inverse optimization within the Tweedie denoised manifold. We combine batch-consistent sampling with video inverse optimization at each reverse diffusion step, resulting in a novel and efficient solution for video inverse problems. Extensive experiments on temporal and spatio-temporal degradations demonstrate that the proposed method achieves superior quality while being faster than previous DIS methods, even reaching speeds exceeding 1 FPS.
§ EXPERIMENTAL DETAILS
§.§ Implementation of Degradations
For spatio-temporal degradations, we applied temporal degradation followed by spatial degradation sequentially.
We utilize spatial degradation operations for super-resolution, inpainting, and deblurring as specified in the official implementations from <cit.> and <cit.>. For super-resolution, we employ 4× average pooling as the forward operator 𝒜. For inpainting, we use a random mask that eliminates half of the pixels as the forward operator 𝒜. For deblurring, we apply a Gaussian blur with a standard deviation (σ) of 2.0 and a kernel width of 13 as the forward operator 𝒜.
§.§ Data preprocessing details
We conducted every experiment using the train/val sets of the DAVIS 2017 dataset <cit.>. The 480p version of the dataset has a spatial resolution of 480×640. Therefore, to avoid spatial distortion, the frames were first center cropped to 480×480, then resized to a resolution of 256×256. The resizing was performed using the `resize' function from the `cv2' library. After that, all videos were normalized to the range [0, 1].
In the temporal dimension, the video was segmented into chunks of 16 frames starting from the first frame. Any remaining frames that did not form a complete set of 16 were dropped. Through this process, a total of 338 video samples were obtained. The detailed data preprocessing code and the preprocessed Numpy files have all been open-sourced.
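A sketch of this preprocessing pipeline is shown below; the function and variable names are illustrative, and the open-sourced preprocessing code remains the authoritative reference.

```python
import cv2
import numpy as np

def preprocess_video(frames, size=256, chunk=16):
    """Center-crop to square, resize to `size`, normalize to [0, 1],
    and split into fixed-length chunks (remainder frames are dropped)."""
    processed = []
    for f in frames:                          # f: (480, 640, 3) uint8 frame
        h, w = f.shape[:2]
        off = (w - h) // 2
        f = f[:, off:off + h]                 # center crop to 480x480
        f = cv2.resize(f, (size, size))
        processed.append(f.astype(np.float32) / 255.0)
    processed = np.stack(processed)
    n_chunks = len(processed) // chunk
    return processed[:n_chunks * chunk].reshape(n_chunks, chunk, size, size, 3)
```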
§.§ Comparative methods
DiffusionMBIR <cit.>.
For DiffusionMBIR,
we use the same pre-trained image diffusion model <cit.> with 1000 NFE sampling.
The optimal ρ and λ values are obtained
through grid search within the ranges [0.001, 10] and [0.0001, 1], respectively. The values are set to (ρ, λ) = (0.1, 0.001) for temporal degradation, and (ρ, λ) = (0.01, 0.01) for spatio-temporal degradation.
DPS <cit.>.
For DPS,
we use the same pre-trained image diffusion model <cit.> with 1000 NFE sampling.
The optimal step size ζ is obtained through grid search within the range [0.01, 100].
The value is set to ζ = 30 for both temporal degradation and spatio-temporal degradation.
Memory issues arise when performing DPS sampling with batch sizes larger than 5 on an NVIDIA GeForce RTX 4090 GPU with 24GB of VRAM. Therefore, we divide the 16-frame videos into 4-frame videos and use them for all DPS experiments.
ADMM-TV. Following the protocol of <cit.>, we optimize the following objective
X^∗ = argmin_X 1/2‖Y - 𝒜(X)‖^2_2 + λ‖D X‖_1
where D = [D_t, D_h, D_w], which corresponds to the classical TV. Here, t, h, and w represent the temporal, height, and width directions, respectively. The outer iterations of ADMM are solved with 30 iterations and the inner iterations of CG are solved with 20 iterations, which are identical settings to <cit.>. We perform a grid search to find the optimal parameter values that produce the most visually pleasing solution. The parameters are set to (ρ, λ) = (1, 0.001). We initialize X to zeros.
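For reference, a compact sketch of this ADMM-TV baseline is given below; the circular-boundary finite differences and the generic operator handles A and A^T are illustrative assumptions of the sketch.

```python
import torch

def diff(x, dim):            # forward difference with circular wrap (illustrative choice)
    return torch.roll(x, -1, dims=dim) - x

def diff_T(x, dim):          # adjoint of the circular forward difference
    return torch.roll(x, 1, dims=dim) - x

def D(x):                    # stacked temporal/height/width differences, x: (N, C, H, W)
    return torch.stack([diff(x, 0), diff(x, 2), diff(x, 3)])

def DT(z):
    return diff_T(z[0], 0) + diff_T(z[1], 2) + diff_T(z[2], 3)

def admm_tv(y, A, AT, rho=1.0, lam=1e-3, outer=30, inner=20):
    """ADMM for 0.5*||y - A(x)||^2 + lam*||Dx||_1 (sketch of the ADMM-TV baseline)."""
    x = torch.zeros_like(AT(y))
    z = D(x)
    u = torch.zeros_like(z)
    for _ in range(outer):
        # x-update: CG on (A^T A + rho D^T D) x = A^T y + rho D^T (z - u)
        b = AT(y) + rho * DT(z - u)
        r = b - (AT(A(x)) + rho * DT(D(x)))
        p, rs_old = r.clone(), (r * r).sum()
        for _ in range(inner):
            Ap = AT(A(p)) + rho * DT(D(p))
            alpha = rs_old / (p * Ap).sum()
            x, r = x + alpha * p, r - alpha * Ap
            rs_new = (r * r).sum()
            if rs_new.sqrt() < 1e-8:
                break
            p, rs_old = r + (rs_new / rs_old) * p, rs_new
        # z-update: soft thresholding; u-update: dual ascent
        Dx = D(x)
        z = torch.sign(Dx + u) * torch.clamp((Dx + u).abs() - lam / rho, min=0.0)
        u = u + Dx - z
    return x
```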
§ FURTHER EXPERIMENTAL RESULTS
§.§ VRAM-efficient Sampling
The proposed method is VRAM-efficient, treating video frames as batches in the image diffusion model for sampling. As shown in Table <ref>, the method can reconstruct an 8-frame video at 256x256 resolution using less than 11GB of VRAM, which is feasible on GPUs like the GTX 1080Ti or RTX 2080Ti (11GB VRAM). With a single RTX 4090 GPU (24GB VRAM), it can reconstruct a 32-frame video at the same resolution.
§.§ Ablation study of stochasticity
Experimental results show that synchronizing the stochastic noise along the batch direction enables batch-consistent reconstruction, offering an effective solution for video inverse problems.
While it is theoretically possible to achieve batch-consistent sampling with η set to 0 (by eliminating the stochastic noise), our empirical findings, as shown in Table <ref>, indicate that incorporating stochastic noise is beneficial for video reconstruction, particularly in cases involving spatio-temporal degradations.
Consequently, in our experiments, the optimal η value was determined through a grid search.
§.§ Detailed visualizations of experimental results
|
http://arxiv.org/abs/2409.02417v1 | 20240904034248 | Generation of Scalable Genuine Multipartite Gaussian Entanglement with a Parametric Amplifier Network | [
"Saesun Kim",
"Sho Onoe",
"Alberto M. Marino"
] | quant-ph | [
"quant-ph"
] |
[email protected]
[email protected], [email protected]
^1Homer L. Dodge Department of Physics and Astronomy, The University of Oklahoma, Norman, OK 73019, USA
^2Center for Quantum Research and Technology, University of Oklahoma, Norman, OK, 73019, USA
^3Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics,
The University of Queensland, St. Lucia, Queensland, 4072, Australia
^4femtoQ Lab, Department of Engineering Physics, Polytechnique Montréal, Montréal, QC H3T 1JK, Canada
^5Quantum Information Science Section, Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USAThis manuscript has been authored in part by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The
publisher acknowledges the US government license to provide public access under the DOE Public Access Plan (http://energy.gov/downloads/doe-publicaccess-plan).
^6Quantum Science Center, Oak Ridge National Laboratory, Oak Ridge, TN 37381, USA
§ ABSTRACT
Genuine multipartite entanglement is a valuable resource in quantum information science, as it exhibits stronger non-locality compared to bipartite entanglement. This non-locality can be exploited in various quantum information protocols, such as teleportation, dense coding, and quantum interferometry. Here, we propose a scheme to generate scalable genuine multipartite continuous-variable entangled states of light using a parametric amplifier network. We verify the presence of genuine quadripartite, hexapartite, and octapartite entanglement through a violation of the positive partial transpose (PPT) criteria. Additionally, we use α-entanglement of formation to demonstrate the scalability of our approach to an arbitrary number of 2N genuinely entangled parties by taking advantage of the symmetries present in our scheme.
Generation of Scalable Genuine Multipartite Gaussian Entanglement
with a Parametric Amplifier Network
Alberto M. Marino^1,2,5,6
======================================================================================================
§ INTRODUCTION
Genuine multipartite entanglement (GME) is the strongest form of non-locality that a quantum state with a given number of parties can have. It is characterized by entanglement among all possible bi-partitions of the modes that make up the state. Given this high degree of non-locality, it is considered a valuable resource for quantum information science <cit.> and has been proposed as a resource for teleportation and dense coding <cit.>, to enhance the rate of quantum key distribution <cit.>, and to obtain the highest possible sensitivities for quantum interferometry <cit.>.
Most of the work to date on the generation of multipartite entanglement has focused on discrete variable (DV) systems, in which information can be encoded in the discrete eigenstates of the system, such as the two-dimensional Hilbert space of the polarization of a photon or the spin of a particle <cit.>. Another possibility is the use of continuous variable (CV) systems, which are characterized by a continuous distribution of eigenstates that can be used to encode information, such as the amplitude and phase quadratures of the field. CV systems offer significant advantages, as they allow for the efficient realization of quantum information protocols <cit.>, offer an efficient interaction with atomic ensembles <cit.>, and can deterministically generate large-scale multipartite entanglement <cit.>. GME in CV systems has been realised experimentally and approaches to scale up to larger number of parties through the use of an array of beam splitters <cit.> or arrays of parametric amplifiers <cit.> have been proposed. However, as the number of parties increases, the structure of the correlations between the different modes becomes more complex.
Here, we propose a new scheme for generating genuine multipartite entangled states of light based on a parametric amplifier network. The proposed scheme consists of two stages of parametric amplifiers and does not require an array of optical elements, which significantly reduces the complexity of the system. The resulting entanglement structure is simple and symmetric, making it easy to characterize the correlations between all the entangled modes. This makes the proof of GME when extending to a large number of parties tractable.
We first show that the proposed parametric amplifier network can generate genuine quadripartite, hexapartite, and octapartite entanglement through a violation of the positive partial transpose (PPT) criteria <cit.>. By verifying the inseparability of all possible bi-partitions of the system, we gain insight into the structure of the correlations and the symmetries of the generated multipartite entangled state. We then utilize this understanding to simplify the system through a reduction method that involves unitary transformations, which makes it possible to analytically prove the presence of GME. We do so through the use of α-entanglement of formation to show that the Von Neumann entropy of all possible partitions is larger than or equal to the one of the reduced system. This shows that our system is scalable and is capable of generating genuine 2N-partite entanglement.
§ PARAMETRIC AMPLIFIER NETWORK
The proposed scheme is based on a two-stage network of parametric amplifiers connected through an optical system that routes the modes between the two stages in what we refer to as a switchboard operation. We start with a description of the system to extend from two entangled modes with a single parametric amplifier to four entangled modes with the proposed scheme. As shown in Fig. <ref>(a), four modes {a, b, c, d} serve as inputs to the first stage of the network composed of two parametric amplifiers. This first stage entangles mode pairs a ↔ b and c ↔ d. After the first stage, the switchboard operation swaps modes b and d while directly transmitting modes a and c. The outputs of the switchboard serve as the inputs for the second stage of the network, which is composed of two additional parametric amplifiers. The second stage entangles mode pairs a ↔ d and c ↔ b. Figure <ref>(b) shows the graphical representation of the connections between the four output modes with a square geometry. The green lines (labeled 1) and the black lines (labeled 2) correspond to the connections introduced by the first or second stage of parametric amplifiers, respectively.
To verify the presence of GME in this quadripartite system, we first construct the covariance matrix (CM). Given that the system is Gaussian, the CM provides a complete characterization of the quantum properties of the system. The CM is determined by second-order moments of the form σ=⟨ξξ^T ⟩, where ξ is the quadrature element vector ξ=[X̂_a, Ŷ_a, X̂_b, Ŷ_b, X̂_c, Ŷ_c, X̂_d, Ŷ_d] with the quadrature operators defined as X̂_a=(â^†+â) and Ŷ_a=i(â^†-â) in terms of the annihilation, â, and creation, â^†, operators. Thus, the elements of the CM take the form σ_ij=⟨X̂_i X̂_j+ X̂_j X̂_i⟩/2-⟨X̂_i⟩⟨X̂_j⟩ for all possible combinations of quadratures and modes.
Since the field operators follow the bosonic commutation relation [â,â^†]=1, the quadratures satisfy the commutator relation [X̂_a,Ŷ_a]=2i. As a result, due to the uncertainty principle, a physical CM must satisfy the condition σ+i Ω≥0 <cit.>, where the symplectic matrix Ω is given by
Ω=⊕_j=1^N J with
J=([ 0 -1; 1 0 ]).
Similarly, for a partially transposed CM, which we denote as σ̃, this condition takes the form σ̃+i Ω≥0, which is equivalent to Δ≡-(Ωσ̃)^2≥1 for a pure quantum state. This condition establishes a necessary and sufficient criterion for the separability of the system <cit.>. Violation of this criterion determines that the system is inseparable, with a stronger violation indicating a stronger degree of entanglement. Thus, we can verify the presence of entanglement by calculating the smallest symplectic eigenvalue of the matrix Δ. We can confirm the presence of GME using the PPT criteria if it can be shown that all possible bi-partitions are inseparable <cit.>.
To construct the CM of the generated state, shown in Fig. <ref>(b), we describe the operation performed by the parametric amplifiers with the two-mode squeezing operator Ŝ_ab (ζ)=exp(ζ^* âb̂-ζâ^†b̂^†), where ζ=s exp(i θ) with s and θ representing the degree of squeezing and the squeezing angle, respectively. We choose the squeezing angle that maximizes the entanglement in the system, which corresponds to θ=π. The corresponding CM, given in App. <ref>, is then used to obtain an analytical expression for the smallest symplectic eigenvalue for all possible bi-partitions.
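A numerical sketch of this construction is given below (NumPy). The explicit symplectic form of the two-mode squeezer, with θ=π and vacuum inputs of unit quadrature variance, is an assumed convention chosen to match the quadrature definitions above, and the mode pairing follows the quadripartite network described earlier.

```python
import numpy as np

def tms(n_modes, i, j, s):
    """Symplectic matrix of a two-mode squeezer (theta = pi) acting on modes i and j.
    Quadrature ordering: [X_1, Y_1, X_2, Y_2, ...], vacuum variance normalized to 1."""
    S = np.eye(2 * n_modes)
    c, sh = np.cosh(s), np.sinh(s)
    for m, n in [(i, j), (j, i)]:
        S[2*m, 2*m], S[2*m, 2*n] = c, sh              # X_m -> c X_m + sh X_n
        S[2*m+1, 2*m+1], S[2*m+1, 2*n+1] = c, -sh     # Y_m -> c Y_m - sh Y_n
    return S

def min_ppt_symplectic_eigenvalue(sigma, partition):
    """Smallest symplectic eigenvalue of the partially transposed covariance matrix."""
    n = sigma.shape[0] // 2
    L = np.eye(2 * n)
    for m in partition:                   # partial transposition flips Y of the partitioned modes
        L[2*m + 1, 2*m + 1] = -1.0
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    sigma_pt = L @ sigma @ L
    return np.abs(np.linalg.eigvals(1j * Omega @ sigma_pt)).min()

# Quadripartite network, modes (a, b, c, d) = (0, 1, 2, 3), with s1 = s2 = 0.5.
s1 = s2 = 0.5
S1 = tms(4, 0, 1, s1) @ tms(4, 2, 3, s1)   # first stage: a-b and c-d
S2 = tms(4, 0, 3, s2) @ tms(4, 2, 1, s2)   # second stage (after the swap): a-d and c-b
sigma = S2 @ S1 @ S1.T @ S2.T              # vacuum input CM is the identity

for part in [(0,), (1,), (2,), (3,), (0, 1), (0, 2), (0, 3)]:
    print(part, min_ppt_symplectic_eigenvalue(sigma, part))   # values below 1 signal inseparability
```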
We start with bi-partitions of the form 1×3, shown in Fig. <ref>(a), for which there are four possible groupings given by a|bcd, b|acd, c|abd, and d|abc. In this case the PPT criteria for all bi-partitions are identical due to the symmetry of the state and take the form
Δ^(1×3) =α _-^2 (β _+^2-β _-^2)+α _+^2 (β _-^2+β _+^2)
-2 α _+ β _+ √(α _-^2 (β _+^2-β _-^2)+α _+^2 (β _-^2+β _+^2)-α _+^2 β _+^2),
where Δ^(1×3) is the smallest symplectic eigenvalue of the PPT matrix Δ, α _± = (e^-2 s_1± e^2 s_1)/2, and β _±= (e^-2 s_2± e^2 s_2)/2. As the graphical representation in Fig. <ref>(a) shows, the 1×3 bi-partitions have the same connections between the two partitions, as indicated by the green and black lines that corresponds to the first and second stage of PAs, respectively. Figure <ref>(a) shows a contour plot of the smallest symplectic eigenvalue, Δ^(1×3), as a function of the squeezing parameters of the first and the second stages. As can be seen, Δ^(1×3)<1 as long as s_1>0 or s_2>0.
Next, we consider the bi-partitions of the form 2×2, shown in Figs. <ref>(b) through <ref>(d), which correspond to groupings ab|cd, ad|bc, and ac|bd. For these bi-partitions, the smallest symplectic eigenvalues are all different and take the form (α _+^2-α _-^2) (β _-+β _+)^2 for ab|cd, (α _-+α _+)^2 (β _-+β _+)^2 for ad|bc, and (α _-+α _+)^2(β _+^2-β _-^2) for ac|bd.
The expressions for groupings ab|cd and ad|bc simplify to e^-2 s_2 and e^-2 s_1, respectively. Thus, they are independent of the first or second stage, as can be seen in Figs. <ref>(b) and <ref>(c), given that the two partitions are only connected by correlations introduced by either the second or first PA stage. On the other hand, grouping ac|bd depends on the gain of both stages, as shown in Fig. <ref>(d), given that modes a and c are indirectly connected through both PA stages. This makes it the most entangled bi-partition of the four-mode case. As can be seen from Figs. <ref>(b) through <ref>(d), all the bi-partitions of the form 2×2 have smallest symplectic eigenvalues less than one when s_1>0 and s_2>0.
For all bi-partitions, in the limit in which s_1→ 0 and s_2→0 (no parametric amplification in either stage of the network) we have that α _+=β _+→ 1 and α _-=β _-→ 0, such that the smallest symplectic eigenvalues all tend to 1. This implies that the generated state is separable when there is no squeezing from any of the PAs, as expected given that there are no interactions (through the PAs) between any of the four independent input modes. On the other hand, as soon as the squeezing parameters from both stages are greater than 0, the smallest symplectic eigenvalues of all possible bi-partitions become less than one. These results show that all the bi-partitions are inseparable, which implies that the generated state is a genuine quadripartite entangled state.
§ GENUINE HEXAPARTITE AND OCTAPARTITE ENTANGLEMENT
We now show that through a careful choice of the switchboard operation and the addition of parametric amplifiers in each stage, it is possible to extend the proposed scheme to generate genuine hexapartite and octapartite entanglement. Figures <ref>(a) and <ref>(b) show the proposed schemes for their generation and the corresponding graphical representation of the states.
As the number of modes increases, the total number of possible bi-partitions grows rapidly. For an N-mode system, the total number of bi-partitions that need to be considered to exhibit a violation of the PPT criteria is given by 2^N-1-1 <cit.>. Furthermore, for bi-partitions of the form k × (N-k), the possible number of groupings can be calculated through the binomial coefficient
_NC_k = N(N-1)(N-2)⋯(N-k+1)/[k(k-1)⋯1].
For example, for the hexapartite system there are a total of 2^5-1=31 bi-partitions that need to be verified for a violation of the PPT criteria. More specifically, for bi-partitions of the form 1×5 there are _6C_1=6 groupings, for bi-partitions of the form 2×4 there are _6C_2=15 groupings, and for bi-partitions of the form 3×3 there are _6C_3/2=10 groupings. Note that for bi-partitions of the form 3×3 it is necessary to divide the binomial coefficient by two to prevent double counting. In what follows, we use the SNEG package <cit.> in Mathematica to calculate the smallest symplectic eigenvalues for all possible bi-partitions for a hexapartite and an octapartite system.
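These counts are straightforward to reproduce (a quick check for the hexapartite and octapartite cases):

```python
from math import comb

for n_modes in (6, 8):
    total = 2 ** (n_modes - 1) - 1
    print(f"2N = {n_modes}: {total} bi-partitions in total")
    for k in range(1, n_modes // 2 + 1):
        # Halve the count for the balanced split to avoid double counting.
        groupings = comb(n_modes, k) // (2 if 2 * k == n_modes else 1)
        print(f"  {k} x {n_modes - k}: {groupings} groupings")
```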
We start by considering the hexapartite system shown in Fig. <ref>(a). For bi-partitions of the form 1×5 the possible groupings are a|bcdef, b|acdef, c|abdef, d|abcef, e|abcdf, and f|abcde. Figure <ref>(a) shows the smallest symplectic eigenvalues for these bi-partitions for the case in which both stages of the PA network have the same squeezing parameter (s_1=s_2=s). As can be seen, all the smallest symplectic eigenvalues are identical. This is expected due to the symmetry of the connections between the resulting partitions, as shown schematically in Fig. <ref>(a). The smallest symplectic eigenvalues without the assumption of equal squeezing in both stages of the PA network are shown in App. <ref>.
Bi-partitions of the form 2×4 and 3×3 are more complex, as can be seen by the smallest symplectic eigenvalues shown in Figs. <ref>(b) and <ref>(c), respectively. For bi-partitions of the form 2×4 there are four unique cases that are shown schematically in Figs. <ref>(b) through <ref>(d). The bi-partitions between second nearest neighbors and the rest of the system (ae|bcdf, bf|acde, ce|abdf, df|abce, ac|bdef, bd|acef) show the strongest violation of the PPT criteria, as can be seen by the orange trace in Fig. <ref>(b). This indicates that these bi-partitions, represented in Fig. <ref>(b), show the strongest entanglement for bi-partitions of the form 2×4. For this bi-partitioning of the system, modes such as a and e are indirectly connected through one PA from the first stage and one from the second stage. Since these are the bi-partitions that group modes that are the closest possible while breaking connections from both PA stages, they have the strongest possible correlations. Indirect connections through more or less PA processes will make the correlations between the modes weaker.
The bi-partitions with the second-to-highest degree of entanglement, given by the black trace in Fig. <ref>(b), correspond to bi-partitions in which the elements of one of the partitions are indirectly connected by two PAs from the first stage and one from the second stage or by two from the second stage and one from the first stage (af|bcde, bc|adef, de|abcf), as shown in Fig. <ref>(d). While these bi-partitions break connections from both PA stages, the two modes are indirectly connected through more PAs than the bi-partitions shown in Fig. <ref>(b) and thus exhibit weaker entanglement.
The bi-partitions that show the lowest degree of entanglement, given by the green and blue traces in Fig. <ref>(b), are those for which only connections from a single parametric amplifier stage are broken. These correspond to bi-partitions between nearest neighbors and the rest of the system, with the neighbors connected by a first stage PA (ab|cdef, ef|abcd, cd|abef) or a second stage PA (ad|bcef, be|acdf, cf|abde), as shown schematically in Fig. <ref>(c). For these cases, partitions that have elements with nearest neighbors with a first stage connection, left diagram in Fig. <ref>(c), show more entanglement than ones with a second stage connection, right diagram.
This can be understood by considering the operations that connect the modes in the partition with two elements. For the left diagram, which corresponds to a bi-partitioning with one of the partitions composed of modes a and b, the modes are connected through an entangling operation in the first stage followed by the addition of amplification noise in the second stage. In contrast, for the right diagram, which corresponds to a bi-partitioning with one of the partitions composed of modes a and d, the modes result from an initial addition of amplification noise in the first stage followed by an entangling operation in the second stage. As demonstrated in App. <ref>, entangling first and amplifying later leads to a larger level of entanglement between the two partitions.
As can be seen in Fig. <ref>(b), the green and blue traces initially follow the same trend as the degree of squeezing of the PAs increases; however, the green trace then tends toward the black one. This is due to the competition between the correlations introduced by both PA stages. As the degree of squeezing increases, the entanglement introduced by the second stage becomes stronger than the one introduced by the first stage. Therefore, in the large squeezing limit, bi-partitions that break connections introduced by the second stage (ab|cdef, ef|abcd, cd|abef) exhibit more entanglement and tend toward bipartitions of the form (af|bcde, bc|adef, de|abcf) given that the correlations introduced by the second stage dominate over the ones introduced by the first stage.
For bi-partitions of the form 3×3, there are also four unique cases, as shown in Fig. <ref>(c) and represented schematically in Fig. <ref>. In this case, partitions consisting of only second nearest neighbors (ace|bdf, bdf|ace) have the largest degree of entanglement, as shown by the green trace in Fig. <ref>(c). Following the same argument as with the bi-partitions of the form 2×4, these bi-partitions group modes that are the closest possible while breaking the connections introduced by the first and the second stages, as shown in Fig. <ref>(a).
The bi-partitions with the second and third largest degrees of entanglement, given by the blue and orange traces in Fig. <ref>(c), are ones that group two modes with nearest neighbors connected by a first stage PA with one mode that is a second nearest neighbor (abc|def, abf|cde, aef|bcd), as shown in Fig. <ref>(c), and ones that group two modes with nearest neighbors connected by a second stage PA with one mode that is a second nearest neighbor (ade|bcf, adf|bce, bde|acf), as shown in Fig. <ref>(d). There is again a difference between groupings that include two modes connected by a first stage PA or by a second stage PA, with ones with nearest neighbors from the first stage having more entanglement than ones with nearest neighbors from the second stage. Finally, the bi-partitions with the least degree of entanglement, given by the black trace in Fig. <ref>(c), are ones with partitions composed of only nearest neighbors (abe|cdf, bef|acd, cef|abd). While these bi-partitions break connections from both parametric stages, as seen in Fig. <ref>(b), they break the least number of connections for all bi-partitions of the form 3×3. As can be seen in Fig. <ref>, all possible bi-partitions needed to evaluate the PPT criteria have their smallest eigenvalue less than 1 when the squeezing is larger than 0, which implies GME for the generated hexapartite state.
Following the same trend, we can increase the number of entangled modes by adding an additional PA to the first and second stages, as shown in Fig. <ref>(b), to create genuine octapartite entanglement. For the octapartite system there are a total of 2^{8-1}-1=127 possible bi-partitions that need to be verified. Figures <ref>(a), <ref>(b), <ref>(c), and <ref>(d) show the smallest symplectic eigenvalues for bi-partitions of the form 1×7, 2×6, 3×5, and 4×4, respectively. For bi-partitions of the form 1×7, there are _8C_1=8 cases that all exhibit the same behavior, as shown in Fig. <ref>(a), due to symmetry.
Bi-partitions of the form 2×6 for the octapartite system behave relatively similarly to bi-partitions of the form 2×4 for the hexapartite system. There are a total of _8C_2=28 partitions, but only 5 unique behaviors, as shown in Fig. <ref>(b). The smallest symplectic eigenvalues for different forms of bi-partitions for the octapartite case, shown in Fig. <ref>(b), and for the hexapartite case, shown in Fig. <ref>(b), follow the same ordering except for the one with the second-to-highest degree of entanglement, given by the red trace in Fig. <ref>(b). The red trace shows the behavior of the smallest symplectic eigenvalues for bi-partitions between fourth nearest neighbors and the rest of the system (ae|bcdfgh, bf|acdegh, cg|abdefh, dh|abcefg), as shown in Fig. <ref>(a), and ones between third nearest neighbors connected by two first stages and one second stage and the rest of the system (ah|bcdefg, de|abcfgh, gf|abcdeh, bc|adefgh), as shown in the left diagram in Fig. <ref>(b). The groupings between fourth nearest neighbors, shown in Fig. <ref>(a), are all equivalent due to the reflection symmetry of the state.
Since fourth nearest neighbors are always independent, third nearest neighbors and fourth nearest neighbors exhibit the same level of entanglement, which is a new feature of the octapartite system as the quadripartite and the hexapartite systems do not have any two modes that come from independent processes. This leads to the same degree of entanglement between the left diagram in Fig. <ref>(b) and the ones in Fig. <ref>(a). That is, despite the fact that the distance between the two modes is different, they show the same inseparability. As we will discuss in the next section, this simplifies the structure of the correlations introduced by the system and makes it easier to track them as the number of modes increases.
Additionally, Fig. <ref>(b) shows that the two cases of third nearest neighbors exhibit a different behavior. This can be understood by tracking how the two modes in one of the partitions are connected to each other. The connection shown in Fig. <ref>(a), with one partition composed of third nearest neighbors with two second stage connections and one first stage connection, is a simple cascade PA configuration where the two stages generate a four-mode entangled state. However, when one of the partitions is composed of third nearest neighbors with two first stage connections and one second stage connection, as shown in Fig. <ref>(b), the modes in the partition come from completely independent processes.
The remaining bi-partitions of the form 3×5, with _8C_3=56 bi-partitions, and 4×4, with _8C_4/2=35 bi-partitions, have smallest symplectic eigenvalues shown in Figs. <ref>(c) and <ref>(d), respectively. Although the number of possible bi-partitions increases rapidly, the physics remains the same. As can be seen from Fig. <ref>, all possible bi-partitions for the octapartite system have smallest symplectic eigenvalues less than 1 for a squeezing parameter greater than 0. Therefore, the system generates a GME octapartite state.
§ GENERALIZATION TO 2N-MODES
The results obtained above show that the proposed PA network generates a multipartite quantum state with symmetries that make it possible to understand the correlations between the different modes. These symmetries become more apparent when we consider the CM. Table <ref> shows the structure of the CM for an arbitrary number of modes, with matrix elements that have the same functional form coded with the same color. The non-zero matrix elements take the form: A=α_+β_+ I_2, B = -α_- (β_++1) J/2, B^' = -α_+β_- J, C = α_-β_- I_2/2, and D = -α_- (β_+-1) J/2, where I_2 and J are the 2×2 identity matrix and the diag{1,-1} matrix, respectively. Given that only neighbors with less than a fourth nearest neighbor connection are correlated, as discussed in Sect. <ref>, most of the elements of the CM are zero with the nonzero ones centered around the diagonal. As can be seen, towards the upper and lower corners of the CM, this leads to some elements wrapping around the edges. Note that the number of modes generated by the proposed PA network is always even, as modes are always added in pairs.
Terms in the CM labeled as A, marked in gray, represent the self-correlation for each mode and correspond to the variances of the quadratures. Since all the modes are amplified equally, they all have the same variance. Terms labeled as B and B^', marked in light blue and blue, respectively, represent the nearest neighbors. As explained in the previous section, these terms are different given that B results from entangling by a PA from the first stage followed by amplification noise added by the second stage; however, B^' results from having two modes with amplification noise after the first stage entangled by a PA from the second stage. Finally, terms labeled as C, marked in red, represent second nearest neighbors, and term labeled as D, marked in green, represent third nearest neighbors. Note that some CM matrix elements for third nearest neighbors are zero, which indicates that there are no correlations between the corresponding modes. As shown in Fig. <ref>(b), these zero terms corresponds to modes, such as a and h, that come from processes that are independent, such that there is not even an indirect "interaction" between them. More details on the matrix elements can be found in App. <ref>.
Extending the PPT criteria analysis performed for the hexapartite and octapartite cases to an arbitrary 2N-partite system to verify the presence of genuine 2N-partite entanglement would become intractable, as the number of bi-partitions that need to be considered increases exponentially. In order to overcome this challenge, we use a measure known as α-entanglement of formation and take advantage of the symmetry of the CM to show that some partitions are more entangled than others. The genuine 2N-mode entanglement entropy for a pure state |ψ⟩ is calculated as follows <cit.>:
E_2N(|ψ⟩) = min_P⊂{1,2,...,2N} S[Tr_P(|ψ⟩⟨ψ|)],
where S[ρ] is the Von Neumann entropy defined as S[ρ]=-Tr(ρlnρ) and the minimization is done over all possible subsystems. The pure state |ψ⟩ has genuine 2N-partite entanglement if E_2N(|ψ⟩)≠ 0. This is similar to the PPT criterion, whereby the PPT criterion must be violated for all possible bi-partitioning.
Due to the symmetry of the Von Neumann entropy (i.e., S[Tr_A(ρ_AB)]=S[Tr_B(ρ_AB)]) <cit.>, it is sufficient to conduct the minimization over the cases where the number of modes in P⊂{1,2,...,2N} is less than N. For Gaussian states, the Von Neumann entropy of an M-mode mixed state ρ_M can be computed via its CM, σ_M, according to <cit.>
S(σ_M)=1/2∑_n=1^M h(ν_n),
h(x)=x_+/2log_2(x_+/2)-x_-/2log_2(x_-/2),
where ν_n is the n^th symplectic eigenvalue of σ_M and x_±=x± 1.
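A small sketch of this computation from a covariance matrix, reusing the quadrature ordering of the earlier snippet and following the expression for S(σ_M) as written above, is:

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Symplectic eigenvalues of an M-mode covariance matrix (vacuum normalized to 1)."""
    m = sigma.shape[0] // 2
    Omega = np.kron(np.eye(m), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    nu = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ sigma)))
    return nu[::2]                      # each eigenvalue appears twice; keep one copy

def h(x):
    if x <= 1.0:                        # a pure-mode eigenvalue contributes zero entropy
        return 0.0
    xp, xm = x + 1.0, x - 1.0
    return (xp / 2) * np.log2(xp / 2) - (xm / 2) * np.log2(xm / 2)

def gaussian_entropy(sigma):
    """Von Neumann entropy of a Gaussian state, following the expression above."""
    return 0.5 * sum(h(nu) for nu in symplectic_eigenvalues(sigma))
```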
For the CM shown in Table <ref>, the correlations are strongest between nearest neighbors. As a result, the Von-Neumann entropy of the reduced state ρ_P_1 is minimized when the subsystem is composed of consecutive modes, e.g. when P_1={1,2,3,4} (see App. <ref> for more details). Furthermore, we can unitarily reduce the number of consecutive modes by utilizing the following property:
S(Û_P_1^†ρ_P_1Û_P_1)=S(ρ_P_1),
where Û_P_1 is an arbitrary unitary operation on the P_1 subsystem. By taking advantage of this property, it can be shown that the entropy of an arbitrary number of consecutive modes can be reduced down to the entropy of only two, three, or four modes. This reduction is shown in Fig. <ref>, and makes it possible to extend the analysis to an arbitrary number of 2N modes. As an example, we mathematically show this result for a particular case:
S(σ_1234) =S(Û_1234^†σ_1234Û_1234 )
=S(σ_1 ⊗1_2⊗1_3⊗σ_4)
=S(σ_1 ⊗σ_4)
where Û_1234=Ŝ_12^-1Ŝ_34^-1Ŝ_23^-1, with Ŝ_ij representing the two-mode squeezing operation between modes i and j. Following the result of App. <ref> and this reduction, it can be shown that the Von Neumann entropy of every possible bi-partition is larger than or equal to that of at least one of the five possible partitionings shown in Fig. <ref>. Thus, it is sufficient to verify that the minimum entropy of these five possible partitionings is greater than zero to show GME in the 2N-mode state. Their entropies depend only on s_1 and s_2, and not on N, meaning that the genuine 2N-partite entanglement entropy stays constant for all N>3.
We plot the genuine 2N-partite entanglement entropy in Fig. <ref>. As can be seen, the minimum entropy for the system is greater than zero for any value of s_1>0 and s_2>0, which shows that the generated state contains genuine 2N-partite entanglement. Note that the simplicity of the reduction method, and the constant entanglement entropy for N>3, can be attributed to the symmetric set-up of the PA network. As a result, it is possible to show theoretically that the proposed scheme is scalable and can generate genuine 2N-partite entanglement for arbitrarily large N through careful selection of the switchboard operation, as shown in Fig. <ref>(c).
§ CONCLUSION
In conclusion, we propose a novel technique for generating scalable GME using a PA network. Our proposed scheme offers an efficient alternative for generating genuine multipartite entangled states that can scale to a larger number of parties due to the symmetries in the correlations between the different modes. We show the presence of quadripartite, hexapartite, and octapartite GME through a violation of the PPT criteria for all possible bi-partitions. The symmetry in the system makes it possible to track the correlations between all the modes and explain the physical mechanisms behind them.
We show that it is possible to generalize the covariance matrix to a 2N-partite system with a high degree of symmetry in which correlations are present between third nearest neighbors at most. Such a highly symmetric covariance matrix makes it possible to verify the presence of genuine 2N-partite entanglement through the use of α-entanglement of formation by taking advantage of these symmetries to reduce all possible bi-partitions to only 5 cases. By showing that the Von Neumann entropy for these 5 bi-partitions is larger than zero, we are able to show that the proposed scheme generates a scalable genuine 2N-partite entangled state for arbitrarily large N.
The two-stage design of the parametric network, in combination with the proposed switchboard operation, makes the proposed scheme amenable to an implementation that takes advantage of the multispatial mode properties of parametric amplifiers <cit.>. Such an approach would lead to a highly scalable and compact source of GME. Furthermore, the proposed technique can be extended to generate quantum states with more complex connections, such as ones with a cubic graphical representation, through the addition of additional PA stages to the network.
§ ACKNOWLEDGEMENTS
This work was supported by the National Science Foundation (NSF grant PHYS-1752938). AMM acknowledges support from the US Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center for finalizing the manuscript.
[1] Y. Dai, Y. Dong, Z. Xu, W. You, C. Zhang, and O. Gühne, "Experimentally accessible lower bounds for genuine multipartite entanglement and coherence measures," Phys. Rev. Appl. 13, 054022 (2020). https://doi.org/10.1103/PhysRevApplied.13.054022
[2] Y. Yeo and W. K. Chua, "Teleportation and dense coding with genuine multipartite entanglement," Phys. Rev. Lett. 96, 060502 (2006). https://doi.org/10.1103/PhysRevLett.96.060502
[3] M. Epping, H. Kampermann, C. Macchiavello, and D. Bruß, "Multi-partite entanglement can speed up quantum key distribution in networks," New J. Phys. 19, 093012 (2017). https://doi.org/10.1088/1367-2630/aa8487
[4] G. Tóth, "Multipartite entanglement and high-precision metrology," Phys. Rev. A 85, 022322 (2012). https://doi.org/10.1103/PhysRevA.85.022322
[5] P. Hyllus, W. Laskowski, R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, L. Pezzé, and A. Smerzi, "Fisher information and multiparticle entanglement," Phys. Rev. A 85, 022321 (2012). https://doi.org/10.1103/PhysRevA.85.022321
[6] H.-S. Zhong, Y. Li, W. Li, L.-C. Peng, Z.-E. Su, Y. Hu, Y.-M. He, X. Ding, W. Zhang, H. Li, L. Zhang, Z. Wang, L. You, X.-L. Wang, X. Jiang, L. Li, Y.-A. Chen, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, "12-photon entanglement and scalable scattershot boson sampling with optimal entangled-photon pairs from parametric down-conversion," Phys. Rev. Lett. 121, 250505 (2018). https://doi.org/10.1103/PhysRevLett.121.250505
[7] Y. Pu, Y. Wu, N. Jiang, W. Chang, C. Li, S. Zhang, and L. Duan, "Experimental entanglement of 25 individually accessible atomic quantum interfaces," Sci. Adv. 4, eaar3931 (2018).
[8] C. Song, K. Xu, W. Liu, C.-p. Yang, S.-B. Zheng, H. Deng, Q. Xie, K. Huang, Q. Guo, L. Zhang, P. Zhang, D. Xu, D. Zheng, X. Zhu, H. Wang, Y.-A. Chen, C.-Y. Lu, S. Han, and J.-W. Pan, "10-qubit entanglement and parallel logic operations with a superconducting circuit," Phys. Rev. Lett. 119, 180511 (2017). https://doi.org/10.1103/PhysRevLett.119.180511
[9] G. J. Mooney, C. D. Hill, and L. C. L. Hollenberg, "Entanglement in a 20-qubit superconducting quantum computer," Sci. Rep. 9, 13465 (2019).
[10] S. Lloyd and S. L. Braunstein, "Quantum computation over continuous variables," Phys. Rev. Lett. 82, 1784 (1999). https://doi.org/10.1103/PhysRevLett.82.1784
[11] S. D. Bartlett, B. C. Sanders, S. L. Braunstein, and K. Nemoto, "Efficient classical simulation of continuous variable quantum information processes," Phys. Rev. Lett. 88, 097904 (2002). https://doi.org/10.1103/PhysRevLett.88.097904
[12] S. L. Braunstein and P. van Loock, "Quantum information with continuous variables," Rev. Mod. Phys. 77, 513 (2005). https://doi.org/10.1103/RevModPhys.77.513
[13] S. Kim and A. M. Marino, "Atomic resonant single-mode squeezed light from four-wave mixing through feedforward," Opt. Lett. 44, 4630 (2019). https://doi.org/10.1364/OL.44.004630
[14] S. Kim and A. M. Marino, "Generation of ^87Rb resonant bright two-mode squeezed light with four-wave mixing," Opt. Express 26, 33366 (2018). https://doi.org/10.1364/OE.26.033366
[15] N. C. Menicucci, S. T. Flammia, and O. Pfister, "One-way quantum computing in the optical frequency comb," Phys. Rev. Lett. 101, 130501 (2008). https://doi.org/10.1103/PhysRevLett.101.130501
[16] J.-I. Yoshikawa, S. Yokoyama, T. Kaji, C. Sornphiphatphong, Y. Shiozawa, K. Makino, and A. Furusawa, "Invited article: Generation of one-million-mode continuous-variable cluster state by unlimited time-domain multiplexing," APL Photonics 1, 060801 (2016). https://doi.org/10.1063/1.4962732
[17] R. Y. Teh and M. D. Reid, "Criteria for genuine N-partite continuous-variable entanglement and Einstein-Podolsky-Rosen steering," Phys. Rev. A 90, 062337 (2014). https://doi.org/10.1103/PhysRevA.90.062337
[18] P. van Loock and A. Furusawa, "Detecting genuine multipartite continuous-variable entanglement," Phys. Rev. A 67, 052315 (2003). https://doi.org/10.1103/PhysRevA.67.052315
[19] Z. Qin, L. Cao, H. Wang, A. M. Marino, W. Zhang, and J. Jing, "Experimental generation of multiple quantum correlated beams from hot rubidium vapor," Phys. Rev. Lett. 113, 023602 (2014). https://doi.org/10.1103/PhysRevLett.113.023602
[20] H. Wang, Z. Zheng, Y. Wang, and J. Jing, "Generation of tripartite entanglement from cascaded four-wave mixing processes," Opt. Express 24, 23459 (2016). https://doi.org/10.1364/OE.24.023459
[21] G. Misra and A. Kumar, "Continuous variable multipartite entanglement in cascaded nonlinearities," J. Opt. 24, 074004 (2022). https://doi.org/10.1088/2040-8986/ac7057
[22] L. Lami, A. Serafini, and G. Adesso, "Gaussian entanglement revisited," New J. Phys. (2018). https://doi.org/10.1088/1367-2630/aaa654
journal New J. Phys. volume 20, pages 023030 (year 2018)NoStop
[Simon et al.(1994)Simon,
Mukunda, and Dutta]Simon3
author author R. Simon, author N. Mukunda, and author B. Dutta, title title Quantum-noise matrix for multimode systems: U(n)
invariance, squeezing, and normal forms, https://doi.org/10.1103/PhysRevA.49.1567 journal journal Phys. Rev. A volume 49, pages 1567 (year 1994)NoStop
[Devlin(1979)]Devlin_1979
author author K. J. Devlin, @noop title Fundamentals of
contemporary set theory (publisher Springer-Verlag, year 1979)NoStop
[ ŽŽitko(2011)]ZITKO
author author R. ŽŽitko, title title SNEG - Mathematica package for symbolic calculations with
second-quantization-operator expressions, https://doi.org/https://doi.org/10.1016/j.cpc.2011.05.013 journal journal Comput. Phys. Commun. volume 182, pages 2259 (year
2011)NoStop
[Szalay(2015)]SzalayMultipartite
author author S. Szalay, title title Multipartite entanglement
measures, https://doi.org/10.1103/PhysRevA.92.042329 journal journal Phys. Rev. A volume
92, pages 042329 (year 2015)NoStop
[Onoe et al.(2020)Onoe,
Tserkis, Lund, and Ralph]OnoeMultipartite
author author S. Onoe, author S. Tserkis,
author A. P. Lund, and author T. C. Ralph, title title Multipartite Gaussian entanglement of
formation, https://doi.org/10.1103/PhysRevA.102.042408 journal journal Phys. Rev. A volume
102, pages 042408 (year 2020)NoStop
[Weedbrook et al.(2012)Weedbrook, Pirandola, García-Patrón,
Cerf, Ralph, Shapiro, and Lloyd]WeedbrookGQI
author author C. Weedbrook, author S. Pirandola, author R. García-Patrón, author N. J. Cerf, author T. C. Ralph, author J. H. Shapiro, and author S. Lloyd, title title Gaussian quantum information, https://doi.org/10.1103/RevModPhys.84.621 journal journal Rev. Mod. Phys. volume 84, pages 621 (year 2012)NoStop
[Boyer et al.(2008)Boyer,
Marino, Pooser, and Lett]Boyer08
author author V. Boyer, author A. M. Marino,
author R. C. Pooser, and author P. D. Lett, title title Entangled images from four-wave mixing, https://doi.org/10.1126/science.1158275 journal journal Science volume 321, pages
544 (year 2008)NoStop
[Kumar et al.(2021)Kumar,
Nirala, and Marino]ashok
author author A. Kumar, author G. Nirala, and author A. M. Marino, title title
Einstein–Podolsky–Rosen paradox with
position–momentum entangled macroscopic twin beams, https://doi.org/10.1088/2058-9565/ac1b69 journal journal Quantum Sci. Technol. volume 6, pages 045016 (year 2021)NoStop
[Zhang et al.(2020)Zhang,
Wang, Liu, Pan, Du, Lou, Yu, Lv,
Treps, Fabre, and Jing]PhysRevLett.124.090501
author author K. Zhang, author W. Wang,
author S. Liu, author
X. Pan, author J. Du, author Y. Lou, author S. Yu, author S. Lv, author
N. Treps, author C. Fabre, and author J. Jing, title title
Reconfigurable hexapartite entanglement by spatially multiplexed four-wave
mixing processes, https://doi.org/10.1103/PhysRevLett.124.090501
journal journal Phys. Rev. Lett. volume 124, pages 090501 (year
2020)NoStop
§ COVARIANCE MATRICES FOR THE QUADRIPARTITE, HEXAPARTITE, AND OCTAPARTITE SYSTEMS
The CM for the quadripartite system is an 8×8 matrix with two components, one for each quadrature, per mode. Taking the definition of the CM as σ=<ξξ^T>, with ξ=[X̂_a,Ŷ_a,X̂_b,Ŷ_b,X̂_c,Ŷ_c,X̂_d,Ŷ_d], and the two-mode squeezing transformation Ŝ_ij (ζ)=exp(ζ^* îĵ-ζî^†ĵ^†) between input modes i and j performed by each PA, the matrix elements of the CM in terms of the quadratures of the four modes take the form
σ_4 = (
[ ⟨X̂_a^2⟩ 0 ⟨X̂_a X̂_b⟩ 0 ⟨X̂_a X̂_c⟩ 0 ⟨X̂_a X̂_d⟩ 0; 0 ⟨Ŷ_a^2⟩ 0 ⟨Ŷ_a Ŷ_b⟩ 0 ⟨Ŷ_a Ŷ_c⟩ 0 ⟨Ŷ_a Ŷ_d⟩; ⟨X̂_a X̂_b⟩ 0 ⟨X̂_b^2⟩ 0 ⟨X̂_b X̂_c⟩ 0 ⟨X̂_b X̂_d⟩ 0; 0 ⟨Ŷ_a Ŷ_b⟩ 0 ⟨Ŷ_b^2⟩ 0 ⟨Ŷ_b Ŷ_c⟩ 0 ⟨Ŷ_b Ŷ_d⟩; ⟨X̂_a X̂_c⟩ 0 ⟨X̂_b X̂_c⟩ 0 ⟨X̂_c^2⟩ 0 ⟨X̂_c X̂_d⟩ 0; 0 ⟨Ŷ_a Ŷ_c⟩ 0 ⟨Ŷ_b Ŷ_c⟩ 0 ⟨Ŷ_c^2⟩ 0 ⟨Ŷ_c Ŷ_d⟩; ⟨X̂_a X̂_d⟩ 0 ⟨X̂_b X̂_d⟩ 0 ⟨X̂_c X̂_d⟩ 0 ⟨X̂_d^2⟩ 0; 0 ⟨Ŷ_a Ŷ_d⟩ 0 ⟨Ŷ_b Ŷ_d⟩ 0 ⟨Ŷ_c Ŷ_d⟩ 0 ⟨Ŷ_d^2⟩; ])
≡ (
[ α _+ β _+ 0 α _- β _+ 0 α _- β _- 0 β _- α _+ 0; 0 α _+ β _+ 0 -α _- β _+ 0 α _- β _- 0 -β _- α _+; α _- β _+ 0 α _+ β _+ 0 β _- α _+ 0 α _- β _- 0; 0 -α _- β _+ 0 α _+ β _+ 0 -β _- α _+ 0 α _- β _-; α _- β _- 0 β _- α _+ 0 α _+ β _+ 0 α _- β _+ 0; 0 α _- β _- 0 -β _- α _+ 0 α _+ β _+ 0 -α _- β _+; β _- α _+ 0 α _- β _- 0 α _- β _+ 0 α _+ β _+ 0; 0 -β _- α _+ 0 α _- β _- 0 -α _- β _+ 0 α _+ β _+; ]).
where α _± = (e^-2 s_1± e^2 s_1)/2, and β _±= (e^-2 s_2± e^2 s_2)/2. Note that the two-mode squeezing transformation does not lead to correlations between the amplitude and phase quadratures of the modes, which results in the zero matrix elements of the CM.
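The block structure above can be reproduced numerically by composing two-mode squeezing symplectic transformations on the vacuum CM. A minimal sketch in Python/NumPy, assuming the quadrature ordering (X̂_a,Ŷ_a,X̂_b,Ŷ_b,…), unit vacuum variances (as implied by α_+ = 1 at s_1 = 0), and an illustrative pairing of the PA stages (the actual pairing is the one of the network in the main text), is the following:

import numpy as np

def two_mode_squeezer(N, i, j, s):
    # Symplectic matrix of a two-mode squeezer with parameter s acting on modes i, j
    # of an N-mode system; quadratures are ordered (X_1, Y_1, ..., X_N, Y_N).
    S = np.eye(2 * N)
    ch, sh = np.cosh(s), np.sinh(s)
    S[2*i, 2*i] = S[2*j, 2*j] = ch              # amplitude quadratures
    S[2*i+1, 2*i+1] = S[2*j+1, 2*j+1] = ch      # phase quadratures
    S[2*i, 2*j] = S[2*j, 2*i] = sh              # amplitude quadratures become correlated
    S[2*i+1, 2*j+1] = S[2*j+1, 2*i+1] = -sh     # phase quadratures become anti-correlated
    return S

# Illustrative example: two PA stages acting on four vacuum modes (a, b, c, d) = (0, 1, 2, 3).
s1, s2 = 0.5, 0.7
sigma = np.eye(8)                               # vacuum CM, <X^2> = <Y^2> = 1
for (i, j, s) in [(0, 1, s1), (2, 3, s1), (1, 2, s2), (3, 0, s2)]:
    S = two_mode_squeezer(4, i, j, s)
    sigma = S @ sigma @ S.T                     # Gaussian evolution of the CM
print(np.round(sigma, 3))

Since each squeezer mixes only amplitude with amplitude and phase with phase quadratures, the vanishing entries between the X and Y sectors of the analytic CM are reproduced by construction.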
When we extend the parametric amplifier network to a hexapartite system, the CM becomes a 12×12 matrix, with a total of 12 quadratures to consider. Expanding on the results of the quadripartite case and using the notation introduced in Sect. <ref>, the CM takes the form
σ_6=
([ A B C D C B^'; B A B^' C D C; C B^' A B C D; D C B A B^' C; C D C B^' A B; B^' C D C B A ]) ,
where each block is a two-dimensional submatrix composed of the amplitude and phase quadratures of the corresponding modes. Similarly, the results can be extended to the case of eight modes, for which the CM is a 16×16 matrix of the form
σ_8=
([ A B C 0 0 D C B^'; B A B^' C D 0 0 C; C B^' A B C 0 0 D; 0 C B A B^' C D 0; 0 D C B^' A B C 0; D 0 0 C B A B^' C; C 0 0 D C B^' A B; B^' C D 0 0 C B A ]) ,
where connections up to third-nearest neighbors are present, and the general structure of the CM as the system scales to a larger number of modes becomes evident.
§ PPT CRITERIA FOR HEXAPARTITE SYSTEM
In the main text we limited the discussion of the PPT criteria to the case in which both parametric stages have the same level of squeezing, s_1=s_2. Here, we show the smallest symplectic eigenvalues for all possible bi-partitions as a function of the squeezing parameters of the first and the second stages. Figure <ref>(a) shows bi-partitions of the form 1×5 (a|bcdef, b|acdef, c|abdef, d|abcef, e|abcdf, f|abcde). As described in the main text, due to symmetry they all exhibit the same behavior.
Figures <ref>(b) through <ref>(e) show the four cases of bi-partitions of form 3×3 with: one of the partitions composed only of nearest neighbors (abe|cdf, bef|acd, cef|abd), one of the partitions that group nearest neighbors connected through a second stage and a second nearest neighbor mode (ade|bcf, adf|bce, bde|acf), one of the partitions that group nearest neighbors connected through a first stage and a second nearest neighbor mode (abc|def, abf|cde, aef|bcd), and one of the partitions composed only of second nearest neighbors (ace|bdf, bdf|ace), respectively. Note that in Fig. <ref>(b) the plot exhibits a discontinuous behavior due to the fact that we are choosing the smallest symplectic eigenvalue and there is a competition between PAs from the first stage and the second stage.
Figures <ref>(f) through <ref>(i) show four cases of bi-partitions of the form 2×4: bi-partitions between nearest neighbors connected through a first stage and the rest of the system (ab|cdef, ef|abcd, cd|abef), bi-partitions between nearest neighbors connected with a second stage and the rest of the system (ad|bcef, be|acdf, cf|abde), bi-partitions between third nearest neighbors and the rest of the system (af|bcde, bc|adef, de|abcf), and bi-partitions between second nearest neighbors and the rest of the system (ae|bcdf, bf|acde, ce|abdf, df|abce, ac|bdef, bd|acef).
As was the case for equal level of squeezing in both PA stages, the smallest symplectic eigenvalues for all possible bi-partitions are always less than 1 when s_1>0 and s_2>0, which implies a violation of the PPT criteria for all cases. This shows that even in the general case of different squeezing parameters, the system always generates genuine hexapartite entanglement.
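The smallest symplectic eigenvalues shown here can be obtained with a short numerical routine. The sketch below assumes the same quadrature ordering and unit-vacuum normalization as the CMs above, so that a value below 1 for the partially transposed CM signals a violation of the PPT criterion:

import numpy as np

def symplectic_eigenvalues(sigma):
    # Symplectic eigenvalues of a 2N x 2N covariance matrix,
    # quadrature ordering (X_1, Y_1, ..., X_N, Y_N).
    N = sigma.shape[0] // 2
    Omega = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.abs(np.linalg.eigvals(1j * Omega @ sigma))
    return np.sort(ev)[::2]                     # eigenvalues of i*Omega*sigma come in +/- pairs

def ppt_min_eigenvalue(sigma, modes_to_transpose):
    # Smallest symplectic eigenvalue of the partially transposed CM; partial
    # transposition flips the sign of the phase quadratures of the chosen modes.
    N = sigma.shape[0] // 2
    P = np.eye(2 * N)
    for m in modes_to_transpose:
        P[2*m + 1, 2*m + 1] = -1.0
    return symplectic_eigenvalues(P @ sigma @ P)[0]

For a six-mode CM, ppt_min_eigenvalue(sigma, [0]) probes the a|bcdef bi-partition, while e.g. ppt_min_eigenvalue(sigma, [0, 3, 4]) probes a 3×3 bi-partition such as ade|bcf.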
§ TWO DISTINCT CASES FOR HEXAPARTITE SYSTEM
As explained in Sect. <ref>, for the hexapartite system we find that bi-partitions of the form 2×4 with one of the partitions composed of nearest neighbors with a first stage connection show more entanglement than ones with a second stage connection. As we have discussed in Sect. <ref>, the ones with a first stage connection have an entangling operation applied first followed by the addition of amplification noise, while the ones with a second stage connection have the amplification noise added first followed by an entangling operation, as shown in Fig. <ref>.
We calculate the variance of the photon number difference between the two relevant modes for the cases in which the entangling operation occurs first and the one for which amplification noise is added first and find that they take the form,
V_ab =2 sinh ^2(s_2) [ cosh ^4(s_1) cosh ^2(s_2)+sinh ^4(s_1) cosh ^2(s_2)+sinh ^2(s_1) cosh ^2(s_1) sinh^2(s_2)],
V_ad =2 sinh ^2(s_1) cosh ^2(s_1),
where the variance is defined as V_ij=<Δ(n̂_i-n̂_j)^2> and n̂ is the number operator. While the variance between the two modes that are entangled in the first stage shows a dependence on the squeezing parameters of the first and the second stages, the variance between the two modes that are entangled in the second stage only shows a dependence on the squeezing parameter of the first stage.
For the case in which we assume the same squeezing parameter for the first and second PA stages, s_1=s_2≡ s, the ratio between the two variance is given by V_ab/V_ad=2 sinh ^4(s)+cosh ^4(s). This ratio increases exponentially as the squeezing parameter increase, which shows that breaking the connection introduced by the second stage, see Fig. <ref>(a), leads to more excess noise than breaking a connection introduced by the first stage, see Fig. <ref>(b). This is why we find a stronger violation of the PPT criteria, and thus a higher degree of entanglement, for bi-partitions of the form shown on the left of Fig. <ref>(c).
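The equal-squeezing ratio quoted above follows from a purely algebraic cancellation and can be checked symbolically, for instance as:

import sympy as sp

s = sp.symbols('s', positive=True)
# V_ab and V_ad from the expressions above, evaluated at s_1 = s_2 = s
V_ab = 2*sp.sinh(s)**2 * (sp.cosh(s)**4*sp.cosh(s)**2
                          + sp.sinh(s)**4*sp.cosh(s)**2
                          + sp.sinh(s)**2*sp.cosh(s)**2*sp.sinh(s)**2)
V_ad = 2*sp.sinh(s)**2*sp.cosh(s)**2
target = 2*sp.sinh(s)**4 + sp.cosh(s)**4
print(sp.simplify(V_ab/V_ad - target))   # prints 0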
§ GENUINE 2N-PARTITE ENTANGLEMENT
We now show that the entanglement entropy is minimized for subsystems that consist of consecutive modes (i.e. when P_C={M,M+1,M+2,...,M+M'}, where M,M' ∈ℤ^+). As an example, P_1={1,2,3,4} consists of consecutive modes, while P_1={1,2,3,5} does not.
The first observation is that the correlations only exist between modes that are, at most, three modes apart from each other. This means that when the subsystem consists of groups of modes that are four modes apart from each other, then we can write ρ_{M,...,M+M', M+M'+4,...}=ρ_{M,...,M+M'}⊗ρ_{M+M'+4,...}. As a result, we have that S(ρ_{M,...,M+M',M+M'+4,...})=S(ρ_{M,...,M+M'})+S(ρ_{M+M'+4,...}).
The second observation is that the matrices B, B', and D are all proportional to diag{1,-1}, which corresponds to the squeezing correlations between the two modes (i.e. quantum correlations arising from ⟨âb̂⟩ and ⟨â^†b̂^†⟩ terms). From a simple calculation, we find that the values of B and B' are larger than those of D for all possible squeezing parameters, that is
√(|B|)-√(|D|) =sinh (2 s_1) ≥ 0,
√(|B^'|)-√(|D|) =1/2sinh (s_2) [3 cosh (2 s_1-s_2)+cosh (2 s_1+s_2)]≥0.
This implies that the entropy of a state with P_1=p_1⊕ p_2 is always smaller than that of P_1=p_1⊕ (p_2+2⃗), where p_2={N_1,N_2,N_3,...} and p_2+2⃗={N_1+2,N_2+2,N_3+2,...}. As a result, adding “islands”, by which we mean groups of consecutive modes, that are more than 2 modes apart from the partitioning will always increase the entropy.
The third observation is that the whole state ρ=|ψ⟩⟨ψ| is a pure state. The entropy of the subsystem ρ_P_C thus comes from the entanglement between subsystem P_C and the rest of the system. From Table <ref> in Sect. <ref> we can see that correlations between subsystem P_C and the rest of the system only exist for the edge entries M, M+1, M+2, M+M'-2, M+M'-1 and M+M'. This means that tracing out any other modes can only increase the entropy, as it does not remove the entanglement between ρ_P_C and the rest of the system. For example, adding consecutive modes will result in an entropy less than adding “islands” that are more than 2 modes apart.
The fourth observation is that matrix C of the CM is proportional to diag{1,1}, which corresponds to classical correlation (i.e. photon counting correlations arising from ⟨âb̂^†⟩ and ⟨â^†b̂⟩ terms) between the two modes. Adding modes that only have classical correlations that can be unitarily removed can only add to the level of entropy.
From these observations, we analytically conclude that the entropy of the consecutive partitioning will increase when an “island” that is more than 2 modes away or with more than 2 consecutive modes is added to the partitioning. We numerically show that the entropy also increases when an “island” that is 2 modes away with 2 or less consecutive modes is added to the partitioning in Figs. <ref> and <ref>.
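For reference, the entanglement entropies compared in this appendix can be evaluated directly from the reduced CM of a given partitioning. A minimal sketch, reusing the symplectic_eigenvalues helper above (unit-vacuum convention, entropy in nats), is:

import numpy as np

def gaussian_entropy(sigma, modes):
    # Von Neumann entropy of the reduced Gaussian state of the listed modes,
    # computed from the symplectic eigenvalues of the reduced CM.
    idx = [k for m in modes for k in (2*m, 2*m + 1)]
    red = sigma[np.ix_(idx, idx)]
    S = 0.0
    for nu in symplectic_eigenvalues(red):
        if nu > 1.0 + 1e-12:                 # vacuum eigenvalues (nu = 1) do not contribute
            S += (nu + 1)/2*np.log((nu + 1)/2) - (nu - 1)/2*np.log((nu - 1)/2)
    return S

Comparing, e.g., gaussian_entropy(sigma, [0, 1, 2]) with gaussian_entropy(sigma, [0, 1, 3]) can be used to check numerically that consecutive partitionings minimize the entropy, as argued above.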
|
http://arxiv.org/abs/2409.02790v1 | 20240904150444 | Symmetry based efficient simulation of dissipative quantum many-body dynamics in subwavelength quantum emitter arrays | [
"Raphael Holzinger",
"Oriol Rubies-Bigorda",
"Susanne F. Yelin",
"Helmut Ritsch"
] | quant-ph | [
"quant-ph",
"physics.atom-ph",
"physics.comp-ph"
] | |
http://arxiv.org/abs/2409.02178v1 | 20240903180002 | Modular flavored dark matter | [
"Alexander Baur",
"Mu-Chun Chen",
"V. Knapp-Perez",
"Saul Ramos-Sanchez"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
UCI-TR-2024-12
TUM-HEP 1520/24
Alexander Baur^a,b[[email protected]],
Mu–Chun Chen^c,[[email protected]],
V. Knapp–Pérez^c,[[email protected]],
Saúl Ramos–Sánchez^a,[[email protected]],
^a Instituto de Física, Universidad Nacional Autónoma de México, Cd. de México C.P. 04510, México
^b Physik Department, Technische Universität München,
James-Franck-Straße 1, 85748 Garching, Germany
^c Department of Physics and Astronomy, University of California, Irvine, CA 92697-4575 USA
§ ABSTRACT
Discrete flavor symmetries have been an appealing approach for explaining the observed flavor structure, which is not justified in the SM. Typically, these models require a so-called flavon field in order to give rise to the flavor structure upon the breaking of the flavor symmetry by the VEV of the flavon. Generally, in order to obtain the desired vacuum alignment, a flavon potential that includes additional so-called driving fields is required. On the other hand, allowing the flavor symmetry to be modular leads to a structure where the couplings are all holomorphic functions that depend only on a complex modulus, thus greatly reducing the number of parameters in the model. We show that these elements can be combined to simultaneously explain the flavor structure and DM relic abundance. We present a modular model with flavon vacuum alignment that allows for realistic flavor predictions while providing a successful fermionic DM candidate.
§ INTRODUCTION
The origin of the mass hierarchies and mixings among the three generations of fermions is unexplained in the SM. One possible solution to address this so-called flavor puzzle
is the introduction of traditional flavor symmetries that allow for transformations among fermions of different flavors.
These symmetries are independent of moduli and are known to provide explanations for the flavor structure of both the lepton and quark
sectors <cit.>.
To achieve these results, models endowed with traditional flavor symmetries require the introduction
of two kinds of extra SM singlet scalars: i) flavons, whose VEVs are responsible for the non-trivial flavor structure of fermions,
and ii) driving fields <cit.>
that help to shape a suitable potential for obtaining the desired VEV pattern. The traditional flavor
symmetry framework based on non-Abelian discrete symmetries has also proven helpful for DM, where the flavor
symmetry plays the role of a stabilizer symmetry <cit.>. Furthermore, (not necessarily discrete)
flavor symmetries can be successfully combined to explain both DM and flavor anomalies
from SM decays <cit.>.
Another promising approach to explain the flavor parameters without introducing many scalars is provided by so-called modular flavor symmetries <cit.>. This approach has produced
good fits for leptons <cit.>
and quarks <cit.>.
In this scenario, instead of depending on flavon VEVs, Yukawa couplings are replaced
by multiplets of modular forms depending on the half-period ratio τ, which can be
considered a complex modulus. The basic problem then reduces to explaining why
τ stabilizes at its best-fit value ⟨τ⟩,
for which being close to the symmetry-enhanced points τ = , ^2π/3, ∞ can be advantageous <cit.>. Although this scheme has the potential to avoid the need for any
additional scalar field, the presence of flavons in addition to the modulus can be useful too (see, e.g., Model 1 of <cit.> and <cit.>). Similarly to the traditional flavor case, modular flavor symmetries can also
serve as stabilizer symmetries for DM candidates <cit.>.
Modular flavor symmetries arise naturally in top-down constructions, such as magnetized extra dimensions <cit.> or heterotic string orbifolds <cit.>, where they combine with traditional flavor symmetries, building an eclectic flavor group <cit.>. Remarkably, these top-down approaches provide a natural scheme where not only realistic predictions arise <cit.>,
but also the modulus τ can be stabilized close to symmetry-enhanced points <cit.>.
Motivated by these observations, we propose a new supersymmetric model combined with
modular flavor symmetries, which simultaneously accomplishes the following:
* Addressing the flavor puzzle, specifically the origin of the lepton masses and mixing parameters;
* Achieving the vacuum alignment for the flavons;
* Providing a suitable DM candidate with the correct observed DM abundance,
Ω_DM = 0.265(7) <cit.>.
In order to tackle these issues, we propose a simple supersymmetric model based on a Γ_3≅ A_4 modular flavor symmetry. The model resembles Model 1 of <cit.>, where the neutrino masses arise from a Weinberg operator and the charged-lepton Yukawa couplings are given by the VEV of a flavon. It also resembles the proposal of <cit.>, which studies a DM candidate in a non-modular A_4 model without fitting flavor parameters. In our model, the flavon potential is fixed by the flavor symmetry together with a 1_R2 symmetry, which determines the couplings between the driving field and flavon superfields. The model gives a very good fit to the leptonic flavor parameters, with a low value of χ^2.
Finally, we identify a Dirac fermion composed by the Weyl fermionic parts of both the driving field and flavon superfields; we perform a parameter scan for a correct DM abundance.
The goal of this model is to present a “proof of principle” that driving fields
in modular supersymmetric flavor models can account for both the flavon VEV ⟨ϕ⟩ and DM.
Our paper is organized as follows. In <Ref>, we review the basics of modular symmetries and their application to solving the flavor puzzle. In <Ref>, we define our model. In <Ref>, we present the numerical fit for the lepton flavor parameters. In <Ref>, we analyze the relevant terms for DM production. We also argue why we need a freeze-in (as opposed to the more traditional freeze-out) mechanism for our model to work. We then present a parameter scan for the available parameter space for our DM candidate.
Finally, in <Ref> we summarize our results and future directions for further constraints.
§ MODULAR SYMMETRY
§.§ Modular groups and modular forms
The modular group Γ := SL(2,ℤ) is given by
Γ = {[ a b; c d ] | a,b,c,d ∈ℤ and ad -bc = 1 } ,
and can be generated by
S = [ 0 1; -1 0 ] and
T = [ 1 1; 0 1 ] ,
which satisfy the general presentation of Γ, ⟨ S,T | S^4 = (ST)^3 = 𝟙 , S^2T=T S^2⟩.
The principal congruence subgroups of level N of Γ are defined as
Γ(N) := {γ∈Γ | γ≡𝟙 mod N } ,
which are infinite normal subgroups of Γ with finite index. We can also define the
inhomogeneous modular group Γ̄ := Γ / {±𝟙}≅ PSL(2,ℤ) and
its subgroups Γ̄(N), with Γ̄(2) := Γ(2)/{±𝟙} and
Γ̄(N) := Γ(N) for N>2 (as -𝟙 does not belong to Γ(N)).
An element γ of a modular group acts on the half-period ratio, the modulus τ, as
γ τ = (a τ + b)/(cτ + d) , where τ∈ℋ
and ℋ is the upper complex half-plane,
ℋ := {τ∈ℂ | Im τ > 0 } .
Modular forms f(τ) of positive modular weight k and level N are complex functions of τ, holomorphic in ℋ, and transform as
f(τ) γ⟶ f(γ τ ) = (cτ + d)^k f(τ) , γ∈Γ(N) .
In this work, we restrict ourselves to even modular weights, k∈2N, although it is known that modular weights can be odd <cit.> or fractional <cit.> in certain scenarios.
Interestingly, modular forms with fixed weight k and level N build finite-dimensional vector spaces,
which close under the action of Γ. It follows then that they must build representations of a finite
modular group that, for even modular weights, result from the quotient
Γ_N := Γ̄ / Γ̄(N) .
Then, under a finite modular transformation γ∈Γ, modular forms of weight k are n-plets
(which are called vector-valued modular forms <cit.>)
Y(τ):=(f_1(τ),f_2(τ),…,f_n(τ))^T of Γ_N, transforming as
Y(τ) γ⟶ Y(γ τ ) = (cτ+d)^k ρ(γ) Y(τ) ,
where ρ(γ)∈Γ_N is a representation of γ.
§.§.§ Finite modular group Γ_3≅ A_4
As our model is based on Γ_3≅ A_4, let us discuss some general features of this group and its modular forms.
Γ_3 is defined by the presentation
Γ_3 = ⟨ S,T | S^2 = (ST)^3 = T^3 = 𝟙 ⟩ .
It has order 12 and the irreducible representations (in the complex basis) are given in <Ref>.
Besides 1^a⊗1^b=1^c with c=a+b mod 3 and 1^a⊗3=3, where a,b,c=0,1,2 count the number of primes,
we have the nontrivial product rule 3⊗3=1⊕1'⊕1”⊕3_S⊕3_A,
where S and A stand respectively for symmetric and antisymmetric. Considering two triplets ρ = ( ρ_1 , ρ_2 , ρ_3 )^T
and ψ = (ψ_1 , ψ_2 , ψ_3 )^T, in our conventions the Clebsch-Gordan coefficients of ρ⊗ψ are
(ρ⊗ψ )_1 = ρ_1ψ_1 + ρ_2ψ_3 + ρ_3ψ_2 ,
(ρ⊗ψ )_1' = ρ_1ψ_2 + ρ_2ψ_1 + ρ_3ψ_3 ,
(ρ⊗ψ )_1” = ρ_1ψ_3 + ρ_2ψ_2 + ρ_3ψ_1 ,
(ρ⊗ψ )_3_S = 1/√(3)[ 2 ρ_1ψ_1 - ρ_2ψ_3 - ρ_3ψ_2; 2 ρ_3ψ_3 - ρ_1ψ_2 - ρ_2ψ_1; 2 ρ_2ψ_2 - ρ_3ψ_1 - ρ_1ψ_3 ],
(ρ⊗ψ )_3_A = [ ρ_2 ψ_3 - ρ_3 ψ_2; ρ_1 ψ_2 - ρ_2 ψ_1; ρ_3 ψ_1 - ρ_1 ψ_3; ].
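These Clebsch-Gordan coefficients can be transcribed directly into code, which is convenient for assembling the invariants and the mass matrices below. A small sketch in the same complex basis is:

import numpy as np

def a4_triplet_product(rho, psi):
    # Decomposition of the A_4 tensor product 3 x 3 in the complex basis,
    # following the Clebsch-Gordan coefficients listed above.
    r1, r2, r3 = rho
    p1, p2, p3 = psi
    inv1   = r1*p1 + r2*p3 + r3*p2                              # 1
    inv1p  = r1*p2 + r2*p1 + r3*p3                              # 1'
    inv1pp = r1*p3 + r2*p2 + r3*p1                              # 1''
    trip_s = np.array([2*r1*p1 - r2*p3 - r3*p2,
                       2*r3*p3 - r1*p2 - r2*p1,
                       2*r2*p2 - r3*p1 - r1*p3]) / np.sqrt(3)   # 3_S
    trip_a = np.array([r2*p3 - r3*p2,
                       r1*p2 - r2*p1,
                       r3*p1 - r1*p3])                          # 3_A
    return inv1, inv1p, inv1pp, trip_s, trip_a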
The lowest-weight modular forms of Γ_3 furnish a triplet Y=(Y_1,Y_2,Y_3)^T
of weight k_Y=2, whose components are given by <cit.>
Y_1(τ ) = i/2π[ η'(τ/3)/η(τ/3) + η'((τ +1) /3)/η((τ +1) /3) + η'((τ +2)/3)/η((τ +2) /3) - 27 η'(3τ )/η (3 τ )] ,
Y_2 (τ ) = -i/π[ η'(τ/3)/η(τ/3) + ω^2η'((τ +1) /3)/η((τ +1) /3) + ωη'((τ +2)/3)/η((τ +2) /3) ] ,
Y_3 (τ ) = -i/π[ η'(τ/3)/η(τ/3) + ωη'((τ +1) /3)/η((τ +1) /3) + ω^2 η'((τ +2)/3)/η((τ +2) /3) ] ,
where η( τ ) is the so-called Dedekind η function
η ( τ ) = q^1/24∏_n = 1^∞(1 - q^n )
with q := e^2π iτ .
Higher-weight modular forms can be constructed
from the tensor products of the weight 2 modular forms given in <Ref>.
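The triplet can also be evaluated numerically from the q-expansion: term-by-term differentiation of the product formula gives η'(τ)/η(τ) = (iπ/12)(1 - 24 ∑_n≥1 n q^n/(1-q^n)), which converges rapidly in the fundamental domain. A minimal sketch, assuming ω = e^2πi/3, is:

import numpy as np

def eta_logderiv(tau, nmax=200):
    # eta'(tau)/eta(tau) from the q-product of the Dedekind eta function
    q = np.exp(2j * np.pi * tau)
    acc = sum(n * q**n / (1 - q**n) for n in range(1, nmax + 1))
    return 1j * np.pi / 12 * (1 - 24 * acc)

def Y_triplet(tau):
    # Weight-2 modular triplet (Y_1, Y_2, Y_3) of Gamma_3, as defined above
    w = np.exp(2j * np.pi / 3)
    h = [eta_logderiv((tau + k) / 3) for k in range(3)]
    Y1 =  1j / (2 * np.pi) * (h[0] + h[1] + h[2] - 27 * eta_logderiv(3 * tau))
    Y2 = -1j / np.pi * (h[0] + w**2 * h[1] + w * h[2])
    Y3 = -1j / np.pi * (h[0] + w * h[1] + w**2 * h[2])
    return np.array([Y1, Y2, Y3])

For instance, Y_triplet(-0.0119 + 1.005j) evaluates the triplet at the best-fit modulus of the flavor fit below.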
§.§ Modular supersymmetric theories
We consider models with 𝒩=1 global SUSY, defined by the Lagrange density
ℒ = ∫ d^2θ d^2θ̅ K(Φ, Φ̅) + (∫ d^2θ W (Φ ) + h.c.) ,
where K (Φ, Φ̅ ) is the Kähler potential, W (Φ) is the superpotential, and
Φ denotes collectively all matter superfields φ^i of the theory and the modulus τ.
Under an element of the modular symmetry γ∈Γ, τ transforms according to <Ref>, and matter superfields
are assumed to transform as
φ^i γ⟶ (cτ + d)^-k_iρ_i(γ) φ^i ,
where k_i are also called modular weights of the field φ^i, which transform as Γ_N multiplets. Modular weights k_i are not restricted to be positive integers because
φ^i are not modular forms. Analogous to <Ref>, the matrix ρ_i(γ) is a representation of the
finite modular flavor group Γ_N.
For simplicity, we assume a minimal Kähler potential[In principle, there could be further
terms in the Kähler potential with an impact on the flavor predictions <cit.>, which are ignored here.]
of the form
K (Φ, Φ̅ ) = -log( -iτ + iτ̅ ) + ∑_i(-iτ + iτ̅ )^-k_i |φ^i |^2 .
Making use of <Ref>, we see that K(Φ, Φ̅) transforms
under a modular transformation γ∈Γ as
K (Φ, Φ̅ ) γ⟶ K (Φ, Φ̅ ) + log (cτ +d ) + log (cτ̅ +d ) .
Thus, realizing that the Kähler potential is left invariant up to a global supersymmetric Kähler transformation,
in order for the Lagrange density of <Ref> to be modular invariant, we need the superpotential
to be invariant under modular transformations, i.e.
W (Φ ) γ⟶ W (Φ ) .
The superpotential W (Φ) has the general form
W (Φ ) = μ_ij(τ )φ^iφ^j + Y_ijk(τ)φ^iφ^jφ^k + G_ijkℓ(τ)φ^iφ^jφ^kφ^ℓ ,
where μ_ij(τ ), Y_ijk(τ ) and G_ijkℓ(τ ) are modular forms of level N.
Because of <Ref>, each term of <Ref> must be modular invariant.
Let us illustrate how we can achieve this by taking the trilinear coupling
Y_ijk(τ)φ^iφ^jφ^k. The Yukawa coupling Y_ijk transforms
under a modular transformation γ∈Γ as
Y_ijk(τ ) γ⟶ (cτ + d)^k_Yρ_Y (γ) Y_ijk(τ ) ,
where k_Y is the even integer modular weight of the modular form Y_ijk(τ). Then, for Y_ijk(τ)φ^iφ^jφ^k
to be invariant and using the superfield transformations of <Ref>,
we must demand that k_Y = k_i+k_j + k_k and that the product
ρ_Y⊗ρ_i⊗ρ_j⊗ρ_k contains an invariant singlet.
Since we shall be concerned with SUSY breaking, let us briefly discuss the soft-SUSY breaking terms in the Lagrange density.
They are given by
ℒ_soft = -1/2( M_aλ̂^aλ̂^a + h.c.)
- m̃_i^2 |φ̂^i|^2
- (A_ijkφ̂^iφ̂^jφ̂^k + B_ijφ̂^iφ̂^j + h.c.) ,
where M_a are the gaugino masses, λ̂^a are the canonically normalized gaugino fields, and m̃_i are the soft-masses.
We use a notation, where φ̂^i stands for both the canonically normalized chiral superfield and its scalar component <cit.>. We do not assume any specific source of SUSY breaking, and thus the parameters in <Ref> are free, in principle. However, in our model B-terms are forbidden at tree level and the A-terms do not play a role in DM production at tree-level, see <Ref>.
§ DARK MODULAR FLAVON MODEL
We propose a model of leptons, governed by the modular flavor group
Γ_3≅ A_4. In our model, the lepton doublets L transform as a flavor triplet and charged-lepton singlets E^c_i transform as three distinct singlets under Γ_3. The particle content of our model is summarized in <Ref>.
In our model, neutrino masses arise from the Weinberg operator
W_ν = 1/Λ( H_u L H_u L Y(τ ) )_1 ,
where Λ is the neutrino-mass scale and Y(τ) is the modular-form
triplet Y=(Y_1,Y_2,Y_3)^T of weight 2 given by <Ref>.
Note that there exists no other modular multiplet of weight 2 in Γ_3≅ A_4. Hence, under our assumptions, the neutrino sector obtained from <Ref> is highly predictive as it only depends on the parameter τ, the VEV v_u of H_u, and Λ.
The charged-lepton superpotential at leading order is
W_CL = α_1/Λ_ϕ E_1^c H_d (L ϕ_3)_1+ α_2/Λ_ϕ E_2^c H_d (L ϕ_3)_1'+α_3/Λ_ϕ E_3^c H_d (L ϕ_3)_1” ,
where α_i, i=1,2,3, are dimensionless parameters, Λ_ϕ denotes the flavor breaking scale, and the subindices refer to the respective Γ_3 singlet components of tensored matter states in the parentheses. The charged-lepton mass matrix can be determined by the Γ_3 triplet flavon VEV ⟨ϕ_3⟩ as in Model 1 of <cit.>. However, we have taken a different value of ⟨ϕ_3⟩, which eventually leads to a better fit.
The flavon superpotential is given by
W_ϕ = Λ_ϕβ_1 ζ_3 ϕ_3 + β_2ζ_3 ϕ_3 ϕ_3 + β_3/Λ_ϕζ_3 ϕ_3 ϕ_3 ϕ_3 + Λ_ϕβ_4 ζ_1”ϕ_1' + β_5/Λ_ϕζ_1”ϕ_1'ϕ_3ϕ_3
+ β_6/Λ_ϕζ_3 ϕ_3 ϕ_1'ϕ_1' ,
where β_i, i=1,… ,6, are dimensionless couplings. This superpotential gives rise to the desired VEV pattern with driving superfields ζ_r, r = 1”, 3, and an extra flavon ϕ_1', where the subindices label the respective A_4 representations. To fix the flavon superpotential, we impose a U(1)_R symmetry, similarly to <cit.>. As usual, the U(1)_R R-symmetry forbids the renormalizable terms in the superpotential that violate lepton and/or baryon numbers.
We remark that the flavon superpotential <Ref> is important for both finding the correct vacuum alignment for the flavons, as discussed below, and for identifying a viable DM candidate as described in <Ref>.
The flavon VEV is then attained by demanding that SUSY remains unbroken at a first stage, i.e. we require vanishing F-terms.
Recall that the F-term scalar potential in a global supersymmetric theory is given schematically by
V = K^ij̅ F_i F̅_j̅ ,
where
F_i = -∂ W/∂Φ^i ,
F̅_j̅ = (F_j)^* and
K^ij̅ = ( K_ij̅)^-1 = (-iτ + iτ̅)^k_i δ^ij̅ .
Recall that a superfield Φ can be expanded in its components as <cit.>
Φ = Φ (x) - iθσ^μθ̅∂_μΦ (x) - 1/4θ^2 θ̅^2∂^2 Φ (x) + √(2)θψ_Φ (x)
+ i/√(2)θ^2∂_μψ_Φ (x)σ^μθ̅+θ^2 F_Φ (x) ,
where we have used the notation that Φ represents both the superfield and its scalar component, ψ_Φ is the fermionic component and F_Φ is the F-term.
For SUSY to be preserved, we must have ⟨ F_Φ⟩ = 0. We assume that the only possible sources of SUSY breaking are given either by τ or a hidden sector. Thus, we demand ⟨ F_φ^i⟩ = 0, for all i, where φ^i represents the matter fields in our model (cf. <Ref>).
We solve these F-term equations at the VEV's of the flavons ϕ_3 and ϕ_1' and Higgs fields,
⟨ϕ_3 ⟩ =: v_3 ( 1, a, b)^T, ⟨ϕ_1'⟩ =: v_1' , ⟨ H_u ⟩ =: v_u and ⟨ H_d ⟩ =: v_d .
where we assume a,b,v_3,v_1'∈R. All F-term equations are trivially satisfied except the ones corresponding to the driving fields given by
⟨ F_ζ_3i⟩ = 0 and ⟨ F_ζ_1”⟩ = 0 ,
where ζ_3i, i=1,2,3, are the three components of the triplet. Thus, we obtain the following relations
β_2 = c_2(a,b) β_1 Λ_ϕ/v_3 , β_3 = c_3(a,b) β_1Λ_ϕ^2 /v_3^2 ,
β_5 = c_5(a,b) β_4Λ_ϕ^2 /v_3^2 , β_6 = c_6(a,b) β_1Λ_ϕ^2 /v_1'^2 ,
where c_i(a,b), for i=2,3,5,6, are coefficients that depend only on the values of a,b. The numerical values of a,b dictate the charged lepton mass matrix. Hence, they are determined by the fit to the flavor parameters that we do in <Ref>. Through the flavor parameter fit, we have found values of ⟨ϕ_3⟩, ⟨ϕ_1'⟩ that satisfy simultaneously <Ref>, thus yielding vacuum alignment.
Finally, we assume that the flavon VEV scale from <Ref> is below Λ_ϕ, such that
v_3 = v_1' = 0.1 Λ_ϕ .
Furthermore, we identify the DM candidate as a Dirac fermion built as a combination of the Weyl components of ζ_i and ϕ_i with the scalar component of the flavon ϕ_i serving as a mediator.
The parameters in <Ref> shall play an important role in finding the current DM abundance since they set the couplings in <Ref>.
§ FLAVOR FIT
Having defined our model of modular flavored dark matter, we next assess its capability to reproduce the experimentally observed charged-lepton masses, neutrino squared mass differences, and the mixing parameters of the PMNS matrix
while providing predictions for yet undetermined observables, such as the three absolute neutrino masses, the Dirac phase, and the two Majorana phases.
The explicit neutrino mass matrix can be determined from <Ref>. By calculating the tensor products to obtain the symmetry invariant part,
we find that the neutrino mass matrix is predicted to be
M_ν = v_u^2/Λ[ 2 Y_1 -Y_3 -Y_2; -Y_3 2 Y_2 -Y_1; -Y_2 -Y_1 2 Y_3 ].
The charged-lepton mass matrix M_CL arises from <Ref>. Substituting the flavon VEVs as defined in <Ref> and calculating the tensor products,
we arrive at
M_CL = v_d v_3/Λ_ϕ[ α_1 α_1 b α_1 a; α_2 a α_2 α_2 b; α_3 b α_3 a α_3 ] = 0.1 v_d[ α_1 α_1 b α_1 a; α_2 a α_2 α_2 b; α_3 b α_3 a α_3 ] ,
where we have used <Ref>. The values of v_u and v_d are determined by the Higgs VEV, v = √(v_u^2 + v_d^2) =246 GeV, and tanβ = v_u/v_d, which we assume to be tanβ = 60. The neutrino mass scale is determined by Λ, while we choose α_3 to set the mass scale of charged leptons. By using the standard procedure (see e.g. ref. <cit.>), one arrives at the lepton masses and the PMNS mixing matrix.
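A sketch of this standard procedure is given below. It assumes the convention that M_CL multiplies E^c (rows) and L (columns), so that the left-handed rotations follow from the Hermitian combinations M^†M; only the masses and |U_PMNS| are returned, since extracting the CP and Majorana phases requires a Takagi factorization of M_ν and matching to the PDG parametrization.

import numpy as np

def M_nu_weinberg(Y, scale):
    # Neutrino mass matrix of the Weinberg operator; scale = v_u^2 / Lambda
    Y1, Y2, Y3 = Y
    return scale * np.array([[2*Y1, -Y3, -Y2],
                             [-Y3, 2*Y2, -Y1],
                             [-Y2, -Y1, 2*Y3]])

def M_charged(a, b, alphas, scale):
    # Charged-lepton mass matrix for <phi_3> = v_3 (1, a, b)^T; scale = 0.1 * v_d
    a1, a2, a3 = alphas
    return scale * np.array([[a1, a1*b, a1*a],
                             [a2*a, a2, a2*b],
                             [a3*b, a3*a, a3]])

def lepton_observables(M_nu, M_cl):
    # Lepton masses and |U_PMNS| from the two mass matrices; the left-handed
    # rotations diagonalize M^dagger M, and the columns still need to be
    # reordered and rephased to match the PDG convention.
    m_e2, Ue = np.linalg.eigh(M_cl.conj().T @ M_cl)
    m_n2, Un = np.linalg.eigh(M_nu.conj().T @ M_nu)
    return np.sqrt(np.abs(m_e2)), np.sqrt(np.abs(m_n2)), np.abs(Ue.conj().T @ Un)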
For our model, the resulting 12 flavor observables depend on 6 real dimensionless parameters Re τ, Im τ, a, b, α_1/α_3, and α_2/α_3, as well as two dimensionful overall mass scales v_u^2/Λ and 0.1 v_d α_3.
To show that the model can accommodate the observed flavor structure of the SM lepton sector,
we scan its parameter space and compare the resulting flavor observables to experimental data,
with the best-fit values shown in <Ref>.
As an approximate measure of the goodness of our fit, we introduce a χ^2 function
χ^2 = ∑_iχ_i^2 ,
consisting of a quadratic sum of one-dimensional chi-square projections for each observable.
Here, we assume that the uncertainties of the fitted observables are independent of each other
and do not account for the small correlations among experimental errors of sin^2θ_23 and other quantities.
For the mixing angles and the neutrino squared mass differences, we determine the value of χ_i^2
directly from the one-dimensional projections determined by the global analysis NuFIT v5.3 <cit.>,
which are available on their website. This is necessary
to account for the strong non-Gaussianities in the uncertainties of the mixing parameters in the PMNS matrix.
For these observables, we refrain from considering corrections from renormalization group running,
given that their contribution is expected to be small compared to the size of the experimental errors.
For the charged-lepton masses, we determine the value of χ_i^2 by
χ_i = (μ_i,exp. - μ_i,model)/σ_i ,
where μ_i,model denotes the resulting value for the ith observable of the model,
while μ_i,exp. and σ_i refer to its experimentally observed central value
and the size of the 1σ uncertainty interval given in <Ref>, respectively.
The total value of χ^2 for all considered observables may then be interpreted to indicate an
agreement with the experimental data at a √(χ^2) σ confidence level (C.L.).
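For the observables with approximately Gaussian uncertainties, the χ_i^2 contributions reduce to the usual quadratic form; a simple helper (the neutrino mixing parameters instead use the tabulated one-dimensional NuFIT projections) is:

import numpy as np

def chi2_gaussian(model, central, sigma):
    # Quadratic chi^2 for observables with approximately Gaussian errors,
    # e.g. the charged-lepton mass ratios and y_tau.
    model, central, sigma = map(np.asarray, (model, central, sigma))
    return float(np.sum(((central - model) / sigma) ** 2))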
To scan the parameter space of the model and minimize the χ^2 function, we use the dedicated code
<cit.>. We find that the model is
in agreement with current experimental observations.
The best-fit point in the parameter space of our model is at
τ = -0.0119 + 1.005 i , a =-0.392 , b =0.380 , 0.1 α_3 v_d = 0.130 GeV
α_1/α_3 = -22.0 , α_2/α_3 = 4.78 × 10^-3 , v_u^2/Λ = 0.0221 eV ,
where we obtain χ^2=0.08, meaning that all resulting observables are within their experimental 1σ interval, cf. also <Ref>.
In <Ref>, we present the regions in moduli space that yield results with χ^2≤ 25.
For the specific values given in <Ref>, the relations among the couplings of the flavon superpotential of <Ref> read
β_2 = 4.27 β_1 , β_3 = 83.5 β_1 , β_5 = 142 β_4 , β_6 = 121 β_1 .
Any values of β_1 and β_4 then solve the F-term equations of the driving fields, cf. <Ref>, and ensure the specific vacuum alignment of <Ref>.
The resulting observables at the best-fit point given in <Ref> lie well within the 1σ intervals of the experimental data shown in <Ref>, and read
m_e/m_μ = 0.00473 , m_μ/m_τ = 0.0451 , y_τ = 0.795 ,
sin^2θ_12 = 0.306 , sin^2θ_13 = 0.02231 , sin^2θ_23 = 0.568 ,
δ_CP^ℓ = 1.52 π rad , η_1 = 1.41 π rad , η_2 = 0.351 π rad ,
m_1 = 49 meV , m_2 = 50 meV , m_3 = 0.75 meV .
Moreover, the resulting sum of neutrino masses, the neutrino mass observable in ^3H beta decay, and the effective neutrino mass for neutrinoless double beta decay, are
∑ m_i = 100 meV ,
m_β = 50 meV , and
m_ββ = 48 meV ,
which are consistent with their latest experimental bounds
∑ m_i < 120 meV <cit.>, m_β < 800 meV <cit.>, and m_ββ < 156 meV <cit.>.
It is to be noted that our predicted value of the effective neutrino mass m_ββ is challenged by experimental bounds determined with certain nuclear matrix element estimates <cit.>, as illustrated in <Ref>.
We remark that the model can be consistent with both octants of θ_23, while only being compatible with Dirac CP-violating phases in the range 1.36 π < δ_CP^ℓ < 1.55 π at a 3σ C.L., as shown in <Ref>.
Moreover, the inverted-ordering neutrino masses are predicted to lie within the narrow ranges
48 meV < m_1 < 50 meV ,
49 meV < m_2 < 51 meV ,
0.72 meV < m_3 < 0.78 meV ,
at a 3σ C.L. The numerical analysis suggests that the model prefers a neutrino spectrum with inverted ordering.
For a normal-ordering spectrum we only obtain a match with experimental data just barely in the 3σ interval with χ^2≈ 7.
§ DM ABUNDANCE
Let us start by identifying the DM candidate. Since the flavon field only couples to charged leptons,
the phenomenological implications for DM in our model are determined by the charged-lepton Yukawa interactions and the flavon potential.
The interactions between the SM charged leptons and the flavon scalar ϕ_3 are given by
ℒ_CL = -∑_i=1^3v_d/2α_i ψ_E_i^C(ψ_Eϕ_3)_1 +h.c.
⟶ -∑_i=1^3v_d/2α_i ψ_E_i^C(ψ_E(ϕ_3 +v_3 (1,a,b)^T))_1 +h.c. ,
where ψ_E denotes the fermionic part of the charged-lepton component of L. In the last line, we expand the flavon ϕ_3 around its VEV v_3 (1,a,b)^T, see <Ref>. From the leading flavon superpotential terms in <Ref>,[We ignore the terms suppressed by Λ_ϕ.] we obtain
ℒ_ϕ = -1/2Λ_ϕβ_1 ( ψ_ζ_3ψ_ϕ_3)_1 -1/2β_2 (ψ_ζ_3ψ_ϕ_3ϕ_3)_1 -1/2Λ_ϕβ_4 (ψ_ζ_1”ψ_ϕ_1')_1+h.c.
→ -1/2Λ_ϕβ_1 ( ψ_ζ_3ψ_ϕ_3)_1 -1/2β_2 (ψ_ζ_3ψ_ϕ_3(ϕ_3 +v_3 (1,a,b)^T))_1
-1/2Λ_ϕβ_4 (ψ_ζ_1”ψ_ϕ_1')_1+h.c.
= -1/2Λ_ϕβ_1 ( ψ_ζ_3ψ_ϕ_3)_1 -4.27/2β_1 (ψ_ζ_3ψ_ϕ_3(ϕ_3 +v_3 (1,a,b)^T))_1
-1/2Λ_ϕβ_4 (ψ_ζ_1”ψ_ϕ_1')_1+h.c. ,
where ϕ_3 is again expanded around its VEV in the second line and we use the relations of <Ref> in the last line.
The DM candidate is the lightest Dirac fermion mass eigenstate built as a linear combination of the Weyl components of the driving fields and flavon fields, ψ_ζ_i, ψ_ϕ_i, whereas the mediator is a linear combination of the flavon scalars. The particle content relevant for DM production is outlined in <Ref>. After the Higgs and the flavons acquire VEVs as given in <Ref>,[The reheating temperature is chosen to be T_R =150 GeV such that DM production begins after EW symmetry breaking. Recall that the EW symmetry breaking temperature of the crossover is around 160 GeV with a width of 5 GeV <cit.>. We acknowledge that higher reheating temperatures are also possible, in which case the production of DM happens before EW symmetry breaking. We leave this possibility for future work.] the symmetry group of <Ref> gets broken down to U(1)_EM×U(1)_R.
Interactions in <Ref> allow for processes as shown in the diagrams in <Ref>.
Furthermore, we also consider the scalar potential
V = (∂ W/∂φ^i) K^ij̅ (∂ W/∂φ^j)^* + ∑_i=1,2,3 | m_ϕϕ_3,i |^2 + | m_ϕϕ_1'|^2 ,
where we have added the soft masses m_ϕ for the flavon scalars. We assume that m_ϕ is the same for the four scalars and should be of the order of the SUSY breaking scale. It has been shown in <cit.> that for modular A_4 supersymmetric models, the SUSY breaking scale is constrained to be above 6 TeV.
Therefore, we choose values of order m_ϕ = 20-1000 TeV. Furthermore, the A-term in <Ref> does not play a significant role in DM production at tree level, since the production of DM through flavon scalar annihilations is suppressed due to the low reheating temperature we consider; we therefore also ignore the A-term in our analysis.
It turns out that in this model the correct DM relic abundance can be obtained with a freeze-in scenario <cit.> (cf. <cit.>, where a freeze-out production mechanism is utilized). This can be seen as follows. First, note that the effective scalar flavon mass is obtained by diagonalizing the mass matrix that follows from <Ref>. If we assume that the soft mass m_ϕ≫ m_E_i, m_ψ_ϕζ, where m_E_i and m_ψ_ϕζ = m_DM∼Λ_ϕβ_1 denote the masses of the charged leptons and of the DM, respectively, then the mediator mass (of order m_ϕ + m_ψ_ϕζ) is much larger than the DM and the charged-lepton masses. Therefore, the diagram in <Ref> reduces to the effective 4-fermion operator indicated in <Ref>, with a coupling of ∼β_2 α_m/m_ϕ^2. In a freeze-out mechanism, if the rate ⟨σ v ⟩ at which DM annihilates decreases, then the DM relic abundance increases. Since we must require β_2 ≪ 4 π to retain perturbativity, and we expect α_1 = -6.97, as explained at the end of <Ref>, then
⟨σ v ⟩∼|β_2 α_m/m_ϕ^2|^2 ≪|88/m_ϕ^2|^2 .
Using micrOMEGAs 5.3.41 <cit.>, we find that for the chosen values of Λ_ϕ and m_ϕ, too much DM is produced. Since increasing m_ϕ or lowering β_2 would only decrease ⟨σ v ⟩, the DM abundance cannot be reduced to the observed DM abundance in a freeze-out scenario.[If we had chosen high enough reheating temperatures, then the A-terms of <Ref> would have dominated the production of DM. In this case, the connection between DM and flavor parameters is weakened.] On the other hand, for a freeze-in scenario, we have the opposite behavior. Specifically, if ⟨σ v ⟩ decreases, the amount of produced DM also decreases. So, we can choose smaller β_2 or larger m_ϕ values to obtain the observed relic abundance of DM in the Universe.
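The freeze-out side of this argument can be illustrated with a rough dimensional estimate. The following sketch is only an order-of-magnitude estimate for s-wave annihilation into charged leptons through the heavy scalar, with illustrative masses, and not the numerical computation quoted above:

import numpy as np

GEV2_TO_CM3S = 1.17e-17   # 1 GeV^-2 expressed in cm^3/s
coupling = 88.0           # upper bound on |beta_2 alpha|, cf. beta_2 << 4*pi and alpha_1 = -6.97
m_phi = 63e3              # mediator mass in GeV (illustrative, ~63 TeV)
m_dm = 20.0               # DM mass in GeV (illustrative)

# s-wave annihilation through the heavy scalar: sigma v ~ g^2 m_DM^2 / (8 pi m_phi^4)
sigma_v = coupling**2 * m_dm**2 / (8 * np.pi * m_phi**4) * GEV2_TO_CM3S
print(f"<sigma v> ~ {sigma_v:.1e} cm^3/s")   # far below the ~3e-26 cm^3/s needed at freeze-out
print(f"freeze-out relic abundance too large by a factor ~{3e-26 / sigma_v:.0e}")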
We now proceed to present the predictions of our model for the DM abundance after performing a parameter scan. We use micrOMEGAs 5.3.41 <cit.> for the DM abundance computation and FeynRules 2.0 <cit.> to create the corresponding model files <cit.>. As mentioned earlier, we assume tanβ = 60 and a low reheating temperature of T_R =150 GeV.
From our discussion we see that we have 4 free parameters to determine the DM in our model:
* the scalar flavon soft-mass m_ϕ,
* the flavor breaking scale Λ_ϕ,
* the coupling β_1 in <Ref>, and
* the coupling β_4 in <Ref>.
We fix 10^-6≤β_1, β_4 ≤ 10^-4 and 20 TeV ≤m_ϕ, Λ_ϕ≤ 1000 TeV.
These bounds for the couplings β_1, β_4 respect the perturbativity of all the couplings β_i (cf. <Ref>).
In <Ref> we show the prediction for the DM relic abundance as a function of β_1 and β_4, for Λ_ϕ > m_ϕ (<Ref>), m_ϕ = Λ_ϕ (<Ref>), and m_ϕ < Λ_ϕ (<Ref>). The unshaded region indicates the excluded parameter space where too much DM is produced. We see that all plots in <Ref> exhibit similar behavior. Either β_1 or β_4 can be as large as 10^-4, but not both at the same time. This is consistent with the fact that freeze-in normally requires small couplings <cit.>. Furthermore, the abundance increases as we increase either β_1 or β_4, which is consistent with the fact that the DM relic abundance increases with ⟨σ v ⟩.
<Ref> shows the predicted DM relic abundance as a function of m_ϕ and Λ_ϕ for β_1 > β_4 (<Ref>), β_1 = β_4 (<Ref>), and β_1 < β_4 (<Ref>). The unshaded space represents the excluded parameter space. We observe a similar behavior for the three plots in <Ref>. The DM abundance decreases if either m_ϕ or Λ_ϕ grows. Furthermore, the combination of a soft mass m_ϕ = 20 TeV and a flavor breaking scale Λ_ϕ = 20 TeV is excluded in all cases for the chosen values of the other parameters.
Finally, in <Ref> we show the correlation between the minimum scalar flavon mass m_ϕ_MIN and DM mass m_DM for fixed values of β_1 > β_4 (<Ref>), β_1 = β_4 (<Ref>), and β_1 < β_4 (<Ref>). By comparing <Ref>, we find that for a given value of the soft-mass m_ϕ, the minimum scalar flavon mass m_ϕ_MIN constrained by the DM relic abundance can be derived. We see that the scalar flavon mass can be as low as ∼63 TeV for all cases illustrated in <Ref>. Moreover, the DM mass can be as low as ∼16 GeV for β_1 = 6× 10^-5 and β_4 =10^-4.
§ CONCLUSION
We constructed a flavor model based on the finite modular group Γ_3≅ A_4 that simultaneously explains the flavor parameters in the lepton sector and accounts for the observed DM relic abundance in the Universe. The 12 lepton flavor parameters are determined by 8 real parameters: the two components of the modulus VEV, the four flavon VEVs, ⟨ϕ_i⟩, and two dimensionful parameters that set the mass scales of charged-lepton and neutrino masses. We identify a DM candidate composed of the fermionic parts of the flavon superfields and the driving superfields. The mediator is the scalar part of the flavon superfield which interacts with the charged-lepton sector of the SM. We obtain a good fit to the lepton flavor parameters (3 charged lepton masses, 3 mixing angles, 1 phase, 2 neutrino squared mass differences) with χ^2 = 0.08 for an inverted-hierarchy neutrino spectrum. The lepton flavor fit fixes the couplings of the charged leptons to the DM mediator as well as the flavon VEVs ⟨ϕ_i⟩.
These VEVs satisfy the F-term equations to retain supersymmetry at high energies, determining thereby the coupling between our DM candidate and the mediator. Interestingly, our model exhibits 4 additional degrees of freedom that are left free and serve to achieve a DM relic abundance which does not exceed the observed value Ω_CDM = 0.265. These parameters are 2 dimensionless couplings β_1, β_4, the flavor breaking scale Λ_ϕ, and the soft-mass for the flavon m_ϕ. We find that if the mediator mass is assumed to be much larger than the DM and the charged-lepton masses, then the appropriate DM production mechanism is freeze-in rather than freeze-out. We observe that a viable DM relic abundance can be generated in regions of the parameter space constrained by 10^-6≤β_1,β_4 ≤ 10^-4, 20 TeV ≤m_ϕ, Λ_ϕ≤ 1000 TeV, tanβ = 60, and T_R = 150 GeV.
Although some amount of tuning is necessary in our model to identify the best parameter values, we point out that it is the choice of charges and modular weights that renders the right flavon superpotential, which in turn delivers the alignment of the flavon VEVs ⟨ϕ_i⟩. Further, the phenomenologically viable value of the modulus VEV ⟨τ⟩ might be achieved through mechanisms such as those in <cit.>.
As a feasible outlook of our findings, it would be interesting to study the possibility of applying this new scenario in top-down models such as <cit.>. Another interesting aspect to explore would be the study of additional constraints on the DM free parameters β_1, β_4, Λ_ϕ and m_ϕ. This could be done by using constraints from electron scattering searches like XENON1T <cit.>, SENSEI <cit.> or DAMIC <cit.>. Furthermore, this model only accounts for the lepton sector, but it should be extended to the quark sector. In this case, we could also use searches from nucleon scattering like LUX <cit.>, DEAP-3600 <cit.>, PandaX-II <cit.>, DarkSide <cit.> or EDELWEISS <cit.>. We leave these intriguing questions for upcoming work.
§.§ Acknowledgments
The work of M.-C.C. and V.K.-P. was partially supported by the U.S. National Science Foundation under Grant No. PHY-1915005.
The work of S.R.-S. was partly supported by UNAM-PAPIIT grant IN113223 and Marcos Moshinsky Foundation. This work was also supported by UC-MEXUS-CONACyT grant No. CN-20-38.
V.K.-P. would like to thank Kevork N. Abazajian, Max Fieg, Xueqi Li, Xiang-Gan Liu, Gopolang Mohlabeng, Michael Ratz and Miša Toman for fruitful discussions. M-C.C.and V.K.-P. would also like to thank Instituto de Física, UNAM, for the hospitality during their visit. A.B. would like to thank the Department of Physics and Astronomy at UCI for the hospitality during his visit. V.K.-P. also thanks the opportunity to use the cluster at Instituto de Física, UNAM. This work utilized the infrastructure for high-performance and high-throughput computing, research data storage and analysis, and scientific software tool integration built, operated, and updated by the Research Cyberinfrastructure Center (RCIC) at the University of California, Irvine (UCI). The RCIC provides cluster-based systems, application software, and scalable storage to directly support the UCI research community. https://rcic.uci.edu
Altarelli:2010gt
G. Altarelli and F. Feruglio, Rev. Mod. Phys. 82 (2010), 2701,
[hep-ph].
Ishimori:2010au
H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada, and M. Tanimoto,
Prog. Theor. Phys. Suppl. 183 (2010), 1,
[hep-th].
Hernandez:2012ra
D. Hernández and A. Y. Smirnov, Phys. Rev. D 86 (2012), 053014,
[hep-ph].
King:2013eh
S. F. King and C. Luhn, Rept. Prog. Phys. 76 (2013), 056201,
[hep-ph].
King:2014nza
S. F. King, A. Merle, S. Morisi, Y. Shimizu, and M. Tanimoto, New J. Phys.
16 (2014), 045018, [hep-ph].
King:2017guk
S. F. King, Prog. Part. Nucl. Phys. 94 (2017), 217,
[hep-ph].
Feruglio:2019ybq
F. Feruglio and A. Romanino, Rev. Mod. Phys. 93 (2021), no. 1, 015007,
[hep-ph].
Altarelli:2005yx
G. Altarelli and F. Feruglio, Nucl. Phys. B 741 (2006), 215,
.
Feruglio:2009iu
F. Feruglio, C. Hagedorn, and L. Merlo, JHEP 03 (2010), 084,
[hep-ph].
Ding:2013hpa
G.-J. Ding, S. F. King, C. Luhn, and A. J. Stuart, JHEP 05 (2013),
084, [hep-ph].
Li:2013jya
C.-C. Li and G.-J. Ding, Nucl. Phys. B 881 (2014), 206,
[hep-ph].
Muramatsu:2016bda
Y. Muramatsu, T. Nomura, and Y. Shimizu, JHEP 03 (2016), 192,
[hep-ph].
King:2019tbt
S. F. King and Y.-L. Zhou, JHEP 05 (2019), 217,
[hep-ph].
CarcamoHernandez:2019eme
A. E. Cárcamo Hernández and S. F. King, Nucl. Phys. B 953 (2020),
114950, [hep-ph].
Chen:2021prl
M.-C. Chen, V. Knapp-Pérez, M. Ramos-Hamud, S. Ramos-Sánchez, M. Ratz,
and S. Shukla, Phys. Lett. B 824 (2022), 136843,
[hep-ph].
Hirsch:2010ru
M. Hirsch, S. Morisi, E. Peinado, and J. W. F. Valle, Phys. Rev. D 82
(2010), 116003, [hep-ph].
He:2024iju
X.-G. He, X.-D. Ma, M. A. Schmidt, G. Valencia, and R. R. Volkas,
[hep-ph].
Acaroglu:2023phy
H. Acaroğlu, M. Blanke, J. Heisig, M. Krämer, and L. Rathmann, JHEP
06 (2024), 179, [hep-ph].
Acaroglu:2021qae
H. Acaroğlu and M. Blanke, JHEP 05 (2022), 086,
[hep-ph].
Feruglio:2017spp
F. Feruglio, Are neutrino masses modular forms?, 227, 2019, pp. 227–266.
Criado:2018thu
J. C. Criado and F. Feruglio, SciPost Phys. 5 (2018), no. 5, 042,
[hep-ph].
Kobayashi:2018vbk
T. Kobayashi, K. Tanaka, and T. H. Tatsuishi, Phys. Rev. D 98 (2018),
no. 1, 016004, [hep-ph].
deAnda:2018ecu
F. J. de Anda, S. F. King, and E. Perdomo, Phys. Rev. D 101 (2020),
no. 1, 015028, [hep-ph].
Okada:2018yrn
H. Okada and M. Tanimoto, Phys. Lett. B 791 (2019), 54,
[hep-ph].
Ding:2019xna
G.-J. Ding, S. F. King, and X.-G. Liu, Phys. Rev. D 100 (2019),
no. 11, 115005, [hep-ph].
Kobayashi:2019xvz
T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, and T. H. Tatsuishi, Phys.
|
http://arxiv.org/abs/2409.02853v1 | 20240904162335 | Hardy perturbations of subordinated Bessel heat kernels | [
"Krzysztof Bogdan",
"Tomasz Jakubowski",
"Konstantin Merz"
] | math.AP | [
"math.AP",
"math-ph",
"math.FA",
"math.MP",
"math.PR"
] |
Hardy perturbations of subordinated Bessel heat kernels
Krzysztof Bogdan, Department of Pure and Applied Mathematics, Wrocław University of Science and Technology, Hoene-Wrońskiego 13C, 50-376 Wrocław, Poland; [email protected]
Tomasz Jakubowski, Department of Pure and Applied Mathematics, Wrocław University of Science and Technology, Hoene-Wrońskiego 13C, 50-376 Wrocław, Poland; [email protected]
Konstantin Merz, Institut für Analysis und Algebra, Technische Universität Braunschweig, Universitätsplatz 2, 38106 Braunschweig, Germany; [email protected]
2020 Mathematics Subject Classification: Primary 47D08, 60J35
§ ABSTRACT
Motivated by the spectral theory of relativistic atoms, we prove matching upper and lower bounds for the transition density of Hardy perturbations of subordinated Bessel heat kernels. The analysis is based on suitable supermedian functions, in particular invariant functions.
This research was supported in part by grant Opus 2023/51/B/ST1/02209 of National Science Center, Poland (K.B.), and the PRIME programme of the German Academic Exchange Service DAAD with funds from the German Federal Ministry of Education and Research BMBF (K.M.).
§ INTRODUCTION AND MAIN RESULT
Bessel operators are fundamental for mathematical analysis, theoretical physics, and applied sciences. We are interested in their fractional powers with added critically singular Hardy potential,
(-d^2/dr^2-2ζ/r d/dr)^α/2-κ r^-α
in L^2(_+,r^2ζdr) with _+=(0,∞).
Here and in what follows,
ζ∈(-1/2,∞),
α∈(0,2]∩(0,2ζ+1),
and κ∈ℝ is called the coupling constant of the Hardy potential.
Below, we call (the potential) κ r^-α attractive (mass-creating) if κ>0 and repulsive (mass-killing) if κ<0.
Our interest in the operators (<ref>) comes from the spectral theory of atoms.
We pursue a program initiated in <cit.> to describe the different rates of spectral dissipation of mass in various angular momentum channels in , as parameterized by different values of ζ, α, and κ. From the point of view of theoretical physics, the most important is the three-dimensional pseudo-relativistic case α=1, with ζ=1+ℓ, ℓ=0,1,…, and Coulomb potential κ r^-1. Here, the operator (<ref>) arises as one of the direct summands of the direct sum decomposition of (-Δ)^1/2-κ/|x| in L^2(^3) into different angular momentum channels, indexed by ℓ∈_0. Here,
κ defines the strength of coupling between the nucleus and electrons in an atom. The passage from the state space ^3, or, more generally, with d∈, to the half-line _+ in (<ref>) is inspired by the intertwining in <cit.>. In fact, in the present paper, we abstract from the setting of <cit.> and study, in maximal generality, a semigroup on _+ associated with (<ref>).
The spectral analysis of <cit.> in will be picked up again in the forthcoming paper <cit.>, where we also clarify the definition of (<ref>) as a self-adjoint operator.
The main result of the present paper, Theorem <ref> below, provides a crucial technical ingredient to be used in <cit.>, namely non-explosion and sharp heat kernel bounds for a semigroup associated with (<ref>).
To state the result, we briefly introduce necessary notation, but defer precise definitions to the ensuing subsections. In particular, here and below, we use a specific parameterization of the coupling constant κ in (<ref>). Thus, for ζ∈(-1/2,∞), we let
κ=Ψ_ζ(η) :=
2^α Γ((2ζ+1-η)/2) Γ((α+η)/2) / [Γ(η/2) Γ((2ζ+1-η-α)/2)] if α∈(0,2∧ (2ζ+1)) and η∈(-α,2ζ+1),
(2ζ-1-η)η if α=2<2ζ+1 and η∈ℝ.
Here, as usual, A∧ B:=min{A,B} and A∨ B:=max{A,B}.
We introduce further notation as we proceed; it is summarized in the index at the end of the paper.
Clearly, Ψ_ζ(η) is symmetric about η=(2ζ+1-α)/2.
In view of <cit.>, its maximum is
κ_ c(ζ,α):=Ψ_ζ((2ζ+1-α)/2).
For instance, in the three-dimensional, pseudorelativistic situation with ζ=1+ℓ,
κ_ c(1+ℓ,1) = 2Γ(1+ℓ/2)^2/Γ((ℓ+1)/2)^2.
In particular, for ℓ=0, we get κ_ c(1,1)=2/π while for ℓ=1, we get κ_ c(2,1)=π/2.
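The numerical values above are easy to confirm. The following short Python sketch (ours, for illustration only; the names Psi and kappa_c are not notation from the paper, and the sample inputs are arbitrary) evaluates the parameterization κ=Ψ_ζ(η) above for α∈(0,2) with the Gamma function and reproduces κ_ c(1,1)=2/π and κ_ c(2,1)=π/2.

```python
import numpy as np
from scipy.special import gamma

def Psi(zeta, alpha, eta):
    # kappa = Psi_zeta(eta) for 0 < alpha < 2, as in the display above
    return 2**alpha * gamma((2*zeta + 1 - eta) / 2) * gamma((alpha + eta) / 2) \
        / (gamma(eta / 2) * gamma((2*zeta + 1 - eta - alpha) / 2))

def kappa_c(zeta, alpha):
    # critical coupling constant: the maximum of Psi, attained at eta = (2*zeta + 1 - alpha)/2
    return Psi(zeta, alpha, (2*zeta + 1 - alpha) / 2)

print(kappa_c(1.0, 1.0), 2 / np.pi)   # ell = 0: both equal 0.63661...
print(kappa_c(2.0, 1.0), np.pi / 2)   # ell = 1: both equal 1.57079...
```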
In the general situation, Ψ_ζ(η) is strictly positive for η∈(0,2ζ+1-α), zero for η∈{0,2ζ+1-α}, and tends to -∞ as η→ -M or η→2ζ+1-α+M. Here M:=α if α<2 and M:=∞ for α=2. Thus, the parameterization κ=Ψ_ζ(η) only produces
κ∈ (-∞,κ_ c(ζ,α)].
For this range,
we may and often do restrict η to the interval (-M,(2ζ+1-α)/2].
Moreover, for ζ>ζ'>-1/2, we have Ψ_ζ(η)>Ψ_ζ'(η) for all
η in the intersection of the domains of Ψ_ζ and Ψ_ζ'.
By the Hardy inequality, in fact, by the ground state representation <cit.>, the quadratic form corresponding to (<ref>) on L^2(_+,r^2ζ dr) is bounded from below if κ=Ψ_ζ(η) for some η
and it is unbounded from below if κ>κ_ c(ζ,α).
In the former case, the form is bounded from below by zero. This observation motivates to call κ_ c(ζ,α) the critical coupling constant.
Summarizing, in most of the discussion below, (<ref>) is substituted by
(-d^2/dr^2-2ζ/r d/dr)^α/2-Ψ_ζ(η) r^-α,
for η admissible, that is, in the domain of Ψ_ζ, see (<ref>).
We call 2ζ+1 the effective dimension.
This is because the reference measure for the Hilbert space in which (<ref>) is defined is r^2ζdr=r^2ζ+1dr/r, where dr/r is the Haar measure on the multiplicative group (0,∞).
As we shall see, η conveniently parametrizes Hardy potentials, and, of course, α/2 is the order of subordination. All the operators (<ref>) are homogeneous of order α (with respect to dilations), but
the interplay of the three parameters critically influences their potential-theoretic properties.
We let
p_ζ^(α)(t,r,s), t, r,s>0,
be the transition density associated to (<ref>) with κ=0; see (<ref>) and (<ref>) for actual definitions in the cases α=2 and α<2, respectively. Here, t>0 is time, but r,s∈ (0,∞) are positions in space.
Moreover, we let
p_ζ,η^(α)(t,r,s), t, r,s>0,
be the Schrödinger perturbation of p_ζ^(α) by the Hardy potential Ψ_ζ(η)/r^α,
as in Definition <ref>.
In <cit.>, we will show that p_ζ,η^(α) equals the heat kernel of (<ref>) in the sense of quadratic forms.
The main contribution of the present paper are the following matching upper and lower, or sharp, bounds for p_ζ,η^(α), which are proved in Section <ref>.
If ζ∈(-1/2,∞), α∈(0,2)∩(0,2ζ+1), and η∈(-α,(2ζ+1-α)/2], then
p_ζ,η^(α)(t,r,s)
∼_ζ,α,η(1∧r/t^1/α)^-η (1∧s/t^1/α)^-η p_ζ^(α)(t,r,s),
and p_ζ,η^(α)(t,r,s)
is a jointly continuous function of t,r,s>0.
Here, we write A ∼ B, or B ≲ A ≲ B, if A, B ≥ 0 and c^-1B ≤ A ≤ cB for a constant c ∈ (0,∞). We write A ∼_τ B if c = c_τ depends on τ, etc. The dependence on fixed parameters, like ζ,α,η is usually omitted.
In view of the sharp Hardy inequality in <cit.>, one may duly expect a blowup for supercritical coupling constants κ, i.e., κ>κ_ c(ζ,α).
Indeed, the interpretation and proof of the following corollary to Theorem <ref> are given in Subsection <ref> below.
Let ζ∈(-1/2,∞) and α∈(0,2). Then the heat kernel associated with (<ref>) blows up if κ >κ_ c(ζ,α).
For α=2, we have the identity
p_ζ,η^(2)(t,r,s) = (rs)^-ηp_ζ-η^(2)(t,r,s).
It is proved, e.g., by Metafune, Negro, and Spina <cit.> by a change variables; see also the beginning of Section <ref> for a sketch of proof. For α=2, (<ref>) and (<ref>) yield that p_ζ,η^(2)(t,r,s) is jointly continuous on (0,∞)^3 and obeys (<ref>), provided ∼ is replaced by the notation ≍ explained in the paragraph below (<ref>). Moreover, the blow up result in Corollary <ref> extends to α=2.
Below in this paper we develop potential-theoretic techniques, which we refer to as the integral method, to study kernels appearing in <cit.> and <cit.>. This may be considered as a complementary study largely independent of <cit.> and <cit.>. Namely, we abstract from the setting of and the angular momentum channels parameterized by ℓ in <cit.> by projecting, transferring, or intertwining <cit.> onto the half-line _+. This allows us to use sub-Markovian semigroups to capture delicate oscillations that influence the different rates of dissipation of mass for different spherical harmonics. This method is not only relevant for this context but is also of independent interest. The addition of the Hardy potential has an analogous effect on the heat kernel, changing the dissipation rate but preserving scaling.
In fact, additional motivation for our work
comes from the fact that models with scaling are of fundamental interest for PDEs, functional analysis, probability, and physics; for further connections, see, e.g.,
<cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
The main methodological inspiration for this work comes from Bogdan, Grzywny, Jakubowski, and Pilarczyk <cit.> and Jakubowski and Wang <cit.>. However, the present development differs in several important ways and is more complex. Namely, the projection onto ℝ_+ unfortunately eliminates both the origin and rotational invariance from the setting. Furthermore, we address η > 0 and η < 0 simultaneously. Moreover, the integral method for negative perturbations involves rather delicate compensation.
Finally, our resolution is expected to have significant consequences for the multi-channeled spectral resolution of the Laplacian and related operators.
Let us note that Cho, Kim, Song, and Vondraček <cit.> (see also Song, Wu, and Wu <cit.>) and Jakubowski and Maciocha <cit.> recently proved heat kernel bounds for the fractional Laplacian with Hardy potential on _+ and Dirichlet boundary condition on (_+)^c. As in our analysis, <cit.> use the integral method introduced in <cit.>, but, in contrast to our approach, they cannot rely on subordination since the Dirichlet fractional Laplacian is
different from the fractional Dirichlet Laplacian, see, e.g., <cit.>.
Consequently, the results of <cit.> and <cit.> are different than ours, as can be readily seen by comparing the respective parameterizations of the coupling constants, cf. <cit.> and (<ref>).
In the following subsections, we make our setting more precise. To this end, we first define the subordinated Bessel heat kernels p_ζ^(α) and recall some of their properties.
Then we give a precise definition of p_ζ,η^(α).
At the end of this section, we outline implications of Theorem <ref> and the structure of the rest of the paper.
Recall that in <cit.>, we will prove that p_ζ,η^(α) equals the heat kernel of (<ref>) in the sense of quadratic forms for κ=Ψ_ζ(η), but we mention this fact only for motivation and of course do not use it in the present paper.
§.§ Subordinated Bessel heat kernels
We define the Bessel heat kernel
p_ζ^(2)(t,r,s) := (rs)^1/2-ζ/(2t) exp(-(r^2+s^2)/(4t)) I_ζ-1/2(rs/(2t)), r,s,t>0.
Here, for z∈ℂ∖(-∞,0], I_ν(z) denotes the modified Bessel function of the first kind of order ν∈ℝ <cit.>.
Note that
p_ζ^(2)(t,r,0)
= 2^-2ζ/Γ((2ζ+1)/2) · t^-(2ζ+1)/2 exp(-r^2/(4t))
by the series expansion <cit.> of the modified Bessel function.
The kernel p_ζ^(2)(t,r,s) with the reference (speed) measure r^2ζdr on _+ is the transition density of the Bessel process with index ζ-1/2 reflected at the origin. We remark that the Bessel process with index ζ-1/2 killed at the origin has the transition density (<ref>), but with I_ζ-1/2(·) replaced with I_|ζ-1/2|(·). When ζ≥1/2, the Bessel process never hits the origin, i.e., no condition (reflecting or killing) takes effect or needs to be imposed at the origin. See, e.g., <cit.>. We also remark that p_ζ^(2)(t,r,s) is the heat kernel of the nonnegative Bessel operator of index ζ-1/2 in the Liouville form,
L_ζ = -d^2/dr^2 - 2ζ/r d/dr in L^2(_+,r^2ζdr).
More precisely, we understand L_ζ as the Friedrichs extension of the corresponding quadratic form on C_c^∞(_+) if ζ∈[1/2,∞), but we understand L_ζ as the Krein extension of the corresponding quadratic form on C_c^∞(_+) if ζ∈(-1/2,1/2]. In both cases, L_ζ is nonnegative and in particular self-adjoint. We refer, e.g., to <cit.>, <cit.>, or <cit.> for further details and references regarding the spectral theory of L_ζ, in particular for operator and quadratic form domains and (form) cores. In that setting,
p_ζ^(2)(t,r,s) = e^-tL_ζ(r,s), t,r,s>0.
This information is important to prove Theorem <ref> when α=2.
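For α=2 the kernel above is explicit, and it is convenient to have a numerical sanity check at hand. The following minimal Python sketch (ours; the helper name p2 and the sample parameters are illustrative assumptions, not notation of the paper) evaluates p_ζ^(2) via the exponentially scaled modified Bessel function and verifies the normalization stated in the proposition below. Folding the Gaussian factor into the scaled Bessel function ive is a design choice that avoids overflow of I_ν for large arguments.

```python
import numpy as np
from scipy.special import ive        # ive(nu, x) = exp(-x) * I_nu(x)
from scipy.integrate import quad

def p2(t, r, s, zeta):
    # Bessel heat kernel with reference measure s**(2*zeta) ds; with ive the
    # exponential factor becomes the numerically harmless exp(-(r-s)^2/(4t)).
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

zeta, t, r = 1.3, 0.7, 2.0           # arbitrary sample parameters
mass, _ = quad(lambda s: s**(2 * zeta) * p2(t, r, s, zeta), 0, np.inf)
print(mass)                          # ~ 1.0, the normalization in the proposition below
```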
Next, we define, for α∈(0,2), the α/2-subordinated Bessel heat kernels. To that end, recall that for α∈(0,2) and t>0, by Bernstein's theorem, the completely monotone function [0,∞)∋λ↦ e^-tλ^α/2 is the Laplace transform of a probability density function _+∋τ↦σ_t^(α/2)(τ). That is,
e^-tλ^α/2 = ∫_0^∞ dτ σ_t^(α/2)(τ) e^-τλ, t>0, λ≥0,
see, e.g., <cit.>.
We give some useful properties of and sharp estimates for σ_t^(α/2)(τ), and references in <cit.>. Using (<ref>), we define the α/2-subordinated Bessel heat kernel with reference measure r^2ζdr on _+ as
p_ζ^(α)(t,r,s) := ∫_0^∞ dτ σ_t^(α/2)(τ) p_ζ^(2)(τ,r,s), r,s,t>0.
Crucially, p_ζ^(α)(t,r,s) is a probability transition density. In fact, it also defines a strongly continuous contraction semigroup on L^2(_+,r^2ζdr). We record this in the following proposition, which is proved, e.g., in <cit.>.
Let ζ∈(-1/2,∞), α∈(0,2], and t,t',r,s>0. Then p_ζ^(α)(t,r,s)>0,
∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s) = 1,
∫_0^∞ dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(t',z,s) = p_ζ^(α)(t+t',r,s),
p_ζ^(α)(t,r,s) = t^-(2ζ+1)/α p_ζ^(α)(1,r/t^1/α,s/t^1/α),
and {p_ζ^(α)(t,·,·)}_t>0 is a strongly continuous contraction semigroup on L^2(_+,r^2ζdr).
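For the physically relevant case α=1, the subordination density is explicit, namely σ_t^(1/2)(τ)=t(2√π)^-1 τ^-3/2 exp(-t^2/(4τ)), so the subordinated kernel can be evaluated numerically from its definition. The following rough Python sketch (ours; all parameter values are arbitrary, and p2 is the helper from the previous sketch, repeated here to keep the example self-contained) checks the Laplace transform of σ_t^(1/2), the normalization, and the scaling relation from the proposition above for p_ζ^(1).

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p2(t, r, s, zeta):
    # Bessel heat kernel (alpha = 2), as in the sketch above
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

def sigma_half(t, tau):
    # 1/2-stable subordinator density, Laplace transform exp(-t * sqrt(lambda))
    return t / (2 * np.sqrt(np.pi)) * tau**(-1.5) * np.exp(-t**2 / (4 * tau))

def p1(t, r, s, zeta):
    # subordinated Bessel heat kernel for alpha = 1, directly from the definition
    return quad(lambda tau: sigma_half(t, tau) * p2(tau, r, s, zeta), 0, np.inf)[0]

zeta, t, r = 1.0, 0.8, 1.5
print(quad(lambda tau: sigma_half(t, tau) * np.exp(-2 * tau), 0, np.inf)[0],
      np.exp(-t * np.sqrt(2)))                                      # Laplace transform check
print(quad(lambda s: s**(2 * zeta) * p1(t, r, s, zeta), 0, np.inf)[0])   # ~ 1.0
print(p1(t, r, 2.0, zeta),
      t**(-(2 * zeta + 1)) * p1(1.0, r / t, 2.0 / t, zeta))         # scaling relation, alpha = 1
```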
§.§ Hardy-type Schrödinger perturbations
Let ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), and recall that M=α when α∈(0,2) and M=∞ when α=2. For η∈(-M,2ζ+1), we let
q(z) := Ψ_ζ(η)/z^α, z>0.
Because of symmetry of Ψ_ζ(η) about η=(2ζ+1-α)/2, as a rule we assume η∈(-M,(2ζ+1-α)/2] when considering q.
In the Appendix <ref>, for the convenience of the reader, we discuss rather general Schrödinger perturbations of transition densities. The focus of this paper is, however, on the following cases.
For ζ∈(-1/2,∞), α∈(0,2)∩(0,2ζ+1), and η∈(-α,(2ζ+1-α)/2], let
p_ζ,η^(α) be
the Schrödinger perturbation of p_ζ^(α) by q in (<ref>), defined in the Appendix <ref>.
For α=2 with α<2ζ+1, and η∈(-∞,(2ζ-1)/2], let p_ζ,η^(2)(t,r,s) be the heat kernel of the self-adjoint operator L_ζ-Ψ_ζ(η)/r^2, with the domain described in <cit.>.
Thus, in the case α=2, we typically refer to the literature. For α∈ (0,2), we rely on the discussion
in the Appendix <ref>: a Schrödinger perturbation adds mass to the semigroup if q>0 and decreases mass if q<0. Thus, for η=0, we have q=0 and p_ζ,0^(α)=p_ζ^(α). For η∈(0,(2ζ+1-α)/2], we have q> 0 and
p_ζ,η^(α)(t,r,s) := ∑_n≥0 p_t^(n,D)(r,s), where
p_t^(0,D)(r,s) := p_ζ^(α)(t,r,s), and
p_t^(n,D)(r,s) := ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z) q(z) p_t-τ^(n-1,D)(z,s), n∈ℕ.
Each term p_t^(n,D)(r,s) may be understood as an iteration of Duhamel's formula below,
hence the superscript D.
Of course, we have the following bound
p_ζ,η^(α)(t,r,s) ≥ p_ζ^(α)(t,r,s) for η∈(0,(2ζ+1-α)/2].
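To make the recursion just defined concrete, the following rough Python sketch (ours) evaluates the first correction term p_t^(1,D)(r,s) by numerical quadrature. For simplicity it uses the explicit α=2 Bessel kernel as a stand-in for the unperturbed kernel and an arbitrary positive coupling constant, so it only illustrates the structure of the Duhamel iteration, not a statement of the paper, and the result is accurate to a few digits only.

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import dblquad

def p2(t, r, s, zeta):
    # explicit alpha = 2 Bessel kernel, used here as a stand-in for p_zeta^(alpha)
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

def duhamel_term1(t, r, s, zeta, coupling, alpha=2.0):
    # p_t^{(1,D)}(r,s) = int_0^t dtau int_0^inf dz z^{2 zeta} p(tau,r,z) q(z) p(t-tau,z,s)
    # with q(z) = coupling / z^alpha; a rough quadrature for illustration.
    integrand = lambda z, tau: z**(2 * zeta) * p2(tau, r, z, zeta) \
        * coupling * z**(-alpha) * p2(t - tau, z, s, zeta)
    return dblquad(integrand, 0.0, t, 0.0, np.inf)[0]

print(duhamel_term1(0.5, 1.0, 1.5, 1.5, 0.9))   # first mass-creating correction for a coupling > 0
```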
For η∈(-α,0), we have q<0 and p_ζ,η^(α)≤ p_ζ^(α); see the Appendix <ref>, in particular Theorem <ref>, for the construction and properties of negative Schrödinger perturbations of transition densities using the more comfortable and general time-inhomogeneous setting. See the remarks before Theorem <ref>, too.
We also note that to deal with η<0 in Subsection <ref>,
we use the results on η>0 from Subsection <ref>. The structure of Section <ref> reflects this connection.
We also note that p_ζ,η^(α) may also be defined by the Feynman–Kac formula (for bridges), but the singularity of z^-α in (<ref>) makes this problematic for large negative Ψ_ζ(η), i.e., η close to -α.
Actually, a viable (probabilistic) approach to negative η, based on the Feynman-Kac formula and approximation by the killed semigroup, is suggested by <cit.>, but we prefer to keep the paper more analytic, and propose the potential-theoretic approach of the Appendix <ref>.
In the following, we write h_β(r):=r^-β for r>0 and β∈ℝ, and abbreviate h:=h_η.
We record some fundamental properties of p_ζ,η^(α).
Let ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), η∈(-M,(2ζ+1-α)/2], and r,s,t,t'>0. Then the following statements hold.
We have the Chapman–Kolmogorov equation,
∫_0^∞ dz z^2ζ p_ζ,η^(α)(t,r,z) p_ζ,η^(α)(t',z,s)
= p_ζ,η^(α)(t+t',r,s).
We have the Duhamel formula
p_ζ,η^(α)(t,r,s)
= p_ζ^(α)(t,r,s) + ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z) q(z) p_ζ,η^(α)(t-τ,z,s)
= p_ζ^(α)(t,r,s) + ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) q(z) p_ζ^(α)(t-τ,z,s).
We have the scaling relations p_t^(n,D)(r,s)=t^-(2ζ+1)/α p_1^(n,D)(r/t^1/α,s/t^1/α) and
p_ζ,η^(α)(t,r,s)
= t^-(2ζ+1)/α p_ζ,η^(α)(1,r/t^1/α,s/t^1/α).
Let η∈[0,(2ζ+1-α)/2]. Then the function h(r)=r^-η is supermedian with respect to p_ζ,η^(α), i.e.,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) h(s)
≤ h(r).
In particular, p_ζ,η^(α)(t,r,s)<∞ for all t,r>0 and almost all s>0.
For α=2, these claims follow by using the explicit expression in (<ref>).
Thus, suppose α∈(0,2) from now on.
The scaling (<ref>) follows from (<ref>), (<ref>) when η≥0 and Theorem <ref> when η<0, and induction.
For η≥0, the other three properties follow, e.g., from <cit.> and <cit.> together with the computation (<ref>).
For η<0, the Chapman–Kolmogorov equation and the Duhamel formula follow by the construction in Theorem <ref>.
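For α=2 the perturbed kernel is explicit, and the Chapman–Kolmogorov equation of the proposition above can be illustrated numerically. The following Python sketch (ours; the parameter values are arbitrary) compares ∫_0^∞ dz z^2ζ p_ζ,η^(2)(t,r,z) p_ζ,η^(2)(t',z,s) with p_ζ,η^(2)(t+t',r,s), using the change-of-variables identity p_ζ,η^(2)(t,r,s)=(rs)^-η p_ζ-η^(2)(t,r,s) stated in the introduction.

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p2(t, r, s, zeta):
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

def p2_pert(t, r, s, zeta, eta):
    # alpha = 2 Hardy perturbation via p_{zeta,eta}^{(2)}(t,r,s) = (rs)^{-eta} p_{zeta-eta}^{(2)}(t,r,s)
    return (r * s)**(-eta) * p2(t, r, s, zeta - eta)

zeta, eta, t1, t2, r, s = 1.6, 0.5, 0.3, 0.7, 0.8, 1.4
ck, _ = quad(lambda z: z**(2 * zeta) * p2_pert(t1, r, z, zeta, eta)
             * p2_pert(t2, z, s, zeta, eta), 0, np.inf)
print(ck, p2_pert(t1 + t2, r, s, zeta, eta))    # Chapman-Kolmogorov for the perturbed kernel
```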
§.§ Implications of Theorem <ref>
In the pioneering paper <cit.>, Bogdan, Grzywny, Jakubowski, and Pilarczyk proved estimates for the heat kernel of the homogeneous Schrödinger operator (-Δ)^α/2-κ/|x|^α on ^d, sometimes called Hardy operator, with α∈(0,2∧ d) and 0≤κ≤κ^*,
where
κ^*=2^αΓ((d+α)/4)^2 /Γ((d-α)/4)^2
= Ψ_d-1/2(d-α/2).
Among others, their bounds were crucial for the systematic study of the L^p-Sobolev spaces generated by powers of (-Δ)^α/2-κ/|x|^α in <cit.>—see the founding paper <cit.> dealing with the case α=2—and to prove the strong Scott conjecture concerning the electron distribution of large relativistic atoms <cit.>.
In <cit.>, we made a first step into a more detailed analysis of the Hardy operator by taking its spherical symmetry into account. More precisely, for d∈, ℓ∈ L_d⊆_0, and m∈ M_d,ℓ⊆^d-2 (for d≥3)[Explicit descriptions of L_d and M_d,ℓ are not important here, but can be found in <cit.>.], let Y_ℓ,m denote the orthonormal basis of (hyper)spherical harmonics in L^2(^d-1). In <cit.>, we considered the function space
V_ℓ,m:={u(|x|)|x|^ℓ Y_ℓ,m(x/|x|): u∈ L^2(_+,r^d+2ℓ-1dr)}
of functions with given angular momentum ℓ.
For [u]_ℓ,m(x):=u(|x|)|x|^ℓ Y_ℓ,m(x/|x|) with u∈ L^2(_+,r^d-1+2ℓdr), we showed
⟨[u]_ℓ,m,((-Δ)^α/2-κ/|x|^α)[u]_ℓ,m⟩_L^2(^d)
= ⟨ u,((-d^2/dr^2-2ζ/rd/dr)^α/2-κ/r^α)u⟩_L^2(_+,r^d+2ℓ-1 dr),
whenever α<d+2ℓ, κ=Ψ_(d-1+2ℓ)/2(η) and all η∈(-α,d+2ℓ). In particular, the two sides of (<ref>) are nonnegative if and only if κ≤Ψ_(d-1+2ℓ)/2((d+2ℓ-α)/2).
Moreover, we constructed the generalized ground states and proved a ground state representation or Hardy identity for the quadratic forms (<ref>), see <cit.>. This refined the ground state representation of (-Δ)^α/2-κ/|x|^α in L^2(^d), first proved in <cit.> by Fourier transform[For α=2, the ground state representation of -Δ-κ/|x|^2 was known earlier; see, e.g., <cit.>.]; see also <cit.> for a more abstract approach.
These generalized ground states are just h(r)=r^-η and are crucial for our estimates of
p_ζ,η^(α)(t,r,s).
The main result of our forthcoming paper <cit.> states that p_ζ,η^(α)(t,r,s) is indeed the heat kernel of the restriction of (-Δ)^α/2-Ψ_(d+2ℓ-1)/2(η)/|x|^α to V_ℓ,m.
Let us now comment on open problems and connections.
The above discussion indicates that the estimates in Theorem <ref>
will be important to advance the study of L^p-Sobolev spaces generated by powers of Hardy operators.
Our bounds should also lead to limits at the origin and self-similar solutions for the considered kernels, see <cit.>.
Extensions to other homogeneous operators, including those with gradient perturbations, are of interest, too, see Metafune, Negro, and Spina <cit.>
and Kinzebulatov, Semënov, and Szczypkowski <cit.>.
Moreover, Theorem <ref> and the results in our forthcoming paper <cit.> are paramount for a more precise description of the location of electrons in large, relativistically described atoms. Let us explain the connection and future plans in more detail. Frank, Merz, Siedentop, and Simon <cit.> consider large atoms by accounting for relativistic effects close to the nucleus using the so-called Chandrasekhar operator. The main result of <cit.> is that, in the limit where the number of electrons and the nuclear charge goes to infinity, the ground state density, i.e., the probability density of finding one electron, converges to the so-called hydrogenic density, i.e., the sum of absolute squares of the eigenfunctions of √(-Δ+1)-1-κ/|x|, where κ is the (rescaled) interaction strength between the nucleus and the electrons. In fact, <cit.> shows a finer result stating that the ground state density of electrons with prescribed angular momentum ℓ∈_0 converges to the sum of absolute squares of the eigenfunctions of the operator
(-d^2/dr^2-2ζ/r d/dr+1)^α/2-1-κ r^-α in L^2(_+,r^2ζdr),
with α=1 and ζ=ℓ+1/2. The operator in (<ref>) is a bounded perturbation of (<ref>), but has, unlike (<ref>), infinitely many negative eigenvalues[This is because of the summand +1 in (-d^2dr^2-2ζr ddr+1)^α/2, which breaks the homogeneity and necessitates a distinction between large and small frequencies, the latter being responsible for the emergence of eigenvalues.].
Moreover, <cit.> provides pointwise estimates for the hydrogenic density using the heat kernel estimates for the homogeneous operator (-Δ)^α/2-κ/|x|^α established in <cit.>.
In this connection, we note that the estimates in the present paper and in <cit.>, together with the arguments developed in <cit.>, will lead to bounds for the sum of absolute squares of eigenfunctions (<ref>). In particular, we can show that for all t>0, there is c_t>0 such that
∑_n≥1|ϕ_n(r)|^2
≤ c_t r^-2η, 0<r≤ t,
where η is in the one-to-one correspondence to κ given by (<ref>) and
{ϕ_n}_n∈ are the eigenfunctions corresponding to the negative eigenvalues of the operator in (<ref>). We are optimistic that, for ζ and α fixed, the upper bound in (<ref>) is optimal.
In fact, the short-distance upper bound for the hydrogenic density of √(1-Δ)-1-κ/|x|, obtained in <cit.>, is optimal due to the lower bound on the ground state of √(1-Δ)-1-κ/|x|, which was proved by Jakubowski, Kaleta, and Szczypkowski in <cit.> by using lower bounds for the corresponding heat kernel in <cit.>. While upper heat kernel bounds for (<ref>) follow from those in the present paper, <cit.>, and subordination, proving corresponding lower bounds will require different techniques.
§.§ Structure of the paper
In Section <ref>, we state further pointwise bounds and a 3G inequality for the subordinated Bessel heat kernel p_ζ^(α).
In Section <ref>, we prove that the function h(r)=r^-η is invariant under p_ζ,η^(α). We also state further integral equalities and inequalities involving p_ζ,η^(α) and h(r).
In Section <ref>, we prove Theorem <ref> and Corollary <ref>.
Finally, in the Appendix <ref>, we provide an abstract discussion of Schrödinger perturbations of transition densities, which may be of independent interest.
§.§.§ Acknowledgments
K.M. thanks Haruya Mizutani for his hospitality at Osaka University, where parts of this research were carried out.
We also thank Volker Bach, Rupert Frank, Wolfhard Hansen, Jacek Małecki, Haruya Mizutani, and Grzegorz Serafin for helpful discussions.
§ BOUNDS FOR SUBORDINATED BESSEL HEAT KERNELS
§.§ Pointwise bounds and explicit expressions
For s,r ∈ℝ, t>0, and α∈(0,2], let p^(α)(t,r,s) be the stable density on the real line, i.e.,
p^(α)(t,r,s) := 1/(2π)∫_-∞^∞ exp(-i(s-r)z-t|z|^α) dz.
Note that
p^(2)(t,r,s) = 1/√(4π t)exp(-(r-s)^2/4t)
is the Gauß–Weierstraß kernel and, by <cit.>,
p^(α)(t,r,s) ∼t/(t^1/α + |r-s|)^1+α for α∈(0,2).
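As a quick illustration of the Fourier definition above (ours, not needed for the arguments), the integral can be evaluated numerically and compared with the two classical closed forms: the Cauchy kernel p^(1)(t,r,s)=t/(π(t^2+(r-s)^2)) for α=1 and the Gauß–Weierstraß kernel for α=2.

```python
import numpy as np
from scipy.integrate import quad

def p_stable(t, x, alpha):
    # symmetric alpha-stable density at x = s - r, via the (even) Fourier representation
    return quad(lambda z: np.cos(x * z) * np.exp(-t * z**alpha), 0, np.inf)[0] / np.pi

t, x = 0.7, 1.2
print(p_stable(t, x, 1.0), t / (np.pi * (t**2 + x**2)))                      # Cauchy kernel
print(p_stable(t, x, 2.0), np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)) # Gauss-Weierstrass kernel
```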
From <cit.> and <cit.>, we record the following sharp upper and lower bounds for p_ζ^(α)(t,r,s).
Let ζ∈(-1/2,∞). Then, there are c,c'>0 such that for all r,s,t>0,
p_ζ^(2)(t,r,s)
≍_ζ t^-1/2exp(-(r-s)^2/c t)/(rs+t)^ζ
≍_ζ(1∧r/t^1/2)^ζ (1∧s/t^1/2)^ζ (1/rs)^ζ· t^-1/2·exp(-(r-s)^2/c' t).
Moreover, for all α∈(0,2) and all r,s,t>0,
p_ζ^(α)(t,r,s)
∼_ζ,α t/(|r-s|^1+α(r+s)^2ζ + t^(1+α)/α(t^1/α+r+s)^2ζ)
∼_ζ,α p^(α)(t,r,s) (t^1/α + r+s)^-2ζ.
Here and below the notation ≍_ζ combines an upper bound and a lower bound similarly as ∼_ζ, but the constants in exponential factors (e.g., the constants c and c' in (<ref>)–(<ref>)) may be different in the upper and the lower bounds. Note that we allow the constants in the exponential factors to depend on ζ. Then, for instance, (<ref>) is equivalent to the statement that there are positive c_j,ζ, j∈{1,2,3,4}, with
c_1,ζ t^-1/2exp(-(r-s)^2/c_2,ζ t)/(rs+t)^ζ≤ p_ζ^(2)(t,r,s)
≤ c_3,ζ t^-1/2exp(-(r-s)^2/c_4,ζ t)/(rs+t)^ζ.
For an explicit expressions for p_ζ^(α)(t,r,s) in the physically important case α=1, we refer, e.g., to Betancor, Harboure, Nowak, and Viviani <cit.>.
Below, we will also use the following convenient estimate.
Let ζ∈(-1/2,∞) and α∈(0,2). Then, for every σ,λ∈[0,2ζ+1] such that σ+λ∈[0,2ζ+1], we have
p_ζ^(α)(τ,z,s) ≲ (τ^1/α+z+s)/(τ^1/α+|z-s|) · τ^-λ/α s^-σ z^σ+λ-2ζ-1.
In particular if z<s/2 or s ≲τ^1/α z, then
p_ζ^(α)(τ,z,s) ≲τ^-λ/α s^-σ z^σ+λ-2ζ-1.
Since 0<σ≤ 2ζ+1, by (<ref>) and (<ref>), we have
p_ζ^(α)(τ,z,s)
∼τ/(τ^1/α+|z-s|)^1+α (τ^1/α+z+s)^-2ζ
= τ(τ^1/α+z+s)/(τ^1/α+|z-s|)^1+α (τ^1/α+z+s)^-λ(τ^1/α+z+s)^-σ(τ^1/α+z+s)^σ+λ-2ζ-1
≤τ^1/α+z+s/τ^1/α+|z-s|τ^-λ/α s^-σ z^σ+λ-2ζ-1.
For z<s/2 or s/2 ≤τ^1/α z, we have τ^1/α+z+s/τ^1/α+|z-s|≲ 1 and consequently p_ζ^(α)(τ,z,s) ≲τ^-λ/α s^-σ z^σ+λ-2ζ-1.
§.§ Comparability results for p_ζ^(α)
The following lemma is proved in <cit.> and allows to compare two kernels p_ζ^(α) at different times and positions with each other.
Let ζ∈(-1/2,∞) and α∈(0,2].
* Let z,s>0, 0<C≤ 1, and τ∈[C,C^-1]. Then, there exist c_j=c_j(ζ,C)∈(0,∞), j∈{1,2}, such that
p_ζ^(2)(1,c_1 z,c_1 s) ≲_C,ζ p_ζ^(2)(τ,z,s) ≲_C,ζ p_ζ^(2)(1,c_2 z,c_2 s),
p_ζ^(α)(τ,z,s) ∼_C,ζ,α p_ζ^(α)(1,z,s), α<2.
* Let C,τ>0 and 0<z≤ s/2<∞. Then, there is c=c(ζ,C)∈(0,∞) with
p_ζ^(α)(τ,z,s)
≲_C,ζ,α p_ζ^(α)(τ,c,c s) 1_{τ>C}
+ (τ^-1/2 e^-cs^2/τ/(τ+s^2)^ζ 1_{α=2} + τ/(s^2ζ+1+α+τ^(2ζ+1+α)/α) 1_{α∈(0,2)}) 1_{τ<C},
p_ζ^(α)(τ,z,s)
≲_ζ,α s^-(2ζ+1).
* Let 0<τ≤1, 0<z≤ s/2, and s≥ C>0. Then, there is c=c(ζ,C)∈(0,∞) with
p_ζ^(α)(τ,z,s) ≲_C,ζ,α p_ζ^(α)(1,c,c s).
* There is c=c(ζ)∈(0,∞) such that for all r,s>0,
min{p_ζ^(α)(1,1,r),p_ζ^(α)(1,1,s)}≲_ζ,α p_ζ^(α)(1,c r,c s).
* Let r,s,z,t>0 with |z-s|>|r-s|/2. Then, there is c=c(ζ)∈(0,∞) with
p_ζ^(α)(t/2,z,s) ≲_ζ,α p_ζ^(α)(t,cr,cs).
The constants in exponential factors in Lemma <ref> may change from place to place when α=2.
The following lemma describes the concentration of mass of p_ζ^(α) at the origin. It is similar to <cit.>.
Let ζ∈(-1/2,∞) and α∈(0,2]. Then, there is c=c_ζ,α∈(0,1) such that for any R>1,
c < ∫_0^2R dw w^2ζ p_ζ^(α)(τ,z,w) ≤ 1, τ∈(0,1], z<R.
The upper bound follows from the normalization of p_ζ^(α)(τ,z,w).
We now prove the lower bound in (<ref>).
First, consider α∈(0,2). Let x=z ∨τ^1/α. Then, for w ∈ (x,x+τ^1/α) ⊂ (0,2R), we have τ^1/α + |z-w| ∼τ^1/α and τ^1/α+z+w ∼ w ∼ x. Hence, by (<ref>) and (<ref>), we get
∫_0^2R dw w^2ζ p_ζ^(α)(τ,z,w) ≳∫_x^x+τ^1/α dw τ/(τ^1/α + |z-w|)^1+α (τ^1/α+z+w)^-2ζ w^2ζ
≳∫_x^x+τ^1/α dw τ^-1/α = 1.
Applying (<ref>), by the same proof, we get the result for α=2.
§.§ A 3G inequality for p_ζ^(α) with α∈(0,2)
For the proof of the continuity of the heat kernel for all α∈(0,2) and η∈(-α,(2ζ+1-α)/2] in Theorem <ref>, we will use the following 3G inequality for p_ζ^(α)(t,r,s).
Let ζ∈(-1/2,∞) and α∈(0,2). For t,τ,t,s,r,z>0,
p_ζ^(α)(t,r,z) p_ζ^(α)(τ,z,s)/p^(α)(t+τ,r,s)≲p_ζ^(α)(t,r,z)/(τ^1/α+z+s)^2ζ + p_ζ^(α)(τ,z,s)/(t^1/α+z+r)^2ζ.
It suffices to use (<ref>) and the 3G inequality in <cit.>
p^(α)(t,r,z) p^(α)(τ,z,s) ≲ p^(α)(t+τ,r,s) [p^(α)(t,r,z) + p^(α)(τ,z,s) ].
There is a similar 3G inequality in <cit.>. However, the present 3G inequality in Lemma <ref> and that in <cit.> do not imply each other.
§ INTEGRAL ANALYSIS OF P_Ζ,Η^(Α)
The integral analysis in this section is a potential-theoretic study based on suitable supermedian functions. In particular, we show that the function s^-η is invariant for the transition density p^(α)_ζ,η,
see (<ref>).
The main results of this section are Theorems <ref> and <ref>, which are key elements in the proof of Theorem <ref>. Let us now sketch the main ideas behind the integral analysis presented in this section. The method, developed in <cit.> and further improved in <cit.>, builds on the results of <cit.>.
For flexibility in what follows, we introduce an extra parameter β acting as a proxy for η. To
prove Theorem <ref>, we essentially show that Ψ_ζ(β) L_ζ^-α/2 s^-α-β = s^-β, see (<ref>).
As a consequence, we obtain (<ref>) for η = 0; see Lemma <ref>. Next, by applying Duhamel's formula and Lemma <ref>, we derive (<ref>), which is almost (<ref>). To fully establish (<ref>), we prove the convergence of the integrals in (<ref>); see Corollary <ref> and Lemma <ref>.
We point out Proposition <ref> as a crucial step in our integral analysis, in which we estimate the mass of p_ζ,η^(α). It is the key to proving the estimates stated in Corollary <ref>; the case η<0 being simpler since the convergence of the integrals follows from the inequality p_ζ,η^(α)≤ p_ζ^(α).
The result stated in Theorems <ref> and <ref> may be viewed as follows. Denote L_ζ,η^(α) := L_ζ^α/2 - Ψ_ζ(η)s^-α. If we let t→∞ in (<ref>), then we formally get
(Ψ_ζ(β) -Ψ_ζ(η)) (L_ζ,η^(α))^-1 s^-β-α= s^-β,
which is another expression of
L_ζ,η^(α) s^-β= (Ψ_ζ(β) -Ψ_ζ(η)) s^-β-α. The latter right-hand side changes sign at β=η, so we may informally conclude that for the operator L_ζ,η^(α), the function s^-β is subharmonic, harmonic and superharmonic if β <η, β=η and β>η, respectively. In fact, in Theorems <ref> and <ref>, we prove that s^-β is supermedian for p_ζ,η^(α) if β≥η and h(s) = s^-η is even invariant for p_ζ,η^(α), which is crucial to prove the lower bounds.
As we shall see, the estimates of Bessel and subordinated Bessel heat kernels also play an important role in our development.
When proving ground state representations for (<ref>) in <cit.>, we arrived at the invariant function h(r) by integrating p_ζ^(α)(t,r,s) against suitable functions in space and time, as suggested by the road map in <cit.>.
In the following, we call two parameters β,γ∈ admissible, whenever they satisfy γ∈(-1,∞) and β∈(1,(2ζ-γ)/α).
For admissible β,γ, we recall from <cit.> that the function[The function h_β,γ is also well-defined for β∈(0,(2ζ-γ)/α].]
h_β,γ(r)
:= ∫_0^∞dt/t t^β∫_0^∞ ds s^γ p_ζ^(α)(t,r,s)
= C^(α)(β,γ,ζ) r^αβ+γ-2ζ, r>0,
is finite where
C^(α)(β,γ,ζ)
:= Γ(β)/Γ(αβ/2) C(αβ/2,γ,ζ), with
C(β,γ,ζ)
:= 2^-2βΓ (β) Γ(1/2 (γ+1)) Γ(1/2 (2ζ-2β-γ))/Γ(1/2(2ζ-γ)) Γ(1/2(2β+γ+1)).
Moreover, the Fitzsimmons ratio (see <cit.>, <cit.>) is
q_β,γ(r) := (β-1) h_β-1,γ(r)/h_β,γ(r), r>0.
In particular, we recover
q(z) = (β-1)h_β-1,γ(z)/h_β,γ(z), z>0
from (<ref>), whenever β,γ are admissible, to wit, γ∈(-1,∞) and β∈(1,(2ζ-γ)/α), and are chosen so that αβ+γ-2ζ=-η. This is, e.g., the case when β=(2ζ-γ-η)/α with ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), η∈[0,(2ζ+1-α)/2], and γ∈(-1,ζ-(α+1)/2).
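The displayed relation for q(z) is a pure Gamma-function identity and can be checked directly. In the following Python sketch (ours; the names Psi and C_alpha and the parameter values are one arbitrary admissible choice, not notation of the paper), the ratio (β-1)C^(α)(β-1,γ,ζ)/C^(α)(β,γ,ζ) is compared with Ψ_ζ(η) for η=2ζ-γ-αβ.

```python
import numpy as np
from scipy.special import gamma

def Psi(zeta, alpha, eta):
    return 2**alpha * gamma((2*zeta + 1 - eta) / 2) * gamma((alpha + eta) / 2) \
        / (gamma(eta / 2) * gamma((2*zeta + 1 - eta - alpha) / 2))

def C_alpha(beta, gam, zeta, alpha):
    # C^(alpha)(beta,gamma,zeta) = Gamma(beta)/Gamma(alpha*beta/2) * C(alpha*beta/2,gamma,zeta)
    b = alpha * beta / 2
    C = 2**(-2*b) * gamma(b) * gamma((gam + 1) / 2) * gamma((2*zeta - 2*b - gam) / 2) \
        / (gamma((2*zeta - gam) / 2) * gamma((2*b + gam + 1) / 2))
    return gamma(beta) / gamma(b) * C

zeta, alpha, gam, eta = 2.0, 1.0, -0.5, 0.7
beta = (2*zeta - gam - eta) / alpha          # then alpha*beta + gamma - 2*zeta = -eta
ratio = (beta - 1) * C_alpha(beta - 1, gam, zeta, alpha) / C_alpha(beta, gam, zeta, alpha)
print(ratio, Psi(zeta, alpha, eta))          # both equal Psi_zeta(eta)
```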
We will use the function h_β,γ and its variant h_β,γ^(+) in Section <ref> to obtain invariant functions for p_ζ,η^(α). When doing so, the additional flexibility provided by β and γ will be helpful.
§.§ Continuity
For α∈(0,2), we will use the following lemma to prove that p_ζ,η^(α) is a strongly continuous contraction semigroup and that p_ζ,η^(α)(t,r,s) is continuous in t,r,s>0.
Let ζ∈(-1/2,∞), α∈(0,2] and δ∈ (0,2ζ+1). Then,
∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) s^-δ∼ r^-δ 1_{τ<r^α} + τ^-δ/α 1_{τ>r^α} = r^-δ∧τ^-δ/α
and
∫_0^t dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) s^-δ ∼
t (r^α∨ t)^-δ/α for δ<α
log(1+t/r^α) for δ=α
r^-δ (t∧ r^α) for δ>α.
We only prove this lemma for α∈(0,2); the proof for α=2 is analogous and, in fact, easier using the explicit expression for p_ζ^(2)(t,r,s).
Formula (<ref>) follows from integrating (<ref>). Hence, we only need to prove (<ref>). By scaling we may and do assume r=1. We first show that for τ>1,
∫_0^∞ ds s^2ζ p_ζ^(α)(τ,1,s)s^-δ∼τ^-δ/α.
Indeed, since τ>1, we have |s-1|< s+1 <3τ^1/α for s<2τ^1/α. Hence, by (<ref>),
∫_0^2τ^1/α ds s^2ζ p_ζ^(α)(τ,1,s) s^-δ∼τ^-(2ζ+1)/α∫_0^2τ^1/α ds s^2ζ-δ∼τ^-δ/α.
For s>2τ^1/α, we have s+1 ∼ |s-1| ∼ s. Thus,
∫_2τ^1/α^∞ ds s^2ζ p_ζ^(α)(τ,1,s) s^-δ∼τ∫_2τ^1/α^∞ ds s^-δ-1-α∼τ^-δ/α,
which yields (<ref>).
Now, suppose τ≤ 1. Note that s+1+τ^1/α∼ s ∨ 1. Additionally, for s<1/2, p^(α)(τ,1,s) ∼τ. Hence, by (<ref>),
∫_0^∞ ds s^2ζ p_ζ^(α)(τ,1,s) s^-δ≲∫_1/2^∞ ds p^(α)(τ,1,s) +τ∫_0^1/2 ds s^2ζ-δ≲ 1,
where we used that p^(α)(τ,1,s) is the density of a probability distribution.
Finally, for 1<s<1+τ^1/α, p^(α)(τ,1,s) ∼τ^-1/α. Thus,
∫_0^∞ ds s^2ζ p_ζ^(α)(τ,1,s) s^-δ≳∫_1^1+τ^1/α ds τ^-1/α = 1,
which, together with (<ref>), yields (<ref>).
We now prove that t↦ p_ζ,η^(α)(t,·,·) is a strongly continuous contraction semigroup. This property will also be important to prove the pointwise continuity of p_ζ,η^(α)(t,r,s) with respect to all r,s,t>0.
Let ζ∈(-1/2,∞), α∈(0,2], and η∈(-α,(2ζ+1-α)/2].
Then {p_ζ,η^(α)(t,·,·)}_t≥0 is a strongly continuous contraction semigroup on L^2(_+,r^2ζdr).
For α=2, Proposition <ref> follows, e.g., from the explicit expression for p_ζ,η^(2) in (<ref>).
Thus, we consider α∈(0,2) in the following.
Since we verified the semigroup property in Proposition <ref>, it suffices to verify the strong continuity and the contraction property.
By the symmetry of p_ζ,η^(α), the supermedian property (<ref>), and a Schur test, p_ζ,η^(α) is a contraction on L^2(_+,r^2ζdr) for every t>0.
It remains to prove the strong continuity of p_ζ,η^(α)(t,·,·). Since C_c^∞(_+) is dense in L^2(_+,r^2ζdr) and p_ζ^(α)(t,·,·) is strongly continuous (see Lemma <ref>), it follows from Duhamel's formula, that p_ζ,η^(α)(t,·,·) is strongly continuous, if lim_t↘0T_tϕ_L^2(_+,r^2ζdr)=0 holds for every nonnegative function ϕ∈ C_c^∞(_+), where T_t is the integral operator acting as
(T_tϕ)(r)
= ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z)q(z) ∫_0^∞ ds s^2ζ p_ζ,η^(α)(t-τ,z,s)ϕ(s).
By the definition of p_ζ,η^(α), the integral kernel of T_t is maximal for η=(2ζ+1-α)/2 (this is clear for η≥ 0 by (<ref>); for η<0, see Lemma <ref>). Thus, it suffices to consider this case. Since ϕ∈ C_c^∞(_+), we have ϕ≲_ϕ h^*, with h^*(r)=r^-(2ζ+1-α)/2. By the supermedian property
(<ref>) and Lemma <ref> with δ = ζ+(1+α)/2 ∈ (0,2ζ+1),
T_tϕ_L^2(_+,r^2ζdr)^2
≲_ϕT_t h^*_L^2(_+,r^2ζdr)^2
≲∫_0^∞ dr r^2ζ(∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z)q(z)h^*(z))^2
≲∫_0^∞ dr r^-(α+1)(t∧ r^α)^2.
Here we used αβ+γ-2ζ=-ζ-(1+α)/2<-α, which allows us to use the third line in (<ref>).
The conclusion follows from an application of the dominated convergence theorem.
§.§ The case η∈(0,(2ζ+1-α)/2]
One of the main ingredients in the proof of heat kernel estimates is the invariance of the ground state expressed in terms of the heat kernel.
This is the content of the following important theorem.
Let ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), η∈(0,(2ζ+1-α)/2], and r,t>0. Then,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)s^-η
= r^-η.
Moreover, for β∈[0,2ζ+1-α-η), we have
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)s^-β
= r^-β + (1-Ψ_ζ(β)/Ψ_ζ(η))∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s) q(s) s^-β.
For α=2, Theorem <ref> follows, e.g., from the explicit expression for p_ζ,η^(2) in (<ref>). Thus, we may freely assume α∈(0,2) in the following, although all lemmas, propositions, and theorems in this subsection also hold true for α=2.
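For α=2 the invariance asserted in the theorem above can indeed be seen numerically: the following rough Python sketch (ours; the parameter values are arbitrary admissible choices) evaluates the left-hand side with p_ζ,η^(2)(t,r,s)=(rs)^-η p_ζ-η^(2)(t,r,s) and compares it with r^-η.

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p2(t, r, s, zeta):
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

zeta, eta, t, r = 1.8, 0.6, 0.5, 1.3         # eta in (0, (2*zeta-1)/2], and zeta - eta > -1/2
lhs, _ = quad(lambda s: s**(2*zeta) * (r*s)**(-eta) * p2(t, r, s, zeta - eta) * s**(-eta),
              0, np.inf)
print(lhs, r**(-eta))                        # invariance of h(r) = r^{-eta} for alpha = 2
```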
To prove Theorem <ref>, we need several auxiliary statements, which follow almost exactly as in <cit.>.
In Proposition <ref>, we estimate the mass of p_ζ,η^(α) by 1+(r/t^1/α)^-η.
Auxiliary bounds to that end are contained in Lemmas <ref> and <ref>. Lemmas <ref> and <ref> contain the heart of the proof of Theorem <ref>. Here, we integrate the zeroth and n-th summands of the perturbation series (<ref>) against monomials. We note the role of the auxiliary parameter β, in particular (<ref>), in proving (<ref>).
We also note that (<ref>) plays an important role to prove the upper bound in (<ref>) in Theorem <ref> for small distances to the origin; see Lemmas <ref> and <ref>.
We apply Lemma <ref> in Corollary <ref> to obtain a weaker version of Theorem <ref>, which will be useful to prove the upper bounds for p_ζ,η^(α); see, in particular, Lemma <ref>.
These results together with the finiteness of certain space and space-time integrals over p_ζ,η^(α) in Lemma <ref> eventually enable us to conclude the proof of Theorem <ref>.
In Corollary <ref>, we collect some convenient consequences of Theorem <ref> that will be used to prove the bounds for p_ζ,η^(α) when η>0.
We finish this section with Lemma <ref>, where we show the finiteness of p_ζ,η^(α)(t,r,s)>0 for all r,s,t>0.
Let us now give the details and start with the following auxiliary bound.
For ζ∈(-1/2,∞), α∈(0,2], η∈(0,(2ζ+1-α)/2], and r,t,R>0,
∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)
≤(2R)^η/c r^-η,
where c=c_R is the constant appearing in Lemma <ref>.
By Lemma <ref>, there is c=c_R∈(0,1) such that
c ≤∫_0^2R p_ζ^(α)(1-τ,z,s)s^2ζ ds ≤ 1,
0<z<R, τ∈(0,1].
Thus, by Duhamel (<ref>) and the supermedian property (<ref>), we have
∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)
≤1/c∫_0^2R ds s^2ζ∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)p_ζ^(α)(1-τ,z,s)
≤1/c∫_0^2Rds s^2ζ p_ζ,η^(α)(1,r,s)
≤(2R)^η/c∫_0^2R ds s^2ζ p_ζ,η^(α)(1,r,s) s^-η≤(2R)^η/c r^-η.
This concludes the proof.
We record the following Chapman–Kolmogorov-type inequality for partial sums of the perturbation series (<ref>).
Let ζ∈(-1/2,∞), α∈(0,2], η∈(0,(2ζ+1-α)/2], and r,s,t>0.
For n∈_0={0,1,2,...}, let
P_t^(n,D)(r,s) := ∑_k=0^n p_t^(k,D)(r,s),
with p_t^(n,D)(r,s) defined in (<ref>). Then, for all 0<τ<t and n∈_0,
∫_0^∞ dz z^2ζ P_τ^(n,D)(r,z)p_ζ^(α)(t-τ,z,s)
≤ P_t^(n,D)(r,s).
By scaling, it suffices to consider 0<τ<1=t.
For n=0, the statement is just the Chapman–Kolmogorov equality, so let n∈.
Using the definition of P_t^(n,D) and p_t^(n,D), we get
∫_0^∞ dz z^2ζ P_τ^(n,D)(r,z) p_ζ^(α)(1-τ,z,s)
= ∫_0^∞ dz z^2ζ p_τ^(0,D)(r,z) p_ζ^(α)(1-τ,z,s)
+ ∑_k=1^n ∫_0^∞ dz z^2ζ∫_0^τ dρ∫_0^∞ dw w^2ζ p_ρ^(k-1,D)(r,w) q(w)p_ζ^(α)(τ-ρ,w,z) p_ζ^(α)(1-τ,z,s)
= p_ζ^(α)(1,r,s)
+ ∑_k=1^n∫_0^τ dρ∫_0^∞ dw w^2ζ p_ρ^(k-1,D)(r,w) q(w)p_ζ^(α)(1-ρ,w,s)
≤ p_ζ^(α)(1,r,s)
+ ∑_k=1^n∫_0^1 dρ∫_0^∞ dw w^2ζ p_ρ^(k-1,D)(r,w) q(w)p_ζ^(α)(1-ρ,w,s)
= P_1^(n,D)(r,s).
This concludes the proof.
Let ζ∈(-1/2,∞), α∈(0,2], η∈(0,(2ζ+1-α)/2], and r,s,t>0.
Then p_ζ,η^(α)(t,r,s)<∞ for all r>0 and almost all s>0, and there is a constant M≥1 such that
∫_0^∞ p_ζ,η^(α)(t,r,s)s^2ζ ds
≤ M[1+(t^-1/αr)^-η].
By the scaling (<ref>), it suffices to consider t=1.
For n∈ℕ, we consider the n-th partial sum P_t^(n,D)(r,s) = ∑_k=0^n p_t^(k,D)(r,s), with p_t^(n,D)(r,s) defined in (<ref>). Then, by Duhamel's formula (<ref>) and P_1^(n-1,D)(r,s)≤ P_1^(n,D)(r,s)≤ p_ζ,η^(α)(1,r,s),
P_1^(n,D)(r,s)
≤ p_ζ^(α)(1,r,s)
+ ∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)p_ζ^(α)(1-τ,z,s)
+ Ψ_ζ(η)/R^α∫_0^1 dτ∫_R^∞ dz z^2ζ P_τ^(n,D)(r,z)p_ζ^(α)(1-τ,z,s).
By Lemma <ref>,
∫_0^1 dτ∫_R^∞ dz z^2ζ P_τ^(n,D)(r,z)p_ζ^(α)(1-τ,z,s) ≤ P_1^(n,D)(r,s).
By induction and the heat kernel bounds in Proposition <ref>, for each n∈_0, we have p_1^(n,D)(r,s)<∞. Thus, for R≥(2Ψ_ζ(η))^1/α∨2 and all r,s>0, we have, by (<ref>),
P_1^(n,D)(r,s) ≤ 2p_ζ^(α)(1,r,s) + 2∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)p_ζ^(α)(1-τ,z,s).
Integrating both sides of (<ref>) against s^2ζ ds and using ∫_0^∞ p_ζ^(α)(1,r,s)s^2ζ ds=1 for all r>0 by (<ref>), we get, upon letting n→∞ on the left-hand side of (<ref>),
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)
≲ 1 + ∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)
≲ 1 + R^η r^-η.
Here, we used Lemma <ref> to obtain the final inequality in (<ref>).
Let ζ∈(-1/2,∞), α∈(0,2], η∈[0,(2ζ+1-α)/2],
β∈(0,2ζ+1-α), n∈_0, and r>0.
Then,
∫_0^∞ dτ∫_0^∞ ds s^2ζ p_τ^(n,D)(r,s) s^-β-α
= Ψ_ζ(η)^n/Ψ_ζ(β)^n+1 r^-β.
We prove the statement using induction. By (<ref>), we get
∫_0^∞ dt ∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s)s^-α-β = C^(α)(1,2ζ-α-β,ζ) r^-β
= 2^-αΓ(2ζ+1-α-β/2) Γ(β/2)/Γ(α+β/2) Γ(2ζ+1-β/2) r^-β = 1/Ψ_ζ(β) r^-β,
which gives (<ref>) for n=0. Now, suppose (<ref>) holds for some n∈_0. Then, by the definition of p_τ^(n,D)(r,s) in the formula (<ref>), a time-translation τ↦τ+t, (<ref>), and the induction hypothesis,
∫_0^∞ dτ∫_0^∞ ds s^2ζ p_τ^(n+1,D)(r,s) s^-β-α
= ∫_0^∞ dτ∫_0^∞ ds s^2ζ∫_0^τ dt ∫_0^∞ dz z^2ζ p_t^(n,D)(r,z)q(z)p_ζ^(α)(τ-t,z,s) s^-β-α
= ∫_0^∞ dt ∫_0^∞ dz z^2ζ p_t^(n,D)(r,z)q(z) ∫_0^∞ dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,z,s)s^-β-α
= ∫_0^∞ dτ' ∫_0^∞ dz z^2ζ p_τ'^(n,D)(r,z)Ψ_ζ(η)z^-α·z^-β/Ψ_ζ(β) =Ψ_ζ(η)^n+1/Ψ_ζ(β)^n+2r^-β.
This concludes the proof.
Let ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), η∈(0,(2ζ+1-α)/2], and β∈(η,2ζ+1-α-η). Then, for all r>0,
∫_0^∞ dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α
= r^-β/Ψ_ζ(β)-Ψ_ζ(η).
Since Ψ_ζ(β)>Ψ_ζ(η) (by the symmetry of Ψ_ζ(η) about η=(2ζ+1-α)/2), the claim follows from p_ζ,η^(α)(t,·,·)=∑_n≥0p_t^(n,D), Lemma <ref>, and geometric series.
For η=0, i.e., Ψ_ζ(η)=0, Theorem <ref> is verified by the following lemma.
Let ζ∈(-1/2,∞), α∈(0,2], and
β∈[0,2ζ+1-α).
Then, for all r,t>0,
∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s) s^-β
= r^-β
- ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^α s^-β.
The assertion for β=0 follows from (<ref>) and Ψ_ζ(0)=0.
For β>0, by (<ref>) and Chapman-Kolmogorov,
r^-β = ∫_0^t dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^αs^-β
+ ∫_0^∞ dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(t+τ,r,s) Ψ_ζ(β)/s^αs^-β
= ∫_0^t dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^αs^-β
+ ∫_0^∞ dτ ∫_0^∞ ds s^2ζ∫_0^∞ dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(τ,z,s) Ψ_ζ(β)/s^αs^-β
= ∫_0^t dτ ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^αs^-β + ∫_0^∞ dz z^2ζ p_ζ^(α)(t,r,z) z^-β,
where in the last line we used again (<ref>).
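For α=2 the identity of the preceding lemma can also be verified numerically. The following rough Python sketch (ours; accurate only to a few digits because of the nested quadrature, and the parameter values are one admissible choice) compares both sides, with Ψ_ζ(β)=(2ζ-1-β)β for α=2.

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad, dblquad

def p2(t, r, s, zeta):
    return (r * s)**(0.5 - zeta) / (2 * t) * np.exp(-(r - s)**2 / (4 * t)) \
        * ive(zeta - 0.5, r * s / (2 * t))

zeta, beta, r, t = 1.5, 0.8, 1.2, 0.4        # beta in [0, 2*zeta - 1) for alpha = 2
lhs, _ = quad(lambda s: s**(2*zeta - beta) * p2(t, r, s, zeta), 0, np.inf)
corr, _ = dblquad(lambda s, tau: s**(2*zeta - 2 - beta) * p2(tau, r, s, zeta),
                  0, t, 0, np.inf)
print(lhs, r**(-beta) - (2*zeta - 1 - beta) * beta * corr)   # the two sides of the identity
```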
Let ζ∈(-1/2,∞), α∈(0,2], η∈(0,(2ζ+1-α)/2), and β∈(0,2ζ+1-α-η). Then, for all r,t>0,
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α < ∞,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β < ∞.
By (<ref>), for β∈ (η,2ζ+1-α-η), we have
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α≤1/Ψ_ζ(β)- Ψ_ζ(η) r^-β.
Now let β∈ (0,η]. Note that β < (2ζ+1-α)/2 and (2ζ+1-α)/2∈ (η,2ζ+1-α-η). By (<ref>) and (<ref>),
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α
≤∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)(1+s^-(2ζ+1-α)/2-α)
≤ M∫_0^t [1+(τ^-1/αr)^-η] dτ + 1/Ψ_ζ((2ζ+1-α)/2)- Ψ_ζ(η) r^-(2ζ+1-α)/2 <∞,
Hence, (<ref>) follows by (<ref>) and (<ref>). By Lemma <ref>, we have
∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s)s^-β≤ r^-β.
Therefore, by Duhamel's formula, (<ref>), and (<ref>),
∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β
≤∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s)s^-β + ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)q(s)s^-β < ∞,
which ends the proof of (<ref>).
We are now ready to give the proof of Theorem <ref>.
Let t>0 and r>0. By Duhamel's formula,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-β = ∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s) s^-β
+ ∫_0^t dτ∫_0^∞ dz z^2ζp_ζ,η^(α)(τ,r,z) q(z) (∫_0^∞ ds s^2ζp_ζ^(α)(t-τ, z, s) s^- β) .
Next, by Lemma <ref>,
Ψ_ζ(β) ∫_0^t du∫_0^∞ ds s^2ζ p_ζ,η^(α)(u,r,s) s^-α - β
= Ψ_ζ(β) ∫_0^t du ∫_0^∞ ds s^2ζ p_ζ^(α)(u,r,s) s^-α - β
+ Ψ_ζ(β) ∫_0^t du ∫_0^∞ ds s^2ζ∫_0^u dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) q(z) p_ζ^(α)(u-τ, z, s) s^-α - β
= r^-β - ∫_0^∞ s^2ζ p_ζ^(α)(t,r,s) s^-β
+ ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) q(z) (Ψ_ζ(β)∫_0^t-τ du ∫_0^∞ ds s^2ζ p_ζ^(α)(u, z, s) s^-α - β).
We add (<ref>) and (<ref>) and apply Lemma <ref> to the terms in parentheses to get
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-β + Ψ_ζ(β) ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s) s^-α - β
= r^-β + Ψ_ζ(η) ∫_0^t dτ∫_0^∞ s^2ζ p_ζ,η^(α)(τ,r,s) s^-α - β.
Let η∈ (0,(2ζ+1-α)/2). Then (<ref>) follows by (<ref>) and Lemma <ref>. Furthermore, let β↗η. By (<ref>), Lemma <ref>, and the Lebesgue convergence theorem, we get (<ref>).
Now, consider η = (2ζ+1-α)/2 and let β∈ (0,η). Note that p_ζ,η^(α) is the perturbation of p_ζ,β^(α) by (Ψ_ζ(η) - Ψ_ζ(β))s^-α (see, e.g., <cit.>). Hence, by Duhamel's formula, Fubini's theorem, and (<ref>) applied to p_ζ,β^(α), we get
(Ψ_ζ(η)-Ψ_ζ(β)) ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s) s^-α - β
= ∫_0^∞ dz z^2ζ[(Ψ_ζ(η)-Ψ_ζ(β)) ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s) s^-α p_ζ,β^(α)(t-τ,s,z)] z^-β
= ∫_0^∞ dz z^2ζ[ p_ζ,η^(α)(t,r,z) - p_ζ,β^(α)(t,r,z)] z^-β
= ∫_0^∞ dz z^2ζ p_ζ,η^(α)(t,r,z) z^-β - r^-β,
which gives (<ref>). Next, let β < η. Since s^-β < 1+ s^-η, by (<ref>) and (<ref>),
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-β≤∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) (1+ s^-η) <∞.
By (<ref>),
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-β≥ r^-β.
Hence, letting β↗η, by Lebesgue's convergence theorem we get
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-η≥ r^-η,
which together with (<ref>) gives (<ref>). This concludes the proof of Theorem <ref>.
Using Theorem <ref> and Proposition <ref>, we now prove an approximate supermedian property for
H(t,r) := 1 + (t^-1/αr)^-η, r,t>0
and
H(r) := H(1,r) = 1+r^-η
with respect to p_ζ,η^(α)(t,·,·).
Let ζ∈(-1/2,∞), α∈(0,2], and η∈(0,(2ζ+1-α)/2]. Then, there is a constant M>0 such that for all r,t>0,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)H(t,s)
≤ (M+1) H(t,r).
Furthermore, for all β∈[0,η],
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) (s/t^1/α)^-β≤ M H(t,r).
Finally,
∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)
≤ M H(r).
Formula (<ref>) follows from (<ref>) and (<ref>).
Formula (<ref>) follows from s^-β≤ 1+s^-η=H(s), Formula (<ref>) and scaling. To prove Formula (<ref>), we use (<ref>) and Duhamel's formula to obtain
MH(r)
≥∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)
= ∫_0^∞ ds s^2ζ(p_ζ^(α)(1,r,s) + ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s) )
= 1 + ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z).
This concludes the proof.
Finally, we show the finiteness of p_ζ,η^(α)(t,r,s) for all r,s,t>0.
Let ζ∈(-1/2,∞), α∈(0,2], η∈(0,(2ζ+1-α)/2]. Then p_ζ,η^(α)(t,r,s)<∞ for all r,s,t>0.
By the scaling (<ref>), it suffices to consider t=1.
By (<ref>), we know p_ζ,η^(α)(1,r,s)<∞ for all r>0 and almost all s>0. We now show finiteness for a given r>0 and all s>0. By (<ref>), i.e.,
P_1^(n,D)(r,s) ≤ 2p_ζ^(α)(1,r,s) + 2∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)p_ζ^(α)(1-τ,z,s),
with R=(2Ψ_ζ(η))^1/α∨2 and P_t^(n,D)(r,s) = ∑_k=0^n p_t^(k,D)(r,s), it suffices to show
∫_0^1 dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z)q(z)p_ζ^(α)(1-τ,z,s) < ∞.
If s≥2R, we use the uniform boundedness of p_ζ^(α)(1-τ,z,s) and (<ref>), which allows to bound the remaining integral by
∫_0^1dτ∫_0^∞ ds z^2ζ p_ζ,η^(α)(τ,r,z)q(z)≲ H(r)<∞.
We assume s<2R from now on.
We first consider the integral over τ∈(0,1-1/2Ψ_ζ(η)(s/2)^α). Then the left-hand side of (<ref>) is again finite by the uniform boundedness of p_ζ^(α)(1-τ,z,s) and by (<ref>).
It remains to consider the integral over τ∈(1-1/2Ψ_ζ(η)(s/2)^α,1). We now distinguish between |z-s|≶ s/2. For |z-s|>s/2, we bound
p_ζ^(α)(1-τ,z,s) ≲1/s^2ζ+1+α
and use (<ref>). If |z-s|≤ s/2, then z≥ s/2. Thus, by (<ref>) and the Chapman–Kolmogorov equation, we can bound the integral in question by
Ψ_ζ(η)(2/s)^α∫_1-1/2Ψ_ζ(η)(s/2)^α^1 dτ∫_s/2^R dz p_ζ,η^(α)(τ,r,z)p_ζ,η^(α)(1-τ,z,s)
≤Ψ_ζ(η)(2/s)^α·1/2Ψ_ζ(η)(s/2)^α· p_ζ,η^(α)(1,r,s)
= 1/2 p_ζ,η^(α)(1,r,s),
which can be absorbed by the left-hand side of (<ref>) upon taking n→∞. This concludes the proof.
§.§ The case η<0
In this section, we extend Theorem <ref> to η < 0. Given the explicit expression for p_ζ,η^(2), this is straightforward for α = 2. Hence, the main goal of this section is to generalize the previous integral analysis for α∈ (0, 2) to those coupling constants Ψ_ζ(η) with η∈ (-α, 0). Our approach relies on compensating the kernels p_ζ^(α)(t,r,s) to add extra integrability for large t; see (<ref>).
We begin the integral analysis by constructing generalized ground states for α=2 and then use subordination to study α∈(0,2) in the spirit of <cit.>.
We proceed similarly as in <cit.> by integrating p_ζ^(α) against suitable functions of space and time. This gave us the functions h_β,γ in (<ref>) discussed at the beginning of Section <ref> above. However, since we wish to consider a wider range of parameters, we modify our construction slightly by compensating the kernel p_ζ^(α)(t,r,s) with p_ζ^(α)(t,r,0) when we integrate against monomials growing too fast for large t. The following calculation is similar but more delicate than <cit.>.
Let ζ∈(-1/2,∞), α∈(0,2], γ∈(-1,∞), and 0<β<(2ζ-γ+2)/α with β≠ (2ζ-γ)/α.
Then,
h_β,γ^(+)(s)
:= ∫_0^∞dt/t t^β∫_0^∞ dr r^γ( p_ζ^(α)(t,r,0) - p_ζ^(α)(t,r,s) )
= C^(α)(β,γ,ζ) s^αβ+γ-2ζ,
with C^(α)(β,γ,ζ) as in (<ref>).
If, additionally, β=(2ζ-γ-η)/α for some η∈(-α,0), then
q(s)
= Ψ_ζ(η)/s^α
= (β-1) h_β-1,γ(s)/h_β,γ^(+)(s)
where h_β-1,γ(s) is as in (<ref>).
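The same Gamma-function computation as before verifies the relation in the lemma also in the repulsive range. The following Python sketch (ours; it reuses the helper functions Psi and C_alpha from the earlier sketch, repeated here for self-containedness, and the parameter values are one admissible choice with η<0) compares (β-1)C^(α)(β-1,γ,ζ)/C^(α)(β,γ,ζ) with Ψ_ζ(η), now with β beyond (2ζ-γ)/α, so that both sides are negative.

```python
import numpy as np
from scipy.special import gamma

def Psi(zeta, alpha, eta):
    return 2**alpha * gamma((2*zeta + 1 - eta) / 2) * gamma((alpha + eta) / 2) \
        / (gamma(eta / 2) * gamma((2*zeta + 1 - eta - alpha) / 2))

def C_alpha(beta, gam, zeta, alpha):
    b = alpha * beta / 2
    C = 2**(-2*b) * gamma(b) * gamma((gam + 1) / 2) * gamma((2*zeta - 2*b - gam) / 2) \
        / (gamma((2*zeta - gam) / 2) * gamma((2*b + gam + 1) / 2))
    return gamma(beta) / gamma(b) * C

zeta, alpha, gam, eta = 1.5, 1.0, -0.3, -0.4   # repulsive case: eta < 0
beta = (2*zeta - gam - eta) / alpha            # now beta exceeds (2*zeta - gam)/alpha
ratio = (beta - 1) * C_alpha(beta - 1, gam, zeta, alpha) / C_alpha(beta, gam, zeta, alpha)
print(ratio, Psi(zeta, alpha, eta))            # both negative, and equal
```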
Note that the case αβ=2ζ-γ, which is excluded in the above lemma, corresponds to η=0, as can be seen by considering the right-hand sides of (<ref>) and (<ref>).
The constant on the right-hand side of (<ref>) agrees with that in (<ref>). The subtraction of p_ζ^(α)(t,r,0) in (<ref>), however, allows us to extend the range of admissible β, compare with (<ref>).
Formula (<ref>) immediately follows from (<ref>), which we prove now. Consider first α=2. For β∈(0,(2ζ-γ)/2), the calculations were already carried out in <cit.>. Thus, consider β∈((2ζ-γ)/2,(2ζ-γ+2)/2) from now on. It is for this range that we require the compensation by p_ζ^(2)(t,r,0) to make the t-integral convergent at t=∞; at t=0, no compensation is necessary in view of the exponential factor -c(r-s)^2/t in p_ζ^(2)(t,r,s) in (<ref>) and similarly for p_ζ^(2)(t,r,0) in (<ref>). We now compute h_β,γ^(+)(s). By scaling,
h_β,γ^(+)(s)· s^-γ-2β+2ζ = C^(+)(β,γ,ζ)
with
C^(+)(β,γ,ζ)
= ∫_0^∞dt/t t^β∫_0^∞ dr r^γ(p_ζ^(2)(t,r,0)-p_ζ^(2)(t,r,1)).
Since γ>-1, the r-integral gives, as in <cit.>,
∫_0^∞ dr r^γ(p_ζ^(2)(t,r,0)-p_ζ^(2)(t,r,1))
= -2^γ -2 ζΓ(γ +1/2) t^1/2 (γ -2 ζ )( _1F_1(ζ -γ/2;ζ +1/2;-1/4 t)-1)/Γ(ζ +1/2).
Here, _1F_1(a;b;z) with a,b∈ℂ, b∉{0,-1,-2,…}, and z∈{w∈ℂ: |w|<1} denotes Kummer's confluent hypergeometric function <cit.>.
The right-hand side of (<ref>) behaves like t^-ζ+γ/2-1 as t→∞ by <cit.>. To study the behavior at t=0, we use <cit.>, i.e.,
_1F_1(a;b;z) = e^z _1F_1(b-a;b;-z)
and
lim_z→∞ _1F_1(a;b;z) · 1/(e^z z^a-b)
= Γ(b)/Γ(a)
to infer the existence of some c_γ,ζ∈ such that
_1F_1(ζ-γ/2;ζ+1/2;-1/(4t)) =
c_γ,ζ t^{ζ-γ/2} + 𝒪(t^{ζ-γ/2+1}) at t=0.
Thus, the right-hand side behaves like 1+t^-ζ+γ/2 as t→0.
These asymptotics and similar computations as in <cit.> thus give
C^(+)(β,γ,ζ)
= -2^-2 β -1Γ (β) Γ(γ +1/2) Γ(-β +ζ -γ/2)/Γ(β +γ/2+1/2) Γ(-γ/2+ζ)
for β∈(ζ-γ/2,ζ-γ/2+1).
This proves (<ref>) for α=2.
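Before turning to α∈(0,2), we record a quick numerical sanity check of the two classical _1F_1 identities used above; the snippet below is only an illustrative sketch (it assumes the mpmath library, which is not referred to elsewhere in this paper).

```python
# Sanity check of Kummer's transformation and the large-argument asymptotics of 1F1
# (illustration only; mpmath is an assumed dependency).
from mpmath import mp, hyp1f1, gamma, exp, mpf

mp.dps = 30                     # working precision
a, b = mpf('0.7'), mpf('1.9')   # arbitrary test parameters

# Kummer's transformation: 1F1(a;b;z) = e^z * 1F1(b-a;b;-z)
z = mpf('3.5')
print(hyp1f1(a, b, z), exp(z) * hyp1f1(b - a, b, -z))  # the two values agree

# Asymptotics: 1F1(a;b;z) * e^{-z} * z^{b-a} -> Gamma(b)/Gamma(a) as z -> +infinity
for z in (mpf(20), mpf(50), mpf(200)):
    print(z, hyp1f1(a, b, z) * exp(-z) * z**(b - a) / (gamma(b) / gamma(a)))  # ratios tend to 1
```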
Now consider α∈(0,2). By subordination and the previous computations,
h_β,γ^(+)(s)
= ∫_0^∞ dr r^γ∫_0^∞ dτ[p_ζ^(2)(τ,r,s)-p_ζ^(2)(τ,r,0)] ∫_0^∞dt/t σ_t^(α/2)(τ) · t^β
= Γ(β)/Γ(αβ/2)∫_0^∞ dr r^γ∫_0^∞ dτ [p_ζ^(2)(τ,r,s)-p_ζ^(2)(τ,r,0)] ·τ^-1+αβ/2
= Γ(β)/Γ(αβ/2) C(αβ/2,γ,ζ) s^αβ+γ-2ζ
= C^(α)(β,γ,ζ) s^αβ+γ-2ζ,
by the definition of C^(α)(β,γ,ζ) in (<ref>).
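The t-integral of the subordination density used in the second step can be checked numerically when α=1, where the 1/2-stable density is explicit. The sketch below is only an illustration and assumes the standard normalization ∫_0^∞ e^{-λτ}σ_t^{(1/2)}(τ) dτ = e^{-t√λ}.

```python
# Check of  int_0^infty dt/t * t^beta * sigma_t^{(1/2)}(tau) = Gamma(beta)/Gamma(beta/2) * tau^{beta/2-1}
# for the explicit 1/2-stable subordinator density (illustration only).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def sigma_half(t, tau):
    # density in tau of the 1/2-stable subordinator at time t (Levy distribution),
    # normalized so that its Laplace transform in tau is exp(-t*sqrt(lambda))
    return t / (2.0 * np.sqrt(np.pi)) * tau**(-1.5) * np.exp(-t**2 / (4.0 * tau))

beta, tau = 1.3, 0.7
lhs, _ = quad(lambda t: t**(beta - 1.0) * sigma_half(t, tau), 0.0, np.inf)
rhs = gamma(beta) / gamma(beta / 2.0) * tau**(beta / 2.0 - 1.0)
print(lhs, rhs)  # the two numbers agree up to quadrature error
```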
Here and in the following, we could also work with h_β,γ^(+) defined by (<ref>) when β∈(0,(2ζ-γ)/α). Using the new ansatz (<ref>) involving the difference of heat kernels allows us to increase the range of admissible β and, importantly, work under the less restrictive assumption β<(2ζ-γ+2)/α.
We are now ready to start the actual integral analysis. We begin with a variant of Lemma <ref>.
Let ζ∈(-1/2,∞), α∈(0,2], β∈(-α,0), and r,t>0. Then,
∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s) s^-β
= r^-β - ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^α s^-β.
Let β,γ be such that h_β,γ^(+)(r) = C^(α)(β,γ,ζ) r^-β = h_β,γ with h_β,γ^(+)(r) as in (<ref>) and h_β,γ(r) as in (<ref>).
We start with the Ψ_ζ(β) term. By the expression (<ref>) for the Hardy potential, and (<ref>), we have
Ψ_ζ(β)/s^α· h_β,γ(s)
= (β-1) ∫_0^∞ dϑ∫_0^∞ dz ϑ^β-2 z^γ p_ζ^(α)(ϑ,s,z).
Integrating (<ref>) against s^2ζ p_ζ^(α)(τ,r,s) 1_[0,t](τ) ds dτ yields
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^α· h_β,γ(s)
= (β-1) ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) ∫_0^∞ dϑ∫_0^∞ dz ϑ^β-2 z^γ p_ζ^(α)(ϑ,s,z)
= ∫_0^t dτ∫_0^∞ dϑ∫_0^∞ dz p_ζ^(α)(τ+ϑ,r,z) ∂_ϑϑ^β-1 z^γ
= -∫_0^t dτ∫_0^∞ dϑ∫_0^∞ dz [∂_τ p_ζ^(α)(τ+ϑ,r,z)] ϑ^β-1 z^γ
= ∫_0^∞ dϑ∫_0^∞ dz [p_ζ^(α)(ϑ,r,z) - p_ζ^(α)(ϑ+t,r,z)] ϑ^β-1 z^γ,
where we used the semigroup property and (β-1)ϑ^β-2=(ϑ^β-1)' in the second step, integrated by parts in the third step[The boundary term at ϑ=∞ vanishes due to the decay of order ϑ^-(2ζ+1)/α for the heat kernel (see (<ref>) and (<ref>)), which suppresses ϑ^β-1 at infinity, since β<(2ζ-γ)/α<(2ζ+1)/α for γ>-1. The boundary term at ϑ=0 vanishes since β>1 and the heat kernels are non-singular at the temporal origin.], and integrated with respect to dτ using the fundamental theorem of calculus. Now we add and subtract p_ζ^(α)(ϑ,0,z) in the integral on the right-hand side of (<ref>), and use (<ref>) to obtain
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^α· h_β,γ(s)
= ∫_0^∞ dϑ∫_0^∞ dz [(p_ζ^(α)(ϑ,r,z) - p_ζ^(α)(ϑ,0,z)) + (p_ζ^(α)(ϑ,0,z) - p_ζ^(α)(ϑ+t,r,z))] ϑ^β-1 z^γ
= h_β,γ(r) - ∫_0^∞ dϑ∫_0^∞ dz ∫_0^∞ dw w^2ζ p_ζ^(α)(t,r,w) (p_ζ^(α)(ϑ,w,z)-p_ζ^(α)(ϑ,0,z)) ϑ^β-1 z^γ
= h_β,γ(r) - ∫_0^∞ dw w^2ζ p_ζ^(α)(t,r,w) h_β,γ^(+)(w).
In the last two steps, we used the definition (<ref>) of h_β,γ^(+), the normalization (<ref>), and the semigroup property for p_ζ^(α)(t+ϑ,·,·).
We finish off with a complement of Theorem <ref> for repulsive Hardy potentials.
Recall that M=∞ if α=2 and M=α if 0<α<2.
Let ζ∈(-1/2,∞), α∈(0,2]∩(0,2ζ+1), η∈(-M,0), β∈(-M,0), and r,t>0. Then
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)s^-β
= r^-β + (Ψ_ζ(η)-Ψ_ζ(β)) ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α.
In particular, for any r,t>0,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)s^-η = r^-η.
For α=2, the statements follow from explicit calculations using (<ref>). Thus, suppose α∈(0,2).
By Duhamel's formula (<ref>) and (<ref>),
∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s) Ψ_ζ(β)/s^α s^-β
= ∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ^(α)(τ,r,s) Ψ_ζ(β)/s^α s^-β
+ ∫_0^t dτ∫_0^∞ ds s^2ζ-β∫_0^τ dϑ∫_0^∞ dz z^2ζ p_ζ,η^(α)(ϑ,r,z)q(z) p_ζ^(α)(τ-ϑ,z,s) Ψ_ζ(β)/s^α
= r^-β - ∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s)s^-β
+ ∫_0^t dϑ∫_0^∞ dz z^2ζ p_ζ,η^(α)(ϑ,r,z)q(z)[z^-β - ∫_0^∞ ds s^2ζ p_ζ^(α)(t-ϑ,z,s)s^-β].
Here we applied a time-shift τ↦τ+ϑ and (<ref>) to both lines after the first equality. Bringing the z^-β term to the left-hand side and using
∫_0^∞ ds s^2ζ∫_0^t dϑ∫_0^∞ dz z^2ζ p_ζ,η^(α)(ϑ,r,z)q(z) p_ζ^(α)(t-ϑ,z,s) s^-β
= ∫_0^∞ ds s^2ζ (p_ζ,η^(α)(t,r,s) - p_ζ^(α)(t,r,s))s^-β,
which follows from Duhamel's formula, we obtain
(Ψ_ζ(β)-Ψ_ζ(η))∫_0^t dτ∫_0^∞ ds s^2ζ p_ζ,η^(α)(τ,r,s)s^-β-α
= r^-β - ∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s)s^-β
- ∫_0^∞ ds s^2ζ[p_ζ,η^(α)(t,r,s) - p_ζ^(α)(t,r,s)] s^-β
= r^-β - ∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) s^-β.
This proves (<ref>). Finally, (<ref>) follows from (<ref>) by taking β=η.
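For α=2, the identities of this subsection ultimately rest on the elementary fact that the Bessel operator multiplies power functions by Ψ_ζ(β). The following symbolic sketch verifies this, using the representation _ζ = -(d²/ds² + (2ζ/s) d/ds) and Ψ_ζ(β)=β(2ζ-1-β) for α=2, both of which are recalled in the next section; it is an illustration only.

```python
# Symbolic check (alpha = 2): -(d^2/ds^2 + (2*zeta/s) d/ds) s^(-beta) = Psi_zeta(beta) * s^(-beta-2)
# with Psi_zeta(beta) = beta*(2*zeta - 1 - beta); illustration only.
import sympy as sp

s, zeta, beta = sp.symbols('s zeta beta', positive=True)
f = s**(-beta)

bessel_of_f = -(sp.diff(f, s, 2) + (2 * zeta / s) * sp.diff(f, s))
Psi = beta * (2 * zeta - 1 - beta)

print(sp.simplify(bessel_of_f - Psi * s**(-beta - 2)))  # prints 0
```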
§ PROOF OF THEOREM <REF>
In this section, we prove the upper and lower bounds for, and continuity of p_ζ,η^(α)(t,r,s). Moreover, we prove that p_ζ,η^(α) blows up for supercritical coupling constants.
Before we start the proof of Theorem <ref>, we sketch the proof of (<ref>), which
essentially follows from a change of variables. It was carried out, e.g., by Metafune, Negro, and Spina <cit.>.
Recall that p_ζ^(2) is the heat kernel of the nonnegative Bessel operator _ζ in (<ref>); see (<ref>). Using the unitary operator U_ζ:L^2(_+,r^2ζdr)→ L^2(_+,dr) defined by
L^2(_+,r^2ζdr) ∋ u ↦ (U_ζ u)(r)=r^ζu(r) ∈ L^2(_+,dr),
we have
_ζ - Ψ_ζ(η)/r^2 = U_ζ^*(-d^2/dr^2 + (2ζ-1)^2-1-4Ψ_ζ(η)/4r^2)U_ζ
= U_η^* U_ζ-η^*(-d^2/dr^2 + (2(ζ-η)-1)^2-1/4r^2)U_ζ-ηU_η
= U_η^* _ζ-η U_η in L^2(_+,r^2ζdr).
Here we used (2ϑ-1)^2-1=(2ζ-1)^2-1-4Ψ_ζ(η) for ϑ∈{ζ-η,1-ζ+η} since Ψ_ζ(η)=η(2ζ-1-η) for α=2. Thus, the change of variables in (<ref>) yields
exp(-t(_ζ-Ψ_ζ(η)/r^2))
= exp(-tU_η^*_ζ-ηU_η)
= U_η^* exp(-t_ζ-η) U_η.
Hence, since p_ζ-η^(2) is the heat kernel of _ζ-η, we see that
p_ζ,η^(2)(t,r,s) = (rs)^-η p_ζ-η^(2)(t,r,s)
is the heat kernel of _ζ-Ψ_ζ(η)/r^2 in L^2(_+,r^2ζdr), where _ζ is defined in (<ref>).
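The ground-state representation (<ref>) is easy to test numerically. The sketch below uses the classical closed form p_ζ^(2)(t,r,s) = (2t)^{-1}(rs)^{1/2-ζ} e^{-(r²+s²)/(4t)} I_{ζ-1/2}(rs/(2t)) of the Bessel heat kernel; we assume that this normalization agrees with the one fixed in (<ref>), and the check is only illustrative.

```python
# Numerical check, for alpha = 2, of the normalization of p_zeta^(2) and of the invariance
# of s^{-eta} under p_{zeta,eta}^(2)(t,r,s) = (rs)^{-eta} p_{zeta-eta}^(2)(t,r,s); illustration only.
import numpy as np
from scipy.integrate import quad
from scipy.special import ive  # exponentially scaled modified Bessel function I_nu

def p2(zeta, t, r, s):
    # classical Bessel heat kernel on L^2(R_+, r^{2 zeta} dr), written with ive for numerical stability
    nu = zeta - 0.5
    return (r * s)**(-nu) / (2.0 * t) * np.exp(-(r - s)**2 / (4.0 * t)) * ive(nu, r * s / (2.0 * t))

zeta, eta, t, r = 1.2, 0.4, 0.8, 1.5

mass, _ = quad(lambda s: p2(zeta, t, r, s) * s**(2 * zeta), 0.0, np.inf)
print(mass)  # approximately 1 (conservation of mass for p_zeta^(2))

inv, _ = quad(lambda s: (r * s)**(-eta) * p2(zeta - eta, t, r, s) * s**(2 * zeta - eta), 0.0, np.inf)
print(inv, r**(-eta))  # the two numbers agree, i.e., s^{-eta} is invariant for p_{zeta,eta}^(2)
```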
In the remainder of this section, we focus on α∈(0,2).
To prove Theorem <ref>, we build on the ideas of <cit.> and <cit.> for the cases η∈(-α,0) and η∈(0,(2ζ+1-α)/2], respectively. Let us first discuss positive η. The main tools we use are Duhamel's formula, the Chapman–Kolmogorov equation, and the scaling given in Proposition <ref>. The first step is to get estimates for p_ζ,η^(α)(1,r,s) with arbitrary r>0 and s≳1. In Lemma <ref> below, we show p_ζ,η^(α)(1,r,s) ≲ H(r)p_ζ^(α)(1,r,s) by properly splitting the integrals in Duhamel's formula and applying the estimates (<ref>) and (<ref>).
For 0<r∨ s≲1, we show p_ζ,η^(α)(1,r,s)≲ H(r) H(s) using p_ζ,η^(α)(2,r,s) = ∫__+ dz z^2ζ p_ζ,η^(α)(1,r,z) p_ζ,η^(α)(1,z,s) (_z<1 + _z>1) (Chapman–Kolmogorov). This requires bounds for p_ζ,η^(α)(1,r,z) for r≲1 and z≶1.
For r≲1≲ z, we use Lemma <ref> below.
For r∨ z≲1, we use Lemmas <ref>–<ref> below. More precisely, in Lemma <ref>, we prove the preliminary estimate p_ζ,η^(α)(1,r,s) ≲ r^-η s^μ-2ζ-1, which we systematically improve in Lemma <ref> and in the crucial Lemma <ref> to get the upper bounds stated in Lemma <ref>.
To get the lower bounds in Theorem <ref>, we use the upper bounds stated in Lemma <ref>, the Chapman–Kolmogorov equation, and the invariance of the function h(s) = s^-η.
For η<0, the Duhamel formula is almost useless. Our proofs are generally based on the Chapman–Kolmogorov equation and the method called "self-improving estimates". This method is applied in the key estimates of the mass M(t,r) of p_ζ,η^(α), defined in (<ref>), which are contained in Proposition <ref>. To that end, we proceed as follows. In the first step, we show M(1,r) < δ M(1,3^1/αr) + A r^-η for sufficiently small δ and some A>0. Next, we iterate this inequality to obtain M(1,r) < δ^n M(1,3^n/αr) + A_n r^-η for some bounded sequence A_n, thereby giving the desired estimates for M(1,r). Proposition <ref> yields the upper estimates of p_ζ,η^(α)(1,r,s) for r,s ≲ 1. The case r,s ≳ 1 is quite obvious because of the bound p_ζ,η^(α)≤ p_ζ^(α). In contrast to positive perturbations, the most challenging case is r ≲ 1 ≲ s. The key estimate is given in Lemma <ref>, which together with <ref> allows us to apply the method of "self-improving estimates" in the proof of the inequality (<ref>). The proof of the lower bounds is based on Lemma <ref> and the estimates of the first summand in the perturbation series.
§.§ The case η∈(0,(2ζ+1-α)/2]
§.§.§ Upper bound
The proof of the upper bound in (<ref>) will be concluded in Lemma <ref> after a series of preparatory lemmas involving H(r) = H(1,r) = 1+r^-η defined in (<ref>) above.
In the proofs below, we use the scaling symmetry of p_ζ,η^(α)(t,r,s) to reduce the analysis to the case t=1. Our first step to study p_ζ,η^(α)(1,r,s) is to use Duhamel's formula. In order to discuss "small" and "large" distances to the origin, we will, motivated by the uncertainty principle, introduce a cut-off in the time parameter, defined as
g(s) := s^α/(2^{α+1}Ψ_ζ(η)), s>0.
The prefactor [2^{α+1}Ψ_ζ(η)]^{-1} is chosen such that certain parts of the integrals appearing in Duhamel's formula are absorbed by p_ζ,η^(α).
In turn, the time cut-off (<ref>) suggests distinguishing between distances to the origin that are smaller or larger than 2(2Ψ_ζ(η))^1/α.
By the scaling symmetry of p_ζ,η^(α)(t,r,s), we could have also considered values of t different from one, which would then require to consider different time and spatial cut-offs.
All theorems, propositions, and lemmas of this subsection continue to hold for α=2. The only necessary change is that the spatial arguments r,s>0 of the heat kernel p_ζ^(2)(t,r,s) for α=2 should be, in bounds, replaced with cr and cs for some c=c(ζ,α,η)∈(0,∞).
In the following lemma, we examine the region r<2(2Ψ_ζ(η))^1/α, s>0.
Let ζ∈(-1/2,∞),
α∈(0,2),
η∈(0,(2ζ+1-α)/2], s>0, and 0<r≤ 2(2Ψ_ζ(η))^1/α. Then, there is M>0 such that
p_ζ,η^(α)(1,r,s)
≤ 2 (p_ζ^(α)(1,r,s) + M p_ζ^(α)(1,1,s)) H(r)
+ 2∫_g(s)∧ 1^1 dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z) q(z) p_ζ^(α)(τ,z,s)
+ ∫_0^1/2dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s).
By Duhamel's formula (<ref>),
p_ζ,η^(α)(1,r,s) = p_ζ^(α)(1,r,s) + I_1 + I_2 + I_3,
where
I_1 := ∫_0^1∧ g(s) dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s),
I_2 := ∫_1∧ g(s)^1 dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s),
I_3 := ∫_0^1 dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s).
We leave I_2 untouched. To bound I_1, we use q(z)≤ q(s/2), p_ζ^(α)(τ,z,s)≤ p_ζ,η^(α)(τ,z,s), and the semigroup property to obtain
I_1
≤ q(s/2) ∫_0^1∧ g(s)dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z) p_ζ,η^(α)(τ,z,s)
= p_ζ,η^(α)(1,r,s) q(s/2) ∫_0^1∧ g(s)dτ≤1/2 p_ζ,η^(α)(1,r,s).
Since p_ζ,η^(α)(1,r,s)<∞ by Lemma <ref>, this term can be absorbed by the left-hand side of (<ref>). To treat I_3, we split
I_3
= ∫_0^1/2dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
+ ∫_1/2^1 dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s).
We leave the first summand untouched. To estimate the second summand, we use p_ζ^(α)(τ,z,s)≲ p_ζ^(α)(1,1,s) for 1/2<τ<1 and 0<z<s/2 (by (<ref>) and (<ref>) in Lemma <ref>), (<ref>), Duhamel's formula (<ref>), and Proposition <ref>, and obtain
∫_1/2^1 dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
≲ p_ζ^(α)(1,1,s) ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)
= p_ζ^(α)(1,1,s)∫_0^1dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)∫_0^∞ dy y^2ζ p_ζ^(α)(τ,z,y)
= p_ζ^(α)(1,1,s)∫_0^∞ dy y^2ζ∫_0^1dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)p_ζ^(α)(τ,z,y)
= p_ζ^(α)(1,1,s) ∫_0^∞ dy y^2ζ ( p_ζ,η^(α)(1,r,y) - p_ζ^(α)(1,r,y))
≤ M p_ζ^(α)(1,1,s) H(r).
This concludes the proof of (<ref>).
Lemma <ref> will be particularly important to study the region r∨ s≤ 2(2Ψ_ζ(η))^1/α. Although it could also be used to study r≤2(2Ψ_ζ(η))^1/α≤ s, we will use different methods here.
Let ζ∈(-1/2,∞), α∈(0,2), η∈(0,(2ζ+1-α)/2]. Then, there is C>0 such that for all s≥2(2Ψ_ζ(η))^1/α and r>0,
p_ζ,η^(α)(1,r,s) ≤ C H(r) p_ζ^(α)(1,r,s).
We use (<ref>) for R=(2Ψ_ζ(η))^1/α, to get
p_ζ,η^(α)(1,r,s)
≤ 2 p_ζ^(α)(1,r,s)
+ 2 ∫_0^1dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z) q(z) p_ζ^(α)(1-τ,z,s).
It suffices to estimate the second summand by p_ζ^(α)(1,r,s)H(r).
For 0<τ<1, 0<z≤ s/2, s≥ 2R>0, we estimate p_ζ^(α)(1-τ,z,s)≲_η p_ζ^(α)(1,1,s) ((<ref>) in Lemma <ref>) and use (<ref>) to bound
∫_0^1dτ∫_0^R dz z^2ζ p_ζ,η^(α)(τ,r,z) q(z) p_ζ^(α)(1-τ,z,s)
≲ p_ζ^(α)(1,1,s) H(r).
Since for r≤2R, we have p_ζ^(α)(1,1,s) ∼ p_ζ^(α)(1,r,s), we obtain (<ref>) for 0≤ r≤ 2R≤ s.
On the other hand, by the symmetry of p_ζ,η^(α)(1,r,s) in r and s, (<ref>) and (<ref>) give
p_ζ,η^(α)(1,r,s) ≲ p_ζ^(α)(1,r,s) + min{p_ζ^(α)(1,1,s) H(r),p_ζ^(α)(1,1,r) H(s)}, r,s≥ 2R.
Since H(s)∼1∼ H(r) for all r,s≥ 2R, and p_ζ^(α)(1,r,1)∧ p_ζ^(α)(1,1,s)≲_η p_ζ^(α)(1,r,s) for all r,s>0 ((<ref>) in Lemma <ref>), we get (<ref>) for r,s≥ 2R.
This ends the proof.
Lemma <ref>, together with (<ref>) in Theorem <ref> and (<ref>) in Corollary <ref>, allows us to understand the remaining region r,s≤ 2(2Ψ_ζ(η))^1/α.
Let ζ∈(-1/2,∞), α∈(0,2), η∈(0,(2ζ+1-α)/2]. Then, for every μ∈(0,η), there is a constant C_μ>0 such that
p_ζ,η^(α)(1,r,s)
≤ C_μ( H(r)s^μ-(2ζ+1)∧ H(s)r^μ-(2ζ+1)),
0<r,s≤2(2Ψ_ζ(η))^1/α.
By symmetry of p_ζ,η^(α)(1,r,s), we may assume r<s without loss of generality. We use (<ref>) in Lemma <ref>.
First note that p_ζ^(α)(1,r,s)∨ p_ζ^(α)(1,1,s)≲1 for r,s≲ 1.
Since 0<μ≤ 2ζ+1, by (<ref>) with λ=0 and σ = 2ζ+1-μ, we get p_ζ^(α)(τ,z,s) ≲ s^μ-2ζ-1z^-μ. Therefore, by Lemma <ref>, (<ref>) in Theorem <ref> and (<ref>) in Corollary <ref>, we get
p_ζ,η^(α)(1,r,s) ≲ H(r) + s^μ-(2ζ+1)∫_0^1dτ∫_0^∞dz z^2ζ p_ζ,η^(α)(1-τ,r,z) z^-μ-α
≤ H(r) + s^μ-(2ζ+1)1/Ψ_ζ(η)-Ψ_ζ(μ)∫_0^∞ dz z^2ζ p_ζ,η^(α)(1,r,z)z^-μ
≲ H(r) + s^μ-(2ζ+1)·H(r)/Ψ_ζ(η)-Ψ_ζ(μ)≲ H(r)s^μ-(2ζ+1),
where in the last line we used that s≲ 1 and μ<2ζ+1.
Lemma <ref> is not yet precise enough. The following two lemmas, together with Proposition <ref> and (<ref>) in Theorem <ref> will help us sharpen it. The improved estimates for r,s≤ 2(2Ψ_ζ(η))^1/α are contained in Lemma <ref>.
Let ζ∈(-1/2,∞),
α∈(0,2),
η∈(0,(2ζ+1-α)/2], and r>0.
Then, for all ν∈(η,2ζ+1-η) and δ∈(0,η),
∫_0^∞ dz z^2ζ p_ζ,η^(α)(t,r,z)z^-ν≲_ν,δ r^-ν[1+(r/t^1/α)^-δ].
In particular,
∫_0^∞ dz z^2ζ p_ζ,η^(α)(t,r,z)z^-ν≲_ν,δ r^-ν[1+r^-δ], t∈(0,1].
This lemma can be seen as an extension of (<ref>) in Corollary <ref>, which treated all ν∈(0,η].
By scaling, it suffices to consider t=1.
We write R=2(2Ψ_ζ(η))^1/α and distinguish between r≤ R and r>R. We first consider r≤ R and divide the integral in question into two parts.
Let μ=η-δ so that μ∈(0,η) and μ-ν<0.
By Lemma <ref>, for r,z≤ R, we have
p_ζ,η^(α)(1,r,z)
≲ r^-ηz^μ-(2ζ+1)_{r≤ z} + r^μ-(2ζ+1)z^-η_{z≤ r}.
Then, by Lemma <ref> and (<ref>),
∫_0^R dz z^2ζ p_ζ,η^(α)(1,r,z) z^-ν
≲_R ∫_0^r dz z^2ζ-ν· r^μ-(2ζ+1)· z^-η + ∫_r^R dz z^2ζ-ν· r^-η· z^μ-(2ζ+1)
≲ r^-ν-δ, r≤ R.
For the complementary integral, we use that Proposition <ref> and ν+δ>η imply
∫_R^∞ dz z^2ζ p_ζ,η^(α)(1,r,z)z^-ν≤ R^-ν∫_0^∞ dz z^2ζ p_ζ,η^(α)(1,r,z)
≲_R H(r) ≲_R r^-ν-δ, r≤ R.
This concludes the proof for r≤ R.
For r≥ R we split the z-integration at z=r/2. For z>r/2, we get
∫_r/2^∞ dz z^2ζ p_ζ,η^(α)(1,r,z)z^-ν≤ r^-ν∫_0^∞ dz z^2ζ p_ζ,η^(α)(1,r,z)
≲_R r^-ν, r≥ R.
For z<r/2, we get, using Lemma <ref> and p_ζ^(α)(1,r,z)≲ r^-(2ζ+1+α) (Proposition <ref>),
∫_0^r/2 dz z^2ζ p_ζ,η^(α)(1,r,z)z^-ν≲∫_0^r/2 dz z^2ζ-ν p_ζ^(α)(1,r,z) H(z)
≲ r^-ν-α.
The proof is concluded.
Let us introduce
H_ν(r) = 1 + r^-ν, r, ν>0.
Thus, H_η(r)=H(r), with H(r) as in (<ref>).
By Proposition <ref>, (<ref>) in Corollary <ref>, and Lemma <ref>, we conclude that for all ν∈[0,2ζ+1-η) and δ∈(0,η),
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) H_ν(s)
≲_δ H_(ν+δ)∨η(r) , t∈(0,1] , r>0 .
The following result improves and extends this estimate to all t>0.
Let ζ∈(-1/2,∞),
α∈(0,2),
η∈(0,(2ζ+1-α)/2]. Let ν∈[0,2ζ+1-η). Then,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) H_ν(s/t^1/α)
≲_ν H(r/t^1/α), t>0, r>0.
In particular,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s) H_ν(s)
≲_ν t^-ν/α H(r/t^1/α), t∈(0,1], r>0.
By scaling (<ref>) of p_ζ,η^(α), it suffices to prove (<ref>) for t=1.
If ν≤η, then the claim follows from (<ref>) in Corollary <ref>.
Thus, suppose ν=η+ξα with ξ∈(0,(2ζ+1-2η)/α). We distinguish between ξ∈(0,1) and ξ≥1.
If ξ<1, then (<ref>) in Theorem <ref> and (<ref>) in Corollary <ref> imply for η<μ≤η+α,
∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) H_μ(z)
≲ H(r) + ∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)s^-μ+α
≲ H(r), r>0.
Let 0<ϵ<(1-ξ)α∧η so that ν+ϵ=η+ξα+ϵ<η+α. By the semigroup property, ∫_0^1 dτ=1, and Formulae (<ref>) and (<ref>), if ν+ϵ∈(η,η+α) then there is C>0 such that
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s) H_ν(s)
= ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z)∫_0^∞ ds s^2ζ p_ζ,η^(α)(1-τ,z,s) H_ν(s)
≲∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) H_ν+ϵ(z)
≤ CH(r), r>0,
as desired. This concludes the case ξ∈(0,1).
Now let ξ∈[1,(2ζ+1-2η)/α). By Corollary <ref>, if η+α<μ<2ζ+1-η then
∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) H_μ(z)
≲ H_μ-α(r), r>0.
We fix ϵ∈(0,1) such that δ:=(1-ϵ)α<2ζ+1-ν-η, and δ<η. By the semigroup property, ∫_0^1dτ=1, (<ref>), and (<ref>), we have, for η+α≤μ≤ν,
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s) H_μ(s)
= ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) ∫_0^∞ ds s^2ζ p_ζ,η^(α)(1-τ,z,s) H_μ(s)
≲∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(τ,r,z) H_μ+δ(z)
≲ H_μ-ϵα(r), r>0.
We choose n∈ such that ξα∈[ϵ(nα-1),ϵ(nα-1)+α]∩[α,2ζ+1-2η).
By the semigroup property, (<ref>), and (<ref>), if η+(ξ-nϵ)α+ϵ∈ (η,η+α), then
∫_0^∞ ds s^2ζ p_ζ,η^(α)(n+1,r,s) H_ν(s)
= ∫_0^∞ dz_1 z_1^2ζ ... ∫_0^∞ dz_n z_n^2ζ∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,z_1) ... p_ζ,η^(α)(1,z_n,s) H_ν(s)
≤ C^n∫_0^∞ dz_1 z_1^2ζ p_ζ,η^(α)(1,r,z_1) H_ν-nϵα(z_1)
= C^n ∫_0^∞ dz_1 z_1^2ζ p_ζ,η^(α)(1,r,z_1) H_η+(ξ-nϵ)α(z_1)
≤ C^n+1 H(r),
r>0,
recalling ν=η+ξα. This ends the proof in view of p_ζ,η^(α)(n+1,r,s) ∼_n p_ζ,η^(α)(1,r,s).
Lemmas <ref> and <ref> allow us to improve the estimate for r,s≤ 2(2Ψ_ζ(η))^1/α from Lemma <ref>.
Let ζ∈(-1/2,∞), α∈(0,2), η∈(0,(2ζ+1-α)/2]. Let 0<r,s≤2(2Ψ_ζ(η))^1/α. Then, for each δ∈(0,η),
p_ζ,η^(α)(1,r,s)
≲_δ(H(r)s^-η-δ∧ H(s) r^-η-δ).
We will use Lemma <ref> and the symmetry of p_ζ,η^(α). The first line on the right-hand side of (<ref>) is clearly bounded by H(r)s^-η-δ. Now, we estimate the second line on the right-hand side of (<ref>). Note that for τ∈(1/2,1), we have p_ζ^(α)(τ,z,s)≲ 1 (see (<ref>)). Hence, by (<ref>), we have
∫_1/2^1 dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
≲∫_0^1 dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)
≲ H(r)
≲ H(r)s^-η-δ.
Next, let ε∈ (0,αδ). By (<ref>) with σ = η+δ and λ =α-ε and Lemma <ref>, we get
∫_0^1/2dτ∫_0^s/2 dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
+ ∫_g(s)^1/2 dτ∫_s/2^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
≲ s^-η-δ∫_0^1/2dτ τ^(ε-α)/α∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z) z^-(2ζ+1 -η-δ+ε)
≲ s^-η-δ∫_0^1/2 dτ τ^(ε-α)/αH(r/(1-τ)^1/α) · (1-τ)^-(2ζ+1 -η-δ+ε)/α
≲ H(r) s^-η-δ.
This concludes the proof.
Combining the previous results yields the desired upper bounds in Theorem <ref>.
Let ζ∈(-1/2,∞), α∈(0,2), η∈(0,(2ζ+1-α)/2]. Then,
p_ζ,η^(α)(t,r,s) ≲ p_ζ^(α)(t,r,s) H(t,r)H(t,s), r,s,t>0.
By scaling (Formula (<ref>)), it suffices to consider t=1. Let R=2(2Ψ_ζ(η))^1/α. Fix δ∈(0,(2ζ+1)/2-η). By the semigroup property for p_ζ,η^(α) and p_ζ^(α), Lemmas <ref> and <ref>, we have for 0<r,s≤ R,
p_ζ,η^(α)(2,r,s)
= ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1,r,z) p_ζ,η^(α)(1,z,s)
≲ H(r)H(s) ∫_0^R dz z^2ζ· z^-2(η+δ)
+ H(r)H(s) ∫_R^∞ dz z^2ζ p_ζ^(α)(1,r,z) p_ζ^(α)(1,z,s)
≲_R H(r)H(s) (1+ p_ζ^(α)(2,r,s)), r,s≤ R.
Since p_ζ^(α)(2,r,s)∼ p_ζ^(α)(1,r,s)∼_R 1 by (<ref>), this concludes the proof of r,s≤ R.
Now let r>0 and s≥ R. By Lemma <ref>,
p_ζ,η^(α)(1,r,s)
≲ p_ζ^(α)(1,r,s)H(r)
∼ p_ζ^(α)(1,r,s) H(r) H(s), s>R.
Combining (<ref>) and (<ref>), and using the symmetry of p_ζ,η^(α) yields the claim.
§.§.§ Lower bound
We prove the lower bound in (<ref>). By scaling (Formula (<ref>)), it suffices to consider t=1. Let q(t,r,s) := p_ζ,η^(α)(t,r,s)/(H(r) H(s)) and μ(ds) = H(s)^2s^2ζ ds. By Doob conditioning, q(t,r,s) is an integral kernel of a semigroup in L^2(_+,μ). By (<ref>) in Theorem <ref>, (<ref>), and Corollary <ref>,
1 ≤∫_0^∞ q(1,r,s) μ(ds) ≤ M+1, r>0.
By the upper bound for p_ζ,η^(α) in Lemma <ref>,
q(1,r,s) ≲ p_ζ^(α)(1,r,s), r,s>0.
This estimate and the bounds (<ref>) show that there is R>2 such that
∫_R^∞ q(1,r,s)μ(ds) ≤1/4, 0<r≤1.
Furthermore, there is 0<ρ<1 such that
∫_0^ρ q(1,r,s)μ(ds) ≤1/4, r>0.
Combining the last two estimates shows
∫_ρ^R q(1,r,s)μ(ds) ≥1/2, 0<r<1.
We are now ready to prove the lower bound in (<ref>). We consider the regions 0<r<1<s, and r,s≶1 separately and start with the former. We first note that
p_ζ^(α)(1,z,s)
∼1/|z-s|^1+α(z+s)^2ζ + (1+z+s)^2ζ
≥1/R^2ζ+1+α + s^2ζ+1+α + (1+s+R)^2ζ
∼ p_ζ^(α)(1,1,s),
z∈(ρ,R), s>ρ,
which follows from (<ref>). By the semigroup property, (<ref>), and (<ref>), we have for 0<r≤1 and s≥ρ,
q(2,r,s)
≥∫_ρ^R q(1,r,z)q(1,z,s) μ(dz)
≥∫_ρ^R q(1,r,z)p_ζ^(α)(1,z,s)/H(ρ)^2μ(dz)
≳ p_ζ^(α)(1,1,s) ∫_ρ^R q(1,r,z)μ(dz)
≳ p_ζ^(α)(1,1,s),
0<r≤1, s≥ρ.
Similarly, we get
q(3,r,s)
≳ p_ζ^(α)(1,1,s)
≳ p_ζ^(α)(3,r,s), 0<r<1<s,
where the second estimate follows from (<ref>) if s≥2 and from (<ref>) and (<ref>) if s≤2.
Now let r,s≤1. By the semigroup property, we obtain
q(3,r,s)
≥∫_ρ^R q(1,r,z)q(2,z,s)μ(dz)
≳∫_ρ^R q(1,r,z) p_ζ^(α)(1,z,1) μ(dz)
≳_ρ,R p_ζ^(α)(1,1,s)∫_ρ^R q(1,r,z) μ(dz)
∼ p_ζ^(α)(1,1,s)
∼ p_ζ^(α)(1,r,s), r,s≤1,
where we used (<ref>) in the second estimate for q(2,z,s). The third and the final estimates in (<ref>) follow from (<ref>) and (<ref>) and (<ref>), since r,s,z≲1.
Finally, if r,s≥1, then q(3,r,s)≥ H(1)^{-2} p_ζ,η^(α)(3,r,s) ≳ p_ζ^(α)(3,r,s). Combining all the cases shows
q(3,r,s) ≳ p_ζ^(α)(3,r,s), r,s>0.
By the definition of q(t,r,s) = p_ζ,η^(α)(t,r,s)/(H(r) H(s)), the claimed lower bound in Theorem <ref> is proved.
§.§.§ Continuity
To show that p_ζ,η^(α)(t,r,s) is jointly continuous, we will use Duhamel's formula and the known joint continuity of p_ζ^(α)(t,r,s).
To that end, we first prepare the following estimate.
Let ζ∈(-1/2,∞), α∈(0,2), η∈[0,(2ζ+1-α)/2], T>0, and
G_η^(T)(t,r,τ,s) := ∫_0^T dz z^2ζ· z^-α-η p_ζ^(α)(t,r,z)(τ^1/α+s+z )^-2ζ.
Then,
G_η^(T)(t,r,τ,s) ≲ r^-η-α s^-2ζ, ζ≥0, r,s,t,τ>0,
G_η^(T)(t,r,τ,s) ≲ r^-η-α (τ^1/α+T)^-2ζ, ζ∈(-1/2,0), r,t,τ>0, s∈(0,T).
Let ζ≥ 0. By Lemma <ref>,
G_η^(T)(t,r,τ,s) ≤ s^-2ζ∫_0^∞ dz z^2ζ· z^-α-η p_ζ^(α)(t,r,z) ≲ s^-2ζ r^-η-α.
If ζ∈ (-1/2,0), then
G_η^(T)(t,r,τ,s) ≤∫_0^∞ dz z^2ζ· z^-α-η p_ζ^(α)(t,r,z) (τ^1/α +2T)^-2ζ≲ (τ^1/α +2T)^-2ζ r^-η-α.
This concludes the proof.
Let ζ∈ (-1/2,∞), α∈(0,2), and η∈[0,(2ζ+1-α)/2]. Then, for every r,t>0, the function (0,∞)∋ s↦ p_ζ,η^(α)(t,r,s) is continuous.
The continuity of p_ζ^(α)(t,r,s) is well known, hence let η>0 from now on. By scaling, it suffices to consider t=1. Fix r,s>0 and let w>0. Then,
p_ζ,η^(α)(1,r,s) - p_ζ,η^(α)(1,r,w)
= p_ζ^(α)(1,r,s) - p_ζ^(α)(1,r,w)
+ ∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)[p_ζ^(α)(τ,z,s)- p_ζ^(α)(τ,z,w)].
We claim that the integral vanishes as w converges to s. Indeed, let ϵ∈(0,1/2).
Then, for τ∈(ϵ,1) and |w-s| sufficiently small, we have p_ζ^(α)(τ,z,w)∼ p_ζ^(α)(τ,z,s) by (<ref>). Thus, by dominated convergence,
lim_w→ s∫_ϵ^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) [p_ζ^(α)(τ,z,s) - p_ζ^(α)(τ,z,w)] = 0.
Furthermore, by the upper heat kernel bounds in Theorem <ref> and the 3G inequality in Lemma <ref>, together with Lemma <ref>, we have, for any ξ∈(s/2,2s) and ϵ∈(0,1/2),
∫_0^ϵ dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,ξ)
≲∫_0^ϵ dτ∫_0^∞ dz z^2ζ p_ζ^(α)(1-τ,r,z) H(1-τ,r)H(1-τ,z)q(z) p_ζ^(α)(τ,z,ξ)
≲ p^(α)(1,r,s) H(r) ∫_0^ϵ dτ[G_η^2(r∨ξ∨1)(1-τ,r,τ,ξ) + G_0^2(r∨ξ∨1)(1-τ,r,τ,ξ) .
. + G_η^2(r∨ξ∨1)(τ,ξ,1-τ,r) + G_0^2(r∨ξ∨1)(τ,ξ,1-τ,r)]
+ H(r) ∫_0^ϵ dτ∫_2(r∨ξ∨ 1)^∞ dz z^-2ζ -3α-2· (1+z^-η)
≲ϵ H(r)p^(α)(1,r,ξ)[(r^-η-α+r^-α)ξ^-2ζ + (ξ^-η-α+ξ^-α)r^-2ζ]_ζ≥0
+ ϵ H(r) p^(α)(1,r,ξ) [r^-η-α+r^-α/(1+r+ξ)^2ζ + ξ^-η-α+ξ^-α/((1-τ)^1/α+1+r+ξ)^2ζ]_ζ∈(-1/2,0)
+ ϵ H(r) (r+ξ+1)^-2ζ-3α-1
≲ϵ H(r) c(r,s),
where c(r,s)=c_ζ,η,α(r,s) depends only on r,s>0 and the fixed parameters ζ,α, and η. Using (<ref>), we see that
lim_ϵ→0∫_0^ϵ dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z)[p_ζ^(α)(τ,z,s)- p_ζ^(α)(τ,z,w)] = 0,
uniformly in w∈(s/2,2s).
This yields the result.
Let ζ∈ (-1/2,∞), α∈(0,2), and η∈(0,(2ζ+1-α)/2]. Then, p_ζ,η^(α)(t,r,s) is jointly continuous in t,r,s>0.
By scaling, it suffices to consider t=1. Fix r,s>0 and let r̃ and s̃ be close to r and s respectively. As in Lemma <ref>, it suffices to show
∫_0^1 dτ∫_0^∞ dz z^2ζ| p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) - p_ζ,η^(α)(1-τ,r,z) p_ζ^(α)(τ,z,s)| q(z) → 0
as r̃→ r and s̃→ s. In addition to (<ref>) we get (with the same arguments)
∫_1-ϵ^1 dτ∫_0^∞ dz z^2ζ p_ζ,η^(α)(1-τ,r,z)q(z) p_ζ^(α)(τ,z,s)
≲∫_0^ϵ dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z) H(τ,r)H(τ,z) q(z) p_ζ^(α)(1-τ,z,s) ≤ϵ c(r,s)
for a constant c(r,s)>0 independent of ϵ.
By (<ref>)–(<ref>), we get
∫_0^1 dτ∫_0^∞ dz z^2ζ| p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) - p_ζ,η^(α)(1-τ,r,z)p_ζ^(α)(τ,z,s)| q(z)
≤∫_0^ϵ dτ∫_0^∞ dz z^2ζ[ p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) + p_ζ,η^(α)(1-τ,r,z) p_ζ^(α)(τ,z,s)] q(z)
+ ∫_1-ϵ^1 dτ∫_0^∞ dz z^2ζ[ p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) + p_ζ,η^(α)(1-τ,r,z) p_ζ^(α)(τ,z,s)] q(z)
+ ∫_ϵ^1-ϵ dτ∫_0^∞ dz z^2ζ| p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) - p_ζ,η^(α)(1-τ,r,z) p_ζ^(α)(τ,z,s)| q(z)
≤ϵ c(r,s) + ∫_ϵ^1-ϵ∫_0^∞ dz z^2ζ| p_ζ,η^(α)(1-τ,r̃,z) p_ζ^(α)(τ,z,s̃) - p_ζ,η^(α)(1-τ,r,z)p_ζ^(α)(τ,z,s) |.
By the upper and lower heat kernel estimates in Theorem <ref>, we have, for ϵ<τ<1-ϵ, the estimates
p_ζ,η^(α)(1-τ,r̃,z) ∼ p_ζ,η^(α)(1-τ,r,z) and
p_ζ^(α)(τ,z,s̃) ∼ p_ζ^(α)(τ,z,s).
By Lemma <ref> and dominated convergence, the last integral is arbitrarily small. This concludes the proof of Lemma <ref> and in particular that of the continuity statement in Theorem <ref>.
§.§.§ Blowup
We now prove Corollary <ref> for all α∈(0,2] using Theorem <ref>. Note that the pointwise bounds stated in Theorem <ref> extend to α=2 in view of Remark <ref>.
Let η_*:=(2ζ+1-α)/2 be the parameter corresponding to the critical coupling constant κ_ c(ζ,α) and recall our assumption κ>κ_ c(ζ,α).
According to the Appendix <ref>,
for t,r,s>0, we define the Schrödinger perturbation p̃_κ(t,r,s)
of p_ζ^(α)(t,r,s) by κ/r^α:
p̃_κ(t,r,s)
:= ∑_n≥0 p_κ^(n)(t,r,s), where
p_κ^(0)(t,r,s) := p_ζ^(α)(t,r,s), and
p_κ^(n)(t,r,s) := ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ^(α)(τ,r,z) κ/z^α p_κ^(n-1)(t-τ,z,s), n∈ℕ.
Our goal is to prove that p̃_κ(t,r,s)=∞ for s<t.
By the definitions of p̃_κ and p_ζ,η^(α), we have p̃_κ≥ p_ζ,η_*^(α).
In fact, according to <cit.>, p̃_κ may be considered as a Schrödinger perturbation of p_ζ,η_*^(α) by the positive potential (κ-κ_ c(ζ,α))r^-α. Using the second term in the resulting perturbation series
and the lower heat kernel bound in Theorem <ref>, we get
p̃_κ(t,r,s)
≥(κ-κ_ c(ζ,α)) ∫_0^t dτ∫_0^∞ dz z^2ζ p_ζ,η_*^(α)(t-τ,r,z) z^-α p_ζ,η_*^(α)(τ,z,s)
= ∞,
for s<t. This concludes the proof.
§.§ The case η∈(-α,0)
Throughout this section, we assume α∈(0,2).
Similarly as in (<ref>)–(<ref>), and inspired by Proposition <ref> and
Theorem <ref>, we consider the total mass of the kernel p_ζ,η^(α),
M(t,r) := ∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s).
By the scaling (<ref>), we have the scaling relation
M(t,r)
= ∫_0^∞ ds s^2ζ t^-(2ζ+1)/α p_ζ,η^(α)(1,t^-1/αr,t^-1/αs)
= M(1,t^-1/αr).
Note that, by Theorem <ref>, p_ζ,η^(α) satisfies the Chapman–Kolmogorov equations (<ref>), the Duhamel formulae (<ref>), and
0 < p_ζ,η^(α)(t,r,s) ≤ p_ζ^(α)(t,r,s) for all r,s,t>0.
In particular, by the normalization (<ref>), we have
0 ≤ M(t,r) ≤∫_0^∞ ds s^2ζ p_ζ^(α)(t,r,s) = 1.
§.§.§ Upper bound
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, there is C>0 such that for all r>0,
M(1,r) ≤ C(1∧ r^-η).
By Chapman–Kolmogorov and p_ζ,η^(α)(1/3,z,w)≤ c for all z,w>0 (by (<ref>)), we have
p_ζ,η^(α)(1,r,s)
= ∫_0^∞ dz z^2ζ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1/3,r,z) p_ζ,η^(α)(1/3,z,w) p_ζ,η^(α)(1/3,w,s)
≤ c ∫_0^∞ dz z^2ζ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1/3,r,z) p_ζ,η^(α)(1/3,w,s)
= c M(1/3,r) M(1/3,s) = c M(1,3^1/αr) M(1,3^1/αs).
Next, let ρ>0 be so small that δ:=c· (2ζ+1)^-1ρ^2ζ+1 < 3^η/α. Recall η<0. Then, by (<ref>), (<ref>), and (<ref>), we have for any r>0,
M(1,r)
≤∫_0^ρ ds s^2ζ p_ζ,η^(α)(1,r,s) + ρ^η∫_ρ^∞ ds s^2ζ p_ζ,η^(α)(1,r,s) s^-η
≤∫_0^ρ ds s^2ζ p_ζ,η^(α)(1,r,s) + A r^-η
≤ c∫_0^ρ ds s^2ζ M(1,3^1/αr) M(1,3^1/αs) + A r^-η
≤δ M(1,3^1/αr) + A r^-η,
where A:=ρ^η. Iterating (<ref>) yields for all r>0,
M(1,r)
≤δ M(1,3^1/αr) + A r^-η
≤δ[δ M(1,3^2/αr) + A (3^1/αr)^-η] + A r^-η
≤δ^2 [δ M(1,3^3/αr) + A (3^2/αr)^-η] + A (1+δ 3^-η/α) r^-η
≤⋯
≤δ^n M(1,3^n/αr) + A [1 + δ 3^-η/α + ⋯ + (δ 3^-η/α)^n-1] r^-η.
Since δ<1 and δ 3^-η/α<1, the above geometric series converges. Hence, by (<ref>), we obtain
M(1,r) ≤A/1-δ 3^-η/α r^-η,
which concludes the proof.
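The way the iterated inequality stabilizes can also be illustrated numerically; the snippet below uses made-up values of α, η, δ, A (only required to satisfy δ<1 and δ3^{-η/α}<1) and unrolls the recursion starting from the a priori bound M≤1.

```python
# Illustration of the self-improving estimate for M(1,r); the parameter values are made up.
alpha, eta = 1.5, -0.8          # any eta in (-alpha, 0)
delta, A, r = 0.25, 2.0, 0.1    # delta < 1 and delta * 3^(-eta/alpha) < 1

ratio = delta * 3**(-eta / alpha)   # ratio of the geometric series
for n in (1, 2, 5, 10, 30):
    # after n iterations: M(1,r) <= delta^n * 1 + A * (1 + ratio + ... + ratio^{n-1}) * r^{-eta}
    bound = delta**n + A * sum(ratio**k for k in range(n)) * r**(-eta)
    print(n, bound)

print(A / (1.0 - ratio) * r**(-eta))  # limiting bound A r^{-eta} / (1 - delta * 3^{-eta/alpha})
```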
Applying Proposition <ref> to (<ref>) immediately yields
Let ζ∈(-1/2,∞), α∈(0,2), η∈(-α,0). Then, there is a constant C>0 such that
p_ζ,η^(α)(1,r,s)
≤ C (1∧ r^-η) (1∧ s^-η),
r,s>0.
We now refine these upper bounds for p_ζ,η^(α)(t,r,s).
Let ζ∈(-1/2,∞), α∈(0,2), and η∈ (-α,0). Then, for any r,s,t>0, we have
∫_|z-s|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(t,z,s)
≤p_ζ^(α)(2t,r,s)/2.
Fix r,s,t>0. Then, by p_ζ^(α)(t,r,s) = p_ζ^(α)(t,s,r) for all r,s,t>0,
∫_|z-s|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(t,z,s)
= ∫_|z-r|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(t,z,s).
By Chapman–Kolmogorov,
2∫_|z-s|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z)p_ζ^(α)(t,z,s)
= ∫_|z-s|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z)p_ζ^(α)(t,z,s)
+ ∫_|z-r|<|r-s|/2 dz z^2ζ p_ζ^(α)(t,r,z)p_ζ^(α)(t,z,s)
≤∫_0^∞ dz z^2ζ p_ζ^(α)(t,r,z) p_ζ^(α)(t,z,s)
= p_ζ^(α)(2t,r,s),
which concludes the proof.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0).
Let also h(t,r):=(r t^-1/α)^-η.
Then, there is a constant A>0 such that for all r,s,t>0, we have
p_ζ,η^(α)(t,r,s)
≤∫_|z-s|<|r-s|/2 dz z^2ζ p_ζ,η^(α)(t/2,r,z) p_ζ,η^(α)(t/2,z,s) + A h(t,r) p_ζ^(α)(t,r,s).
By Chapman–Kolmogorov for p_ζ,η^(α), we have for any r,s,t>0,
p_ζ,η^(α)(t,r,s)
= ∫_|z-s|≤|r-s|/2 dz z^2ζ p_ζ,η^(α)(t/2,r,z) p_ζ,η^(α)(t/2,z,s)
+ ∫_|z-s|≥|r-s|/2 dz z^2ζ p_ζ,η^(α)(t/2,r,z) p_ζ,η^(α)(t/2,z,s).
By (<ref>) and Proposition <ref>, we have
∫_0^∞ ds s^2ζ p_ζ,η^(α)(t,r,s)
= M(t,r) = M(1,t^-1/αr)
≤ c h(t,r), r,t>0.
For t>0 and r,s,z>0 with |z-s|>|r-s|/2, by (<ref>) and (<ref>), we have
p_ζ,η^(α)(t/2,z,s)
≤ p_ζ^(α)(t/2,z,s)
≲ p_ζ^(α)(t,r,s).
This implies for r,s,t>0,
∫_|z-s|≥|r-s|/2dz z^2ζ p_ζ,η^(α)(t/2,r,z) p_ζ,η^(α)(t/2,z,s)
≲ p_ζ^(α)(t,r,s) ∫_|z-s|≥|r-s|/2dz z^2ζ p_ζ,η^(α)(t/2,r,z)
≲ p_ζ^(α)(t,r,s) · h(t/2,r)
= 2^-η/α h(t,r)p_ζ^(α)(t,r,s),
which concludes the proof.
We are now in position to derive the upper bounds in Theorem <ref> in the following key lemma.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, for all r,s,t>0,
p_ζ,η^(α)(t,r,s)
≲_ζ,α,η (1∧ (t^-1/αr)^-η) · (1∧ (t^-1/αs)^-η) · p_ζ^(α)(t,r,s).
By the scaling relation (<ref>), it suffices to consider t=1. Let δ=1/2 and ν=2^(-η-α)/α<1. As in Lemma <ref>, let h(t,r)=(r/t^1/α)^-η. Note that
δ h(t/2,r)
= 1/2 ((t/2)^-1/αr)^-η
= 2^(-η-α)/α (t^-1/αr)^-η
= ν h(t,r), r,t>0.
Let A be the constant from Lemma <ref>. We claim that for any n∈_0, we have
p_ζ,η^(α)(t,r,s)
≤[δ^n+1 + (1+ν+⋯+ν^n) A h(t,r)] p_ζ^(α)(t,r,s),
r,s,t>0.
We prove (<ref>) by induction. For n=0, t>0, and r,s>0, Lemmas <ref> (together with
p_ζ,η^(α)(t,r,s) ≤ p_ζ^(α)(t,r,s)) and <ref> imply
p_ζ,η^(α)(t,r,s)
≤[ δ + A h(t,r) ] p_ζ^(α)(t,r,s).
We now make the induction step. Thus, suppose (<ref>) was true for fixed n∈, i.e.,
p_ζ,η^(α)(t,r,s)
≤[δ^n + (1+ν+⋯+ν^n-1) A h(t,r) ] p_ζ^(α)(t,r,s),
r,s,t>0.
Then, for any r,s,t>0, Lemmas <ref> and <ref>, together with (<ref>) yield
p_ζ,η^(α)(t,r,s)
≤∫_|z-s|≤|r-s|/2dz z^2ζ p_ζ,η^(α)(t/2,r,z) p_ζ,η^(α)(t/2,z,s) + A h(t,r) p_ζ^(α)(t,r,s)
≤∫_|z-s|≤|r-s|/2dz z^2ζ[δ^n + (1+ν+⋯+ν^{n-1}) A h(t/2,r)] p_ζ^(α)(t/2,r,z) p_ζ^(α)(t/2,z,s)
+ A h(t,r) p_ζ^(α)(t,r,s)
≤[ δ^n + (1+ν+⋯+ν^{n-1}) A h(t/2,r) ] ·δ p_ζ^(α)(t,r,s) + A h(t,r) p_ζ^(α)(t,r,s)
≤[δ^{n+1} + (ν+⋯+ν^n) A h(t,r) ] · p_ζ^(α)(t,r,s) + A h(t,r)p_ζ^(α)(t,r,s)
= [ δ^{n+1} + (1+ν+⋯+ν^n) A h(t,r)] · p_ζ^(α)(t,r,s),
which proves (<ref>). Taking t=1, observing h(1,r)=r^-η and letting n→∞ in (<ref>) thus yields
p_ζ,η^(α)(1,r,s)
≤A/1-ν r^-η p_ζ^(α)(1,r,s),
r,s>0.
This estimate allows us to conclude the claimed upper bound (<ref>) (with t=1). By the symmetry of p_ζ,η^(α) it suffices to assume 0<r<s.
For 1<r<s, the claim (<ref>) follows from p_ζ,η^(α)≤ p_ζ^(α)(1,r,s).
For 0<r<s<1 we use Corollary <ref> as well as p_ζ^(α)(1,r,s)≳1.
Finally, for 0<r<1<s, the claim (<ref>) follows from (<ref>).
This concludes the proof of Lemma <ref> and thereby the upper bound in Theorem <ref>.
§.§.§ Lower bound
We begin with a lower bound on the function M(t,r), defined in (<ref>).
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, there is C>0 such that
M(1,r) ≥ C(1∧ r^-η), r>0.
Let R>0 and 0<r<R/2. Then, by (<ref>), there are R-independent numbers c_1,c_2>0 such that
∫_R^∞ ds s^2ζ p_ζ^(α)(1,r,s) s^-η≤ c_1 ∫_R^∞ ds s^2ζ·s^-η/s^2ζ+1+α
= c_2 R^-η-α,
and the right-hand side vanishes as R→∞. Let C≡ C_ζ,α,η be the constant in the upper heat kernel bounds (<ref>) and choose R≥1 so large that c_2CR^-η-α≤1/2. Then, by (<ref>) in Theorem <ref>, for ρ≥ R and r∈(0,ρ/2), we have
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)
≥ρ^η∫_0^ρ ds s^2ζ p_ζ,η^(α)(1,r,s) s^-η
= ρ^η( r^-η - ∫_ρ^∞ ds s^2ζ p_ζ,η^(α)(1,r,s) s^-η)
≥ρ^η(r^-η - C∫_ρ^∞ ds s^2ζ· r^-η p_ζ^(α)(1,r,s) s^-η)
≥r^-η/2ρ^-η.
Thus, we proved
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)
≥r^-η/2R^-η≥1∧ r^-η/2R^-η, 0<r<R/2.
On the other hand, if r>R/2, then, by taking ρ=2r+1 in (<ref>), we obtain
∫_0^∞ ds s^2ζ p_ζ,η^(α)(1,r,s)
≥r^-η/2(2r+1)^-η≥r^-η/2(4r)^-η≥1/4^-η+1/2, r>R/2.
Combining (<ref>)–(<ref>) yields (<ref>) and concludes the proof.
To get the desired lower bounds in Theorem <ref>, we compare p_ζ^(α)(t,r,s) with p_ζ,η^(α)(t,r,s) and distinguish between r∨ s≲1 and r∧ s≳1. We first focus on the case r∧ s≳1.
To that end, recall the function p_t^(1,D)(r,s) from the perturbation series in (<ref>).
By the scaling (<ref>) of p_ζ^(α), we have the scaling
p_t^(1,D)(r,s) = t^-2ζ+1/α p_1^(1,D)(t^-1/αr,t^-1/αs),
r,s,t>0.
The statement of the following lemma is the bound in (<ref>), proved in Appendix <ref>. It is closely related to estimates in <cit.> or <cit.>. It is helpful to obtain lower bounds for p_ζ,η^(α)(1,r,s) when r,s≳1 from upper bounds for p_1^(1,D), since Ψ_ζ(η)<0 for η<0.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, for all r,s,t>0,
p_ζ,η^(α)(t,r,s)
≥ p_ζ^(α)(t,r,s) exp(p_t^(1,D)(r,s)/p_ζ^(α)(t,r,s)).
Our task thus consists of finding good upper bounds for p_1^(1,D)(r,s) when r,s≳1.
The following lemma, in combination with Lemma <ref>, is important for the derivation of lower bounds for p_ζ,η^(α)(1,r,s) when r,s≳1.
Let ζ∈(-1/2,∞), α∈(0,2). Then, for all r,s>1, we have
p_1^(1,D)(r,s)
≲ p_ζ^(α)(1,r,s).
Note that <cit.> proved both upper and lower bounds. However, the upper bounds suffice for the subsequent analysis.
It suffices, by Chapman–Kolmogorov and (<ref>), to estimate
∫_0^1 dτ∫_0^∞ dz z^2ζ p_ζ^(α)(1-τ,r,z) z^-α p_ζ^(α)(τ,z,s)
≤ 2^α p_ζ^(α)(1,r,s)
+ ∫_0^1 dτ∫_0^1/2 dz z^2ζ p_ζ^(α)(1-τ,r,z) z^-α p_ζ^(α)(τ,z,s)
≲ p_ζ^(α)(1,r,s)
+ ∫_0^1 dτ∫_0^1/2 dz z^2ζ-α1/(1+r)^1+α+2ζ(1+s)^1+α+2ζ
≲ p_ζ^(α)(1,r,s).
In the last estimate we used again (<ref>) and the inequality [(1+r)(1+s)]^-1-α-2ζ≲ (1+|r-s|)^-1-α(1+r+s)^-2ζ valid for all ζ>-1/2.
Recall that Ψ_ζ(η)<0 for η<0. Thus, Lemmas <ref> and <ref> lead to the following useful estimate, which suffices to prove the desired lower heat kernel bounds for p_ζ,η^(α)(1,r,s) when r∧ s≳1.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then,
p_ζ,η^(α)(t,r,s)
≳ p_ζ^(α)(t,r,s), r,s≳ t^1/α>0.
Since Ψ_ζ(η)<0, the result follows by Lemmas <ref> and <ref>.
Corollary <ref> suffices for the derivation of the lower heat kernel bounds for r∧ s≳1. The following lemma will be used to derive the lower heat kernel bounds when r∨ s≲1. In combination with Corollary <ref> it will also handle the case r≲ 1 ≲ s.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0).
Then, there are R>1 and C_R>0 such that for all r,s>0 with r∨ s≤ R,
p_ζ,η^(α)(1,r,s) ≥ C_R (r· s)^-η.
By (<ref>) there is R_0>2· 3^1/α large enough such that for all 0<r<R_0/2,
∫_0^R_0 ds s^2ζ p_ζ,η^(α)(1,r,s)
≥r^-η/2R_0^-η.
On the other hand, the upper heat kernel bounds (<ref>) imply for all ρ_0>0 and r>0,
∫_0^ρ_0 ds s^2ζ p_ζ,η^(α)(1,r,s)
≤ C r^-η∫_0^ρ_0 ds s^2ζ p_ζ^(α)(1,r,s) s^-η≤ C r^-η·ρ_0^-η
where we used (<ref>) and -η>0 in the last step. We now take ρ_0=(4C)^1/ηR_0^-1. Then, for r<R_0/2,
∫_ρ_0^R_0 dz z^2ζ p_ζ,η^(α)(1,r,z)
≥ r^-η(1/2R_0^-η - Cρ_0^-η)
= r^-η/4R_0^-η.
Therefore, by Chapman–Kolmogorov for p_ζ,η^(α) and estimate (<ref>),
p_ζ,η^(α)(3,r,s)
≥∫_ρ_0^R_0 dw w^2ζ∫_ρ_0^R_0 dz z^2ζ p_ζ,η^(α)(1,r,z) p_ζ,η^(α)(1,z,w) p_ζ,η^(α)(1,w,s)
≥(rs)^-η/16R_0^-2ηinf_z,w∈(ρ_0,R_0) p_ζ,η^(α)(1,z,w).
The infimum on the right is estimated with the help of (<ref>) by
inf_z,w∈(ρ_0,R_0) p_ζ,η^(α)(1,z,w)
≳_ρ_0,R_0inf_z,w∈(ρ_0,R_0) p_ζ^(α)(1,z,w)
≳_ρ_0,R_0 1,
where we used (<ref>) (together with R_0≳1) to estimate the final infimum. Hence,
p_ζ,η^(α)(3,r,s) ≳ (rs)^-η, r∨ s<R_0/2.
Thus, by the scaling property of p_ζ,η^(α) we obtain
p_ζ,η^(α)(1,r,s)
≳_ζ,α,η,ρ_0,R_0 (rs)^-η,
r∨ s<R_0/2· 3^1/α.
This concludes the proof of Lemma <ref>.
We are now in position to prove the lower bounds in Theorem <ref>.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then,
p_ζ,η^(α)(1,r,s)
≳_ζ,α,η (1∧ r^-η)(1∧ s^-η) p_ζ^(α)(1,r,s).
By symmetry, it suffices to consider r≤ s. For 0<r<1/4, s>1, and 1/4≤ z≤1/2, Lemma <ref> and (<ref>) imply
p_ζ,η^(α)(1,r,z) ≳ r^-η
and
p_ζ,η^(α)(1,s,z)
≳ p_ζ^(α)(1,s,z)
∼ p_ζ^(α)(1,s,r)
where we used s≥ 2z≥ 2r, s>1, and (<ref>) in the final step. Hence, for any 0<r<1/4 and s>1, we have
p_ζ,η^(α)(1,r,s)
≥∫_1/4^1/2 dz z^2ζ p_ζ,η^(α)(1/2,r,z) p_ζ,η^(α)(1/2,z,s)
≳ r^-η p_ζ^(α)(1/2,r,s)
∼ r^-η p_ζ^(α)(1,r,s).
Finally, for r∧ s≥ 1/4, the claim follows from (<ref>), whereas for r∨ s≤ 1 the claim follows from Lemma <ref>.
This concludes the proof of Lemma <ref> and therefore the proof of the lower bound on Theorem <ref>.
§.§.§ Continuity
We now prove the continuity statement in Theorem <ref> for η<0.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, for any fixed r,t>0, the function (0,∞)∋ s↦ p_ζ,η^(α)(t,r,s) is continuous.
Fix r,s>0 and let z>0 converge to s. Recall q(r)=Ψ_ζ(η)r^-α with Ψ_ζ(η)<0. By Duhamel's formula,
p_ζ,η^(α)(1,r,s) - p_ζ,η^(α)(1,r,z)
= p_ζ^(α)(1,r,s) - p_ζ^(α)(1,r,z)
+ ∫_0^1 dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1-τ,r,w)q(w)(p_ζ^(α)(τ,w,s) - p_ζ^(α)(τ,w,z)).
For all sufficiently small ϵ, estimates (<ref>), (<ref>) and (<ref>) imply
-∫_0^ϵ dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1-τ,r,w)q(w) p_ζ^(α)(τ,w,s)
≤ -∫_0^ϵ∫_0^∞ dw w^2ζ p_ζ^(α)(1-τ,r,w) q(w) p_ζ^(α)(τ,w,s)
≲ -∫_0^ϵ∫_0^∞ dw w^2ζ p_ζ^(α)(τ,w,s)q(w)
≲ϵ s^-α.
Analogously, we have
-∫_0^ϵ dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1-τ,r,w)q(w)p_ζ^(α)(τ,w,z)
≲ϵ z^-α.
Next, for any ϵ<τ≤1 and w,s,z>0 with z→ s, we have
p_ζ^(α)(τ,w,s)∼ p_ζ^(α)(τ,w,z) (e.g., as a consequence of (<ref>)). By dominated convergence,
∫_ϵ^1 dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1-τ,r,w)q(w)(p_ζ^(α)(τ,w,s)-p_ζ^(α)(τ,w,z)) →0 as z→ s.
Combining all previous estimates establishes the claim.
Let ζ∈(-1/2,∞), α∈(0,2), and η∈(-α,0). Then, p_ζ,η^(α)(t,r,s) is jointly continuous as a function of r,s,t>0.
By scaling, it suffices to prove the continuity of p_ζ,η^(α)(1,r,s) with respect to r,s>0. As indicated in the proof of Lemma <ref>, we only need to verify
∫_0^1 dτ∫_0^∞ dw w^2ζ| p_ζ,η^(α)(1-τ,r̃,w) p_ζ^(α)(τ,w,s̃) - p_ζ,η^(α)(1-τ,r,w) p_ζ^(α)(τ,w,s) | q(w) →0
for any r,s,r̃,s̃>0 with r̃→ r and s̃→ s. In addition to (<ref>), we have
-∫_1-ϵ^1 dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(1-τ,r,w) q(w) p_ζ^(α)(τ,w,s)
= - ∫_0^ϵ dτ∫_0^∞ dw w^2ζ p_ζ,η^(α)(τ,r,w) q(w) p_ζ^(α)(1-τ,w,s)
≤ -∫_0^ϵ dτ∫_0^∞ dw w^2ζ p_ζ^(α)(τ,r,w) q(w) p_ζ^(α)(1-τ,w,s)
≲ϵ r^-α
by the same argument as in the proof of (<ref>).
For any ϵ<τ<1-ϵ and r,s,z,r̃,s̃>0 with r̃→ r and s̃→ s, we have, by Lemma <ref>, both p_ζ^(α)(τ,z,s̃) ∼ p_ζ^(α)(τ,z,s) and p_ζ,η^(α)(1-τ,r̃,z)∼ p_ζ,η^(α)(1-τ,r,z). Then, by dominated convergence,
∫_ϵ^1-ϵ dτ∫_0^∞ dw w^2ζ|p_ζ,η^(α)(1-τ,r̃,w) p_ζ^(α)(τ,w,s̃) - p_ζ,η^(α)(1-τ,r,w)p_ζ^(α)(τ,w,s) | q(w) → 0
for any r,s,r̃,s̃>0 with r̃→ r and s̃→ s. This concludes the proof of Proposition <ref> and in particular the continuity in Theorem <ref>.
§ SCHRÖDINGER PERTURBATIONS
In this Appendix, we discuss Schrödinger perturbations of transition densities needed in Theorem <ref>.
However, the setting is more general and the results are of independent interest; in particular we allow transition densities inhomogeneous in time.
§.§ General setting and positive perturbations
Let X be a locally compact space with a countable base of open sets. Consider the Borel σ-algebra ℳ on X. Let m be a σ-finite measure on (X,ℳ).
We further consider an arbitrary nonempty interval on the real line , with the Borel σ-field ℬ and the Lebesgue measure ds. We call × X the space-time.
The functions considered below are assumed to be (jointly) Borel measurable.
Let p be a
transition density on X with time in . This means that p is a (jointly measurable) function on × X ×× X with values in [0,∞] such that p(s,x,t,y) =0 for s ≥ t and the Chapman-Kolmogorov equations hold for p, i.e.,
p(s,x,t,y)=∫_X p(s,x,u,z) p(u,z,t,y) m(dz), s,t, u∈, s<u<t, x,y∈ X.
Below, we say that p is finite and strictly positive if 0<p(s,x,t,y)<∞ for all s,t∈, s<t and x,y∈ X.
We consider a measurable function (potential) q≥ 0 on × X and let
p_0(s,x,t,y) := p(s,x,t,y)
p_n(s,x,t,y) := ∫_s^t ∫_X p_n-1(s,x,u,z) q(u,z) p(u,z,t,y) m(dz) du.
Here and below, s,t∈, s<t, and x,y∈ X.
We define
p^q(s,x,t,y) := ∑_n=0^∞ p_n(s,x,t,y)
and say that p^q is a Schrödinger perturbation of p by q. Specifically, the perturbation is positive since q≥ 0 and mass-creating since p^q≥ p.
Of course, p^q is increasing in q≥ 0.
By <cit.>, we have the Chapman-Kolmogorov equations
p^q(s,x,t,y) = ∫_X p^q(s,x,u,z) p^q(u,z,t,y) m(dz), s<u<t.
By Tonelli,
p^q(s,x,t,y) also satisfies the following perturbation (Duhamel) formulas
p^q(s,x,t,y) = p(s,x,t,y) + ∫_s^t ∫_X p^q(s,x,u,z) q(u,z) p(u,z,t,y) m(dz) du
=
p(s,x,t,y) + ∫_s^t ∫_X p(s,x,u,z) q(u,z) p^q(u,z,t,y) m(dz) du.
As straightforward as positive Schrödinger perturbations are, p^q may be infinite. For example, this is so for the Gaussian kernel p(s,x,t,y)=(4π (t-s))^-d/2e^-|y-x|^2/(4(t-s)) in , d>2, and q(x)=c|x|^-2, x∈, if (and only if) c >(d-2)^2/4, which underlies the famous result of Baras and Goldstein <cit.>. Needless to say, the example is
time-homogeneous, meaning that p(s,x,s+t,y):=p_t(x,y),
s∈, t>0,
and q(s,x):=q(x),
s∈, yield
a (time-homogeneous) perturbation p_t^q(x,y):=p^q(0,x,t,y)=p^q(s,x,s+t,y), s∈, t>0.
It is in such a time-homogeneous setting that the results of this Appendix are used in Subsection <ref>.
§.§ Negative perturbations
We now focus on functions q (or -q) which may be negative. To facilitate discussion, we assume p is sub-Markov, i.e., we assume ∫_X p(s,x,t,y) m(dy)≤1.
Then we can extend the definitions
(<ref>)—first to the case
of bounded, but not necessarily positive q:× X→.
Indeed, in this case we can estimate the integrals by using |q| and q_∞:=sup_s∈,x∈ X |q(s,x)|.
In particular, by induction, we get
|p_n(s,x,t,y)|≤q_∞^n (t-s)^n/n! p(s,x,t,y), n=0,1,….
We see that the perturbation series converges absolutely and
|p^q(s,x,t,y)|≤ p^|q|(s,x,t,y)≤ e^q_∞ (t-s)p(s,x,t,y).
By <cit.>, p^q satisfies Chapman-Kolmogorov equations.
By Fubini, p^q also satisfies the Duhamel formulas (<ref>).
Next, we shall define and study rather general negative
perturbations of transition densities. To this end, we consider integral kernels P(s,x,B) on the space-time, where s∈, x∈ X, and B is a (measurable) subset of × X. Thus, by <cit.>,
(s,x)↦ P((s,x),B) is measurable in (s,x) for every fixed B, and
B↦ P((s,x),B) is a measure for every fixed (s,x).
Then,
Pf(s,x):=∫ f(t,y)P(s,x,dt dy)≥ 0
is measurable for every measurable function f≥ 0 on × X. We will identify the kernel P with the operator P. In fact, every additive, positively homogeneous and monotone operator on nonnegative measurable functions
is an integral kernel, see <cit.>.
For instance, our transition density p defines, for functions f≥ 0 on × X,
Pf(s,x) := ∫_∫_X p(s,x,t,y) f(t,y)m(dy)dt, s∈, x∈ X.
Here is a variation on <cit.>, which points out a sub-Markov resolvent <cit.> associated with the kernel P in (<ref>). The proof for a general time interval is the same as for the whole real line, with only minor adjustments; we provide it for the reader's convenience.
For (<ref>), a sub-Markov resolvent P^λ, λ>0, exists with sup_λ>0 P^λ = P.
For λ≥ 0, we let
p^-λ(s,x,t,y) := e^-λ(t-s)p(s,x,t,y), s,t∈, x,y∈ X,
and, for f≥ 0,
P^λ f(s,x) := ∫_∫_X p^-λ(s,x,t,y) f(t,y) m(dy) dt, s∈, x∈ X.
For clarity, note that p^-λ(s,x,t,y) = 0 for s≥ t, so the time integral in (<ref>) runs from s to S, where S denotes the supremum of the time interval. Of course, sup_λ>0 P^λ = P as kernels. Furthermore, for s∈, x∈ X, λ>0,
λ P^λ (s,X) = λ∫_∫_X p^-λ(s,x,t,y) m(dy) dt ≤λ∫_s^∞ e^-λ(t-s) dt ≤ 1.
Finally, by the Chapman-Kolmogorov equations, for λ>μ, s∈, x∈ X, f≥ 0,
P^μ P^λ f(s,x) +(λ-μ)^-1P^λf(s,x)
= ∫_∫_X ∫_∫_X p^-μ(s,x,u,z)p^-λ(u,z,t,y)f(t,y) m(dy) dt m(dz) du+(λ-μ)^-1P^λf(s,x)
= ∫_s^S ∫_X p(s,x,t,y) f(t,y) e^μ s e^-λ t∫_s^t e^-u(μ-λ) du m(dy) dt+(λ-μ)^-1P^λf(s,x)
= (λ-μ)^-1∫_X ∫_s^S p(s,x,t,y) f(t,y) ( e^-μ(t-s) - e^-λ(t-s)) m(dy) dt
+ (λ-μ)^-1P^λf(s,x)
= (λ-μ)^-1 P^μf(s,x).
Thus, P^λ, λ>0, is a sub-Markov resolvent.
The notation p^-λ in (<ref>) agrees with that in (<ref>). In particular, the resolvent equation (<ref>) with μ=0 is a variant of the Duhamel formula for q≡ -λ.
The following lemma is crucial for handling negative perturbations.
If f,g≥ 0, Pf≤ Pg+1 on {f>0}, then Pf≤ Pg+1 on × X. In particular, if P|h|<∞ and Ph≤ 1 on {h>0}, then Ph≤ 1 on × X.
Here, of course, {f>0}:={(s,x)∈× X: f(s,x)>0}. We will call the first implication in Lemma <ref> the complete maximum principle, abbreviated as CMP. It is a variant of <cit.>, but here we do not assume that f is bounded and Pf finite. CMP may be also viewed as a variant of the domination principle <cit.>, but discussing this connection would take longer than the proof below.
If f≥ 0 is bounded, then Lemma <ref> and the proof of <cit.> give the implication.
In the case of general f≥ 0 with
Pf≤ Pg+1 on {f>0}, we let f_n:=f∧ n for
n∈.
On {f_n>0}⊂{f>0}, we have Pf_n≤ P f≤ Pg+1, so by the first part of the proof, Pf_n≤ Pg+1 on × X. Letting n→∞, we get the first statement. For the second one, we let f:=h_+, g:=h_-.
Let q≥ 0 be bounded. If p is a strictly positive finite sub-Markov transition density then so is p^-q. Furthermore, p^-q≤ p.
Let y∈ X, t∈, and _t:={s∈: s<t}.
For s,u∈_t, s<u, x,z∈ X, let
h(s,x,u,z) := p(s,x,u,z)p(u,z,t,y)/p(s,x,t,y).
The function h is a transition density on the space-time _t× X, so by Lemma <ref>,
P_t,yf(s,x) := ∫_s^t ∫_X p(s,x,u,z)p(u,z,t,y)/p(s,x,t,y)f(u,z) m(dz) du, s<t, x ∈ X,
is an integral kernel that satisfies CMP. Let
P f(s,x) := ∫_s^t ∫_X p(s,x,u,z)p(u,z,t,y)/p(s,x,t,y)f(u,z)q(u,z) m(dz)du.
Then P satisfies CMP, too. Indeed, let f,g≥ 0 on _t× X and P f ≤P g + 1 on {f >0}. Let f' := f q, g':=g q. Since {f'>0}⊂{f>0}, we have P_t,y f'≤ P_t,yg'+1 on {f'>0}. By CMP for P_t,y, Pf=P_t,y f' ≤ P_t,yg'+ 1=Pg+1 on _t× X, which gives CMP for P.
Next, let h(s,x) := p^-q(s,x,t,y)/p(s,x,t,y). By (<ref>), P |h|<∞. By (<ref>),
h = 1 - P h.
Of course, if h(s,x) > 0, then P h(s,x) < 1. By CMP for P, we get P h ≤ 1, so h ≥ 0 and 0≤ p^-q≤ p everywhere.
Furthermore, we note that for all s,t∈ and x,y∈ X,
p^-q(s,x,t,y)
≥ p(s,x,t,y)-∑_n=1^∞ p_n(s,x,t,y)
≥(2-e^q_∞ (t-s))p(s,x,t,y)>0
if 0<t-s< ln 2/q_∞. By (<ref>), p^-q(s,x,t,y)>0 for all s,t∈, s<t, x,y∈ X.
We say that the perturbation of p in Lemma <ref> is negative since -q≤ 0 and mass-decreasing since p^-q≤ p. The main point of Lemma <ref> is that p^-q≥ 0, in fact, that p^-q is strictly positive for (bounded) q≥ 0. This is a similar phenomenon as e^-x> 0 for x∈ [0,∞).
In passing, we also note that for bounded functions q,ρ≥ 0 on × X, by <cit.>,
p^-q-ρ=(p^-q)^-ρ.
In the remainder of this Appendix, we analyze p^-q for possibly unbounded functions q≥ 0 on space-time.
As we shall see, the following integrability assumption,
p_1(s,x,t,y)=∫_s^t ∫_X p(s,x,u,z)q(u,z)p(u,z,t,y)m(dz)du<∞
for all s,t∈, s<t, x,y∈ X, will suffice for construction of p^-q. Note that
p^q<∞
implies (<ref>), but not conversely; see the above discussion of the result of Baras and Goldstein.
The advantage of (<ref>) is that for arbitrary λ>0, it holds for q≥ 0 if and only if it holds for λ q.
For the sake of
the discussion following Definition <ref>, we note that (<ref>) holds for the time-homogeneous semigroup p(s,x,t,y):=p_ζ^(α)(t-s,x,y) and potential q(s,x)=x^-α there,
thanks to 0<α<2ζ+1. This is because p^λ q<∞ for small λ=Ψ_ζ(η)>0, i.e., small η>0; see Lemma <ref>.
Here is the main result of this Appendix.
Let q:× X→ [0,∞]. Let p be a sub-Markov, strictly positive, finite transition density, and assume (<ref>).
Then there is a strictly positive transition density p^-q≤ p satisfying (<ref>) and (<ref>).
For n∈ℕ, we let q_n:=q∧ n. Then each p^-q_n is a sub-Markov, strictly positive, finite transition density, and these kernels decrease in n. Let
p^-q:=inf_n p^-q_n.
Of course, 0≤ p^-q≤ p. By (<ref>) and the dominated convergence theorem, we get (<ref>) and (<ref>).
The strict positivity of p^-q follows from the lower bound
p^-q (s,x,t,y) ≥ p(s,x,t,y) exp[-p_1(s,x,t,y)/p(s,x,t,y)],
where q≥ 0, s,t ∈, s<t, and x,y ∈ X.
For the proof of (<ref>), we first let q≥ 0 be bounded. Then the function (0,∞)∋λ↦ h(λ):=p^-λ q(s,x,t,y) is completely monotone, meaning that (-1)^n h^(n)(λ)≥ 0, n=0,1,…, λ>0.
Indeed, (-1)^n h^(n)(λ)=n! (p^-λ q)_n(s,x,t,y) by <cit.>, which is nonnegative in view of Lemma <ref> and (<ref>).
Since completely monotone functions are logarithmically convex, we get
p^-q(s,x,t,y)=h(1)
= h(0)exp[∫_0^1 (ln h(λ))'dλ]
≥ h(0)exp[∫_0^1 h'(0)/h(0)dλ]
= p(s,x,t,y)exp[-p_1(s,x,t,y)/p(s,x,t,y)].
For unbounded q≥ 0, we use this, (<ref>), and (<ref>) and we let n→∞.
(1) Estimate (<ref>) strengthens <cit.> and <cit.>.
(2) In the time-homogeneous setting, (<ref>) yields
p^-q_t (x,y) ≥ p_t(x,y) exp[-∫_0^t∫_X p_s(x,z)q(z)p_t-s(z,y)m(dz) ds/p_t(x,y)].
Here, of course, p_t is sub-Markov, q≥ 0, and the numerator is assumed finite.
We conclude this appendix with the following example,
which illustrates the importance of CMP, and therefore the Chapman–Kolmogorov equations, for
the positivity of p^-q.
Let X be the space containing only one point and m(dz) be the Dirac delta at this point. Let p(s,t) = (t-s)_+ and q(s) ≡ 1 (to simplify notation, we omit the space variables). Then,
p_n(s,t) = (t-s)_+^{2n+1}/(2n+1)!,
therefore
p^q(s,t) = sinh((t-s)_+) and p^{-λ q}(s,t) = sin(√(λ)(t-s)_+)/√(λ).
Clearly, p^-λ q(s,t) takes on negative values, too. This does not contradict our findings because the operator P f(s) := ∫_s^∞ p(s,t) f(t) dt does not satisfy CMP, as may be verified by considering the function f(s) = - 1_[0,1)(s) + 2 · 1_[1,2](s).
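These closed forms, and the failure of positivity, are easy to reproduce numerically; the following sketch (an illustration, not part of the construction) sums the perturbation series directly.

```python
# Numerical illustration of the one-point example: p(s,t) = (t-s)_+, q = 1.
import numpy as np
from math import factorial

def p_n(n, t):
    # n-th term of the perturbation series started at s = 0 (closed form derived above)
    return t**(2 * n + 1) / factorial(2 * n + 1)

t, lam, N = 2.0, 9.0, 40
p_plus = sum(p_n(n, t) for n in range(N))                 # perturbation by +q
p_minus = sum((-lam)**n * p_n(n, t) for n in range(N))    # perturbation by -lam*q

print(p_plus, np.sinh(t))                                 # agree
print(p_minus, np.sin(np.sqrt(lam) * t) / np.sqrt(lam))   # agree; both are negative for these values
```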
[ABG+23]Armstrongetal2023
Gavin Armstrong, Krzysztof Bogdan, Tomasz Grzywny, Łukasz Leżaj, and
Longmin Wang.
Yaglom limit for unimodal Lévy processes.
Ann. Inst. Henri Poincaré Probab. Stat., 59(3):1688–1721,
2023.
[BD23]BuiDAncona2023
The Anh Bui and Piero D'Ancona.
Generalized Hardy operators.
Nonlinearity, 36(1):171–198, 2023.
[BDG11]Bruneauetal2011
Laurent Bruneau, Jan Dereziński, and Vladimir Georgescu.
Homogeneous Schrödinger operators on half-line.
Ann. Henri Poincaré, 12(3):547–590, 2011.
[BDK16]Bogdanetal2016
Krzysztof Bogdan, Bartłomiej Dyda, and Panki Kim.
Hardy inequalities and non-explosion results for semigroups.
Potential Anal., 44(2):229–247, 2016.
[BG60]BlumenthalGetoor1960
R. M. Blumenthal and R. K. Getoor.
Some theorems on stable processes.
Trans. Amer. Math. Soc., 95:263–273, 1960.
[BG84]BarasGoldstein1984
Pierre Baras and Jerome A. Goldstein.
The heat equation with a singular potential.
Trans. Amer. Math. Soc., 284(1):121–139, 1984.
[BGJP19]Bogdanetal2019
Krzysztof Bogdan, Tomasz Grzywny, Tomasz Jakubowski, and Dominika Pilarczyk.
Fractional Laplacian with Hardy potential.
Comm. Partial Differential Equations, 44(1):20–50, 2019.
[BH86]BliedtnerHansen1986
J. Bliedtner and W. Hansen.
Potential Theory.
Universitext. Springer-Verlag, Berlin, 1986.
An Analytic and Probabilistic Approach to Balayage.
[BHJ08]Bogdanetal2008
Krzysztof Bogdan, Wolfhard Hansen, and Tomasz Jakubowski.
Time-dependent Schrödinger perturbations of transition
densities.
Studia Math., 189(3):235–254, 2008.
[BHNV10]Betancoretal2010
Jorge J. Betancor, Eleonor Harboure, Adam Nowak, and Beatriz Viviani.
Mapping properties of fundamental operators in harmonic analysis
related to Bessel operators.
Studia Math., 197(2):101–140, 2010.
[BJ07]BogdanJakubowski2007
Krzysztof Bogdan and Tomasz Jakubowski.
Estimates of heat kernel of fractional Laplacian perturbed by
gradient operators.
Comm. Math. Phys., 271(1):179–198, 2007.
[BJK20]Bogdanetal2020
Krzysztof Bogdan, Sven Jarohs, and Edyta Kania.
Semilinear Dirichlet problem for the fractional Laplacian.
Nonlinear Anal., 193:111512, 20, 2020.
[BJKP23]Bogdanetal2023
Krzysztof Bogdan, Tomasz Jakubowski, Panki Kim, and Dominika Pilarczyk.
Self-similar solution for Hardy operator.
J. Funct. Anal., 285(5):Paper No. 110014, 40, 2023.
[BKW01]Bileretal2001
Piotr Biler, Grzegorz Karch, and Wojbor A. Woyczyński.
Critical nonlinearity exponent and self-similar asymptotics for
Lévy conservation laws.
Ann. Inst. H. Poincaré C Anal. Non Linéaire,
18(5):613–637, 2001.
[BM23]BogdanMerz2023S
Krzysztof Bogdan and Konstantin Merz.
Subordinated Bessel heat kernels.
arXiv e-prints, page arXiv:2308.15026, August 2023.
[BM24a]BogdanMerz2024
Krzysztof Bogdan and Konstantin Merz.
Ground state representation for the fractional Laplacian with
Hardy potential in angular momentum channels.
J. Math. Pures Appl. (9), 186:176–204, 2024.
[BM24b]BogdanMerz2024H
Krzysztof Bogdan and Konstantin Merz.
Heat kernel bounds for the fractional Laplacian with Hardy
potential in angular momentum channels.
In preparation, 2024.
[BN22]BuiNader2022
The Anh Bui and Georges Nader.
Hardy spaces associated to generalized Hardy operators and
applications.
NoDEA Nonlinear Differential Equations Appl., 29(4):Paper No.
40, 40, 2022.
[CKSV20]Choetal2020
|
http://arxiv.org/abs/2409.02761v1 | 20240904143808 | Sampling methods for recovering buried corroded boundaries from partial electrostatic Cauchy data | [
"Isaac Harris",
"Andreas Kleefeld",
"Heejin Lee"
] | math.AP | [
"math.AP"
] |
Sampling methods for recovering buried corroded boundaries from partial electrostatic Cauchy data
Isaac Harris and Heejin Lee
Department of Mathematics, Purdue University, West Lafayette, IN 47907
Email: and
Andreas Kleefeld
Forschungszentrum Jülich GmbH, Jülich Supercomputing Centre,
Wilhelm-Johnen-Straße, 52425 Jülich, Germany
University of Applied Sciences Aachen, Faculty of Medical Engineering and
Technomathematics, Heinrich-Mußmann-Str. 1, 52428 Jülich, Germany
Email:
§ ABSTRACT
We consider the inverse shape and parameter problem for detecting corrosion from partial boundary measurements. This problem models the non-destructive testing of a partially buried object from electrostatic measurements on the accessible part of the boundary. The main novelty is the extension of the linear sampling and factorization methods to an electrostatic problem with partial measurements. So far, these methods have mainly been applied to recovering interior defects, which is a simpler problem. Another important aspect of this paper is the numerics, where we derive a system of boundary integral equations to recover the mixed Green's function which is needed for our inversion. With this, we are able to analytically and numerically solve the inverse shape problem. For the inverse parameter problem, we prove uniqueness and Lipschitz-stability (in a finite dimensional function space) assuming that one has the associated Neumann-to-Dirichlet operator on the accessible part of the boundary.
§ INTRODUCTION
In this paper, we consider an inverse shape and parameter problem coming from electrical impedance tomography (EIT). The model we study is for a partially buried object that has been degraded by corrosion. This problem is motivated by non-destructive testing where one wishes to detect/recover the corroded part of the boundary without removing the object. To this end, we will study the linear sampling and factorization methods for recovering the corroded boundary. These methods were first introduced in <cit.>, respectively. This is novel due to the fact that we have data only on the accessible part of the boundary and we wish to recover the rest of the boundary. Our inversion is done by embedding the defective region into a `healthy' region and comparing the gap in voltages. The linear sampling and factorization methods have been studied for similar problems in <cit.> where one wishes to recover interior defects from either full or partial boundary data. Again, this problem is different in the fact that we have partial boundary data and we wish to recover the inaccessible part of the boundary.
In order to solve the inverse shape problem we will consider two well-known qualitative reconstruction methods, i.e. the linear sampling and factorization methods. These methods have been greatly studied over the years for different inverse shape problems, see <cit.> as well as the manuscripts <cit.> and the references therein. Iterative methods for this problem were studied in <cit.>, which extend the method presented in <cit.>. In the aforementioned papers, a non-linear system of boundary integral equations was used to solve the inverse shape and parameter problems. We also note that in <cit.> the authors used a similar iterative method to solve the problem with a generalized impedance condition. One of the main advantages of using a qualitative method is the fact that one needs little a priori information about the region of interest. On the other hand, iterative methods will often converge to a local minimum rather than the global minimum if the initial guess is not sufficiently close to the target. Therefore, in many non-destructive testing applications it may be useful to use a qualitative method.
We also consider the inverse impedance problem. Here, we will assume that the corroded part of the boundary is known/recovered. Then, we prove that the knowledge of the Neumann-to-Dirichlet operator on the accessible part of the boundary uniquely recovers the corrosion (i.e. Robin) parameter. Once we have proven uniqueness, we then turn our attention to stability. Due to the fact that inverse EIT problems are exponentially ill-posed, there is no hope of obtaining a Lipschitz–stability estimate on standard function spaces. Here, we appeal to the techniques in <cit.> to prove Lipschitz–stability assuming the parameter lies in a finite dimensional function space. This is useful for numerical reconstructions of the parameter since one will often discretize the unknown function as a linear combination of finitely many basis functions. Numerical reconstructions of the parameter are not studied here but the algorithm in <cit.> can also be applied to this problem.
The rest of the paper is structured as follows. In Section <ref> we set up the direct and inverse problems under consideration. Then, in Section <ref> we consider the inverse shape problem where we give the theoretical justification of the linear sampling and factorization methods for our model. This will give a computationally simple yet analytically rigorous method for recovering the corroded part of the boundary. Next, in Section <ref> we consider the inverse parameter problem assuming that the corroded boundary is known/recovered, where we prove uniqueness and stability with respect to the Neumann-to-Dirichlet operator. Then, we provide numerical examples in Section <ref> for recovering the corroded boundary. Finally, a summary and conclusion are given.
§ THE DIRECT PROBLEM
In this section, we will discuss the direct problem associated with the inverse problems under consideration. Again, this problem comes from EIT where one applies a current on the accessible part of the boundary and measures the resulting voltage. To begin, we let the known region D ⊂ℝ^m for m=2, 3 be an open bounded and simply connected domain with the piecewise C^1 boundary ∂ D. The boundary ∂ D can be decomposed into
∂ D = Γ_N∪Γ_D where Γ_N ∩Γ_D = ∅
and are relatively open subsets of ∂ D. We assume that part of the region D has been buried such that Γ_N is the accessible part of the boundary with Γ_D being the part of the boundary that has been buried. Being buried has caused part of the region to be corroded away. The part of the region that has corroded away will be denoted Ω. To this end, we let Ω⊂ D be an open subset of D such that a part of the boundary ∂Ω is Γ_D and the other part of the boundary is C^1 and denoted by Γ_C. Therefore, we have that
∂ (D ∖Ω) = Γ_N∪Γ_C where Γ_N ∩Γ_C = ∅
and are relatively open. In Figure <ref>, we have illustrated the aforementioned setup.
In order to determine if there is a non-trivial corroded region Ω we assume that a current denoted g is applied to the accessible part of the boundary Γ_N. This will produce an electrostatic potential function u for the defective material D∖Ω. This gives that the direct problem can be modeled by the mixed Neumann–Robin boundary value problem: given g ∈ L^2(Γ_N), determine u ∈ H^1(D∖Ω) such that
Δ u = 0 in D∖Ω, ∂_ν u = g on Γ_N, and ∂_ν u + γ u = 0 on Γ_C.
Here ν denotes the outward unit normal to D∖Ω and γ∈ L^∞(Γ_C) is the corrosion coefficient. We will assume that there are two real–valued constants γ_max and γ_min such that the corrosion coefficient satisfies
0< γ_min≤γ (x) ≤γ_max for a.e. x ∈Γ_C.
Note that our notation is that Γ_N is the Neumann boundary and that Γ_C corresponds to the corroded/Robin boundary.
Now, we wish to establish the well-posedness of the direct problem (<ref>). This can be done by considering the equivalent variational formulation of (<ref>). To this end, we can take φ ∈ H^1(D∖Ω) and using Green's first identity we have that
∫_D∖Ω ∇ u · ∇φ dx + ∫_Γ_C γ u φ ds = ∫_Γ_N g φ ds.
Note that (<ref>) is satisfied for any test function φ ∈ H^1(D∖Ω). Clearly this implies that the variational formulation is given by
A(u,φ) = ℓ(φ)
where the sesquilinear form A(·, ·): H^1(D∖Ω) × H^1(D∖Ω) → ℝ is given by
A(u, φ) = ∫_D∖Ω ∇ u · ∇φ dx + ∫_Γ_C γ u φ ds
and the conjugate linear functional ℓ: H^1(D∖Ω) → ℝ is given by
ℓ(φ) = ∫_Γ_N g φ ds.
Here, the integrals over Γ_C and Γ_N are interpreted as the inner–product on L^2(Γ_C) and L^2(Γ_N), respectively. These integrals are well defined by the Trace Theorem. For the well-posedness, notice that
A(u, u) ≥ ‖∇ u‖^2_L^2(D∖Ω) + γ_min ‖u‖^2_L^2(Γ_C)
which implies that A(·, ·) is coercive on H^1(D∖Ω) by appealing to a standard Poincaré type argument (see for e.g. <cit.> p. 487). From the Trace Theorem, we have that
|ℓ(φ)| ≤ C ‖g‖_L^2(Γ_N) ‖φ‖_H^1(D∖Ω).
With this we have the following result.
The mixed Neumann–Robin boundary value problem (<ref>) has a unique solution u ∈ H^1(D∖Ω) that satisfies the estimate
‖u‖_H^1(D∖Ω) ≤ C ‖g‖_L^2(Γ_N)
with C independent of g ∈ L^2(Γ_N).
With the well-posedness of (<ref>) established, we now consider an auxiliary boundary value problem for the electrostatic potential in the healthy domain D: given g ∈ L^2(Γ_N), determine u_0 ∈ H^1(D) such that
Δ u_0 = 0 in D, ∂_ν u_0 = g on Γ_N, and u_0 = 0 on Γ_D.
Similarly, it can be shown that the above boundary value problem (<ref>) is well-posed with the estimate
‖u_0‖_H^1(D) ≤ C ‖g‖_L^2(Γ_N).
This can be done by again appealing to the variational formulation as well as the fact that u_0 satisfies the Poincaré estimate
‖u_0‖_H^1(D) ≤ C ‖∇ u_0‖_L^2(D)
due to the zero trace on Γ_D (see <cit.> p. 486). Note that in our notation Γ_D is the part of the boundary where we impose the homogeneous Dirichlet condition. Since D is known a priori we have that u_0 can always be computed numerically.
In order to determine the corroded subregion Ω, we will assume that u|_Γ_N can be measured and that u_0|_Γ_N can be computed for any current g∈ L^2(Γ_N). Now we define the Neumann-to-Dirichlet (NtD) operators
Λ and Λ_0 : L^2(Γ_N) → L^2(Γ_N) given by Λ g = u|_Γ_N and Λ_0 g = u_0|_Γ_N,
where u and u_0 are the unique solutions to (<ref>) and (<ref>), respectively. From the well-posedness of the boundary value problems it is clear that the operators Λ and Λ_0 are well-defined bounded linear operators. The inverse problems that we are interested in are the inverse shape problem of determining the corroded region Ω from the knowledge of the difference (Λ-Λ_0) and the inverse impedance problem of recovering the corrosion coefficient γ provided that Γ_C is known. In the following section, we will study the linear sampling and factorization methods to recover Ω. Then, we will turn our attention to proving that Λ uniquely recovers the corrosion coefficient γ as well as provide a stability estimate.
§ THE INVERSE SHAPE PROBLEM
In this section, we are interested in the inverse shape problem of recovering Ω from the knowledge of the NtD operators. In order to solve this problem we will consider the linear sampling and factorization methods associated with (Λ-Λ_0). Our analysis will show that the linear sampling method can give an approximate reconstruction of Ω whereas the factorization method can be used under a stricter set of assumptions on the corrosion coefficient. The factorization method is mathematically more advantageous to use due to the fact that it gives an explicit characterization of the region of interest from the spectral decomposition of an operator defined by the difference of the NtD maps. In either case, we need to decompose the operator (Λ-Λ_0) to obtain a more explicit relationship with the unknown region Ω.
To begin, we derive an initial factorization of the operator (Λ-Λ_0). We first notice that the difference of the electrostatic potentials satisfies
Δ(u-u_0) = 0 in D∖Ω, ∂_ν(u-u_0) = 0 on Γ_N, and (u-u_0)|_Γ_C ∈ H^1/2(Γ_C).
This motivates us to consider the auxiliary boundary value problem: given φ ∈ H^1/2(Γ_C), determine w ∈ H^1(D∖Ω) such that
Δ w = 0 in D∖Ω, ∂_ν w = 0 on Γ_N, and w = φ on Γ_C.
Arguing similarly to the previous section, we have that (<ref>) is well-posed which implies that we can define
G: H^1/2(Γ_C) → L^2(Γ_N) given by Gφ=w|_Γ_N,
where w is the solution to (<ref>) as a bounded linear operator by appealing to the Trace Theorem. By the well-posedness of (<ref>), we see that if
φ = (u-u_0)|_Γ_C we obtain that w = (u-u_0) in D∖Ω.
Now, we further define the bounded linear operators
L and L_0: L^2(Γ_N) → H^1/2(Γ_C) given by Lg=u|_Γ_C and L_0 g=u_0|_Γ_C.
With this we have our initial factorization (Λ-Λ_0) = G(L-L_0).
With our initial factorization in hand we will analyze the properties of the operators defined above. First, we notice that due to the compact embedding of H^1/2(Γ_N) into L^2(Γ_N) we have compactness of the operator G defined in (<ref>). We also notice that by Holmgren's theorem (see for e.g. <cit.>) if φ is in the null-space of G, this would imply that
w = ∂_ν w = 0 on Γ_N giving that w = 0 in D∖Ω.
By the Trace Theorem φ = 0 which gives injectivity of the operator G. With this we now present a result that gives the analytical properties of the source-to-trace operator G.
The operator G: H^1/2(Γ_C) → L^2(Γ_N) as defined in (<ref>) is compact and injective.
With this, in order to further analyze the operator G we need to compute its adjoint. The adjoint operator G^* will be a mapping from L^2(Γ_N) into H^-1/2(Γ_C). Note that the adjoint is computed via the relationship
(G φ , ψ)_L^2(Γ_N) = ⟨φ , G^*ψ⟩_Γ_C
where ⟨· , ·⟩_Γ_C is the sesquilinear dual–product between the
Hilbert Space H^±1/2(Γ_C) and its dual space H^∓ 1/2 (Γ_C )
where L^2 (Γ_C) is the associated Hilbert pivot space, see <cit.> p. 99 for details. The Sobolev space H^s (Γ_C) is the closure of C^∞_0(Γ_C) with respect to the H^s (Γ_C)–norm for any s ∈ ℝ.
The adjoint G^*: L^2(Γ_N) → H^-1/2(Γ_C) is given by G^*ψ = -∂_ν v|_Γ_C where v ∈ H^1(D∖Ω) satisfies
Δ v = 0 in D∖Ω, ∂_ν v = ψ on Γ_N, and v = 0 on Γ_C.
Moreover, the operator G has a dense range.
To prove the claim, we first note that (<ref>) is well-posed for any ψ∈ L^2(Γ_N). Now, in order to compute the adjoint operator we apply Green's second identity to obtain
0 = ∫_Γ_Nv ∂_νw - w ∂_νv ds + ∫_Γ_Cv ∂_νw - w ∂_νv ds,
where we have used the fact that both w and v are harmonic in as well as the fact that ∂ (D ∖Ω) = Γ_N∪Γ_C with Γ_N ∩Γ_C = ∅. Using the boundary conditions in (<ref>) and (<ref>) we have that
∫_Γ_N w ψ ds = - ∫_Γ_Cφ ∂_νv ds.
Notice, that the left hand side of the above equality is a bounded linear functional of φ∈ H^1/2(Γ_C). Therefore, by definition we have that Gφ=w|_Γ_N which implies that
(G φ , ψ)_L^2(Γ_N) =∫_Γ_N w ψ ds = - ∫_Γ_Cφ ∂_νv ds = ⟨φ , G^*ψ⟩_Γ_C
proving that G^*ψ = - ∂_ν v |_Γ_C.
Now, proving that the operator G has a dense range is equivalent to proving that the adjoint G^* is injective (see <cit.>, p. 46). So we assume that ψ is in the null-space of G^* which implies that
v = ∂_ν v = 0 on Γ_C giving that v = 0 in D∖Ω
where we again appeal to Holmgren's Theorem proving the claim by the Trace Theorem.
Now that we have analyzed the operator G we will turn our attention to studying (L-L_0). This is the other operator used in our initial factorization of difference of the NtD operators. Notice that the dependance on the unknown region Ω is more explicit for these operators since they map to the traces of function on Γ_N ⊊∂Ω.
The operator (L-L_0):L^2(Γ_N) → H^1/2(Γ_C) as defined in (<ref>) is injective provided that γ_max is sufficiently small or γ_min is sufficiently large.
We begin by assuming g is in the null-space of (L-L_0) which implies that
Δ(u-u_0) = 0 in D∖Ω, ∂_ν(u-u_0) = 0 on Γ_N, and (u-u_0) = 0 on Γ_C.
It is clear that the above boundary value problem only admits the trivial solution. Therefore, we have that u = u_0 in D∖Ω and hence ∂_ν u_0 + γ u_0 = 0 on Γ_C. Notice that by (<ref>) we have that u_0 ∈ H^1(Ω) is the solution of the boundary value problem
Δ u_0 = 0 in Ω with u_0 = 0 on Γ_D, ∂_ν u_0 + γ u_0 = 0 on Γ_C.
Recall, that ν is the inward unit normal to Ω (see Figure <ref>). From Green's second identity applied to u_0 in Ω and the Trace Theorem, we have that
0 = ∫_Ω |∇ u_0|^2 dx - ∫_Γ_C γ |u_0|^2 ds
≥ ‖∇ u_0‖^2_L^2(Ω) - γ_max ‖u_0‖^2_L^2(Γ_C)
≥ ‖∇ u_0‖^2_L^2(Ω) - γ_max C ‖u_0‖^2_H^1(Ω),
since γ ≤ γ_max a.e. on Γ_C from our assumptions. Notice that since u_0|_Γ_D = 0 we have the Poincaré estimate ‖u_0‖_H^1(Ω) ≤ C ‖∇ u_0‖_L^2(Ω). Therefore,
0 ≥ (1-γ_max C) ‖∇ u_0‖^2_L^2(Ω)
which implies that if γ_max is small enough, then |∇ u_0| = 0 in Ω and hence u_0 = 0 in Ω due to the zero trace on Γ_D. By the unique continuation principle (see for e.g. <cit.>, p. 276), we obtain that u_0 = 0 in D, which implies that g=0 by the Trace Theorem. The other case can be proven similarly by considering the opposite sign of the above equality.
With this, we wish to prove that (L-L_0) is compact with a dense range just as we did for the operator G. Note that the compactness is not as obvious as in the previous case and to prove the density of the range we need to compute the adjoint operator (L-L_0)^*. To this end, let us consider the solution p ∈ H^1(D∖Ω) to
Δ p = 0 in D∖Ω with ∂_ν p = 0 on Γ_N, ∂_ν p + γ p = ξ on Γ_C
and the solution q ∈ H^1(D) to
Δ q = 0 in D∖Γ_C with ∂_ν q|_Γ_N = 0, q|_Γ_D = 0, and [[∂_ν q]]|_Γ_C = ξ
for any ξ ∈ H^-1/2(Γ_C). Here, we define the notation
[[∂_ν q]]|_Γ_C = (∂_ν q^+ - ∂_ν q^-)|_Γ_C,
where + and - indicate the limit obtained by approaching the boundary Γ_C from D∖Ω and Ω, respectively.
The adjoint (L-L_0)^*: H^-1/2(Γ_C) → L^2(Γ_N) is given by
(L-L_0)^* ξ = (p-q)|_Γ_N,
where p and q are the solution to (<ref>) and (<ref>), respectively. Moreover, the operator (L-L_0) is compact with a dense range provided that γ_max is sufficiently small or γ_min is sufficiently large.
To prove the claim, we first compute the adjoints L^* and L^*_0 separately. We begin with computing L^*. Just as in Theorem <ref> we use Green's second identity to obtain that
0 = ∫_Γ_Np ∂_νu - u ∂_νp ds + ∫_Γ_Cp ∂_νu - u ∂_νp ds,
where we have used the fact that the functions are both harmonic. By the boundary conditions in (<ref>) and (<ref>) we have that
( g , L^*ξ)_L^2(Γ_N) =∫_Γ_Npg ds = ∫_Γ_Cu [ ∂_νp +γp] ds = ∫_Γ_Cuξ s =⟨ L g , ξ⟩_Γ_C,
which gives L^*ξ = p|_Γ_N.
Now, for computing L^*_0 we proceed in a similar manner where we apply Green's second identity in Ω and in D∖Ω to obtain that
0 = - ∫_Γ_Cu_0∂_νq^- - ∂_νu_0q s and 0= - ∫_Γ_N∂_νu_0 q s + ∫_Γ_Cu_0∂_νq^+ - ∂_νu_0 q s,
where we have used that the functions are harmonic as well as ∂_ν q =0 on Γ_N and q = u_0=0 on Γ_D. By adding the above equations and using the boundary conditions in (<ref>) and (<ref>) we obtain that
⟨ξ, L_0 g ⟩_Γ_N= ∫_Γ_Cu_0ξ s = ∫_Γ_Ng q s = (L_0^*ξ , g)_L^2(Γ_N),
which gives L_0^*ξ = q|_Γ_N.
With this, it is clear that (L-L_0)^* is compact by the compact embedding of H^1/2(Γ_N) into L^2(Γ_N) which implies that (L-L_0) is compact. Now, let ξ be in the null-space of (L-L_0)^* which gives that
Δ(p-q) = 0 in D∖Ω, ∂_ν(p-q) = (p-q) = 0 on Γ_N.
Therefore, by Holmgren's Theorem we have that p=q in . By the boundary conditions on Γ_C
∂_νp +γp = ξ = ∂_νq^+ - ∂_νq^- on Γ_C
which implies that
Δ q = 0 in Ω with q = 0 on Γ_D, ∂_ν q^- + γ q = 0 on Γ_C.
Here, we have used that ∂_νp =∂_νq^+ and p=q on Γ_C. Then, by arguing just as in Theorem <ref> we have that q=0 provided that γ_max is small enough or γ_min is sufficiently large, which gives that ξ=0.
§.§ The Linear Sampling Method
Now that we have the above results we can infer the analytical properties of the difference of the NtD operators (Λ-Λ_0). These properties of the operator are essential for applying the linear sampling method (LSM) for solving the inverse shape problem. This method has been used to solve many inverse shape problems (see for e.g. <cit.>). This method connects the unknown region to the range of the data operator (Λ-Λ_0) via the solution to an ill-posed operator equation. To proceed, we will discuss the necessary analysis to show that the linear sampling method can be applied to this problem. From the analysis in the previous section, we have the following result for the difference of the NtD operators.
The difference of the NtD operators (Λ-Λ_0 ): L^2(Γ_N) → L^2(Γ_N) given by (<ref>) has the factorization
(Λ-Λ_0) = G(L-L_0),
where G and (L-L_0) are defined in (<ref>) and (<ref>), respectively.
Moreover, the operator (Λ-Λ_0 ) is compact and injective with a dense range provided that γ_max is sufficiently small or γ_min is sufficiently large.
To proceed, we need to determine an associated function that depends on the sampling point z ∈ D to derive a `range test' to reconstruct the unknown subregion Ω. To this end, we define the mixed Green's function (also referred to as the Zaremba function <cit.>, p.
B209): for any z ∈ D, let 𝔾(·, z) ∈ H_loc^1(D∖{z}) be the solution to
-Δ𝔾(·, z) = δ(· - z) in D, ∂_ν𝔾(·, z) = 0 on Γ_N, and 𝔾(·, z) = 0 on Γ_D.
The following result shows that the range of the operator G given by (<ref>) uniquely determines the region of interest Ω.
Let G: H^1/2(Γ_C) → L^2(Γ_N) as defined in (<ref>). Then,
𝔾(·, z)|_Γ_N∈Range(G) if and only if z ∈Ω.
To prove the claim, we first start with the case when the sampling point z ∈ Ω. With this we see that 𝔾(·, z) ∈ H^1(D∖Ω) satisfies
Δ𝔾(·, z) = 0 in D∖Ω, ∂_ν𝔾(·, z) = 0 on Γ_N, and 𝔾(·, z)|_Γ_C := φ_z ∈ H^1/2(Γ_C).
From this we obtain that Gφ_z = 𝔾(·, z)|_Γ_N proving this case.
Now, we consider the case when z ∈ D∖Ω and we proceed by contradiction. To this end, we assume that there is a w_z ∈ H^1(D∖Ω) such that
Δ w_z = 0 in D∖Ω, ∂_ν w_z = 0 on Γ_N, and w_z = φ_z on Γ_C
for some φ_z ∈ H^1/2(Γ_C) where w_z = 𝔾(·, z) on Γ_N. By appealing to Holmgren's Theorem we can obtain that w_z = 𝔾(·, z) in the set (D∖Ω) ∖ {z}. Using interior elliptic regularity
(see <cit.> p. 536) we have that w_z is continuous at the sampling point z. By the singularity at z for the mixed Green's function 𝔾 ( · ,z) we have that
| w_z (x)| < ∞ whereas | 𝔾 (x,z) | →∞ as x → z.
This proves the claim by contradiction.
With the result in Theorem <ref> we can prove that the linear sampling method can be used to recover Ω from the NtD mapping. This is useful in non-destructive testing since there is no initial guess needed for this algorithm. With this, we can now state the main result in this subsection.
Let the difference of the NtD operators (Λ-Λ_0): L^2(Γ_N) → L^2(Γ_N) be given by (<ref>). Then for any sequence {g_z,ε}_ε>0 ⊂ L^2(Γ_N) for z ∈ D satisfying
‖(Λ - Λ_0) g_z,ε - 𝔾(·, z)|_Γ_N‖_L^2(Γ_N) ⟶ 0 as ε → 0
we have that ‖g_z,ε‖_L^2(Γ_N) ⟶ ∞ as ε → 0 for all z ∉ Ω provided that γ_max is sufficiently small or γ_min is sufficiently large.
To prove the claim, we first note that by Theorem <ref> we have that (Λ-Λ_0) has a dense range in L^2(Γ_N). Therefore, for all z ∈ D there exists an approximating sequence {g_z,ε}_ε>0 such that (Λ - Λ_0) g_z,ε converges in norm to 𝔾(·, z)|_Γ_N. For a contradiction, assume that there is such an approximating sequence such that ‖g_z,ε‖_L^2(Γ_N) is bounded as ε → 0. Then we can assume that (up to a subsequence) it is weakly convergent such that g_z,ε ⇀ g_z,0 as ε → 0. By the compactness of the operator (Λ - Λ_0) we have that as ε → 0
(Λ - Λ_0) g_z,ε ⟶ (Λ - Λ_0) g_z,0 which implies that (Λ - Λ_0) g_z,0 = 𝔾(·, z)|_Γ_N.
By the factorization (Λ-Λ_0) = G(L-L_0) this would imply that 𝔾(·, z)|_Γ_N∈Range(G). This clearly contradicts Theorem <ref> proving the claim by contradiction.
Notice that we have shown in Theorem <ref> that the linear sampling method can be used to recover Ω. In order to use this result to recover the corroded part of the region we find an approximate solution to
(Λ - Λ_0) g_z = 𝔾(·, z)|_Γ_N.
Since the operator (Λ - Λ_0) is compact, the above equation is ill-posed. But the fact that (Λ - Λ_0) has a dense range means we can construct an approximate solution using a regularization strategy (see for e.g. <cit.>). Here, we can take ε > 0 to be the regularization parameter; then we can recover Ω by plotting the imaging functional
W_LSM(z) = 1/‖g_z,ε‖_L^2(Γ_N)
where g_z,ε is the regularized solution to (<ref>). Theorem <ref> implies that W_LSM(z) ≈ 0 for any z ∉ Ω. Note that we cannot infer that W_LSM(z) will be bounded below for z ∈ Ω.
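To make the above recipe concrete, the following minimal Python sketch evaluates W_LSM(z) at a single sampling point once a matrix approximation B of (Λ - Λ_0) and a vector b_z discretizing 𝔾(·, z)|_Γ_N are available (their assembly is described in Section <ref>); the Tikhonov normal equations serve as the regularization strategy, and the function name and inputs are our own illustrative choices rather than part of the analysis.

import numpy as np

def lsm_indicator(B, b_z, eps=1e-5):
    """Regularized LSM indicator W_LSM(z) = 1/||g_{z,eps}||.

    B   : (m, m) matrix approximation of (Lambda - Lambda_0)
    b_z : (m,) discretization of G(., z)|_{Gamma_N}
    eps : Tikhonov regularization parameter
    (The expansion basis is treated as orthonormal for simplicity.)
    """
    m = B.shape[0]
    # Tikhonov-regularized solution of B g = b_z via the normal equations
    g = np.linalg.solve(B.conj().T @ B + eps * np.eye(m), B.conj().T @ b_z)
    return 1.0 / np.linalg.norm(g)

Since such a lower bound inside Ω is not guaranteed, we will therefore consider the factorization method in the following section.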
§.§ The Factorization Method
In this section, we will consider using the factorization method (FM) to recover the corroded region Ω. Even though we have already studied the linear sampling method, we see that Theorem <ref> does not prove that the corresponding imaging functional is bounded below for z ∈Ω. With this in mind, we consider the factorization method since it gives an exact characterization of the region of interest Ω using the spectral decomposition of an operator associated to (Λ - Λ_0).
To begin, we need to derive a `symmetric' factorization of the operator (Λ - Λ_0). Therefore, we recall that by Theorem <ref> we have that (Λ - Λ_0)=G(L - L_0). Now, we define the bounded linear operator
T: H^-1/2(Γ_C) → H^1/2(Γ_C) given by Tξ = (p-q)|_Γ_C.
where p and q satisfy (<ref>) and (<ref>), respectively. It is clear that the boundedness of T follows from the well-posedness of (<ref>) and (<ref>) along with the Trace Theorem. With this, we notice that
GTξ = G((p-q)|_Γ_C) = (p-q)|_Γ_N = (L-L_0)^*ξ
where we have used the fact that (p-q) ∈ H^1(D∖Ω) satisfies
Δ(p-q) = 0 in D∖Ω, ∂_ν(p-q) = 0 on Γ_N, and (p-q)|_Γ_C ∈ H^1/2(Γ_C)
along with the definition of G in (<ref>) and (L-L_0)^* given in Theorem <ref>. Since this is true for any ξ∈H^-1/2(Γ_C) we have that GT = (L-L_0)^*. We now have the desired factorization of (Λ - Λ_0)=G T^* G^* by the calculation that (L-L_0) = T^* G^*.
With this new factorization acquired we now prove that (Λ - Λ_0) is self-adjoint. This would imply that (Λ - Λ_0)=G T G^* where G and T are defined in (<ref>) and (<ref>), respectively. To this end, we notice that
(g_1, (Λ - Λ_0)g_2)_L^2(Γ_N) = ∫_Γ_N g_1 [u^(2)-u^(2)_0] ds = ∫_Γ_N u^(2) ∂_ν u^(1) - u^(2)_0 ∂_ν u^(1)_0 ds,
where the superscript (j) corresponds to the solutions of (<ref>) and (<ref>) for g_j ∈ L^2(Γ_N) with j=1,2. We now apply Green's first identity to u^(j) and u^(j)_0 in D∖Ω and D, respectively, to obtain
(g_1, (Λ - Λ_0)g_2)_L^2(Γ_N) = ∫_D∖Ω ∇ u^(1) · ∇ u^(2) dx + ∫_Γ_C γ u^(1) u^(2) ds - ∫_D ∇ u^(1)_0 · ∇ u^(2)_0 dx,
where we have used the boundary conditions. This implies that (Λ-Λ_0) = G T G^* is self-adjoint.
In order to apply the theory of the factorization method <cit.> we need to study the operator T defined in (<ref>). In particular, we wish to show that under some assumptions that ± T is coercive on the range of G^*. This can be achieved by showing that ± T is a coercive operator from H^-1/2(Γ_C) to H^1/2(Γ_C). With this in mind, notice that by the boundary conditions on the corroded boundary Γ_C we have that
⟨Tξ, ξ⟩_Γ_C = ∫_Γ_C (p-q)ξ ds = ∫_Γ_C p[∂_ν p + γ p] - q[[∂_ν q]] ds
and by appealing to Green's first identity
⟨Tξ, ξ⟩_Γ_C = ∫_D∖Ω |∇ p|^2 dx + ∫_Γ_C γ |p|^2 ds - ∫_D |∇ q|^2 dx.
By (<ref>) we see that there is no way for ± T to be coercive without some extra assumptions because of the negative multiplying the L^2(D)–norm of the gradient of q. Therefore, to proceed we will consider two cases 0< γ <1 or 1<γ a.e. on Γ_C.
For the first case when 0< γ <1 a.e. on Γ_C we have that
⟨ T ξ , ξ⟩_Γ_C ≥γ_min[ p^2_L^2() + p_L^2(Γ_C)^2 ] - q_L^2(D)^2 since 0< γ <1
≥γ_min[ p_L^2()^2 + p_L^2(Γ_C)^2 ] - Cξ_H^-1/2(Γ_C)^2 by the well-posedness.
Now, notice that by the boundary condition in (<ref>) we can estimate
ξ_H^-1/2(Γ_C) = ∂_νp +γp_H^-1/2(Γ_C)
≤∂_νp_H^-1/2(Γ_C) + p_H^1/2(Γ_C) since 0< γ <1
≤ C p_H^1() by Trace Theorems
≤ C √( p^2_L^2() + p_L^2(Γ_C)^2)
where we have used that
p^2_H^1() is equivalent to p^2_L^2() + p_L^2(Γ_C)^2.
With this we see that ∃ C_j>0 independent of γ for j=1,2 where
⟨ T ξ , ξ⟩_Γ_C≥ (C_1γ_min -C_2 )ξ_H^-1/2(Γ_C)^2.
This implies that for C_2/C_1 < γ <1 a.e. on Γ_C then we have that T is coercive.
Now, for the case 1<γ a.e. on Γ_C we have that
∫_ | p|^2 x + ∫_Γ_Cγ |p|^2 s =∫_Γ_Cξp s
by Green's first identity. With this, by the Trace Theorem we have the estimate
p^2_L^2() + p_L^2(Γ_C)^2 ≤ C ξ_H^-1/2(Γ_C)p_H^1() .
By the aforementioned norm equivalence, we further establish that
√( p^2_L^2() + p_L^2(Γ_C)^2)≤ C ξ_H^-1/2(Γ_C).
Using the boundary condition in (<ref>) we have that
ξ_H^-1/2(Γ_C) = ∂_νq^+ - ∂_νq^-_H^-1/2(Γ_C)
≤ C [ q _H^1() + q _H^1(Ω)] by Trace Theorem
≤ C q_L^2(D) by the Poincaré estimate since q|_Γ_D = 0.
Therefore, by (<ref>) we have that ∃ C_j>0 independent of γ for j=3,4 where
- ⟨ T ξ , ξ⟩_Γ_C ≥q^2_L^2(D) - γ_max[ p^2_L^2() + p_L^2(Γ_C)^2 ]
≥ (C_3 -C_4 γ_max)ξ_H^-1/2(Γ_C)^2.
This implies that for C_3/C_4 > γ >1 a.e. on Γ_C then we have that -T is coercive.
Even though we have proven the coercivity it is unclear if the assumption that
C_2/C_1 < γ <1 or C_3/C_4 > γ >1 a.e. on Γ_C
is satisfied. This is due to the fact that the constants C_j are unknown and depend on the geometry. In order to continue in our investigation, we make the assumption that there exists regions and Ω such that the above assumptions are valid for some given γ. With this we have the main result of this subsection.
Let the difference of the NtD operators (Λ-Λ_0 ): L^2(Γ_N) → L^2(Γ_N) be given by (<ref>). Provided that either ± T defined by (<ref>) is coercive, then
𝔾(·, z)|_Γ_N∈Range( |Λ-Λ_0 |^1/2) if and only if z ∈Ω.
This is due to the fact that with the factorization (Λ - Λ_0)=G T G^* and provided that ± T is coercive we have that Range( |Λ-Λ_0 |^1/2) = Range(G) by the result in <cit.>. Here, we note that |Λ-Λ_0 |^1/2 is defined in the standard way by the spectral decomposition of a self-adjoint compact operator. Then by appealing to Theorem <ref> proves the claim.
With this result, we have another way to recover the corroded region Ω. Notice that since (Λ - Λ_0) is a self-adjoint compact operator Theorem <ref> can be reformulated as
∑_j=1^∞1/σ_j|( 𝔾(·, z) , g_j )_L^2(Γ_N)|^2 < ∞ if and only if z ∈Ω
by appealing to Picard's criterion (see for e.g. <cit.>) where (σ_j, g_j) ∈ ℝ_+ × L^2(Γ_N) is the eigenvalue decomposition of the absolute value of the difference of the NtD operators. This result is stronger than Theorem <ref> since it is an equivalence, which implies that Λ uniquely determines the subregion Ω. Also, to numerically recover Ω we can use the imaging functional
W_FM(z) = [ ∑_j=1^∞1/σ_j|( 𝔾(·, z) , g_j )_L^2(Γ_N)|^2 ]^-1
which is positive only when z ∈ Ω. Since (Λ - Λ_0) is compact, the eigenvalues σ_j tend to zero rapidly, which can cause instability when using the imaging functional W_FM(z). In <cit.> it has been shown that adding a regularizer to the sum can restore stability while still giving the unique reconstruction of Ω.
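As a small illustration of how the regularized sum can be evaluated in practice, the sketch below (in Python) computes W_FM(z) from the eigenvalue decomposition of a symmetric matrix approximation A of (Λ - Λ_0), discarding eigenvalues below a cut-off as in the regularized factorization method used in Section <ref>. The inner products are formed with a coefficient vector b_z of 𝔾(·, z)|_Γ_N, the basis is treated as orthonormal for simplicity, and all names are ours.

import numpy as np

def fm_indicator(A, b_z, cutoff=1e-5):
    """Regularized factorization-method indicator at one sampling point z.

    A      : (m, m) matrix approximation of (Lambda - Lambda_0);
             assumed symmetric (symmetrize 0.5*(A + A.T) if needed)
    b_z    : (m,) discretization of G(., z)|_{Gamma_N}
    cutoff : eigenvalues of |Lambda - Lambda_0| below this value are discarded
    """
    sigma, V = np.linalg.eigh(A)          # eigen-decomposition (self-adjoint operator)
    sigma = np.abs(sigma)                 # spectrum of |Lambda - Lambda_0|
    keep = sigma > cutoff                 # spectral cut-off regularization
    coeff = V[:, keep].T @ b_z            # discrete analogue of (G(., z), g_j)
    picard_sum = np.sum(np.abs(coeff) ** 2 / sigma[keep])
    return 1.0 / picard_sum               # bounded away from zero only inside Omega

The same cut-off value 10^-5 is used for the FMreg in the numerical examples below.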
§ INVERSE IMPEDANCE PROBLEM
In this section, we consider the inverse impedance problem, i.e. determining the corrosion parameter γ on Γ_C from the knowledge of the NtD mapping Λ_γ. Here, we will assume that the corroded boundary Γ_C is known. This would be the case if it has been reconstructed as discussed in the previous section. We will prove that γ ↦ Λ_γ is injective as a mapping from L^∞(Γ_C) into ℒ(L^2(Γ_N)), i.e. the set of bounded linear operators acting on L^2(Γ_N). Then we will prove a Lipschitz–stability estimate for the inverse impedance problem. Similar results have been proven in <cit.>, just to name a few recent works. This will imply that one can reconstruct γ on Γ_C from the known Cauchy data g and Λ_γ g on Γ_N. In order to show the uniqueness, let us first consider the following density result associated with solutions to (<ref>).
Let
𝒰 = { u|_Γ_C ∈ L^2(Γ_C) : u ∈ H^1(D∖Ω) solves (<ref>) for some g ∈ L^2(Γ_N)}.
Then, 𝒰 is a dense subspace of L^2(Γ_C).
It is enough to show that 𝒰^⊥ is trivial. To this end, notice that for any ϕ ∈ 𝒰^⊥ there exists v ∈ H^1(D∖Ω) that is the unique solution of
Δ v = 0 in D∖Ω, ∂_ν v = 0 on Γ_N, and ∂_ν v + γ v = ϕ on Γ_C.
From the boundary conditions, we have that
0 = ∫_Γ_C u ϕ ds = ∫_Γ_C u(∂_ν v + γ v) ds = ∫_Γ_C u ∂_ν v - v ∂_ν u ds.
Then, by appealing to Green's second identity in D∖Ω we obtain
0 = - ∫_Γ_N u ∂_ν v - v ∂_ν u ds = ∫_Γ_N g v ds, for any g ∈ L^2(Γ_N)
where we have used that both u and v are harmonic in D∖Ω. Therefore, v|_Γ_N = 0 and ∂_ν v|_Γ_N = 0 from the boundary condition, so we conclude that v vanishes in D∖Ω by Holmgren's theorem. Hence, ϕ = 0 on Γ_C by the Trace Theorem.
Now, we will show that the NtD operator Λ uniquely determines the boundary coefficient γ on Γ_C. To this end, consider the solutions u and u_0 to (<ref>) and (<ref>), respectively, and let 𝔾(·, z) be the mixed Green's function defined in (<ref>). Then, the following lemma allows one to rewrite (u-u_0)(z) for any z ∈ D∖Ω in terms of a boundary integral operator.
For any z ∈ D∖Ω,
-(u-u_0)(z) = ∫_Γ_C u(x) [∂_ν𝔾(x, z) + γ(x) 𝔾(x, z)] ds(x).
For any z ∈ D∖Ω, from the boundary conditions and Green's second identity,
-(u-u_0)(z) = ∫_D∖Ω (u-u_0) Δ𝔾(·, z) - 𝔾(·, z) Δ(u-u_0) dx
= ∫_Γ_C (u-u_0) ∂_ν𝔾(·, z) - 𝔾(·, z) ∂_ν(u-u_0) ds
= ∫_Γ_C u ∂_ν𝔾(·, z) - 𝔾(·, z) ∂_ν u ds - ∫_Γ_C u_0 ∂_ν𝔾(·, z) - 𝔾(·, z) ∂_ν u_0 ds.
Applying Green's second identity to u_0 and 𝔾(·, z) in Ω,
- ∫_Γ_C u_0 ∂_ν𝔾(·, z) - 𝔾(·, z) ∂_ν u_0 ds = ∫_Γ_D u_0 ∂_ν𝔾(·, z) - 𝔾(·, z) ∂_ν u_0 ds = 0,
where we have used the fact that u_0 and 𝔾(·, z) have zero trace on Γ_D, which completes the proof.
The result in Lemma <ref> will now be used to prove that the NtD operator uniquely determines the corrosion coefficient γ. We would like to also note that the representation formula above can be used as an integral equation to solve for γ. Assuming that the Cauchy data for u is known on Γ_N, we can recover the Cauchy data on Γ_C numerically as in <cit.>. Therefore, restricting the representation formula in Lemma <ref> to Γ_C (or Γ_N) gives an integral equation for the unknown coefficient.
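Although we do not pursue this numerically here, the following Python sketch indicates one way the representation formula of Lemma <ref> could be used to recover γ: expand γ in a finite basis on Γ_C, evaluate the formula at a collection of points z_i where its left-hand side is available, and solve the resulting linear system in the least-squares sense. The quadrature nodes, the basis functions ψ_k, the evaluation points and all variable names are hypothetical choices for illustration, and the Cauchy data of u on Γ_C is assumed to have been recovered beforehand (e.g. as in <cit.>).

import numpy as np

def recover_gamma(u_C, dnG_C, G_C, psi_C, w_C, lhs_z):
    """Least-squares recovery of gamma from the representation formula.

    u_C   : (Q,) values of u at quadrature nodes on Gamma_C
    dnG_C : (NZ, Q) values of d/dnu G(x, z_i) at those nodes
    G_C   : (NZ, Q) values of G(x, z_i) at those nodes
    psi_C : (K, Q) values of the basis functions for gamma on Gamma_C
    w_C   : (Q,) quadrature weights on Gamma_C
    lhs_z : (NZ,) values of -(u - u_0)(z_i) at the evaluation points z_i
    """
    # A[i, k] = int_{Gamma_C} psi_k u G(., z_i) ds
    A = (G_C * (w_C * u_C)) @ psi_C.T
    # right-hand side: -(u - u_0)(z_i) minus int_{Gamma_C} u d/dnu G(., z_i) ds
    rhs = lhs_z - dnG_C @ (w_C * u_C)
    coeff, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeff      # gamma is approximated by sum_k coeff[k] psi_k on Gamma_C

We now prove our uniqueness result.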
Assume that γ∈ L^∞(Γ_C) and satisfies the inequality in Section <ref>. Then, the mapping γ↦Λ_γ from L^∞(Γ_C) →ℒ(L^2(Γ_N)) is injective.
To prove the claim, let γ_j for j=1,2 be the corrosion coefficients in (<ref>) such that Λ_γ_1 g = Λ_γ_2 g for any g ∈ L^2(Γ_N). Then the corresponding solutions satisfy
u_γ_1 = u_γ_2 and ∂_ν u_γ_1 = ∂_ν u_γ_2 on Γ_N,
which implies that u_γ_1 = u_γ_2 in D∖Ω for any g ∈ L^2(Γ_N) by Holmgren's Theorem. If we denote u_γ_1 = u_γ_2 by u, then by subtracting (<ref>) we have that
0 = ∫_Γ_C (γ_1-γ_2)(x) u(x) 𝔾(x, z) ds(x) for all z ∈ D∖Ω and for any g ∈ L^2(Γ_N).
From Lemma <ref>, we obtain that (γ_1-γ_2)(x) 𝔾(x, z) = 0 for a.e. x ∈ Γ_C and for all z ∈ D∖Ω. Notice that by interior elliptic regularity, for any x ∈ D∖{z} the mixed Green's function is continuous at x.
Now, by way of contradiction assume that ‖γ_1-γ_2‖_L^∞(Γ_C) ≠ 0. This would imply that there exists a subset Σ ⊂ Γ_C with positive boundary measure such that |γ_1-γ_2| > 0 on Σ. Therefore, for some x^* ∈ Σ we have that 𝔾(x^*, z) = 0 for all z ∈ D∖Ω. Then, we can take a sequence z_n ∈ D∖Ω such that z_n → x^* as n → ∞. This gives a contradiction since
𝔾(x^*, z_n) = 0 for all n ∈ ℕ and |𝔾(x^*, z_n)| → ∞ as n → ∞.
This implies that ‖γ_1-γ_2‖_L^∞(Γ_C) = 0, proving the claim.
Now that we have proven our uniqueness result we turn our attention to proving a stability estimate. We will prove a Lipschitz–stability estimate using similar techniques in <cit.>. This will employ a monotonicity estimate for the NtD operator Λ_γ with respect to the corrosion parameter γ as well as our density result in Lemma <ref>. With this in mind, we now present the monotonicity estimate.
Let the NtD operators Λ_γ_j: L^2(Γ_N) → L^2(Γ_N) be given by (<ref>) with corrosion parameter γ_j ∈ L^∞(Γ_C) for j=1,2 and satisfies the inequality in Section <ref>. Then,
∫_Γ_C(γ_1 - γ_2) |u_γ_2|^2 s
≥∫_Γ_N g (Λ_γ_2- Λ_γ_1)g s.
The proof is identical to what is done in <cit.> so we omit the proof to avoid repetition.
With this we are ready to prove our Lipschitz–stability estimate. We will show that the inverse of the nonlinear mapping γ↦Λ_γ is Lipschitz continuous from the set of bounded linear operators ℒ(L^2(Γ_N)) to a finite dimensional subspace of L^∞(Γ_C). To this end, we let 𝒜 be a finite dimensional subspace of L^∞(Γ_C) and define the compact set
𝒜_[γ_min,γ_max] = {γ∈𝒜 : γ_min≤γ≤γ_max on Γ_C }.
This would imply that inverse impedance problem has a unique solution that depends continuously on the NtD mapping. This fits nicely with the results from the previous section that assuming the factorization method is valid the inverse shape problem has a unique solution that depends continuously on the NtD mapping.
Let the NtD operators Λ_γ_j: L^2(Γ_N) → L^2(Γ_N) be given by (<ref>) with corrosion parameter γ_j ∈𝒜_[γ_min,γ_max] for j=1,2 and satisfies the inequality in Section <ref>. Then,
‖γ_1-γ_2‖_L^∞(Γ_C) ≤ C ‖Λ_γ_1 - Λ_γ_2‖_ℒ(L^2(Γ_N)),
where C>0 is independent of γ_j ∈𝒜_[γ_min,γ_max].
To prove the claim, notice that from Lemma <ref>, we have that
- ∫_Γ_N g (Λ_γ_2 g- Λ_γ_1 g) s ≥∫_Γ_C(γ_2 - γ_1) |u_γ_2|^2 s
and interchanging the roles of γ_1 and γ_2, we obtain
∫_Γ_N g (Λ_γ_2 g- Λ_γ_1 g) s ≥∫_Γ_C(γ_1 - γ_2) |u_γ_1|^2 s.
Therefore, we have that
‖Λ_γ_1 - Λ_γ_2‖_ℒ(L^2(Γ_N))
= sup_‖g‖=1 | ∫_Γ_N g (Λ_γ_2 - Λ_γ_1)g ds |
= sup_‖g‖=1 max{ ± ∫_Γ_N g (Λ_γ_2 - Λ_γ_1)g ds }
≥ sup_‖g‖=1 max{ ∫_Γ_C (γ_1 - γ_2) |u_γ_1|^2 ds, ∫_Γ_C (γ_2 - γ_1) |u_γ_2|^2 ds }.
Notice that we have used the fact that Λ_γ_j is self-adjoint. Here we let ‖·‖ denote the L^2(Γ_N)–norm. This implies that
‖Λ_γ_1 - Λ_γ_2‖_ℒ(L^2(Γ_N)) / ‖γ_1 - γ_2‖_L^∞(Γ_C)
≥ sup_‖g‖=1 max{ ∫_Γ_C (γ_1 - γ_2)/‖γ_1 - γ_2‖_L^∞(Γ_C) |u_γ_1|^2 ds, ∫_Γ_C -(γ_1 - γ_2)/‖γ_1 - γ_2‖_L^∞(Γ_C) |u_γ_2|^2 ds }.
Provided that γ_1 ≠γ_2, we now let
ζ= (γ_1 - γ_2)/γ_1- γ_2_L^∞(Γ_C)
and define Ψ : L^2(Γ_N ) → given by
Ψ(g; ζ , γ_1 , γ_2) = max{∫_Γ_Cζ |u_γ_1^(g)|^2 s, ∫_Γ_C- ζ |u_γ_2^(g)|^2 s }.
Then, to complete the proof, it suffices to show that
inf_ζ∈𝒞,
κ_1, κ_2 ∈𝒜_[γ_min,γ_max] sup_‖g‖=1 Ψ(g; ζ, κ_1, κ_2) > 0
where 𝒞 = {ζ ∈ 𝒜 : ‖ζ‖_L^∞(Γ_C) = 1}. Notice that since 𝒜_[γ_min,γ_max] and 𝒞 are bounded and closed subsets of the finite dimensional space 𝒜, they are compact sets.
To this end, since we have that ζ_L^∞(Γ_C)=1, then there exists a subset Σ⊂Γ_C with positive boundary measure such that for a.e. x ∈Σ either ζ(x) ≥ 1/2 or -ζ(x) ≥ 1/2. Without loss of generality assume that ζ(x) ≥ 1/2 a.e. for x ∈Σ and the other case can be handled in a similar way. From Lemma <ref>, there exists a sequence {g_n}_n=1^∞∈ L^2(Γ_N) such that the corresponding solution u_γ_1^(g_n) of (<ref>) satisfies
u_γ_1^(g_n)→2 χ_Σ/√(∫_Σ s) as n →∞ in the L^2(Γ_C)–norm.
With the above convergence we have that
lim_n →∞∫_Σ | u_γ_1^(g_n) |^2 s = 4 and lim_n →∞∫_Γ_C ∖Σ |u_γ_1^(g_n)|^2 s = 0.
Then, there exists ĝ∈ L^2(Γ_N) such that
∫_Σ |u_γ_1^(ĝ)|^2 s ≥ 2 and ∫_Γ_C ∖Σ |u_γ_1^(ĝ)|^2 s ≤ 1/2.
If ζ(x) ≥ 1/2 for x ∈Σ, then since ζ(x) ≥ -1 for x ∈Γ_C ∖Σ we have the estimate
Ψ(ĝ; ζ , γ_1 , γ_2) = ∫_Γ_Cζ |u_γ_1^(ĝ)|^2 s ≥1/2∫_Σ |u_γ_1^(ĝ)|^2 s - ∫_Γ_C ∖Σ |u_γ_1^(ĝ)|^2 s ≥1/2 .
By the linearity of (<ref>) we have that
Ψ(ĝ/ĝ ; ζ , γ_1 , γ_2) = Ψ(ĝ; ζ , γ_1 , γ_2) /ĝ^2 ≥1/2ĝ^2 >0
which implies that sup_g=1Ψ(g; ζ , γ_1 , γ_2)≥1/2. Now by the proof of Theorem <ref> we have that the mapping
(ζ , κ_1 , κ_2) ↦sup_g=1Ψ(g; ζ , κ_1 , κ_2)
is semi-lower continuous on the compact set 𝒞×𝒜_[γ_min,γ_max]×𝒜_[γ_min,γ_max]. This implies that it obtains its global minimum which is strictly positive by the above inequality, proving the claim.
With this result we have completed our analytical study of the inverse shape and inverse parameter problem. To reiterate, we have prove that the inverse shape and inverse parameter problem have uniquely solvable solutions given the full NtD mapping of Γ_N.
§ NUMERICAL RESULTS
In this section, we provide numerical examples for the reconstruction of Γ_C using the NtD mapping. To this end, we first derive the corresponding integral equations to obtain the solutions u_0 and u on Γ_N for (<ref>) and (<ref>), respectively. For more details on the definition of the integral operators and their jump relations we refer the reader to <cit.>. Next, we explain how to obtain 𝔾(·,z) on Γ_N for a given set of points z (see equation (<ref>)). Then, we illustrate how to discretize the NtD operator Λ-Λ_0 using a Galerkin approximation in order to apply the LSM (see equation (<ref>)) or the FM (see Theorem <ref>). Finally, we provide some reconstructions using both the FM and the LSM.
In order to provide numerical evidence of the effectiveness of the sampling methods, we need the following definitions.
We define
Φ(x,y)=-log(|x-y|)/2π , x≠ y
to be the fundamental solution of the Laplace equation in ℝ^2. Assume that A ⊂ ℝ^2 is an arbitrary domain with boundary ∂ A.
The single-layer potential for the Laplace equation over a given boundary ∂ A is denoted by
SL^∂ A[ϕ](x) = ∫_∂ AΦ(x, y) ϕ(y) ds(y) , x∈ A ,
where ϕ is some density function. Now, we let ∂ A= Γ_α∪Γ_β with Γ_α∩Γ_β =∅ be the boundary of the domain A.
The single- and double-layer boundary integral operators over the boundary Γ_i evaluated at a point of Γ_j are given as
S^Γ_i→Γ_j[ϕ|_Γ_i](x) = ∫_Γ_iΦ(x,y)ϕ(y) ds(y) , x∈Γ_j ,
T^Γ_i→Γ_j[ϕ|_Γ_i](x) = ∫_Γ_i∂_ν_j(x)Φ(x,y)ϕ(y) ds(y) , x∈Γ_j ,
where i,j∈{α,β}. Here, ∂_ν_j(x) denotes the normal derivative, where ν_j(x) is the exterior normal at x∈Γ_j.
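For later reference, the kernels appearing in these definitions take the following concrete form; the short Python helper below is only meant to fix signs and constants and is not the boundary element collocation method used for the actual computations.

import numpy as np

def Phi(x, y):
    """Fundamental solution of the Laplace equation in R^2, x != y."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return -np.log(np.linalg.norm(d)) / (2.0 * np.pi)

def dPhi_dnu_x(x, y, nu_x):
    """Kernel of the double-layer operator T: the normal derivative of
    Phi(., y) at x in the direction of the exterior normal nu_x."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return -np.dot(nu_x, d) / (2.0 * np.pi * np.dot(d, d))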
§.§ Integral equation for computing u_0 on Γ_N
We first consider the uncorroded (healthy) object D, refer also to (<ref>). Now, we are in position to explain how to obtain u_0 at any point of Γ_N.
Let D be the domain representing the uncorroded object with boundary Γ_N∪Γ_D satisfying Γ_N∩Γ_D=∅. Then, the solution u_0 to (<ref>) on Γ_N for the uncorroded object is given by
u_0|_Γ_N(x)=S^Γ_N→Γ_N[ϕ_0|_Γ_N](x)+S^Γ_D→Γ_N[ϕ_0|_Γ_D](x) , x∈Γ_N ,
where ϕ_0|_Γ_N and ϕ_0|_Γ_D are given by the solution of
([ S^Γ_N→Γ_D S^Γ_D→Γ_D; 1/2I+T^Γ_N→Γ_N T^Γ_D→Γ_N ])
([ ϕ_0|_Γ_N; ϕ_0|_Γ_D ])=
([ 0; g ]) .
We use a single-layer ansatz to represent the solution u_0 inside D as
u_0(x) = SL^∂ D[ϕ_0 ](x) = SL^Γ_N[ϕ_0|_Γ_N](x)+SL^Γ_D[ϕ_0|_Γ_D](x) , x∈ D ,
where we used the fact that the given boundary is a disjoint union of Γ_N and Γ_D. Here, ϕ_0|_Γ_N and ϕ_0|_Γ_D are yet unknown functions.
Letting D∋ x→ x∈Γ_D in (<ref>) together with the jump relation and the boundary condition u_0|_Γ_D=0 gives the first boundary integral equation
0= S^Γ_N→Γ_D[ϕ_0|_Γ_N](x)+S^Γ_D→Γ_D[ϕ_0|_Γ_D](x) , x∈Γ_D .
Taking the normal derivative of (<ref>) and letting D∋ x→ x∈Γ_N along with the jump and the boundary condition ∂_νu_0|_Γ_N=g yields the second boundary integral equation
g(x)=T^Γ_N→Γ_N[ϕ_0|_Γ_N](x)+1/2ϕ_0|_Γ_N(x)+T^Γ_D→Γ_N[ϕ_0|_Γ_D](x) , x∈Γ_N .
Equations (<ref>) and (<ref>) can be written together as the system (<ref>)
which have to be solved for ϕ_0|_Γ_N and ϕ_0|_Γ_D. Here, I denotes the identity operator. With this, we can use (<ref>) to obtain u_0 at any point within D. Letting D∋ x→ x∈Γ_N along with the jump relations yields (<ref>).
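The following Python sketch indicates how the system of Theorem <ref> might be solved once the boundary integral operators have been discretized; the matrices S_NN, S_DN, S_ND, S_DD, T_NN, T_DN are assumed to be matrix approximations of S^Γ_i→Γ_j and T^Γ_i→Γ_j produced by a suitable quadrature (for instance the boundary element collocation method of <cit.> used later), and the naming is ours.

import numpy as np

def solve_u0_on_GammaN(S_NN, S_DN, S_ND, S_DD, T_NN, T_DN, g):
    """Solve the 2x2 block system for phi_0 and return u_0 on Gamma_N.

    S_ij (resp. T_ij) approximates S^{Gamma_i -> Gamma_j} (resp. T),
    with sources on Gamma_i and evaluation points on Gamma_j.
    g : Neumann data at the nodes on Gamma_N.
    """
    nN, nD = T_NN.shape[0], S_DD.shape[0]
    A = np.block([[S_ND,                    S_DD],
                  [0.5 * np.eye(nN) + T_NN, T_DN]])
    rhs = np.concatenate([np.zeros(nD), g])
    phi = np.linalg.solve(A, rhs)
    phi_N, phi_D = phi[:nN], phi[nN:]
    # trace of the single-layer ansatz on Gamma_N
    return S_NN @ phi_N + S_DN @ phi_D

The system for the corroded object in the next subsection can be set up in exactly the same way, with the Robin terms involving γ added to the first block row.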
§.§ Integral equation for computing u on Γ_N
Next, we consider the corroded object D\Ω, refer also to (<ref>) . Now, we are in position to explain how to obtain u at any point of Γ_N.
Let D\Ω be the domain representing the corroded object with boundary Γ_N ∪Γ_C satisfying Γ_N∩Γ_C=∅. Then, the solution u to (<ref>) on Γ_N for the corroded object is given by
u|_Γ_N(x)=S^Γ_N→Γ_N[ϕ|_Γ_N](x)+S^Γ_C→Γ_N[ϕ|_Γ_C](x) , x∈Γ_N ,
where ϕ|_Γ_N and ϕ|_Γ_C are given by the solution of
([ T^Γ_N→Γ_C+γS^Γ_N→Γ_C 1/2I+T^Γ_C→Γ_C+ γS^Γ_C→Γ_C; 1/2I+T^Γ_N→Γ_N T^Γ_C→Γ_N ])
([ ϕ|_Γ_N; ϕ|_Γ_C ])=
([ 0; g ]) .
Using a single-layer ansatz
u(x) = SL^∂ (D\Ω)[ϕ](x) = SL^Γ_N[ϕ|_Γ_N](x)+SL^Γ_C[ϕ|_Γ_C](x) , x∈ D\Ω ,
where ϕ|_Γ_N and ϕ|_Γ_C are again unknown functions. As before, we obtain on Γ_C
u|_Γ_C(x) = S^Γ_N→Γ_C[ϕ|_Γ_N](x)+S^Γ_C→Γ_C[ϕ|_Γ_C](x) , x∈Γ_C ,
∂_ν u|_Γ_C(x) = T^Γ_N→Γ_C[ϕ|_Γ_N](x)+T^Γ_C→Γ_C[ϕ|_Γ_C](x)+1/2ϕ|_Γ_C , x∈Γ_C ,
and hence using the boundary condition ∂_ν u+γ u=0 on Γ_C we obtain the first boundary integral equation
0 = T^Γ_N→Γ_C[ϕ|_Γ_N](x)+T^Γ_C→Γ_C[ϕ|_Γ_C](x)+1/2ϕ|_Γ_C
+ γS^Γ_N→Γ_C[ϕ|_Γ_N](x)+γS^Γ_C→Γ_C[ϕ|_Γ_C](x) , x∈Γ_C .
Using the boundary condition ∂_ν u|_Γ_N=g yields the second boundary integral equation
g(x)=T^Γ_N→Γ_N[ϕ|_Γ_N](x)+1/2ϕ|_Γ_N(x)+T^Γ_C→Γ_N[ϕ|_Γ_C](x) , x∈Γ_N .
Equations (<ref>) and (<ref>) can be written together as the system (<ref>)
which have to be solved for ϕ|_Γ_N and ϕ|_Γ_C. With this, we can use (<ref>) to obtain u at any point within D\Ω. Letting D\Ω∋ x→ x∈Γ_N along with the jump condition yields (<ref>).
§.§ Integral equation for computing 𝔾(·,z)
In order to solve the inverse shape problem, for fixed z∈ D, we need to compute 𝔾(·,z) on Γ_N (refer also to (<ref>)). Recall that 𝔾(·,z) satisfies
-Δ𝔾(·, z) = δ(· - z) in D, ∂_ν𝔾(·, z) = 0 on Γ_N, and 𝔾(·, z) = 0 on Γ_D.
Just as in <cit.>, we assume that
𝔾(·,z) = w(·,z) + Φ(·,z) ,
where Φ(·,z) is again the fundamental solution of the Laplace equation in ℝ^2. Then, w(·,z) obviously satisfies
Δ w(·,z) = 0 in D, ∂_ν w(·,z) = -∂_νΦ(·,z) on Γ_N , w(·,z) = -Φ(·,z) on Γ_D .
Our task now is to compute w(·,z) on Γ_N in order to approximate 𝔾(·,z) on Γ_N.
Let z∈ D be fixed. Then 𝔾(·,z) on Γ_N is given by
𝔾(·,z)|_Γ_N = w(·,z)|_Γ_N + Φ(·,z)|_Γ_N .
Here, w(·,z)|_Γ_N is obtained through
w(x,z)|_Γ_N=S^Γ_N→Γ_N[ϕ_z|_Γ_N](x)+S^Γ_D→Γ_N[ϕ_z|_Γ_D](x) , x∈Γ_N ,
where ϕ_z|_Γ_N and ϕ_z|_Γ_D are given by the solution of
([ S^Γ_N→Γ_D S^Γ_D→Γ_D; 1/2I+T^Γ_N→Γ_N T^Γ_D→Γ_N ])
([ ϕ_z |_Γ_N; ϕ_z |_Γ_D ])=
([ -Φ(·,z) |_Γ_D; -∂_νΦ(·,z) |_Γ_N ]) .
We make the ansatz
w(x,z) = SL^∂ D[ϕ_z](x) =SL^Γ_N[ϕ_z|_Γ_N](x)+SL^Γ_D[ϕ_z|_Γ_D](x) , x∈ D .
Here, ϕ_z|_Γ_N and ϕ_z|_Γ_D are yet unknown functions.
Letting D∋ x→ x∈Γ_D in (<ref>) together with the jump conditions and the boundary condition w(·,z)|_Γ_D=-Φ(·,z)|_Γ_D gives the first boundary integral equation
-Φ(x,z)= S^Γ_N→Γ_D[ϕ_z|_Γ_N](x)+S^Γ_D→Γ_D[ϕ_z|_Γ_D](x) , x∈Γ_D .
Taking the normal derivative of (<ref>) and letting D∋ x→ x∈Γ_N along with the jump condition and the boundary condition ∂_ν w(·,z)|_Γ_N=-∂_νΦ(·,z)|_Γ_N yields the second boundary integral equation
-∂_νΦ(x,z)=T^Γ_N→Γ_N[ϕ_z|_Γ_N](x)+1/2ϕ_z|_Γ_N(x)+T^Γ_D→Γ_N[ϕ_z|_Γ_D](x) , x∈Γ_N .
Equations (<ref>) and (<ref>) can be written together as the system (<ref>) which we have to solve for ϕ_z|_Γ_N and ϕ_z|_Γ_D. With this, we can use (<ref>) to obtain w(·,z) at any point within D. Letting D∋ x→ x∈Γ_N along with the jump relations yields (<ref>).
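Since the matrix of this system coincides with the one in the system for u_0, it can be factorized once and reused for every sampling point z. The Python sketch below (reusing Phi and dPhi_dnu_x from the earlier helper) is a minimal illustration of this; x_N, x_D denote the collocation nodes on Γ_N and Γ_D, nu_N the exterior normals on Γ_N, and all names are again our own.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def G_traces_on_GammaN(S_NN, S_DN, S_ND, S_DD, T_NN, T_DN,
                       x_N, x_D, nu_N, z_points):
    """Trace of the mixed Green's function G(., z) on Gamma_N for many z."""
    nN, nD = len(x_N), len(x_D)
    A = np.block([[S_ND,                    S_DD],
                  [0.5 * np.eye(nN) + T_NN, T_DN]])
    lu, piv = lu_factor(A)                     # factorize once, reuse for all z
    traces = []
    for z in z_points:
        rhs_D = -np.array([Phi(x, z) for x in x_D])                  # -Phi(., z) on Gamma_D
        rhs_N = -np.array([dPhi_dnu_x(x, z, nu) for x, nu in zip(x_N, nu_N)])
        phi_z = lu_solve((lu, piv), np.concatenate([rhs_D, rhs_N]))
        w_N = S_NN @ phi_z[:nN] + S_DN @ phi_z[nN:]                  # w(., z) on Gamma_N
        traces.append(w_N + np.array([Phi(x, z) for x in x_N]))      # add back Phi(., z)
    return np.array(traces)                                          # (len(z_points), nN)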
§.§ Discretization of the NtD operator Λ-Λ_0
Now, we illustrate how to approximate the equation
(Λ-Λ_0)g_z = 𝔾(·,z)|_Γ_N
with the Galerkin method for a fixed z∈ D. Without loss of generality, we assume that the functions on Γ_N are parametrized by x(θ) such that θ∈ (θ_1,θ_2)⊆ [0,2π]. For a given set of `Fourier' basis functions φ_n(θ) and yet unknown `Fourier' coefficients g_n^(z) we write
g_z(θ)=∑_n=0^∞ g_n^(z)φ_n(θ)
which is approximated by a finite sum
g_z(θ) ≈∑_n=0^N_B g_n^(z)φ_n(θ)
(i.e. N_B +1 denotes the number of basis functions). Now, we insert this into (<ref>) to obtain
∑_n=0^N_Bg_n^(z) (Λ-Λ_0)φ_n(θ)=𝔾(θ,z) .
Multiplying this equation with φ_m (θ) for m∈{0,1,…,N_B} and integrating over [θ_1,θ_2] yields the linear system of size (N_B+1)× (N_B+1)
∑_n=0^N_Bg_n^(z)∫_θ_1^θ_2φ_m(θ) (Λ-Λ_0)φ_n(θ) ds(θ) =∫_θ_1^θ_2φ_m(θ)𝔾(θ,z) ds(θ) ,
where the unknown `Fourier' coefficients g_n^(z) are to be determined. We write this linear system abstractly as
Bg^(z)=b^(z) .
To compute the matrix entries for each m and n numerically
B_mn=∫_θ_1^θ_2φ_m(θ) (Λ-Λ_0)φ_n(θ) ds(θ)
we subdivide the interval [θ_1,θ_2] into n_f equidistant panels and apply to each panel Gauß-Legendre quadrature using three quadrature nodes and three weights.
Note that the matrix B should become symmetric for increasing n_f, since the operator (Λ-Λ_0) is self-adjoint. In the same way, we approximate the right hand side for each m
b^(z)_m=∫_θ_1^θ_2φ_m(θ)𝔾(θ,z) ds(θ) .
To obtain (u-u_0 )|_Γ_N = (Λ-Λ_0)φ_n(θ) as well as 𝔾(θ,z) for fixed z∈ D (compare also (<ref>) and (<ref>) as well as (<ref>) for the corresponding integral equation), we use as discretization the boundary element collocation method as done in <cit.> using the wave number 0. We use α=(1-√(3/5))/2, then the collocation nodes on Γ_N are exactly the three Gauß-Legendre nodes on each panel that are needed in the approximation of (<ref>) and (<ref>), respectively. Hence, we are now in position to create the matrix B which approximates (Λ-Λ_0) and the right hand side b^(z) for different domains. That is, we can now create synthetic data which then can be used for the reconstruction algorithms LSM or FM.
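As a minimal sketch of this assembly step (with our own naming, and treating the quadrature weights as already containing the arc-length factor of the parametrization), the matrix B and the right-hand side b^(z) can be formed as follows once the values of (Λ-Λ_0)φ_n and of 𝔾(·,z) at the quadrature nodes on Γ_N have been computed with the boundary integral solvers described above.

import numpy as np

def assemble_galerkin_system(phi_vals, flux_diff_vals, G_z_vals, w):
    """Assemble the Galerkin matrix B and right-hand side b^(z).

    phi_vals       : (NB+1, Q) basis functions phi_m at the quadrature nodes
    flux_diff_vals : (NB+1, Q) values of (Lambda - Lambda_0) phi_n at the nodes
    G_z_vals       : (Q,) values of G(., z) on Gamma_N for the fixed point z
    w              : (Q,) quadrature weights (including the arc-length factor)
    """
    B = phi_vals @ (w[:, None] * flux_diff_vals.T)   # B[m, n]
    b = phi_vals @ (w * G_z_vals)                    # b^(z)[m]
    return B, b

For instance, for the basis used in Example 1 below, the rows of phi_vals could be generated as np.cos(np.outer(np.arange(N_B + 1), theta_q)) for quadrature nodes theta_q.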
§.§ Reconstructions with FMreg and LSMreg
Now, we present some reconstructions using the FM and the LSM. For the details on the implementation of the FM, we refer the reader to <cit.>. We compute W_FM(z) (see Section <ref> for the definition) for z ∈𝒢, where 𝒢 is a set of grid points covering the domain of interest. In the following, we plot W_FM^log(z)=log (W_FM(z)). With this definition, we have that W_FM^log(z) ≫ 0 for z∈Ω and W_FM^log(z) ≪ 0 for z∈ D\Ω. We will use regularization for the factorization method (FMreg) by only using the singular values that are greater than 10^-5 since the problem is severely ill-posed. We denote the regularized version of W_FM^log(z) by W_FMreg^log(z). For the regularized version of the linear sampling method (LSMreg), denoted by W_LSMreg^log(z) (refer to Section <ref> for details), we solve
(B^∗ B+α I)g^(z)=B^∗ b^(z) with α=10^-5,
the Tikhonov regularization of Bg^(z)=b^(z).
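A possible way to turn the indicator values into the reconstructions shown below is sketched next: the log-indicator is plotted over the sampling grid and Γ_C is recovered as the chosen level curve. The plotting routine and its arguments are our own illustrative choices.

import numpy as np
import matplotlib.pyplot as plt

def plot_reconstruction(X, Y, W, level):
    """Plot a log-indicator map and the level curve used to recover Gamma_C.

    X, Y  : meshgrid arrays of the sampling grid covering the domain
    W     : indicator values W_FMreg(z) or 1/||g^(z)|| for the LSMreg on the grid
    level : threshold value for the level curve (e.g. 3/2 in Example 1)
    """
    W_log = np.log(W)                                     # W^log as plotted in the figures
    plt.pcolormesh(X, Y, W_log, shading='auto')
    plt.colorbar(label='log-indicator')
    plt.contour(X, Y, W_log, levels=[level], colors='k')  # reconstruction of Gamma_C
    plt.gca().set_aspect('equal')
    plt.show()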
Example 1:
Let the domain be the square D=[0,2π]× [-2π,0], which is completely buried so that only the upper part of its boundary is accessible. Parts of the square are corroded as shown in Figure <ref>. Precisely, the corroded part is given by the polygon with vertices (2π,0), (0,0), (π/2,-3π/2), and (3π/2,-3π/2).
To create the data, we use the 20 basis functions φ_n(θ)=cos(nθ) for n∈{0,1,…,19} and θ∈ [0,2π] and n_f=300 on Γ_N for the boundary element collocation method. Furthermore, for simplicity we use γ=1/2 and γ=2. For the FMreg and LSMreg, we use an equidistant grid of size 100× 100 of D. We choose the level–curve=3/2 as threshold value for recovering Γ_C by the FMreg imaging functional. For the LSMreg imaging functional we choose the level–curve=-1/2 as threshold value. In Figure <ref> and <ref>, we present the reconstruction results with the FMreg and the LSMreg.
We observe reasonable, although not perfect, reconstructions using the FMreg and the LSMreg, which is expected as the problem is severely ill-posed.
Example 2: We now consider a wedge-shaped domain with opening angle π/2. A certain part of it is corroded as shown in Figure <ref>. Precisely, the corroded part is given by the triangle with vertices (1,0), (0,1), and (0,0).
We use the 20 basis functions φ_n(θ)=cos(4nθ) for n∈{0,1,…,19} with θ∈ [0,π/2] and n_f=300 on Γ_N for the boundary element collocation method. Further, we use γ=1/2 and γ=2. For the FMreg and LSMreg, we use an equidistant grid of size 100× 100 of [0,1]× [0,1]. We choose the level–curve=0.25 and =-1 as threshold value for the FMreg imaging functional when γ=1/2 and γ=2, respectively. For the LSMreg imaging functional we choose level–curve=-1 and =-1.5 as threshold value for γ=1/2 and γ=2. In Figures <ref> and <ref>, we present the reconstruction results with the FMreg and the LSMreg.
We observe reasonable reconstructions using the FMreg and the LSMreg, which are much better than those for the previous example, with the FMreg performing noticeably better.
Example 3: Next, we use an ellipse with half-axes a=1.1 and b=1 that is half buried. Hence, Γ_N and Γ_D are given in parameter form as x_1 (θ)=1.1cos(θ) and x_2 (θ)=sin(θ) with θ∈ [0,π] and θ∈ (π,2π), respectively. The boundary Γ_C is given by x_1 (θ)=1.1cos(θ) and x_2(θ) =0.5sin(θ) with θ∈ (π,2π). The situation is depicted in Figure <ref>.
We use the 20 basis functions φ_n(θ)=cos(2nθ) for n∈{0,1,…,19} with θ∈ [0,π/2] and n_f=300 on Γ_N for the boundary element collocation method. For this example, we use an equidistant grid of size 100× 100 of [-1.1,1.1]× [-1.1,1.1]. To reconstruct Γ_C we choose the level–curve=2.5 and =1.5 as threshold value for the FMreg imaging functional when γ=1/2 and γ=2, respectively. In Figure <ref>, we present the reconstruction results with the FMreg.
We observe good reconstructions using the FMreg, which are much better than those for the previous two examples.
§ SUMMARY AND CONCLUSION
We have extended the applicability of the LSM and FM to recovering an unknown corroded boundary from partial Cauchy data. The main idea that we employed was to embed the `defective' region into a `healthy' region. This allows us to consider the problem of finding the corroded region. We also want to note that the analysis presented here implies that the generalized linear sampling method <cit.> is a valid reconstruction method as well. In the numerical results, we have seen that the threshold value seems to depend on γ. To obtain the correct dependence, a further investigation is needed. Moreover, the FMreg seems to provide better reconstructions than the LSMreg, which might be due to the choice of the regularization parameter. A thorough investigation is needed to find out if the correct choice of the regularization parameter in both methods can give similar reconstruction results. In sum, we are able to obtain reasonable reconstructions with both the FMreg and the LSMreg, giving us a good idea of how much of the buried obstacle is corroded, although the problem at hand is severely ill-posed.
Acknowledgments: The research of I. Harris and H. Lee is partially supported by the NSF DMS Grant 2107891.
99
Zaremba
H. Ammari, O. Bruno, K. Imeri and N. Nigam,
Wave enhancement through optimization of boundary conditions,
SIAM J. Sci. Comput. 42(1), B207–B224 (2019).
atkinson1997
K. E. Atkinson,
“The Numerical Solution of Integral Equations of the Second Kind”,
Cambridge University Press 1997.
GLSM
L. Audibert and H. Haddar,
A generalized formulation of the linear sampling method with exact characterization of targets in terms of far-field measurements,
Inverse Problems 30, 035011 (2014).
fm-waveguide
L. Borcea and S. Meng,
Factorization method versus migration imaging in a waveguide,
Inverse Problems 35, 124006 (2019).
Data-completion
Y. Boukari and H. Haddar,
A convergent data completion algorithm using surface integral equations.
Inverse Problems 31, 035011 (2015).
Brezis
H. Brezis,
“Functional Analysis, Sobolev Spaces and Partial Differential Equations”.
Springer 2011.
FM-wave
F. Cakoni, H. Haddar and A. Lechleiter,
On the factorization method for a far field inverse scattering problem in the time domain,
SIAM J. Math. Anal. 51(2), 854–872 (2019).
CK-gibc
F. Cakoni , Y. Hu and R. Kress,
Simultaneous reconstruction of shape and generalized impedance functions in electrostatic imaging,
Inverse Problems 30, 105009 (2014).
CK-ibc
F. Cakoni and R. Kress,
Integral equations for inverse problems in corrosion detection from partial Cauchy data,
Inverse Problems and Imaging 1(2), 229–245 (2007).
CK-ibc2
F. Cakoni, R. Kress and C. Schuft,
Simultaneous reconstruction of shape and impedance in corrosion detection,
J. Methods and Applications of Analysis 17(4), 357–378 (2010).
EIT-InnerStability
S. Chaabane and M. Jaoua,
Identification of Robin coefficients by the means of boundary measurements,
Inverse Problems 15, 1425 (1999).
fm-gbc
M. Chamaillard, N. Chaulet, and H. Haddar,
Analysis of the factorization method for a general class of boundary conditions,
J. Inverse Ill-Posed Probl. 22(5), 643–670 (2014).
CK
D. Colton and A. Kirsch,
A simple method for solving inverse scattering problems in the resonance region,
Inverse Problems 12, 383–393 (1996).
Colto2013
D. Colton and R. Kress.
Inverse Acoustic and Electromagnetic Scattering Theory.
Springer, 3rd edition, 2013.
embry
M.R. Embry,
Factorization of operators on Banach space
Proc. Amer. Math. Soc. 38, 587–590 (1973).
EIT-granados1
G. Granados and I. Harris,
Reconstruction of small and extended regions in EIT with a Robin transmission condition.
Inverse Problems 38, 105009 (2022).
EIT-granados2
G. Granados, I. Harris and H. Lee,
Reconstruction of extended regions in EIT with a generalized Robin transmission condition.
Comm. on Analysis & Computation 1(4), 347–368 (2023).
fm-GR
R. Griesmaier and H.-G. Raumer,
The factorization method and Capon’s method for random source identification in experimental aeroacoustics.
Inverse Problems 38, 115004 (2022).
GLSM-elastic
B. Guzina and T.-P. Nguyen,
Generalized Linear Sampling Method for the inverse elastic scattering of fractures in finite solid bodies,
Inverse Problems 35, 104002 (2019).
EIT-finiteElectrode
B. Harrach,
Uniqueness and Lipschitz stability in electrical impedance tomography with finitely many electrodes,
Inverse Problems 35, 024005 (2019).
eit-transmission2
B. Harrach,
Uniqueness, stability and global convergence for a discrete inverse elliptic Robin transmission problem,
Numer. Math. 147, 29–70 (2021).
eit-transmission1
B. Harrach and H. Meftahi,
Global Uniqueness and Lipschitz-Stability for the Inverse Robin Transmission Problem,
SIAM J. App. Math. 79(2), 525–550 (2019).
eit-harris
I. Harris,
A direct method for reconstructing inclusions and boundary conditions from electrostatic data,
Applicable Analysis 102(5), 1511–1529 (2023).
regfm1
I. Harris,
Regularization of the Factorization Method applied to diffuse optical tomography,
Inverse Problems 37, 125010 (2021).
regfm2
I. Harris,
Regularized factorization method for a perturbed positive compact operator applied to inverse scattering,
Inverse Problems 39, 115007 (2023).
holmgren
H. Hedenmalm,
On the uniqueness theorem of Holmgren,
Math. Z. 281, 357–378 (2015).
firstFM
A. Kirsch,
Characterization of the shape of the scattering obstacle by the spectral data of the far field operator,
Inverse Problems 14, 1489–512 (1998).
FMiso
A. Kirsch,
The MUSIC-algorithm and the factorization method in inverse scattering theory for inhomogeneous media,
Inverse Problems 18, 1025 (2002).
kirschipbook
A. Kirsch,
“An Introduction to the Mathematical Theory of Inverse Problems”,
2^nd edition, Springer 2011.
kirschbook
A. Kirsch and N. Grinberg,
“The Factorization Method for Inverse Problems”,
Oxford University Press, 2008.
kleefeldhotspot
A. Kleefeld,
The hot spots conjecture can be false: some numerical examples,
Advances in Computational Mathematics 47, 85 (2021).
JIIP
A. Laurain and H. Meftahi,
Shape and parameter reconstruction for the Robin transmission inverse problem.
J. Inverse Ill-Posed Probl. 24(6), 643–662 (2016).
glsm-stokes
M. Liu and J. Yang,
The Sampling Method for Inverse Exterior Stokes Problems,
SIAM J. Sci. Computing 44(3), B429–B456 (2022).
McLean
W. McLean, “Strongly elliptic systems and boundary integral equations”, Cambridge University Press 2000.
invrobin-Meftahi
H. Meftahi,
Stability analysis in the inverse Robin transmission problem,
Math. Methods Appl. Sci. 40, 2505–2521 (2017).
lsm-heat
G. Nakamura and H. Wang,
Linear sampling method for the heat equation with inclusions,
Inverse Problems 29, 104015 (2013).
applied-lsm
F. Pourahmadian and H. Yue,
Laboratory application of sampling approaches to inverse scattering,
Inverse Problems 37, 055012 (2021).
Rundell
W. Rundell.
Recovering an obstacle and its impedance from Cauchy data.
Inverse Problems 21, 045003 (2008).
salsa
S. Salsa,
“Partial Differential Equations in Action From Modelling to Theory”,
Springer Italia, 2nd edition (2015).
LSM-periodic
J. Yang, B. Zhang and R. Zhang,
A sampling method for the inverse transmission problem for periodic media,
Inverse Problems 28, 035004 (2012).
|
http://arxiv.org/abs/2409.03064v1 | 20240904203302 | A posteriori error estimates for a bang-bang optimal control problem | [
"Francisco Fuica"
] | math.OC | [
"math.OC",
"cs.NA",
"math.NA",
"49M25, 65N15, 65N30"
] |
Facultad de Matemáticas, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Santiago, Chile.
[email protected]
2010 Mathematics Subject Classification. Primary: 49M25, 65N15, 65N30.
§ ABSTRACT
We propose and analyze a posteriori error estimates for a control-constrained optimal control problem with bang-bang solutions.
We consider a solution strategy based on the variational approach, where the control variable is not discretized; no Tikhonov regularization is made.
We design, for the proposed scheme, a residual-type a posteriori error estimator that can be decomposed as the sum of two individual contributions related to the discretization of the state and adjoint equations.
We explore reliability and efficiency properties of the aforementioned error estimator.
We illustrate the theory with numerical examples.
A posteriori error estimates for a bang-bang optimal control problem
Francisco Fuica
====================================================================
§ INTRODUCTION
The main purpose of this work is the design and analysis of a posteriori error estimates for a bang-bang optimal control problem governed by an elliptic partial differential equation (PDE) as state equation.
This PDE-constrained optimization problem entails the minimization of a cost functional in which the cost of the control is negligible, so no Tikhonov regularization term is considered.
To make matters precise, let Ω⊂ℝ^d (d∈{2, 3}) be an open, bounded and convex polygonal/polyhedral domain with boundary ∂Ω.
Given a desired state y_Ω∈ L^2(Ω), we define the cost functional
J(u):=1/2y_u - y_Ω_Ω^2.
We will consider the following optimal control problem:
min J(u) subject to -Δ y_u = u in Ω,
y = 0 on ∂Ω, and u∈𝕌_ad,
where 𝕌_ad:={v ∈ L^2(Ω): a ≤ v(x) ≤ b a.e. x ∈Ω} with a < b.
Problem (<ref>) has been considered in several works.
One of the main sources of difficulty for deriving error estimates within this type of problems is that the cost function J does not incorporate the standard Tikhonov regularization term αu_Ω^2 with α > 0.
Note that without this term we cannot directly derive nor bound the standard error term αu̅ - u̅_h_Ω^2, where u̅ denotes the optimal control and u̅_h denotes a suitable approximation of u̅.
To the best of our knowledge, the first work that provides an a priori error analysis for problem (<ref>) is <cit.>.
In such a work, the authors used the so-called variational approach, introduced in <cit.>, in order to discretize problem (<ref>).
In addition, they proved estimates for the approximation error associated to the optimal state and adjoint state without assuming that the control is of bang-bang type; see <cit.>.
For the case when the optimal control is of bang-bang type, the authors proved an error estimate for all the optimal variables <cit.>.
A suitable Tikhonov regularization of problem (<ref>) and its convergence were studied in <cit.>, under an additional assumption on the structure of the optimal control.
The parabolic counterpart of (<ref>) was studied in <cit.>, where the authors proved, using Petrov-Galerkin schemes in time and conforming finite elements in space, a priori estimates for the error between a discretized regularized problem and the limit problem.
In the particular case of bang-bang controls, the estimates were further improved; see <cit.>.
We also mention the work <cit.>, in which the authors considered the Tikhonov regularization of problem (<ref>) with a semilinear elliptic PDE and derived a priori regularization error estimates for the control; a suitable extension to sparse optimal control problems was derived as well.
Finally, for second-order analysis and approximation results for bang-bang optimal control problems, we refer the reader to <cit.>.
Among the different numerical methods that exist in the literature to approximate solutions to PDE-constrained optimization problems (and PDEs in general), a particular class stands out for its competitive performance, improving the quality of discrete approximations of the corresponding problem within a prescribed tolerance using a minimal amount of work.
These are the adaptive finite element methods (AFEMs).
The main tools present in these iterative methods are a posteriori error estimates, which provide global and local information on the error of discrete solutions and that can be easily computed from the given numerical solution and problem data.
Regarding the use of these methods in the context of control–constrained linear–quadratic optimal control problems, several advances has been made in recent years.
For a discussion on this matter, we refer the interested reader to the following non-exhaustive list of works: <cit.>.
As opposed to these advances, the analysis of AFEMs for bang-bang optimal control problems is rather scarce.
To the best of our knowledge, the work <cit.> appears to be the first one that provides a posteriori error estimates for problem (<ref>).
In this article, the author investigated Tikhonov regularization and discretization of bang-bang control problems, developing a parameter choice rule that adaptively selects the Tikhonov regularization parameter depending on a posteriori computable quantities.
However, the error estimates were not robust with respect to α.
We also mention the work <cit.>, where robust global reliability estimates were provided.
We note that no efficiency estimates were provided in <cit.>.
In the present manuscript, we consider the variational discretization <cit.> to approximate the optimal control problem (<ref>).
In particular, we use piecewise linear functions to approximate the solution of the state and adjoint equations whereas the admissible control set is not discretized.
To perform the analysis, we follow <cit.> and do not consider a Tikhonov regularization.
This approach allows us to circumvent the necessity of choosing a suitable regularization parameter for each mesh, cf. <cit.>.
Within this framework, we devise a residual–based a posteriori error estimator that is formed by only two contributions that are related to the discretization of the state and adjoint equations.
In two- and three-dimensional convex polygonal/polyhedral domains, we prove efficiency estimates; reliability properties of the a posteriori error estimator are studied as well.
More precisely, we prove that the corresponding local error indicators associated to the discretization of the state and adjoint equations are locally efficient; see Theorem <ref>.
We recall that reliability estimates are sufficient to obtain a numerical solution with an accuracy below a prescribed tolerance, whereas efficiency estimates are of importance since they ensure that the mesh is correctly refined, so that one obtains a numerical solution with a prescribed tolerance using a (nearly) minimal number of degrees of freedom <cit.>.
Based on the proposed a posteriori error estimator, we design a simple adaptive loop that delivers optimal experimental rates of convergence for all the involved individual contributions of the corresponding error.
In particular, the aforementioned loop delivers quadratic rates of convergence for the approximation error associated to all the optimal variables.
In addition, and in contrast to <cit.>, the error indicator that we consider for the adjoint variable in the L^∞(Ω)–norm allows for unbounded forcing terms.
This is of importance since, as it can be observed from (<ref>), the discrete adjoint equation has y̅_ℓ(𝔲̅_ℓ) - y_Ω as a forcing term and, in general, the latter term does not necessarily belong to L^∞(Ω).
The rest of the manuscript is organized as follows.
The remaining of this section is devoted to introduce the notation that we will use throughout the manuscript.
In section <ref> we present a weak formulation for the optimal control problem under consideration and introduce a finite element discretization scheme.
The main part of the paper is section <ref>, where we design an a posteriori error estimator for the proposed approximation scheme and analyze reliability and efficiency properties.
Finally, in section <ref>, we present a series of two-dimensional numerical examples that illustrate the theory and reveal a competitive performance of the devised AFEMs.
§.§ Notation
Throughout this work, we use standard notation for Lebesgue and Sobolev spaces and their corresponding norms.
Given an open and bounded domain G, we denote by (·,·)_G and ·_G the inner product and norm of L^2(G), respectively.
If 𝒳 and 𝒴 are Banach function spaces, we write 𝒳↪𝒴 to denote that 𝒳 is continuously embedded in 𝒴.
The relation 𝔞≲𝔟 indicates that 𝔞≤ C 𝔟, with a positive constant that depends neither on 𝔞, 𝔟 nor on the discretization parameters.
The value of C might change at each occurrence.
§ THE OPTIMAL CONTROL PROBLEM
In this section, we briefly present a weak formulation for problem (<ref>) and recall first-order optimality conditions.
We also introduce a finite element discretization scheme.
§.§ Weak formulation
We consider the following weak version of problem (<ref>): Find
min{ J(u): u ∈𝕌_ad}
subject to
y_u∈ H_0^1(Ω) : (∇ y_u,∇ v)_Ω=(u,v)_Ω ∀ v ∈ H_0^1(Ω).
Problem (<ref>)–(<ref>) admits a unique optimal solution u̅∈𝕌_ad <cit.> and the optimal control u̅ satisfies the first-order optimality condition <cit.>
(p̅, u - u̅)_Ω≥ 0 ∀ u ∈𝕌_ad,
where the optimal adjoint state p̅∈ H_0^1(Ω) solves the following adjoint equation:
(∇ v,∇p̅)_Ω=(y̅ - y_Ω,v)_Ω ∀ v ∈ H_0^1(Ω),
with y̅ := y_u̅.
Moreover, inequality (<ref>) implies that, for a.e. x∈Ω, we have
u̅(x) = a if p̅(x) > 0, u̅(x) ∈ [a,b] if p̅(x) = 0, u̅(x) = b if p̅(x) < 0;
see <cit.>.
§.§ Finite element approximation
Let us introduce some ingredients of standard finite element approximations <cit.>.
Let 𝒯 = {T} be a conforming partition of Ω into simplices T with size h_T := diam(T).
Let us denote by 𝕋 the collection of conforming and shape regular meshes that are refinements of 𝒯_0, where 𝒯_0 represents an initial mesh.
Given a mesh 𝒯_ℓ∈𝕋 with ℓ∈ℕ_0, we denote by ℰ_ℓ the set of internal (d-1)-dimensional interelement boundaries e of 𝒯_ℓ.
For T ∈𝒯_ℓ, we let ℰ_T denote the subset of ℰ_ℓ which contains the sides of the element T.
We denote by 𝒩_e ⊂𝒯_ℓ the subset that contains the two elements that have e as a side, namely, 𝒩_e={T^+,T^-}, where T^+, T^- ∈𝒯_ℓ are such that e = T^+ ∩ T^-.
For T ∈𝒯_ℓ, we define the star associated with the element T as
𝒩_T:= { T^'∈𝒯_ℓ: ℰ_T∩ℰ_T^'≠∅}.
In an abuse of notation, below we denote by 𝒩_T either the set itself or the union of its elements.
Given a mesh 𝒯_ℓ∈𝕋 with ℓ∈ℕ_0, we define the finite element space of continuous piecewise polynomials of degree one that vanish on the boundary as
𝕍_ℓ = {v_ℓ∈ C(Ω): v_ℓ|_T∈ℙ_1(T) ∀ T∈𝒯_ℓ}∩ H_0^1(Ω).
Given a discrete function v_ℓ∈𝕍_ℓ, we define, for any internal side e ∈ℰ_ℓ, the jump or interelement residual ∇ v_ℓ·𝐧 on e by
∇ v_ℓ·𝐧|_e:= 𝐧^+·∇ v_ℓ|_T^+ + 𝐧^-·∇ v_ℓ|_T^-,
where 𝐧^± denote the unit exterior normal to the element T^±. Here, T^+, T^-∈_ℓ are such that T^+≠ T^- and ∂ T^+∩∂ T^- = e.
We consider the following semi-discrete version of the optimal control problem (<ref>)–(<ref>): Find
min{ J(𝔲): 𝔲∈𝕌_ad}
subject to the discrete state equation
(∇ y_ℓ(𝔲),∇ v_ℓ)_Ω
=
(𝔲, v_ℓ)_Ω ∀ v_ℓ∈𝕍_ℓ.
Existence of an optimal control 𝔲̅ for (<ref>)–(<ref>) can be proved by standard arguments. However, uniqueness of 𝔲̅ is not guaranteed <cit.>. In spite of this fact, we can characterize optimal solutions: every optimal control 𝔲̅ for (<ref>)–(<ref>) satisfies
(p̅_ℓ, u - 𝔲̅)_Ω≥ 0 ∀ u ∈𝕌_ad,
where p̅_ℓ∈𝕍_ℓ solves the discrete adjoint equation
(∇ v_ℓ,∇p̅_ℓ)_Ω=(y̅_ℓ(𝔲̅) - y_Ω,v_ℓ)_Ω ∀ v_ℓ∈𝕍_ℓ.
Here, y̅_ℓ(𝔲̅) solves problem (<ref>) with 𝔲 = 𝔲̅. We immediately notice that y̅_ℓ(𝔲̅) and p̅_ℓ are unique, even if 𝔲̅ is not unique; see <cit.>.
As in the continuous case, from (<ref>) it stems the following characterization for optimal controls 𝔲̅, for a.e. x∈Ω:
𝔲̅(x) = a if p̅_ℓ(x) > 0, 𝔲̅(x) ∈ [a,b] if p̅_ℓ(x) = 0, 𝔲̅(x) = b if p̅_ℓ(x) < 0.
From (<ref>) it follows that, if p̅_ℓ admits a zero level set of measure 0, then 𝔲̅(x)=a or 𝔲̅(x)=b for a.e. x∈Ω and thus 𝔲̅ is both unique and of bang-bang type.
Finally, since 𝔲̅ implicitly depends on 𝒯_ℓ, in what follows we shall adopt the notation 𝔲̅_ℓ.
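The characterization above also suggests how 𝔲̅_ℓ can be evaluated in practice once p̅_ℓ is available, even though the control itself is never discretized: at any point one simply inspects the sign of the discrete adjoint. The snippet below is a minimal sketch of that evaluation with hypothetical adjoint values; it is independent of any particular finite element library.

a, b = -1.0, 1.0  # control bounds of U_ad

import numpy as np

def control_from_adjoint(p_values, tol=0.0):
    # u = a where p_ell > 0, u = b where p_ell < 0; on the (measure-zero)
    # set where p_ell vanishes any admissible value is allowed, and we pick
    # the midpoint of [a, b] for definiteness.
    u = np.empty_like(p_values, dtype=float)
    u[p_values > tol] = a
    u[p_values < -tol] = b
    u[np.abs(p_values) <= tol] = 0.5 * (a + b)
    return u

# hypothetical values of the discrete adjoint at some quadrature points
p_quad = np.array([0.3, -0.1, 0.0, 0.75, -2.0])
print(control_from_adjoint(p_quad))  # -> [-1.  1.  0. -1.  1.]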
§ A POSTERIORI ERROR ESTIMATES
In the present section we design an error estimator for optimal control problem (<ref>)–(<ref>).
We explore its reliability properties in section <ref>, whereas its local efficiency is proved in section <ref>.
§.§ Reliability
The upcoming analysis mainly hinges on approximations of the error between a solution to the discrete optimal control problem and suitable auxiliary variables that we shall define in what follows.
§.§.§ Auxiliary upper bounds
Let 𝔲̅_ℓ be a solution of the semi-discrete optimal control problem associated to a mesh 𝒯_ℓ∈𝕋. We introduce the auxiliary variable y_𝔲̅_ℓ∈ H_0^1(Ω), defined as the unique solution to
(∇ y_𝔲̅_ℓ,∇ v)_Ω
=
(𝔲̅_ℓ, v)_Ω ∀ v ∈ H_0^1(Ω).
We note that the discrete optimal state y̅_ℓ(𝔲̅_ℓ) corresponds to the finite element approximation of y_𝔲̅_ℓ in 𝕍_ℓ.
Hence, since we have assumed that Ω is convex, we can use the results from <cit.> to obtain that
y_𝔲̅_ℓ - y̅_ℓ(𝔲̅_ℓ)_Ω≲η_st,2,
where the error estimator η_st,2 and its local error indicators are defined by
η_st,2^2:=∑_T∈_ℓE_st,T^2,
E_st,T^2:=h_T^4𝔲̅_ℓ_T^2 + ∑_e∈ℰ_Th_T^3∇y̅_ℓ(𝔲̅_ℓ) ·𝐧|_e_e^2.
We define p_y̅_ℓ∈ H_0^1(Ω) as the unique solution to
(∇ v, ∇ p_y̅_ℓ)_Ω
=
(y̅_ℓ(𝔲̅_ℓ) - y_Ω, v)_Ω ∀ v ∈ H_0^1(Ω),
and immediately note that p̅_ℓ corresponds to the finite element approximation of p_y̅_ℓ in 𝕍_ℓ. Using again that Ω is convex, we invoke <cit.> to obtain that
p_y̅_ℓ - p̅_ℓ_Ω≲η_adj,2,
where the error estimator η_adj,2 and its corresponding local indicators are given by
η_adj,2^2:=∑_T∈_ℓ E_adj,T^2, E_adj,T^2:=h_T^4y̅_ℓ(𝔲̅_ℓ) - y_Ω_T^2 + ∑_e∈ℰ_Th_T^3∇p̅_ℓ·𝐧|_e_e^2.
We introduce, for each T∈_ℓ, the following a posteriori local error indicators
E_adj,∞,T:= h_T^2-d/2y̅_ℓ(𝔲̅_ℓ) - y_Ω_T + h_Tmax_e∈ℰ_T∇p̅_ℓ·𝐧|_e_L^∞(e),
and the error estimator η_adj,∞:=max_T∈𝒯_ℓE_adj,∞,T. The proof of the following reliability estimate can be found in <cit.>:
p_y̅_ℓ - p̅_ℓ_L^∞(Ω)≲ι_ℓη_adj,∞,
where the term ι_ℓ is defined by
ι_ℓ:= |log(max_T∈𝒯_ℓ1/h_T)|.
An important feature of the local error indicator in (<ref>) is that it incorporates the L^2(T)-norm of the element residual instead of the L^∞(T)-norm, which is the common consideration in the literature; see, e.g., <cit.>.
In particular, the error estimator η_adj,∞ allows for a pointwise a posteriori error analysis with unbounded right-hand sides, cf. <cit.>.
This is of importance since, as it can be observed in (<ref>), its indicators contain the term y̅_ℓ(𝔲̅_ℓ) - y_Ω which is not necessarily bounded.
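All three families of indicators are computable from the same local quantities: the element diameter h_T, element residual norms, and norms of the normal jumps on the sides of T. The following sketch assembles them for a single element; the per-element data passed in are hypothetical placeholders for the quantities obtained from a discrete solution.

def local_indicators(h_T, d, res_st_T, res_adj_T, jump_y_edges, jump_p_edges):
    # h_T          : element diameter
    # d            : spatial dimension (2 or 3)
    # res_st_T     : L2(T)-norm of u_ell (state element residual)
    # res_adj_T    : L2(T)-norm of y_ell(u_ell) - y_Omega (adjoint element residual)
    # jump_y_edges : L2(e)-norms of the normal jump of grad y_ell over the sides of T
    # jump_p_edges : pairs (L2(e)-norm, Linf(e)-norm) of the normal jump of grad p_ell
    E_st_sq = h_T**4 * res_st_T**2 + h_T**3 * sum(j**2 for j in jump_y_edges)
    E_adj_sq = h_T**4 * res_adj_T**2 + h_T**3 * sum(j2**2 for (j2, _) in jump_p_edges)
    E_adj_inf = h_T**(2.0 - d / 2.0) * res_adj_T + h_T * max(ji for (_, ji) in jump_p_edges)
    return E_st_sq, E_adj_sq, E_adj_inf

# hypothetical data for one triangle (d = 2)
print(local_indicators(0.05, 2, 1.0, 0.7,
                       [0.2, 0.1, 0.3],
                       [(0.15, 0.4), (0.1, 0.2), (0.05, 0.1)]))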
§.§.§ Reliability estimates
Let us define the set S:={x∈Ω : p̅(x) ≠ 0}⊂Ω, i.e., the support of p̅.
In what follows we shall assume that (cf. <cit.>)
∃β∈ (0,1], ∃ℭ > 0, ∀ε > 0 such that |{x∈ S : |p̅(x)|≤ε}|≤ℭε^β,
where, given a set A⊂Ω, |A| denotes the Lebesgue measure of it.
Let u̅∈𝕌_ad be the unique solution to problem (<ref>)–(<ref>) with y̅ and p̅ being the corresponding state and adjoint state variables, respectively. Let 𝔲̅_ℓ∈𝕌_ad be a solution to the semi-discrete problem with y̅_ℓ(𝔲̅_ℓ) and p̅_ℓ being the corresponding discrete state and discrete adjoint state variables, respectively. If assumption (<ref>) holds, then
y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2 + p̅ - p̅_ℓ_L^∞(Ω)^2≲η_st,2^2 + η_adj,2 + ι_ℓ^β + 1η_adj,∞^β + 1 + ι_ℓ^2/2 - βη_adj,∞^2/2 - β ,
and
u̅ - 𝔲̅_ℓ_L^1(S)^2≲η_st,2^2β + η_adj,2^β + ι_ℓ^β(β + 1)η_adj,∞^β(β + 1) + ι_ℓ^2β/2 - βη_adj,∞^2β/2 - β .
The hidden constants are independent of the continuous and discrete optimal variables, the size of the elements in the mesh 𝒯_ℓ, and #𝒯_ℓ.
We invoke assumption (<ref>) and the same arguments that lead to inequality (2.13) in <cit.> to obtain
u̅ - 𝔲̅_ℓ_L^1(S)≲p̅ - p̅_ℓ_L^∞(Ω)^β.
We now proceed on the basis of three steps.
Step 1. (estimation of y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω) We consider u=𝔲̅_ℓ in (<ref>) and u=u̅ in (<ref>).
Adding the resulting inequalities and using the auxiliary term p_y̅_ℓ, defined as the solution to (<ref>), we arrive at
0 ≤ (p̅ - p̅_ℓ, 𝔲̅_ℓ - u̅)_Ω = (p_y̅_ℓ - p̅_ℓ, 𝔲̅_ℓ - u̅)_Ω + (p̅ - p_y̅_ℓ, 𝔲̅_ℓ - u̅)_Ω =: 𝐈 + 𝐈𝐈.
We now bound the terms 𝐈 and 𝐈𝐈 in (<ref>).
To estimate 𝐈, we use (<ref>) and the a posteriori error estimates (<ref>) and (<ref>):
𝐈 = (p_y̅_ℓ - p̅_ℓ,𝔲̅_ℓ - u̅)_Ω∖ S + (p_y̅_ℓ - p̅_ℓ,𝔲̅_ℓ - u̅)_S
≲p_y̅_ℓ - p̅_ℓ_Ω + p_y̅_ℓ - p̅_ℓ_L^∞(Ω)𝔲̅_ℓ - u̅_L^1(S)
≲η_adj,2 + ι_ℓη_adj,∞p̅ - p̅_ℓ_L^∞(Ω)^β.
To control the term p̅ - p̅_ℓ_L^∞(Ω)^β in (<ref>), we use the triangle inequality and the a posteriori error estimate (<ref>) to arrive at
p̅ - p̅_ℓ_L^∞(Ω)^β≲ p̅ - p_y̅_ℓ_L^∞(Ω)^β + p_y̅_ℓ - p̅_ℓ_L^∞(Ω)^β
≲ p̅ - p_y̅_ℓ_L^∞(Ω)^β + ι_ℓ^βη_adj,∞^β.
Now, we note that p̅ - p_y̅_ℓ∈ H_0^1(Ω) solves
(∇ v, ∇ (p̅ - p_y̅_ℓ))_Ω = (y̅ - y̅_ℓ(𝔲̅_ℓ), v)_Ω ∀ v∈ H_0^1(Ω).
Hence, the convexity of Ω, the continuous embedding H^2(Ω) ↪ C(Ω), and the stability of problem (<ref>) yield the bound p̅ - p_y̅_ℓ_L^∞(Ω)^β≲p̅ - p_y̅_ℓ_H^2(Ω)^β≲y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^β. Using the latter in (<ref>) results in
p̅ - p̅_ℓ_L^∞(Ω)^β≲y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^β + ι_ℓ^βη_adj,∞^β.
Thus, the use (<ref>) and Young's inequality in (<ref>) with exponents 𝗉=2β and 𝗊=22-β, yield
𝐈≤ C(η_adj,2 + ι_ℓ^β + 1η_adj,∞^β + 1 + ι_ℓ^2/2 - βη_adj,∞^2/2 - β) + 1/4y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2, C > 0.
We notice that if ι_ℓ>1 and η_adj,∞<1, then
ι_ℓ^β + 1η_adj,∞^β + 1 + ι_ℓ^2/2 - βη_adj,∞^2/2 - β≤ 2 ι_ℓ^β + 1η_adj,∞^2/2-β≤ 2ι_ℓ^2η_adj,∞^2/2-β,
due to the fact that 2/2 - β≤β + 1 ≤ 2 when β∈(0,1].
To estimate 𝐈𝐈, we first note that y_𝔲̅_ℓ - y̅∈ H_0^1(Ω) solves
(∇ (y_𝔲̅_ℓ - y̅), ∇ v)_Ω = (𝔲̅_ℓ - u̅, v)_Ω ∀ v∈ H_0^1(Ω).
Then, replacing v = p̅ - p_y̅_ℓ in (<ref>) and v = y_𝔲̅_ℓ - y̅ in (<ref>) we obtain the identity (𝔲̅_ℓ - u̅, p̅ - p_y̅_ℓ)_Ω = (y̅ - y̅_ℓ(𝔲̅_ℓ), y_𝔲̅_ℓ - y̅)_Ω, which, in turns, yields
𝐈𝐈 = (y̅ - y̅_ℓ(𝔲̅_ℓ), y_𝔲̅_ℓ - y̅_ℓ(𝔲̅_ℓ))_Ω - y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2.
This identity, in light of Young's inequality and estimate (<ref>), allows us to obtain
𝐈𝐈≤ Cη_st,2^2 - 1/2y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2, C>0.
We have thus proved, in view of (<ref>), (<ref>), and (<ref>), the error bound
y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2≲η_st,2^2 + η_adj,2 + ι_ℓ^β + 1η_adj,∞^β + 1 + ι_ℓ^2/2 - βη_adj,∞^2/2 - β.
Step 2. (estimation of p̅ - p̅_ℓ_L^∞(Ω)) The use of triangle inequality, a posteriori error estimate (<ref>), and the bound p̅ - p_y̅_ℓ_L^∞(Ω)≲y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω results in
p̅ - p̅_ℓ_L^∞(Ω)^2≲p̅ - p_y̅_ℓ_L^∞(Ω)^2 + p_y̅_ℓ - p̅_ℓ_L^∞(Ω)^2≲y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2 + ι_ℓ^2η_adj,∞^2.
Consequently, using (<ref>) in the latter estimate we conclude that
p̅ - p̅_ℓ_L^∞(Ω)^2≲η_st,2^2 + η_adj,2 + ι_ℓ^β + 1η_adj,∞^β + 1 + ι_ℓ^2/2 - βη_adj,∞^2/2 - β.
Step 3. (estimation of u̅ - 𝔲̅_ℓ_L^1(S)) To obtain the estimate (<ref>) it suffices to use the bound (<ref>) in (<ref>).
When β=1 in (<ref>), we obtain the following simplified upper bound for the total approximation error:
u̅ - 𝔲̅_ℓ_L^1(S)^2 + y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2 + p̅ - p̅_ℓ_L^∞(Ω)^2≲η_st,2^2 + ι_ℓ^2η_adj,∞^2 + η_adj,2.
A sufficient condition to ensure that assumption (<ref>) is fulfilled with β=1 was given in <cit.>.
§.§ Efficiency
In this section, we study efficiency properties for the local a posteriori error estimators η_st,2, η_adj,2, and η_adj,∞, defined in section <ref>.
Before proceeding with the analysis, we introduce some notation: for an edge, triangle or tetrahedron G, let 𝒱(G) be the set of vertices of G. We also introduce, given T ∈_ℓ and e ∈ℰ_T, the standard interior and edge bubble functions φ_T and φ_e, respectively; see, e.g., <cit.>.
Let T ∈_ℓ and e ∈ℰ_T. Recall that 𝒩_e denotes the patch composed by the two elements T^+ and T^- sharing e. We introduce the following edge/face bubble function
ψ_e|_𝒩_e=d^4d(∏_z∈𝒱(e)ϕ_z^T^+ϕ_z^T^-)^2,
where, for z∈𝒱(e), ϕ_z^T^± denote the barycentric coordinates of T^± associated with the vertex z, which are understood as functions over 𝒩_e.
The bubble function ψ_e has the following properties: ψ_e∈ℙ_4d(𝒩_e), ψ_e∈ C^2(𝒩_e), and ψ_e = 0 on ∂𝒩_e.
In addition, it satisfies
∇ψ_e = 0 on ∂𝒩_e, ∇ψ_e·𝐧= 0 on e.
Given T∈𝒯_ℓ, let Π_T : L^2(T) →ℙ_0(T) be the orthogonal projection operator into constant functions over T. In particular, for v∈ L^2(Ω), we have Π_T v := (1/|T|)∫_T v and Π_T v_T≤v_T.
With all these ingredients at hand, we are ready to prove the local efficiency of the aforementioned error estimators. We start with η_st,2, defined in (<ref>).
Let u̅∈𝕌_ad be the unique solution to problem (<ref>)–(<ref>) with y̅ being the corresponding optimal state. Let 𝔲̅_ℓ∈𝕌_ad be a solution to the semi-discrete problem with y̅_ℓ(𝔲̅_ℓ) being the corresponding discrete state variable.
Then, for T∈𝒯_ℓ, the local error indicator E_st,T, defined as in (<ref>), satisfies
E_st,T^2
≲y̅ - y̅_ℓ(𝔲̅_ℓ)_𝒩_T^2
+
h_T^4 - du̅-𝔲̅_ℓ_L^1(𝒩_T)^2
+
∑_T'∈𝒩_Th_T^4𝔲̅_ℓ - Π_T𝔲̅_ℓ_T'^2,
where 𝒩_T is defined as in (<ref>) and the hidden constant is independent of continuous and discrete optimal variables, the size of the elements in the mesh 𝒯_ℓ, and #𝒯_ℓ.
Let v ∈ H_0^1(Ω) be such that v|_T∈ C^2(T) for all T∈𝒯_ℓ. Use v as a test function in (<ref>) and apply elementwise integration by parts to obtain
( ∇ (y̅ - y̅_ℓ(𝔲̅_ℓ)), ∇ v)_Ω - (u̅ - 𝔲̅_ℓ,v)_Ω
=
∑_T∈_ℓ ( 𝔲̅_ℓ, v )_T - ∑_e∈ℰ_ℓ(∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e,v)_e.
At the same time, we integrate by parts, again, to arrive at
(∇ (y̅ - y̅_ℓ(𝔲̅_ℓ)), ∇ v)_Ω = ∑_e∈ℰ_ℓ (∇ v·𝐧|_e, y̅-y̅_ℓ(𝔲̅_ℓ))_e - ∑_T∈_ℓ (y̅-y̅_ℓ(𝔲̅_ℓ), Δ v)_T.
Hence, combining the two previous identities we obtain, for all v ∈ H_0^1(Ω) such that v|_T∈ C^2(T) for all T∈𝒯_ℓ, the equality
∑_e∈ℰ_ℓ (∇ v·𝐧|_e, y̅-y̅_ℓ(𝔲̅_ℓ))_e - ∑_T∈_ℓ (y̅-y̅_ℓ(𝔲̅_ℓ), Δ v)_T - (u̅ - 𝔲̅_ℓ,v)_Ω
=
∑_T∈_ℓ[ (Π_T𝔲̅_ℓ, v )_T + (𝔲̅_ℓ - Π_T𝔲̅_ℓ, v )_T] - ∑_e∈ℰ_ℓ ( ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e, v)_e.
We now proceed on the basis of two steps.
Step 1. (estimation of h_T^4𝔲̅_ℓ_T^2) Let T∈_ℓ.
An application of the triangle inequality gives
h_T^4𝔲̅_ℓ_T^2≲
h_T^4Π_T𝔲̅_ℓ_T^2 + h_T^4𝔲̅_ℓ - Π_T𝔲̅_ℓ_T^2.
To estimate h_T^4Π_T𝔲̅_ℓ_T^2, we evaluate v = φ_T^2Π_T𝔲̅_ℓ in (<ref>).
Then, we use that ∇ (φ^2Π_T𝔲̅_ℓ)=0 on ∂ T, standard properties of the bubble function φ_T, and the inverse estimate φ_T^2Π_T𝔲̅_ℓ_L^∞(T)≲ h_T^-d/2φ_T^2Π_T𝔲̅_ℓ_T.
These arguments yield
Π_T𝔲̅_ℓ_T^2≲ y̅-y̅_ℓ(𝔲̅_ℓ)_TΔ (φ_T^2Π_T𝔲̅_ℓ)_T
+ (h_T^-d/2u̅ - 𝔲̅_ℓ_L^1(T) + 𝔲̅_ℓ - Π_T𝔲̅_ℓ_T)Π_T𝔲̅_ℓ_T.
Using that Δ (φ_T^2Π_T𝔲̅_ℓ) = Π_T(𝔲̅_ℓ)Δφ_T^2 in combination with properties of φ_T it follows that Δ (φ_T^2Π_T𝔲̅_ℓ)_T≲ h_T^-2Π_T𝔲̅_ℓ_T.
Thus, we obtain
h_T^4Π_T(𝔲̅_ℓ)_T^2≲y̅-y̅_ℓ(𝔲̅_ℓ)_T^2 + h_T^4-du̅ - 𝔲̅_ℓ_L^1(T)^2 + h_T^4𝔲̅_ℓ - Π_T𝔲̅_ℓ_T^2.
A combination of (<ref>) and (<ref>) concludes the estimation.
Step 2. (estimation of h_T^3∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e_e^2)
Let T∈_ℓ and e∈ℰ_T.
In what follows, we extend the jump term ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e, defined on e, to the patch 𝒩_e.
Such extension can be done, for example, by using a continuation operator as in <cit.>.
Hereinafter we make no distinction between the jump term and its extension.
We invoke the bubble function ψ_e, evaluate v = ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_eψ_e in (<ref>) and use that ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e∈ℝ, ψ_e ∈ H^2_0(𝒩_e), and ∇ψ_e·𝐧|_e = 0.
These, the fact Δ v = ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_eΔψ_e since ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e∈ℝ, and basic inequalities imply that
∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_eψ_e^1/2_e^2≲ ∑_T' ∈𝒩_e( h_T'^-2y̅-y̅_ℓ(𝔲̅_ℓ)_T' +𝔲̅_ℓ_T'.
. + h_T'^-d/2u̅ - 𝔲̅_ℓ_L^1(T')) h_T^1/2∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e_e,
upon using the bound ∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_eψ_e_T'≲ h_T^1/2∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e_e.
With these estimates at hand, we thus use standard bubble functions arguments, estimate (<ref>), and the shape regularity property of the family {𝒯_ℓ} to arrive at
h_T^3/2∇y̅_ℓ(𝔲̅_ℓ)·𝐧|_e_e≲∑_T'∈𝒩_e
(
y̅-y̅_ℓ(𝔲̅_ℓ)_T' + h_T^2𝔲̅_ℓ_T' + h_T^2-d/2u̅ - 𝔲̅_ℓ_L^1(T')
),
which concludes the proof.
We now continue with the study of the local efficiency properties of the estimator η_adj,2 defined in (<ref>).
Let u̅∈𝕌_ad be the unique solution to problem (<ref>)–(<ref>) with y̅ and p̅ being the corresponding optimal state and adjoint state, respectively.
Let 𝔲̅_ℓ∈𝕌_ad be a solution to the semi-discrete problem with y̅_ℓ(𝔲̅_ℓ) and p̅_ℓ being the corresponding discrete state and adjoint state variables, respectively.
Then, for T∈𝒯_ℓ, the local error indicator E_adj,T, defined as in (<ref>), satisfies that
E_adj,T^2
≲
h_T^dp̅-p̅_ℓ_L^∞(𝒩_T)^2 + h_T^4y̅-y̅_ℓ(𝔲̅_ℓ)_𝒩_T^2 + ∑_T'∈𝒩_Th_T^4y_Ω - Π_T(y_Ω)_T'^2,
where 𝒩_T is defined as in (<ref>) and the hidden constant is independent of continuous and discrete optimal variables, the size of the elements in the mesh 𝒯_ℓ, and #𝒯_ℓ.
Similar arguments to the ones that lead to (<ref>) yield, for every v ∈ H_0^1(Ω) such that v|_T∈ C^2(T) (T∈𝒯_ℓ), the identity
∑_e∈ℰ_ℓ (∇ v·𝐧|_e, p̅-p̅_ℓ)_e - ∑_T∈_ℓ (p̅-p̅_ℓ, Δ v)_T - (y̅ - y̅_ℓ(𝔲̅_ℓ),v)_Ω
=
∑_T∈_ℓ( Π_Ty_Ω - y_Ω,v)_T +
∑_T∈_ℓ ( y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω, v )_T - ∑_e∈ℰ_ℓ ( ∇p̅_ℓ·𝐧|_e, v)_e.
We now estimate the terms in (<ref>) on the basis of two steps.
Step 1. (estimation of h_T^4y̅_ℓ(𝔲̅_ℓ) - y_Ω_T^2) Let T∈_ℓ. The use of the triangle inequality yields
h_T^4y̅_ℓ(𝔲̅_ℓ) - y_Ω_T^2≲
h_T^4y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2
+
h_T^4Π_Ty_Ω - y_Ω_T^2.
We recall that Π_T denotes the orthogonal projection operator into constant functions over T.
To control h_T^4y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2 in the above inequality, we invoke the interior bubble function φ_T and choose v=φ_T^2(y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω) in (<ref>). Hence, using standard properties of φ_T and the inequality Δ(φ_T^2(y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω))_L^1(T)≲
h_T^d/2-2y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T, which follows from the inverse estimate <cit.>, we obtain
y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T≲Π_Ty_Ω - y_Ω_T + y̅ - y̅_ℓ(𝔲̅_ℓ)_T + h_T^d/2-2p̅ - p̅_ℓ_L^∞(T),
from which we conclude that
h_T^4y̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2≲
h_T^4Π_Ty_Ω - y_Ω_T^2
+ h_T^4y̅ - y̅_ℓ(𝔲̅_ℓ)_T^2 + h_T^dp̅ - p̅_ℓ_L^∞(T)^2.
The use of the latter in (<ref>) results in the desired bound.
Step 2. (estimation of h_T^3∇p̅_ℓ·𝐧|_e_e^2)
Let T∈_ℓ and e∈ℰ_T.
As in Lemma <ref>, we extend the jump term ∇p̅_ℓ·𝐧|_e to the patch 𝒩_e, making no distinction between the jump term and its extension.
We choose v = ∇p̅_ℓ·𝐧|_eψ_e in (<ref>) and use that ∇p̅_ℓ·𝐧|_e∈ℝ, ψ_e ∈ H^2_0(𝒩_e), and ∇ψ_e·𝐧|_e = 0.
Consequently, basic inequalities imply that
∇p̅_ℓ·𝐧|_e_e^2≲ ∑_T'∈𝒩_e(h_T^d/2-2p̅ - p̅_ℓ_L^∞(T')
+ y̅ - y̅_ℓ(𝔲̅_ℓ)_T' + y̅_ℓ(𝔲̅_ℓ) - y_Ω_T')ψ_e∇p̅_ℓ·𝐧|_e_T'.
Therefore, using that ψ_e∇p̅_ℓ·𝐧|_e_T'≲ h_T^1/2∇p̅_ℓ·𝐧|_e_e in combination with estimate (<ref>), we conclude that
h_T^3∇p̅_ℓ·𝐧|_e_e^2≲ ∑_T'∈𝒩_e(h_T^dp̅ - p̅_ℓ_L^∞(T')^2
+ h_T^4y̅ - y̅_ℓ(𝔲̅_ℓ)_T'^2 + h_T^4Π_Ty_Ω - y_Ω_T'^2),
which ends the proof.
We conclude with the study of the local efficiency properties of the estimator η_adj,∞ defined in (<ref>).
In the framework of Lemma <ref> we have that, for T∈𝒯_ℓ, the local error indicator E_adj,∞,T, defined as in (<ref>), satisfies
E_adj,∞,T^2
≲p̅-p̅_ℓ_L^∞(𝒩_T)^2 + h_T^4 - dy̅-y̅_ℓ(𝔲̅_ℓ)_𝒩_T^2 + ∑_T'∈𝒩_Th_T^4-dy_Ω - Π_Ty_Ω_T'^2,
where 𝒩_T is defined as in (<ref>) and the hidden constant is independent of continuous and discrete optimal variables, the size of the elements in the mesh 𝒯_ℓ, and #𝒯_ℓ.
We proceed on the basis of two steps.
Step 1. (estimation of h_T^4-dy̅_ℓ(𝔲̅_ℓ) - y_Ω_T^2) Let T∈_ℓ. To estimate this term in (<ref>), we first use a triangle inequality and obtain
h_T^4-dy̅_ℓ(𝔲̅_ℓ) - y_Ω_T^2≤
h_T^4-dy̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2
+
h_T^4-dΠ_Ty_Ω - y_Ω_T^2.
The estimate for h_T^4-dy̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2 stems from (<ref>) after multiplying by h^-d_T:
h_T^4-dy̅_ℓ(𝔲̅_ℓ) - Π_Ty_Ω_T^2≲p̅ - p̅_ℓ_L^∞(T)^2 + h_T^4-dy̅ - y̅_ℓ(𝔲̅_ℓ)_T^2 + h_T^4-dΠ_Ty_Ω - y_Ω_T^2.
Step 2. (estimation of h_T^2∇p̅_ℓ·𝐧|_e_L^∞(e)^2)
Let T∈_ℓ and e∈ℰ_T.
Since ∇p̅_ℓ·𝐧|_e∈ℝ, we deduce that
∇p̅_ℓ·𝐧|_e_L^∞(e)^2
=
|∇p̅_ℓ·𝐧|_e|^2
=
|e|^-1∇p̅_ℓ·𝐧|_e_e^2,
where |e| denotes the measure of e.
In view of the shape regularity of the mesh _ℓ we have that |e|≈ h_T^d-1 and consequently h_T^2+d∇p̅_ℓ·𝐧|_e_L^∞(e)^2≈
h_T^3 ∇p̅_ℓ·𝐧|_e_e^2.
Hence, using the latter in estimate (<ref>) and multiplying by h_T^-d, we arrive at
h_T^2∇p̅_ℓ·𝐧|_e_L^∞(e)^2≲ ∑_T'∈𝒩_e(p̅ - p̅_ℓ_L^∞(T')^2
+ h_T^4-dy̅ - y̅_ℓ(𝔲̅_ℓ)_T'^2 + h_T^4-dΠ_Ty_Ω - y_Ω_T'^2),
which concludes the proof.
The next result is a direct consequence of Lemmas <ref>, <ref> and <ref>.
In the framework of Lemma <ref> we have, for T∈𝒯_ℓ, that
E_st,T^2 + E_adj,T^2 +
E_adj,∞,T^2
≲
(1 + h_T^4 + h_T^4-d)y̅ - y̅_ℓ(𝔲̅_ℓ)_𝒩_T^2
+ (1 + h_T^d)p̅-p̅_ℓ_L^∞(𝒩_T)^2
+ h_T^4 - du̅-𝔲̅_ℓ_L^1(𝒩_T)^2 +
∑_T'∈𝒩_Th_T^4𝔲̅_ℓ - Π_T𝔲̅_ℓ_T'^2 + ∑_T'∈𝒩_T(h_T^4 + h_T^4-d)y_Ω - Π_T(y_Ω)_T'^2,
where 𝒩_T is defined as in (<ref>) and the hidden constant is independent of continuous and discrete optimal variables, the size of the elements in the mesh 𝒯_ℓ, and #𝒯_ℓ.
§ NUMERICAL EXAMPLES
In this section we conduct a series of numerical examples in 2D that support our theoretical findings and illustrate the performance of the error estimator
E^2
:=
η_st,2^2 + η_adj,∞^2 + η_adj,2^2
that we proposed and analyzed in section <ref>.
We immediately note that we have considered the instance when β = 1 (cf. Remark <ref>) in the total error estimator E.
We shall consider this configuration for E in all the following numerical examples.
In order to simplify the construction of exact optimal solutions, we have incorporated an extra forcing term f∈ L^2(Ω) in the state equation.
The right-hand side of the state equation reads now as follows: (f + u̅, v)_Ω.
Moreover, in sections <ref> and <ref> below, we go beyond the presented theory and perform numerical experiments with a non-convex domain.
The considered numerical examples have been carried out with the help of a code that we implemented using .
When assembling all system matrices we have used exact integration whereas right-hand sides, approximation errors, and error indicators are computed by a quadrature formula which is exact for polynomials of degree 19.
For a given partition _ℓ, we seek y̅_ℓ(𝔲̅_ℓ) ∈𝕍_ℓ, p̅_ℓ∈𝕍_ℓ, and 𝔲̅_ℓ∈ U_ad that solve (<ref>), (<ref>), and (<ref>).
We solve such a nonlinear system of equations by using the solution technique devised in <cit.>.
Once a discrete solution is obtained, we compute the error indicator
E_T := (E_st,T^2 + E_adj,T^2 + E_adj,∞,T^2)^1/2
to drive the adaptive procedure described in Algorithm <ref>.
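For readers unfamiliar with such loops, a generic SOLVE–ESTIMATE–MARK–REFINE iteration driven by the indicators E_T is sketched below. The bulk (Dörfler) marking with parameter θ shown here is one common choice and is given only for illustration; it is not necessarily the exact strategy of Algorithm <ref>, and the callables solve, estimate, and refine stand in for library-specific routines.

def adaptive_loop(mesh, solve, estimate, refine, theta=0.5, tol=1e-4, max_iter=30):
    # solve(mesh)          -> discrete solution (y_ell, p_ell, u_ell)
    # estimate(mesh, sol)  -> list of pairs (element, local indicator E_T)
    # refine(mesh, marked) -> refined mesh
    sol = None
    for _ in range(max_iter):
        sol = solve(mesh)
        indicators = estimate(mesh, sol)
        total_sq = sum(E_T**2 for _, E_T in indicators)
        if total_sq**0.5 <= tol:
            break
        # bulk marking: mark a set of elements carrying a theta-fraction of the total error
        indicators.sort(key=lambda pair: pair[1], reverse=True)
        marked, acc = [], 0.0
        for T, E_T in indicators:
            marked.append(T)
            acc += E_T**2
            if acc >= theta * total_sq:
                break
        mesh = refine(mesh, marked)
    return mesh, sol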
For the numerical results, we define the total number of degrees of freedom Ndofs = 2 dim𝕍_ℓ.
Finally, we define the effectivity index
ℐ_eff:= E/(u̅ - 𝔲̅_ℓ_L^1(Ω)^2 + y̅ - y̅_ℓ(𝔲̅_ℓ)_Ω^2 + p̅ - p̅_ℓ_L^∞(Ω)^2)^1/2.
§.§ Exact solution on convex domain
Following <cit.> (see also <cit.>) we set Ω:=(0,1)^2, a=-1, b=1, and take f and y_Ω such that
y̅(x_1,x_2) = sin(π x_1)sin(π x_2),
p̅(x_1,x_2) = -sin(2π x_1)sin(2π x_2)/(8π^2), u̅(x_1,x_2) = -sign(p̅(x_1,x_2))
for (x_1,x_2)∈Ω.
In Figures <ref> and <ref> we display the results obtained for this example.
We show, in Fig. <ref>, experimental rates of convergence for each contribution of the total error when uniform and adaptive refinement are considered.
We observe that all the approximation errors obtained for both schemes exhibit optimal rates of convergence.
In Fig. <ref> we show experimental rates of convergence for all the individual contributions of the error estimator E (see (<ref>)) and the effectivity index, when adaptive refinement is considered.
When the total number of degrees of freedom increases, we observe that the effectivity index stabilizes at values between 3 and 5.
§.§ Exact solution on non-convex domain
We set Ω=(-1,1)^2∖[0,1)×(-1,0], a=-1, b=1, and take f and y_Ω such that the exact optimal state and adjoint state are given, in polar coordinates (ρ,ω) with ω∈[0,3π/2], by
y̅(ρ,ω) =sin(π(ρsin(ω)+1)/2)sin(π(ρcos(ω)+1)/2)ρ^2/3sin(2ω/3),
p̅(ρ,ω) = (0.5-ρ)y̅(ρ,ω).
The purpose of this example is to investigate the performance of the devised a posteriori error estimator when we violate the convexity assumption considered on the domain.
We present the results obtained for this example in Figures <ref>, <ref> and <ref>.
In Fig. <ref>, we display experimental rates of convergence for each contribution of the total error when uniform and adaptive refinement are considered.
We observe that the designed adaptive procedure outperforms uniform refinement.
In particular, it exhibits optimal rates of convergence for each contribution of the total error.
In Fig. <ref> we show experimental rates of convergence for all the individual contributions of the error estimator E and the effectivity index, when adaptive refinement is considered.
We observe that the effectivity index seems to stabilize at values between 3 and 4 when the total number of degrees of freedom increases.
Finally, in Fig. <ref>, we present the finite element approximations y̅_ℓ(𝔲̅_ℓ) and p̅_ℓ, and adaptively refined meshes.
It can be observed that the refinement is being concentrated on the re-entrant corner (0, 0).
§.§ Unknown solution on non-convex domain
We set Ω=(-1,1)^2∖[0,1)×[0,1), a=-0.5, b=0.5, and data
y_Ω(x_1,x_2) = 1/√(x_1^2 + x_2^2) - 10sin(x_1x_2), (x_1,x_2)∈Ω.
We note that y_Ω∈ L^2(Ω) ∖ L^∞(Ω).
In Fig. <ref> we show the results obtained for this example.
Similar conclusions to the ones presented for the example from section <ref> can be derived.
Particularly, we observe optimal experimental rates of convergence for all the individual contributions of the error estimator E within the adaptive loop.
siam
|
http://arxiv.org/abs/2409.03749v1 | 20240905175828 | Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron | [
"Christian Schmid",
"James M. Murray"
] | cs.LG | [
"cs.LG",
"q-bio.NC",
"stat.ML"
] | |
http://arxiv.org/abs/2409.03124v1 | 20240904232313 | Up, Up, and Away: Winds and Dynamical Structure as a Function of Altitude in the Ultra-Hot Jupiter WASP-76b | [
"Aurora Y. Kesseli",
"Hayley Beltz",
"Emily Rauscher",
"I. A. G. Snellen"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Aurora Y. Kesseli
[email protected]
Aurora Y. Kesseli (ORCID: 0000-0002-3239-5989)
These authors contributed equally to this work.
IPAC, Mail Code 100-22, Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA
Hayley Beltz (ORCID: 0000-0002-6980-052X)
These authors contributed equally to this work.
Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Emily Rauscher (ORCID: 0000-0003-3963-9672)
Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA
I.A.G. Snellen (ORCID: 0000-0003-1624-3667)
Leiden Observatory, Leiden University, Postbus 9513, 2300 RA, Leiden, The Netherlands
§ ABSTRACT
Due to the signal strengths offered by the newest high-resolution spectrographs on 10-m class telescopes, exploring the 3D nature of exoplanets is possible with an unprecedented level of precision.
In this paper, we present a new technique to probe the vertical structure of exoplanetary winds and dynamics using ensembles of planet absorption lines of varying opacity, and apply it to the well-studied ultra-hot Jupiter WASP-76b. We then compare these results to state-of-the-art global circulation models (GCMs) with varying magnetic drag prescriptions. We find that the known asymmetric velocity shift in Fe I absorption during transit persists at all altitudes, and observe tentative trends for stronger blueshifts and more narrow line profiles deeper in the atmosphere.
By comparing three different model prescriptions (a hydrodynamical model with no drag, a magnetic drag model, and a uniform drag model) we are able to rule out the uniform drag model due to inconsistencies with observed trends in the data.
We find that the magnetic model is slightly favored over the hydrodynamic model, and note that this 3-Gauss kinematic magnetohydrodynamical GCM is also favored when compared to low-resolution data.
Future-generation high-resolution spectrographs on extremely large telescopes (ELTs) will greatly increase signals and make methods like these possible with higher precision and for a wider range of objects.
§ INTRODUCTION
Transmission spectroscopy has dominated the study of exoplanet atmospheres since the first atmospheric detection <cit.>, and has moved the study of exoplanets from bulk properties to in-depth studies of composition, chemical processes, and dynamics in exoplanet atmospheres. Transmission spectroscopy performed at high spectral resolution has proven to be a powerful technique for detecting a range of atomic and molecular species that are often difficult to distinguish at lower spectral resolution <cit.>. By taking advantage of the fine radial velocity spacing of the spectrographs, high resolution spectroscopy has also been a valuable tool to uncover winds and atmospheric dynamics <cit.>.
Hot and ultra-hot Jupiters offer the best test beds for assessing both chemical models and dynamical global circulation models (GCMs) for exoplanetary atmospheres due to their large scale-heights and favorable star-to-planet radius ratios. Recent instrumentation advances from facilities such as JWST, and increased throughput and wavelength coverage of next generation high-resolution spectrographs like ESPRESSO <cit.>, MAROON-X <cit.>, KPF <cit.>, and IGRINS <cit.>, mean that more complex planetary models can be tested. GCMs are 3D numerical models that simulate the dynamics of planetary atmospheres. These models are computationally expensive, but can capture many of the complex 3D processes such as clouds <cit.>, scale height effects <cit.>, magnetic drag <cit.>, hydrogen dissociation <cit.>, and changing line contrasts due to a 3D temperature structure <cit.> that influence the atmosphere and resulting spectra.
The publicly available ESPRESSO dataset consisting of two transits of WASP-76b represents the highest-signal-to-noise-ratio high-resolution transmission spectrum that is publicly available to date. This dataset has been used in many papers due to its unprecedented quality, including the novel detection of asymmetric Fe I absorption during transit <cit.>, and subsequent detection and analysis of asymmetries in many other species <cit.>. It was also used to detect single absorption lines for a range of metals <cit.>, and in retrievals to produce phase- and spatially-resolved parameters (abundances, temperature-pressure profiles) for the first time <cit.>. Most recently, <cit.> also precisely constrained the abundances of eight different atomic species.
In this paper, we use the same WASP-76 b data and present a novel method to probe 3D atmospheric dynamics by exploring how radial velocities and line shapes change as a function of altitude (vertically). As was shown by <cit.>, measurements of how wind speeds and net Doppler shifts vary in altitude may help differentiate between different physical conditions input into GCMs, such as magnetic field strength and drag parameters. We aim to use the added vertical dimensionality offered by this technique to put more robust constraints on the physical processes (e.g., drag, magnetism, winds) occurring within WASP-76 b.
We then compare these results to output from our state-of-the-art GCMs. This paper focuses on the treatment of drag in GCM models as the drag prescription is the main process that sets the wind speeds, and in turn the main influencer on measured radial velocities. Here we use the RM-GCM[https://github.com/emily-rauscher/RM-GCM] which was the first GCM to be post-processed and directly compared to high resolution emission <cit.> and transmission <cit.> spectra. These works, among others <cit.> highlight the inherent “3Dness” present in high resolution spectroscopic observations. An extremely multidimensional aspect of our models is the inclusion of a spatially varying magnetic drag. This state-of-the-art active magnetic drag prescription allows for the strength of magnetic drag to be calculated based on local conditions. Most GCMs use a single “uniform” drag timescale to simulate the effects of magnetic drag <cit.>. This assumption is particularly problematic for ultra-hot Jupiters (UHJs), as it can imply unphysically large magnetic field strengths on the nightside of the planet <cit.>. Previous works have shown the inclusion of magnetic effects can alter predicted phase curves by reducing hotspot offsets and increasing amplitude due to an increased day-night contrast <cit.>. This is in agreement with observations of ultra-hot Jupiters, which often show smaller hotspot offsets than their hot Jupiter counterparts <cit.>. Additionally, the inclusion of magnetic drag can result in net Doppler shifts that can vary on the order of several kilometers per second in high resolution emission <cit.> and transmission <cit.> spectra. We conduct our analysis on a uniform drag as well as a drag-free model for comparison to the active drag case.
We present the data in Section <ref>, and our cross correlation analysis in Section <ref>. The GCM models are then described in Section <ref>. We present the results of the cross correlation analysis on the WASP-76 b ESPRESSO data in Section <ref> and <ref>, and compare the observed trends in altitude present in the data to state-of-the-art GCM models in Section <ref>. We discuss how our results fit in with previous work and potential future directions for GCMs in Section <ref>. Finally, we summarize our conclusions in Section <ref>.
§ OBSERVATIONS AND DATA ANALYSIS
We downloaded the 1D blaze-corrected, stitched spectra from the Data and Analysis Center for Exoplanets (DACE) database[<https://dace.unige.ch/dashboard/index.html>], which were reduced with version 1.3.2 of the ESPRESSO pipeline. More details on the observing strategy are given in <cit.>. Before any cleaning or analysis steps, we removed contamination by telluric H_2O and O_2 lines using molecfit <cit.>, which has proven adept at removing the shallow absorption lines that mainly permeate the optical portion of the spectrum <cit.>.
We outline the general cleaning steps we performed here, but refer the reader to <cit.> for more details, as the data presented here are the same as those used in that study. In order to remove the contribution from the host star so that the much weaker signal from the exoplanet can be seen, we created a uniform spectral time series along both the wavelength and flux axes. We performed the cleaning steps on each of the two nights separately, and then combined the nights after cross correlation. Using the host star's known radial velocity induced by the planet <cit.> and the system velocity <cit.> we shifted all of the spectra to rest. We then performed a simple 5-σ clipping to remove any spurious pixels due to cosmic rays. In order to preserve the relative flux values between different parts of the spectrum while removing any broadband noise or slight deviations in the shape of the blaze functions across observations, we placed all the spectra on a “common" blaze in a method similar to <cit.>. Finally, we interpolated all the spectra onto a uniform wavelength grid.
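As an illustration of the rest-frame shift used above, the snippet below removes a known radial velocity from a single spectrum and resamples it onto a common wavelength grid. The spectrum and the velocity value are placeholders; in the actual analysis the shift combines the systemic velocity with the stellar reflex motion at each epoch.

import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def shift_to_rest(wave_obs, flux_obs, v_kms, common_grid):
    # Remove a radial velocity v (km/s) using the non-relativistic Doppler formula
    # and linearly resample the shifted spectrum onto a common wavelength grid.
    wave_rest = wave_obs / (1.0 + v_kms / C_KMS)
    return np.interp(common_grid, wave_rest, flux_obs)

# hypothetical single observation (wavelengths in Angstrom)
wave_obs = np.linspace(5000.0, 5010.0, 2000)
flux_obs = 1.0 - 0.3 * np.exp(-0.5 * ((wave_obs - 5005.0) / 0.05) ** 2)
v_star = -1.1  # placeholder for systemic velocity plus reflex motion, km/s
flux_rest = shift_to_rest(wave_obs, flux_obs, v_star, wave_obs)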
§.§ Cross Correlation Analysis
At this point the cleaned and uniform spectral time series are usually cross correlated with a model atmosphere created using the planet's parameters and one or more molecular or atomic species. This process has proven extremely successful at detecting a range of atoms and molecules in exoplanet atmospheres <cit.>. However, as discussed in detail in <cit.>, information on basic absorption line properties such as line amplitude, full width at half maximum (FWHM), or continuum dispersion are lost. Alternatively, a binary mask <cit.> can be used, which simply consists of a list of absorption line positions for the atom or molecule of interest. The main choice that affects the binary mask is the number of lines used, as some species have hundreds or even thousands of absorption lines. By including a large number of weak absorption lines that are unlikely to penetrate above the continuum of the planet, the signal will be diluted. This mask is then applied to the spectrum and any pixels that fall within the mask are averaged together (the weight of every mask pixel is treated the same and given a value of one in our analysis). In this way, the FWHM or amplitude of the resulting cross correlation function (CCF) from the binary mask is the average FWHM or amplitude of however many absorption lines were included in the mask. Hence, the amplitude of the co-added 1D CCF in the planet's rest frame can be expressed as an average excess absorption from a number of lines in parts per million (ppm) or percent, and is similar to what is reported for single lines resolved by high-resolution spectroscopy.
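Operationally, the binary-mask CCF amounts to averaging the residual flux at the Doppler-shifted positions of the mask lines for every trial velocity, with every mask line given unit weight. A minimal sketch of that operation is shown below; the spectrum, line list, and velocity grid are hypothetical stand-ins for the cleaned ESPRESSO residuals and the Fe I masks described in the following paragraphs.

import numpy as np

C_KMS = 2.998e5

def binary_mask_ccf(wave, flux, line_centers, velocities):
    # For each trial velocity, shift the mask line positions and average the flux there.
    ccf = np.zeros(len(velocities))
    for i, v in enumerate(velocities):
        shifted = line_centers * (1.0 + v / C_KMS)
        ccf[i] = np.mean(np.interp(shifted, wave, flux))
    return ccf

# hypothetical inputs
wave = np.linspace(4000.0, 7000.0, 300000)            # Angstrom
flux = np.ones_like(wave)                              # placeholder residual spectrum after cleaning
line_centers = np.array([4045.8, 4383.5, 5269.5])      # a few illustrative Fe I line positions
velocities = np.arange(-200.0, 200.25, 0.5)            # km/s
ccf = binary_mask_ccf(wave, flux, line_centers, velocities)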
Using this binary mask CCF approach, we created multiple masks from the opacity function of the atomic species Fe I. By comparing behaviors between different binary mask CCFs of the same chemical species, we attempt to isolate effects that are due to altitude from effects that may be present when CCFs of different species are compared to each other (i.e., chemical effects). The opacities were downloaded from the DACE opacity database[<https://dace.unige.ch/opacityDatabase>] and created using the open source HELIOS-K opacity calculator <cit.> and the Kurucz atomic line lists <cit.>. We created separate masks for the most opaque lines, the moderately opaque lines, and the weak lines. The most opaque lines become optically thick high up in the atmosphere and they trace the processes occurring at high altitudes, while the resulting CCF from the mask created from the lower opacity lines trace deeper in the atmosphere. This method allows us to directly probe different atmospheric depths which we compare to 3D atmospheric models.
The opacity model of Fe I, with the absorption lines separated into their three bins is shown in Figure <ref>. The lower boundary of the strong lines bin was chosen to approximately map to a pressure of 10^-5 bars, which corresponds to the top of the model atmosphere. As detailed in section <ref>, extending the model to higher altitudes is computationally difficult, and so this strong line bin does not have a model counterpart. For the other two bins, we chose the number of lines for each which led to an even split in signal in the resulting CCFs between the final ESPRESSO spectra and the mask (see SNR values in Table <ref>). To determine the lower bound of the weak lines bin, we looked for the point where adding more Fe lines into our CCF mask no longer increased the resulting signal to noise ratio (SNR), calculated by averaging the CCFs in planet's rest frame and comparing the peak value to noise far from the expected peak. By comparing the bottom and top panel of Figure <ref>, the lower bound of the weak line bin approximately maps to a pressure of 1 millibar, which is consistent with the cloud deck location from the retrievals in <cit.>. While the binary mask CCF approach is less sensitive to parameter choices such as temperature than a model CCF approach due to line shapes not being important, opacity is still temperature sensitive. We find that we need to adjust the values that are used for the bin cutoffs (e.g., bottom panel of Figure <ref>) when using opacities of different temperatures, but when the bins are chosen so that the heights of the resulting CCFs are consistent, the derived parameters (radial velocity, FWHM, etc.) are also consistent to within the quoted uncertainties.
We cross correlated each night of cleaned spectra with the three different binary masks. While the velocity bins for both nights were the same, the phase coverage and spacing on each night was different. We therefore interpolated the CCF grids onto a uniform phase-space grid, and then co-added the two together, weighting the contribution of each by their SNRs and the number of spectra taken during transit.
§ GCM MODELING
In this work, we post-processed three models of WASP-76b <cit.>. These are double-grey models using the RM-GCM <cit.>, with 65 vertical layers evenly spaced in log pressure, from 100 to 10^-5 bars, and a horizontal spectral resolution of T31, corresponding to roughly 4 degree spacing at the equator. In this paper, we focus on the effect of drag and differing drag prescriptions since the main observables we are comparing to are radial velocities and line shapes, which are measurements of winds and are dominated by the treatment of drag. The three models presented differ in their treatment of magnetic effects. The simplest case, the 0G/drag-free model, represents the hydrodynamics-only case. The uniform drag model applies a single global drag timescale of 10^4 s, chosen to match the strong drag case from <cit.>.
Our final model, the 3 G case, applies an active drag prescription, also known as a “kinematic MHD" method, and was first applied to hot Jupiters in <cit.>. This method applies a drag on the winds in the east-west direction <cit.> and with a timescale calculated based on local conditions, using the following expression from <cit.>:
τ_mag(B,ρ,T, ϕ) = 4 πρ η (ρ, T)/B^2 |sin(ϕ) |
where B is the chosen global magnetic field strength (in this case 3 G), ϕ is the latitude, ρ is the density, and η the magnetic resistivity, which is a strong function of temperature. The 3 G model is presented here as it most closely matched the Spitzer phase curve of the planet <cit.>. Compared to the drag-free models, previous work has shown that the consideration of active drag results in a variety of behaviors not seen in drag free or uniform drag models. The differences most relevant to this paper are as follows:
* The active drag model produces a different dayside upper atmosphere flow pattern. The 0 G model and uniform drag model show day to night flow centered on the dayside, with the uniform drag model showing slower wind speeds. Our active model shows flow moving mostly in the north-south direction, up and over the poles <cit.>.
* Increasing the strength of our magnetic field results in a decrease of the hotspot offset (until it rests at the substellar point) and increase the day-night temperature contrast <cit.>.
* Active drag models will show different Doppler shift trends in high-resolution emission <cit.> and transmission <cit.> spectra, compared to 0 G and uniform models of the same planet.
In Figure <ref> we show line of sight (LOS) velocities including the effects of both winds and rotation for the east and west limbs during ingress, midtransit, and egress. These are plotted as a function of altitude and thus the atmospheric extent is highly influenced by the temperature structure. The two black contours indicate the pressure levels probed by the middle (outer contour, roughly 10 μbar) and weak lines (inner contour, roughly 1 mbar), respectively. The strongest lines probe pressures not captured by the GCM and are not shown. These differences in velocity structure—between types of drag as well as pressure levels—influence the net Doppler shifts found in the postprocessed spectra (see Section <ref>). Although there are also varying temperature structures between the models, the winds will dominate the determination of the net Doppler shift.
We then post-processed the three GCMs to generate high resolution transmission spectra at R∼ 450,000. Using ray-striking radiative transfer <cit.>, we took into account 3D effects such as temperature structure and Doppler shifts from winds and rotation. To generate Fe opacity tables, we assumed solar-abundance equilibrium models <cit.> with line lists from <cit.>.
As the data are cleaned and processed in many steps, which masked or removed some fraction of pixels, we injected the model transmission spectra into the data at the same K_p and a systemic velocity of 100 km s^-1 (leading to an offset of 101 km s^-1 from the true planet's signal) in order to perform the same cleaning steps on the models and therefore accurately compare the models to the data. The model transmission spectra were generated for five phases: -0.04, -0.02, 0.0, 0.02, 0.04. For each model transmission spectrum, we interpolated the model onto the same wavelength grid as the data, and then followed convention and simply multiplied the data by the interpolated model transmission spectrum for the corresponding phase. We then performed the exact same cleaning steps on the data that had been injected with the model as we did for the data that were not injected with the model <cit.>.
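Schematically, the injection step amounts to Doppler-shifting the phase-appropriate model, interpolating it onto the data wavelength grid, and multiplying it into the observed spectrum before re-running the cleaning. The sketch below shows that operation; the velocity offset and the model arrays are placeholders for the injected K_p track and the post-processed GCM spectra.

import numpy as np

C_KMS = 2.998e5

def inject_model(wave_data, flux_data, wave_model, trans_model, v_shift_kms):
    # Shift the model by the planet velocity at this phase (plus the chosen
    # systemic offset), interpolate onto the data grid, and multiply it in.
    wave_shifted = wave_model * (1.0 + v_shift_kms / C_KMS)
    model_on_grid = np.interp(wave_data, wave_shifted, trans_model)
    return flux_data * model_on_grid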
§ RESULTS
We begin by discussing the trends we observe in Fe I in the atmosphere of WASP-76b, as Fe I shows the strongest signal. We then move on to comparing the Fe I trends to trends we detect in other atoms. Finally, we compare the trends we see in the data, with the trends we find from the injected GCM models.
§.§ Observed Fe I Trends with Altitude
We created three separate CCF grids from the separate strong-line, middle-line, and weak-line binary masks applied to the combined two nights of ESPRESSO data. Due to the slow rotation of the host star (v sin i=1.48 km s^-1) and misaligned orbit of the planet (λ=61.28^∘), contamination from the Doppler shadow due to the Rossiter-McLaughlin effect is confined to a small region around ±10 km s^-1 in the rest frame of the star. Due to known difficulties in correcting for the Rossiter-McLaughlin effect <cit.>, we simply choose to mask out this region of the CCF grids. We used the planet's well-known velocity semi-amplitude (K_p) of 196.5±1 km s^-1 from <cit.>, and shifted all of the cross correlation functions to rest. These results are shown in Figure <ref>, and the presence of Fe I lines in the planet's atmosphere in all three opacity bins is clearly detected as white residuals in the planet's rest frame (along 0 km s^-1). As expected, the CCF grid created using the strong-line binary mask shows the largest residual amplitudes, meaning that these lines probe higher in the atmosphere. The middle and weak-line masks show decreasing values in the residual amplitude as they probe deeper in the atmosphere.
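The rest-frame shift itself is a simple operation; a minimal sketch is given below for a circular orbit, where each exposure's CCF is re-sampled so that the expected planet velocity K_p sin(2πϕ) lands at 0 km s^-1 before co-adding. The array names are placeholders and the implementation is illustrative rather than the exact routine used.

```python
import numpy as np

def shift_to_planet_rest(ccf_grid, v_grid, phases, Kp=196.5, v_sys=0.0):
    """Shift a time series of CCFs into the planet rest frame and co-add.

    ccf_grid : 2D array (n_exposures, n_velocities) of CCF values
    v_grid   : 1D array of velocity lags [km/s], increasing
    phases   : 1D array of orbital phases (0 at mid-transit)
    """
    shifted = np.empty_like(ccf_grid)
    for i, phi in enumerate(phases):
        # planet radial velocity at this phase for a circular orbit
        v_planet = Kp * np.sin(2.0 * np.pi * phi) + v_sys
        # re-sample the CCF so the planet signal sits at 0 km/s
        shifted[i] = np.interp(v_grid, v_grid - v_planet, ccf_grid[i])
    # co-add all exposures into one rest-frame 1D CCF
    return shifted, shifted.mean(axis=0)
```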
The residual amplitudes in Figure <ref> are related to the planetary radius or number of scale heights above the continuum where the absorption occurs <cit.>. We converted the residual amplitudes to planetary radii using the following equation:
R_λ = √(1 + h/δ) R_p ,
where R_λ is the effective planetary radius above the continuum where the average absorption occurs, h is the measured residual amplitude, and δ is the white light transit depth of the planet. We also convert this radius to scale heights by assuming a scale height of 1500 km, as reported for the dayside in <cit.>.
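For reference, a minimal numerical version of this conversion is sketched below; the residual amplitude, transit depth, and planetary radius used in the example call are placeholder values only.

```python
import numpy as np

def effective_radius(h, delta, Rp=1.0):
    """Effective radius (in units of Rp) where the average absorption occurs.
    h     : residual CCF amplitude
    delta : white-light transit depth
    """
    return np.sqrt(1.0 + h / delta) * Rp

def n_scale_heights(R_lambda_km, Rp_km, H_km=1500.0):
    """Number of scale heights above the continuum, assuming H = 1500 km."""
    return (R_lambda_km - Rp_km) / H_km

# Hypothetical example: 0.05% residual amplitude, 1.1% transit depth,
# and an assumed planetary radius of ~1.85 Jupiter radii.
R_eff = effective_radius(h=5e-4, delta=0.011)   # in units of Rp
Rp_km = 1.85 * 69911.0
print(R_eff, n_scale_heights(R_eff * Rp_km, Rp_km))
```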
We combined each different CCF grid in time by co-adding all of the phases within transit to get three 1D CCFs in the planet's rest frame (Figure <ref>). Consistent with Figure <ref>, the CCF created from the mask that used the weak Fe lines only extends to about 1.01 R_p, whereas the strong line CCF extends to 1.06 R_p.
In the absence of any atmospheric dynamics and planetary rotation, we would expect the full phase-combined 1D CCFs to peak in the planet's rest frame, close to 0 km s^-1, but Figure <ref> shows that at all altitudes the planet's absorption is significantly blueshifted. As the signals are noisy and we do not know the underlying functional form of the CCFs, we simply fit Gaussians to each 1D CCF in the region around the observed signal (±80 km s^-1) using a standard Python curve-fitting routine to determine the average radial velocity shift and an uncertainty on that radial velocity for each bin (see Appendix <ref> for more information on the reliability of these values and uncertainties). We also calculated SNRs by taking the peak value of the Gaussian fit and dividing by the standard deviation far from the peak (outside of ±80 km s^-1). The measured radial velocities for each bin are plotted in Figure <ref> and the SNR of each CCF is reported in the caption of Figure <ref>. These measured radial velocities for each CCF show a significant trend where the Fe lines in the lower atmosphere appear to be more blueshifted, and as we look higher in the atmosphere the lines become less blueshifted.
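A compact sketch of this measurement step is given below, using scipy's curve_fit as one possible choice of fitting routine (the text does not tie the analysis to a specific package, so this is an assumption for illustration); the ±80 km s^-1 window and starting values are taken from the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma, offset):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2) + offset

def fit_ccf(v_grid, ccf, window=80.0):
    """Fit a Gaussian to a 1D CCF; return RV, its uncertainty, and the SNR."""
    core = np.abs(v_grid) <= window          # region around the observed signal
    wings = np.abs(v_grid) > window          # noise region far from the peak
    p0 = [ccf[core].max() - np.median(ccf[wings]), 0.0, 10.0, np.median(ccf[wings])]
    popt, pcov = curve_fit(gaussian, v_grid[core], ccf[core], p0=p0)
    rv, rv_err = popt[1], np.sqrt(pcov[1, 1])       # centre and its formal error
    snr = popt[0] / np.std(ccf[wings])              # peak over out-of-peak scatter
    return rv, rv_err, snr
```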
WASP-76 b shows a strong asymmetry between the absorption depth and the radial velocity shift measured in the first half of the transit versus the second half of the transit <cit.>, which can be seen clearly in Figure <ref>. This blueshift evolution over the course of the transit has been explained with GCMs <cit.> as being due to a combination of day-to-night winds and the hotter east limb (which is more blueshifted from tidally locked rotation) coming into view at the end of the transit. To try to isolate changes that are due to the vertical structure and not the change in viewing angle, we also split the full transit into two phase ranges that separated the beginning and end of the transit and co-added these two parts of the CCF grid separately (Figure <ref>). When we examine Figure <ref>, we see that the significant trend of blueshift decreasing with altitude observed in Figure <ref> is no longer clearly observed. The majority of the perceived increasing blueshift is due to the fact that at lower altitudes the signal from the beginning of the transit (ϕ_-) is drastically diminished and so the combined CCF from the whole transit is influenced more by the ϕ_+ phases which are more blueshifted.
We report the measured radial velocity shifts for the three altitude bins, separated by phase, in Table <ref>. We find highly consistent values and trends for the measured radial velocities at the beginning and end of the transit when we compare our values to those from previous studies of <cit.> and <cit.> even though we use a completely different method to either paper, demonstrating the reliability of the measured velocities and their lack of dependence on reduction method (see also Appendix <ref> for more discussion). Looking at how the measured radial velocities change with altitude, we find that the radial velocities at all altitudes are consistent within their uncertainties for the beginning of the transit (ϕ_-). For the ϕ_+ bin, we find that the radial velocities show a trend for stronger wind speeds deeper in the atmosphere (i.e. larger blueshifts are measured for the opacity bin that traces the deepest point in the atmosphere). In order to determine the significance of this trend we performed a linear regression, using a fitting routine chosen for its handling of uncertainties, and find a low-significance trend with a p-value of 0.02 or 2.3-σ. The fact that a trend is observed only during the end of the transit (ϕ_+) can be explained given that recent GCM modeling <cit.> and phase-resolved retrieval results for WASP-76 b <cit.> suggest that during the second half of the transit the evening side completely dominates and the morning side of the atmosphere cannot be resolved, while during the beginning of the transit both the morning and evening side of the atmosphere can be observed. Therefore, the ϕ_+ CCFs originate from a single hemisphere, making it easier to observe any trends within the data. The three bins trace altitudes of ∼1.02 to 1.1 R_p, or 1 to 4 scale heights, meaning that at most we measure a 1.15 km s^-1 change in the wind between these altitudes.
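For concreteness, a plain weighted least-squares version of such a trend test is sketched below; the particular regression package used in the analysis is not named here, and the altitudes, velocities, and uncertainties in the example call are placeholders, not the measured values of Table <ref>.

```python
import numpy as np
from scipy import stats

def weighted_linear_trend(x, y, y_err):
    """Weighted least-squares fit of y = a + b*x; returns slope, its error,
    and a two-sided p-value for the slope being non-zero."""
    x, y, w = np.asarray(x), np.asarray(y), 1.0 / np.asarray(y_err) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    b = (S * Sxy - Sx * Sy) / delta          # slope of the trend
    b_err = np.sqrt(S / delta)               # formal slope uncertainty
    z = b / b_err
    p_value = 2.0 * stats.norm.sf(abs(z))
    return b, b_err, p_value

# Hypothetical example: blueshift vs. altitude for the three phi_+ bins
alt = [1.02, 1.05, 1.10]          # placeholder altitudes [Rp]
rv = [-11.0, -10.4, -9.9]         # placeholder blueshifts [km/s]
rv_err = [0.4, 0.4, 0.5]
print(weighted_linear_trend(alt, rv, rv_err))
```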
We also report the FWHM of the CCFs and the ratio of the heights between the phase bin for the beginning of the transit and the phase bin for the end of the transit in Table <ref>. These values and their uncertainties were also determined by fitting Gaussians to the CCFs shown in Figure <ref>. The CCFs for the phase bin encompassing the beginning of the transit are on average 6 km s^-1 wider than for the phase bin at the end of the transit. This is expected given the results from the GCM modeling and retrievals mentioned above which conclude that the Fe signal at the beginning of the transit originates from a combination of the morning and evening terminators, while the signal at the end of the transit is dominated by the evening side alone. Again, we only see a trend for the ϕ_+ phases, and measure an increasing FWHM as we probe higher altitudes. As previously mentioned, the majority of the perceived blueshift evolution in altitude shown in Figure <ref> is caused by the CCF signal in the beginning of the transit becoming weaker for the CCF mask that contained the least opaque lines (weak lines), and is reflected in our reported height ratio in Table <ref>, which decreases at lower altitudes. For the lowest altitude bin we find that during the beginning of the transit the Fe lines become optically thick much deeper in the atmosphere than during the end of the transit when the hotter evening side of the atmosphere is in view.
Finally, motivated by recent modeling work that discussed how asymmetric absorption would manifest in a K_p vs. V_sys diagram <cit.> and observations of different offsets for different species in the same K_p vs. V_sys diagram <cit.>, we created separate K_p vs. V_sys diagrams for the 3 altitude bins and compare the relative offsets from the planet's known position in Appendix <ref>. We do not find any clear trend in K_p or V_sys offset for the three bins.
§.§ Observed Trends in Altitude from other Atoms and Ions
Fe I shows the strongest signal of any of the detected atoms or ions in WASP-76 b <cit.>, and so we focus our findings on Fe I. However, we also checked whether any of the trends seen in Fe I were also present in the absorption signals from other atoms (Cr I, V I, and Mn I) that showed significant absorption in <cit.>. For each atom we created two masks (instead of 3 as was done for Fe I) since the signals were not as strong and using more bins would result in less robust detections. One mask contained the most opaque lines while the second mask contained lines that were less opaque. We split the bins at a point where the two masks led to SNRs in the resulting CCFs that were roughly equal. The binary mask CCFs for the two opacity bins and three different atoms, co-added in the planet's rest frame are shown in Figure <ref>.
We find the same trends in all three other species that we see in the analogous plot for Fe I (Figure <ref>), and in making the same K_p vs. V_sys plots for these species as we did for Fe I, we do not find any clear trends with altitude either (Appendix <ref>), reinforcing both conclusions drawn from Fe alone. Figure <ref> demonstrates that for these three other species the CCF made using the mask of weaker lines shows a larger blueshift than the CCF made using the mask of stronger lines. For Cr I and V I, we measure a significant (>3-σ) difference between the radial velocities of our two altitude bins. Mn I has the lowest SNR of the three trace species, and so even though we find the same blueshifting pattern, the difference between the two measured velocities is less than 2-σ. The consistency of the observed pattern in radial velocity with altitude for multiple species gives credence to the trend we observe with Fe I. It also demonstrates that the physical mechanism causing the signal to be weaker at lower altitudes during the beginning of the transit (decreasing height ratio trend) is not unique to Fe I, and therefore likely is a process that acts on a global scale (e.g., temperature or wind structure) and not on a single species (e.g., ionization, condensation).
§.§ Comparison with GCM models
We next aimed to use GCMs to help us interpret these trends and better understand the physical processes occurring in the atmosphere of WASP-76 b. To that end, we compared the measured radial velocities and line shapes of the injected models to the data to determine which drag prescription best fits the data. We injected the different models into the ESPRESSO data at a systemic velocity of +100 km/s so that the true signal and the injected signal would not overlap. We tested injecting the signal at different velocities and found that any differences were within the uncertainties.
We found that all three models underestimate the strength of the Fe signal at all altitudes (see Figure <ref>), but that the mismatch with the bin that contains the strong lines was the worst (the models only reach a radius of 1.02 R_p while the data reach 1.05 R_p). The top of the atmosphere for the models occurs at 10^-5 bars, and so this significant underestimation from the models is likely due to the strong Fe lines absorbing at altitudes above 10^-5 bars in the planet's atmosphere. The top boundary of the GCM was chosen to be 10^-5 bars, as the lower atmospheric densities result in numerical instabilities due to extremely short timescales. This upper boundary was also enforced in the post-processing, which is why the CCFs of the GCMs are much weaker for the strong lines. Additionally, non-LTE effects and photoionization — which are not modeled by the GCM — play stronger roles in the extended atmosphere <cit.>. Therefore, we focus the rest of our comparisons on the bins containing the middle and weak Fe lines, as these probe pressure levels covered by our GCMs. Even at lower altitudes, the models still underestimate the signal strength, which may be due to non-solar abundances of Fe – an assumption made in the post-processing routine – or differences in temperature structure compared to the true planet.
Since our modeled spectra do not create a signal that is as strong as the data, and therefore have even lower SNRs (∼2.5-3.5 for the models and >5 for the data), in order to better compare the model to the data we inject the model into the data using standard injection and recovery methods <cit.> at 20 times the expected strength to create `zero noise model' CCFs. We treat these CCFs in the same way as the data and split them into different phase bins, fitting Gaussians to each CCF in order to measure the same CCF parameters for each model as we measured for the data. These newly measured CCF parameters are given in Table <ref>. More complex structure (e.g., double peaks) is now apparent in the CCFs due to the drastically increased SNRs, and demonstrate how this binary mask CCF method preserves line shapes and therefore information about atmospheric wind structures. By comparing Figure <ref>, we can see how differing wind patterns manifest in the CCFs. Despite the more complex structure, we still fit a single Gaussian to the models in order to treat the models and data in a consistent manner and derive parameters that can be compared in a one-to-one manner. The `zero noise' model CCFs are plotted along with the real data CCFs in Figure <ref>. We also plot all of the summarizing quantities from the two Tables in Figure <ref> to enable an easier visual comparison between the models and data. Each model shows unique features that we will attempt to match to the data.
By comparing the CCF parameters for all the models, we see that all of the GCM models reproduce the observed pattern of more blueshifted CCFs in the second half of the transit, which naturally arises from the combination of tidally locked planet rotation and the hotter eastern limb coming into view during the later part of the transit (see Figure <ref>). As the transit progresses the blueshifted contours become more dominant and are surrounded by regions of higher temperatures.
Again looking at Table <ref> and Figure <ref>, we see that all of the models follow the clear trend in the data that the FWHMs at both phase bins increase with altitude (larger FWHMs for the middle Fe line bin than the weak line bin). We also see that all the models follow the tentative trend that we found in the data where for the ϕ_+ phases, the degree of blueshifting increases as we probe deeper in the atmosphere (weaker Fe I lines). Examining Figure <ref>, the 3D wind structures at the pressure contours vary significantly, which contribute to the differing Doppler shifts found at these two pressures. This effect is a result of the complex, multidimensional interplay between winds, temperature, atmospheric circulation. The equatorial jet is more pronounced at all longitudes at the deeper pressure level probed by the weak lines resulting in a narrower FWHM and a stronger average blueshift at post transit phases. At the higher level probed by the middle lines the wind structure is far more spatially variable, resulting in a wider FWHM and a smaller average blueshift at post transit phases (even though the winds speeds are typically larger at this altitude).
We find that both the 0G and 3G models show significantly larger FWHMs for the first half of the transit than the second half of the transit, as is seen in the data. Conversely, the uniform model shows larger FWHMs for the second half of the transit, which is disfavored by the data. The uniform drag model also shows a stronger signal at the beginning of the transit and a weaker signal at the end of the transit (represented in Table <ref> by a height ratio greater than one).
Both the 0G and 3G models reproduce the observed trend in the data of a weaker signal at the beginning of the transit.
The uniform drag model applies the same drag at every spatial point on the planet, which causes less asymmetry than the other two models, and therefore a less dominant evening side of the planet. The signal is therefore weaker and has a wider FWHM due to contribution from both hemispheres. This is evidence that the uniform drag model does not accurately reproduce the observations and that a more complex treatment of drag is necessary.
Between the 0G and 3G models, we find some instances where the 0G model seems to be preferred and some where the 3G model seems to be preferred. In the data, we find a decreasing height ratio as deeper pressures are probed, which means that the signal from the ϕ_+ bin becomes stronger compared to the ϕ_- bin as one looks deeper in the atmosphere. The 3G model shows a decreasing height ratio, like the data, while the 0G model has an almost constant height ratio at the two altitudes. This trend is likely influenced by multiple concurrent physical effects. In our GCMs, the 3G model has the largest day-night temperature contrast, resulting in the largest scale height difference between egress and ingress. The model also shows a jet that is more localized to the lower atmosphere than the 0G case, potentially leading to this large scale height difference being more localized to the lower atmosphere. Other processes, such as clouds or hydrogen dissociation (though not included in these models) could also likely influence this trend, and are discussed further in Section <ref>.
On the other hand, the 0G model matches the measured RV shifts the best, in that it exhibits the strongest blueshift for the ϕ_+ phases, and the difference between the RV shift between the two phase bins is largest. This is because the 0G model has no additional drag, resulting in a stronger super-rotational jet. It is important to note that no GCM models have been able to reproduce the magnitude of the measured RV shifts found on WASP-76 b, and so there is likely some missing physics in the GCMs.
Even though all of the models underpredict the observed blueshifts, both the 3 G and 0 G models follow the trend of increasing blueshifts as the transit progresses from ϕ_- to ϕ_+, and increasing blueshifts deeper in the atmosphere.
§ DISCUSSION
From our comparisons, we are able to show that applying drag uniformly to the atmosphere is not sufficient to fit the data and we are able to reject the uniform drag model. Previous comparisons to high resolution data found that models with weak drag were required to fit the asymmetric shape of the Fe CCF <cit.>. Conversely, low resolution Spitzer phase curves showed little phase offset, and therefore required models with significant drag, therefore setting up a tension between the two methods. As the previous GCM modeling results used a uniform drag prescription <cit.>, we suggest that the poor fits for the drag models could have been due to the spatially uniform application of drag, and find that the inclusion of spatially varying drag is necessary to best fit these high resolution data. From our analysis it is initially not obvious whether the 0 G or the 3 G model is the better fit, as both of the models are able to fit many of the trends. However, some of the first GCM modeling work for hot Jupiters <cit.> predicted that the trend of increasing blueshifts for deeper pressures—which is shown by the data—could be indicative of magnetic effects. Finally, it is notable that the 3 G model was also able to fit the low resolution Spitzer phase curves better than the 0 G model <cit.> meaning that our single kinematic MHD model fits both high resolution and low resolution data, which is a significant feat. As we have not done a full parameter sweep of varying field strengths or magnetic topologies, we are not claiming that the magnetic field strength of this planet is 3G, but take the agreement with both phase curves and high-resolution data as potential indication of magnetic effects shaping the environment.
All of the models underpredict the wind speeds, and previous GCMs of this planet have been unable to match the magnitude of the phase-resolved Doppler shifts <cit.>. <cit.> suggested this discrepancy between observed radial velocities and those predicted by GCMs could be due to imprecision/uncertainties in the ephemeris and eccentricity, but recent precision measurements of the eccentricity and ephemeris of WASP-76 b from <cit.> have shown that orbital uncertainties are not to blame for inconsistencies with GCMs. It therefore seems that there is some missing physics that has not been taken into account for the GCMs that is needed to completely fit the WASP-76 b data. As of now it is unclear whether this problem of underpredicting planetary radial velocities is widespread, as there are few planets with this type of measurement, but initial wind measurements in a sample of 6 ultra-hot Jupiters found that WASP-76 b had the highest wind speed <cit.>.
Future work should explore in more depth why GCMs can underpredict wind speeds in HRS. Running GCMs with lower hyperdissipation and/or higher resolution may lead to higher model wind speeds. In addition, models including physical processes such as hydrogen dissociation, clouds, and magnetic drag concurrently should be run to better physically understand the atmosphere of this planet. Hydrogen dissociation would reduce the day-night temperature contrast <cit.> and the corresponding difference in scale height. Clouds, which may be present on one or both terminators, can increase the net blueshift, particularly during ingress <cit.>. Understanding how all of these processes act together and influence measured observables will shed light on which are the most important in future exoplanet modeling and the physical mechanisms occurring in planetary atmospheres.
§ CONCLUSIONS
Using two ESPRESSO transits of WASP-76b, we explored how winds and dynamics in the planet's atmosphere changed as a function of vertical altitude, and compared these results to the output of state-of-the-art global circulation models (GCMs). To resolve the vertical structure of the atmosphere, we created binary masks containing Fe I lines in three different opacity bins. The bin with the Fe I lines that were the most opaque probes the highest layers of the atmosphere. The bins with less opaque Fe I lines probe lower in the atmosphere since light can travel through deeper layers of the atmosphere before it becomes optically thick. We then cross correlated these binary masks with the data and explored trends in the cross correlation functions. By cross correlating with a binary mask we are able to preserve information on the true line shapes, and therefore accurately extract the average height in the atmosphere where the signal arises and full width at half maximum (FWHM) of this signal. As the Fe I signal is famously also known to show an asymmetric velocity shift from the start of the transit to the end of the transit <cit.>, we also split up the cross correlation functions (CCFs) into two phase bins.
We find that at all heights, the phase bin for the second half of the transit (ϕ_+) is more blueshifted than the first half of the transit (ϕ_-), and that the asymmetry first found by <cit.> and confirmed in <cit.> persists at all altitudes. We also see that at all altitudes the FWHM for the phase bin encompassing the beginning of the transit is significantly larger than the phase bin for the end of the transit. We take this as evidence that at the end of the transit the signal is dominated by only one side of the planet (the hotter evening side), whereas at the beginning of the transit the signal is a combination of both sides of the planet, consistent with phase-resolved retrievals of WASP-76b <cit.>. When we focus on the ϕ_+ phases, which we assume come from a single hemisphere, we find that at higher altitudes the signal is both wider (has a larger FWHM) and less blueshifted. The blueshift evolution with altitude only shows a marginal trend (2.3-σ), but the FWHM trend is robust. Finally, we find that as we probe deeper in the atmosphere, the signal from the beginning of the transit is less significant (SNR=5.1 for the strong-line bin and SNR=2.84 for the weak-line bin) and peaks lower in the atmosphere (height ratios between the two halves decrease from 0.76 for the strong-line bin to 0.44 for the weak-line bin).
We next compared the data and the observed trends to GCMs with different prescriptions for magnetic drag by injecting the models into the data and performing the same analysis on the models.
All of the models show the trend of larger FWHMs at higher altitudes and a stronger blueshifted signal at lower altitudes for the ϕ_+ phase bin, as is seen in the data. We explain both of these trends as being due to the fact that at lower altitudes the models show smaller absolute velocities, but a more ordered velocity pattern due to the presence of a jet. At higher altitudes there are larger absolute velocities, but in a less ordered pattern, leading to broader CCFs with a smaller average blueshift. However, the models show key differences and we are able to rule out the uniform drag model due to the fact that the FWHMs for that model are larger at ϕ_+ phases and the signal is stronger during the first half of the transit. As both of these trends are clearly ruled out by the data, we are able to conclude that applying a spatially uniform drag prescription via a single drag coefficient cannot reproduce the data. The remaining two models both adequately reproduce the RV shifts, FWHMs, and height ratios seen in the data, but we argue that while the 3G model does not predict the degree of blueshifting as well as the 0G model, it better follows the trends seen in height ratio and FWHM. As all GCM models are unable to reproduce the degree of blueshifting <cit.>, and the 3G model also fit low resolution Spitzer phase curves better <cit.>, we take this as tentative evidence of magnetic field effects shaping the environment of WASP-76b. While more work is needed to explore if other field strengths, physical processes, or more complex radiative transfer routines can provide a better fit to all of the data, it is promising that a single GCM model can adequately fit both the low-resolution Spitzer phase curve and the high-resolution optical transmission spectra, as previous modeling efforts found that to fit key features in the data they required significantly different drag prescriptions <cit.>.
This study is a first step towards exploring exoplanets along new dimensions. As detection signal strengths increase, it becomes possible to better resolve the atmosphere by breaking the signal up. <cit.>, and later <cit.> and <cit.>, showed that by observing the shape and velocity evolution of planetary absorption lines during a transit, differences between the two terminators of a planet can be inferred. <cit.> performed retrievals that successfully characterized WASP-76b's atmosphere at 4 discrete locations across the planetary surface by breaking the transit up both temporally (probing different regions as the planet rotates during transit) and in velocity space (separating the oppositely rotating eastern and western hemispheres). In this work, we present a novel method to probe the vertical structure of the planet's atmosphere by taking advantage of the fact that absorption lines of different opacity become optically thick at different altitudes. With the current limits on signal, we are unable to draw extensive conclusions or distinguish between a 3-G and 0-G model, but with future facilities such as high resolution spectrographs on ELTs, this will be possible.
We would like to thank Adam Langeveld for useful discussions that prompted inclusion of the K_p vs. V_sys diagrams. This publication makes use of The Data & Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planets data visualisation, exchange and analysis. DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS, federating the Swiss expertise in Exoplanet research. The DACE platform is available at https://dace.unige.ch. Many of the calculations in this paper made use of the Great Lakes High Performance Computing Cluster maintained by the University of Michigan.
§ DETERMINING UNCERTAINTIES
As our conclusions are drawn from the measured radial velocities and their uncertainties, ensuring the accuracy of our values and uncertainties is vital for the interpretation of the data and in turn the robustness of our conclusions. In this section we try to calculate errorbars in a few different ways in order to ensure that they give consistent results and are truly representative of the data. We also compare our uncertainties to other publications of different data sets and discuss how the reduction and analysis steps affect the measured velocities to ensure that there are no hidden factors that are not represented in the errorbars.
In the simplest method, we used a standard Python curve-fitting package to fit Gaussians to the CCFs. As the data are noisy and we do not know the underlying functional form of the data (e.g., Gaussian, double-peaked, etc.), some fitting or smoothing needs to be done to obtain measured values, and Gaussians provide the best solution given the data quality and information. The fitting routine reports uncertainties in the parameters it fits, one of which is the center of the Gaussian. These uncertainties come from the covariance matrices. Using the CCF from the strong line bin during the second half of the transit as an example (black line in top panel of Figure <ref>), we obtain a best-fit value of -9.565 ± 0.479 km s^-1 from this fit. Using the same fitted Gaussians as above, but instead of simply assuming the uncertainties given by the fit, we calculate the standard deviation of the fitted Gaussians and divide by the signal to noise ratio of the CCF <cit.>. This method gives an uncertainty of 0.348 km s^-1 for the same example as above. Finally, we also calculate uncertainties by recording the velocity shift of each individual CCF in time and then taking the standard deviation of that measurement and dividing by the square root of the number of measurements. This method was also used to derive alternative uncertainty values in <cit.>, and is similar to one of the methods used to derive radial velocity uncertainties in <cit.>. This final method assumes that the true radial velocity is the same at every measurement (at least within a small phase bin), and therefore that any change in velocity is uncertainty in the measurement and not physical. This is not the case for at least the beginning of the transit, but it appears to be a reasonable assumption for the end of the transit (looking at Figure <ref>, phases from 0.02 to 0.04 have a vertical residual pattern). Using this approach, again for the same example as the above two, we are able to get 4 separate measurements of the radial velocity and find an uncertainty of 0.341 km s^-1. The fact that the three methods explored here return similar uncertainties lends credence to our uncertainty results. Throughout the paper, we quote the uncertainties from the covariance matrices of the Gaussian fits, as these seem to be the most conservative and are the simplest to derive.
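The three uncertainty estimates can be reproduced with a few lines of code; the sketch below assumes scipy's curve_fit for the Gaussian fit (the actual package is not named in the text) and uses placeholder inputs for the per-exposure RVs needed by the third method.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma, offset):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2) + offset

def rv_uncertainties(v_grid, ccf, rv_per_exposure):
    """Three alternative RV uncertainty estimates for one co-added CCF.

    rv_per_exposure : RVs measured from the individual in-transit CCFs
                      (placeholder inputs; used only by method 3).
    """
    popt, pcov = curve_fit(gaussian, v_grid, ccf,
                           p0=[ccf.max(), 0.0, 10.0, 0.0])
    # (1) covariance-matrix uncertainty on the Gaussian centre
    err_cov = np.sqrt(pcov[1, 1])
    # (2) standard deviation of the fitted Gaussian divided by the CCF SNR
    snr = popt[0] / np.std(ccf[np.abs(v_grid) > 80.0])
    err_sigma = popt[2] / snr
    # (3) scatter of the per-exposure RVs divided by sqrt(N)
    rvs = np.asarray(rv_per_exposure)
    err_scatter = rvs.std(ddof=1) / np.sqrt(rvs.size)
    return err_cov, err_sigma, err_scatter
```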
Next, we compare our uncertainty values to other recent papers to ensure that our values are on the same order as these. <cit.> use a single ESPRESSO transit of WASP-121b and report a radial velocity uncertainty of 0.16 and 0.28 km s^-1 in the first and second half of the transit, respectively, when the CCF is split in two phase bins in a similar manner as we have done. Our uncertainties, which are on the order of 0.4 km s^-1, seem reasonable considering we have two transits, but we split our CCFs into 3 line-strength bins before we split by the two phase bins.
Finally, there may be data reduction or analysis steps that cause small changes in the measured radial velocities. This uncertainty in the radial velocity would not be taken into account in the above calculations, and so we explore how some of our choices affect the measured radial velocities. We obtain consistent measurements for the radial velocity of the Fe I signal with <cit.> (ϕ_-=-3.89km s^-1, ϕ_+=-10.26km s^-1), and <cit.>, as reported by <cit.> (averaging ϕ_- phases ∼-3.5km s^-1, and ϕ_+∼-10.5km s^-1), using a completely different method to either paper. In our own analysis, we do not perform any cleaning steps during our analysis process that significantly alters the data, unlike similar analyses in the NIR, which use PCA to clean the data. Indeed, <cit.> calculate uncertainties in their radial velocity measurements by using different numbers of PCA iterations and taking the resulting radial velocities as different ”measurements" to obtain errorbars, but as our analysis does not perform any drastic cleaning steps, an approach like this is not possible. We did test a few of our data processing choices (i.e., masking bad pixels, etc.) and found that the resulting measured radial velocity differences were smaller than the uncertainties we quote in the paper. The only choice that made a significant difference in the measured radial velocities was the choice of how many lines to include in each mask, but as long as the same choice is used during the GCM model analysis and throughout the analysis, the comparisons and conclusions will be accurate. We therefore do not think that the analysis itself imparts significant uncertainties into the measured radial velocities, and that both the values and uncertainties accurately represent the data.
The data themselves are noisy because we have split the signal into so many different parts and due to the inherent increased noise when using the binary mask method as opposed to a traditional CCF method, but we note that every split CCF has a SNR>4.5, which is usually considered a reliable detection for high resolution observations. Because of this, we believe that the data are of high enough quality to measure reliable RVs. As data quality improves with the coming of ELTs, more robust measurements will be possible that rely on fewer underlying assumptions (e.g., Gaussian shapes).
§ TRENDS WITH ALTITUDE IN MEASURED K_P
Motivated by recent work that find differing offsets for different chemical species in the standard K_p vs V_sys diagram and suggested this difference could be due to species residing in different layers (altitudes) of a planet's atmosphere <cit.>, we tested to see whether our CCFs created with different altitude bins showed noticeable offsets in a K_p vs V_sys diagram.
These diagrams are created in the standard method, by taking the 2D CCFs for each species and each altitude bin and shifting them to rest assuming different values of planet semi-amplitude (K_p) and different systemic velocities (V_sys). Once the 2D CCFs are shifted to rest, they are co-added in this rest frame and the SNR is calculated as described in Section <ref>. In order to compare the positions of the peaks in these diagrams, we have overlayed the SNR contours for the different altitude bins of each individual species in a single plot (Figure <ref>). By comparing the peaks for each bin (marked with crosses) and their associated SNR contours, we do not see a clear pattern emerge of differing K_p offsets at different altitudes. For the most part, the measured SNR peaks for a single species are consistent to within 1-σ, and the strong line peak is not always offset from the weak line peak in the same direction (i.e. the strong line bin peak for V is at a higher K_p value than the weak line bin, whereas for Cr and Mn it is the opposite).
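The construction of such a map is summarized in the sketch below: for each trial (K_p, V_sys) pair the 2D CCF grid is shifted to rest, co-added, and turned into an SNR by comparing the rest-frame value with the scatter far from the peak. The brute-force loops and the ±80 km s^-1 noise region are illustrative choices, and the array names are placeholders.

```python
import numpy as np

def kp_vsys_map(ccf_grid, v_grid, phases, kp_range, vsys_range):
    """Build an SNR map over trial (K_p, V_sys) values for one CCF grid."""
    snr_map = np.zeros((len(kp_range), len(vsys_range)))
    rest_idx = np.argmin(np.abs(v_grid))              # index of v = 0 km/s
    for i, kp in enumerate(kp_range):
        for j, vsys in enumerate(vsys_range):
            coadd = np.zeros_like(v_grid)
            for ccf, phi in zip(ccf_grid, phases):
                v_p = kp * np.sin(2.0 * np.pi * phi) + vsys
                coadd += np.interp(v_grid, v_grid - v_p, ccf)
            noise = np.std(coadd[np.abs(v_grid) > 80.0])
            snr_map[i, j] = coadd[rest_idx] / noise
    return snr_map
```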
We see this lack of trend as potential indication that altitude is not the sole driver of differences in the measured peak K_p, and other processes, such as condensation or ionization, may lead to more significant differences in the measured K_p offset. It is important to note, however, that the species studied here only range in altitude from about 1.01 to 1.10 R_p (∼10^-3 - 10^-8 bar), and so species that primarily absorb in regions of the atmosphere outside of this range (e.g., H, He, etc.) may show significantly offset K_p values that are indicative of their altitudes.
|
http://arxiv.org/abs/2409.03388v1 | 20240905094724 | Polarized and un-polarized $\mathcal{R}_{K^*}$ in and beyond the SM | ["Ishtiaq Ahmed", "Saba Shafaq", "M. Jamil Aslam", "Saadi Ishaq"] | hep-ph | ["hep-ph", "hep-ex"] |
CERN-TH-2024-148
Polarized and un-polarized ℛ_K^* in and beyond the SM
Ishtiaq Ahmed^1,2, Saba Shafaq^3, M. Jamil Aslam^4, Saadi Ishaq^5
^1 National Center for Physics, Islamabad 44000, Pakistan.
^2 Theoretical Physics Department, CERN, CH-1211 Genève 23, Switzerland.
^3 Department of Physics, International Islamic University, Islamabad 44000, Pakistan.
^4 Department of Physics, Quaid-i-Azam University, Islamabad 45320, Pakistan.
^5 School of Natural Sciences, Department of Physics, National University of Sciences and Technology, Sector H-12, Islamabad, Pakistan.
Abstract
The Standard Model (SM) is lepton flavor universal, and the recent measurements of lepton flavor universality in B → (K,K^*)ℓ^+ℓ^-, for ℓ = μ, e, decays now lie close to the SM predictions. However, this is not the case for the τ to μ ratios in these decays, where there is still some window open for the new physics (NP), and to accommodate them various extensions to the SM are proposed.
It will be interesting to identify observables which are not only sensitive to the parameter space of such NP models but also have some discriminatory power. We find that the polarization of the K^* may play an important role; therefore, we have computed the unpolarized and polarized lepton flavor universality ratios of τ to μ in B→ K^*ℓ^+ℓ^-, ℓ= μ, τ decays. The calculation shows that in most cases the values of the various proposed observables fall within the current experimental sensitivity, and their study at ongoing and future experiments will serve as a tool to discriminate among the variants of the NP models.
§ INTRODUCTION
Although the predictions of the Standard Model (SM) are consistent with most of the particle-physics data, it is still not enough to explain some important puzzles, e.g., the content of dark matter, dark energy, the matter-antimatter asymmetry of the universe, the hierarchy problem, neutrino oscillations, etc. The search for physics beyond the SM, known as new physics (NP), is a major goal of various experimental programs in high energy physics, where the main challenge is to detect the particle content of the various NP models. Flavor physics is an ideal platform to explore NP in an indirect way, and the flavor changing neutral current (FCNC) transitions have a special place in it. These transitions are forbidden at tree level in the SM and are loop suppressed due to the Glashow-Iliopoulos-Maiani (GIM) mechanism. Therefore, they are quite sensitive to the particles running in the loop, which makes them an ideal probe for NP searches.
A pertinent feature of the SM is lepton flavor universality (LFU), which states that the different generations of leptons have identical gauge interactions, i.e., universal gauge couplings, and differ only by their masses.
However, for the last few years it has come under scrutiny, e.g., the study of the deviation of the LFU ratio R_H≡ℬ(B→ Hμ^+μ^-)/ℬ(B→ He^+e^-), where H=K,K^*, X_s,…, from the SM result, i.e., R_H≈ 1, triggered a lot of interest in physics beyond the SM <cit.>. Though these decays involve hadronic contributions arising from the form factors, the dominant theoretical uncertainties from QCD cancel out in the ratio, and the QED uncertainties are controlled to contribute only 1% to the R_K^∗ predictions. Also, the only source of LFUV in the SM is the Higgs couplings to the leptons, but these are too small to make any difference to R_H <cit.>.
The observations of R^μ e_K^∗ at LHCb <cit.> and Belle <cit.> show 2.1 - 2.4σ deviations from the corresponding SM prediction. A 3.7σ discrepancy from the SM prediction was observed at LHCb for R^μ e_K in the 1.1≤ s ≤ 6 GeV^2 bin <cit.>. The experimental measurements of R^μ e_K^* in the low and central s bins are:
R_K^*^Exp = 0.660^+0.11_-0.07± 0.03, 0.045 ≤ s ≤ 1.1 GeV^2
R_K^*^Exp = 0.690^+0.11_-0.07± 0.05. 1.1 ≤ s ≤ 6 GeV^2.
Similarly, the value of R^μ/e_K in central s bin is as follows:
R_K^Exp = 0.846^+0.042+0.013_-0.039-0.012, 1.1 ≤ s ≤ 6 GeV^2.
The current experimental data on R_K,K^*^μ e are consistent with the corresponding SM predictions <cit.>; however, the global analysis of the b→ s ℓ^+ℓ^- (ℓ = e,μ) data does not rule out the possibility of sizeable LFUV components of the NP couplings in a number of NP scenarios, see e.g., <cit.>. One can expect a similar role from the ratios involving τ - μ, namely R_K,K^*^τμ, where any deviation from the SM predictions, beyond the one arising from the τ/μ mass difference, will hint towards LFUV involving the second and third generations of charged leptons. Ref. <cit.> demonstrates that these ratios can deviate from their SM predictions even when the new physics couplings are universal, owing to mass-related effects associated with the τ and μ leptons. In Ref. <cit.>, the SM weak effective Hamiltonian is extended with new vector and axial-vector couplings and the full angular distribution of B→ (K,K^*)ℓ^+ℓ^-, with ℓ = μ, τ, is analyzed to find the most optimized LFUV observables.
In the case of LFUV ratio involving μ - e, the deviations from the SM were analyzed in the model independent effective field theory (EFT) framework by encoding the short distance physics, both arising due to SM and NP, in the Wilson coefficients of higher dimensional operators <cit.>. This model independent EFT approach provides a useful guide for the construction of NP models that are viable to explain these anomalies. Ref. <cit.> present an up-to-date complete model-independent global fit analysis by including the recent LHCb measurements
of R_K,K_s,K^∗, B_s →ϕμ^+μ^- and B_s →μ^+μ^-, which now includes 254 observables, superseding their previous analyses
<cit.>. There are now two main scenarios:
* The Wilson coefficients C_9^'μ and C_10^'μ which correspond to the right-handed couplings remain a suitable option.
* The LFUV left-handed coupling C_9 μ^V = - C_10 μ^V accommodates the data better, if the LFU new-physics is allowed in the WC C_9^U.
* It is found that the LFUV observable Q_5 helps us to distinguish both types of scenarios.
In the B→ K^∗ℓ^+ℓ^-, ℓ = τ, μ decay, the final state vector meson K^∗ can have longitudinal and transverse polarizations, and in this study we explore the LFUV ratios in this decay for a particular polarization of the K^∗ meson, both in the SM and by including the above mentioned NP scenarios. We will see that, together with Q_5, these physical observables have the potential to distinguish the various scenarios.
The paper is organized as follows: The weak effective Hamiltonian (WEH) responsible for the B→ K^∗ℓ^+ℓ^- decay is discussed in Section <ref>, whereas Section <ref> presents the expressions of the polarized LFUV observables in terms of the SM and NP Wilson coefficients. In Section <ref>, we have plotted these observables with the above mentioned NP scenarios and we have also tabulated the numerical values of these observables in various momentum transfer (q^2 ≡ (p_ℓ^++p_ℓ^-)^2) bins in the same section. Finally, we conclude the study in the same Section.
§ EFFECTIVE HAMILTONIAN IN THE STANDARD MODEL AND BEYOND
The weak effective Hamiltonian for the rare B meson decays can be obtained by integrating out the heavy degrees of freedom, such as the W-boson, t-quark and the Higgs boson <cit.>. This approach is known as the operator product expansion (OPE), where the short distance effects are encoded in the Wilson coefficients 𝒞_i, leaving the operators 𝒪_i to describe the physics at long distances. With this, the weak effective Hamiltonian can be written as:
H_eff=-4 G_F/√(2)λ_t[∑_i=1^6C_i(μ)O_i(μ)+∑_i=7,9,10C_i(μ)O_i(μ)
+C_i^'(μ)O_i^'(μ) ].
In Eq. (<ref>) λ_t=V_tbV_ts^∗ are the CKM matrix elements, G_F is the Fermi coupling constant, C_i are
the Wilson coefficients, and O_i are the SM operators with V-A structure. For B→ K^∗ℓ^+ℓ^- decays in the SM, the operators O_7, 9, 10 and their corresponding WCs C_7, 9, 10 will contribute. These operators have the form
O_7 = e/16π ^2m_b( s̅σ _μνP_Rb) F^μν ,
O_9 = e^2/16π ^2(s̅γ _μP_Lb)(ℓ̅γ^μℓ) ,
O_10 = e^2/16π ^2(s̅γ _μP_Lb)(ℓ̅γ ^μγ _5ℓ) .
Specifically, the operator O_7 describes the interaction of the b and s quarks with the emission of a photon, whereas O_9, 10 correspond to the interaction of these quarks with charged leptons through (almost) the same Yukawa couplings.
In Eq. (<ref>), the operators O_i^' are the chirality flipped operators, i.e., with weak interaction structure V+A. In the SM, the WCs C_9, 10^' are zero, whereas C_7^' is non-zero, but suppressed by a factor m_s/m_b. In contrast with O_9, 10, the NP operators O_9, 10^' add different contributions to the transitions when the final state leptons are muons or electrons.
The WCs given in Eq.(<ref>) encode the short distance (high momentum) contributions and these are calculated using the perturbative approach. The contributions from current-current, QCD penguins and chromomagnetic operators O_1-6,8 have been unified in the WCs C_9^eff and C_7^eff, and their explicit expressions are given as follows <cit.>:
C_7^eff(q^2)=C_7-1/3(C_3+4/3C_4+20C_5+80/3C_6)-α_s/4π[(C_1-6C_2)F^(7)_1,c(q^2)+C_8F^7_8(q^2)]
C_9^eff(q^2)=C_9+4/3(C_3+16/3C_5+16/9C_6)-h(0,q^2)(1/2C_3+2/3C_4+8C_5+32/3C_6)
-(7/2C_3+2/3C_4+38C_5+32/3C_6)h(m_b,q^2)+(4/3C_1+C_2+6C_3+60C_5)h(m_c,q^2)
-α_s/4π[C_1F^(9)_1,c(q^2)+C_2F^(9)_2,c(q^2)+C_8F^(9)_8(q^2)]
The WCs given in Eqs. (<ref>) involve the functions h(m_q,q^2) with q=c,b; the functions F^(7,9)_8(q^2) and F^(7,9)_1,c(q^2) are defined in <cit.>.
The numerical values of Wilson coefficients C_i for i=1,...,10 at μ∼ m_b scale are presented in Table <ref>.
The WCs C^ℓ_(9,10) and C^ℓ_(9^',10^') written in Eq.(<ref>) correspond to the new vector-axial vector operators, which can be expressed as <cit.> :
C_(9,10)^μ = C^U_(9,10)+C^V_(9,10), C_(9,10)^τ=C^U_(9,10),
C_(9^',10^')^μ = C^U_(9^',10^')+C^V_(9^',10^'),
C_(9^',10^')^τ=C^U_(9^',10^').
The WCs C^U_(9,10) and C^U_(9^',10^') are associated with b→ sℓ^+ℓ^-,(ℓ=e,τ) transitions, whereas the WC C^V_(9,10) and C^V_(9^',10^') contributes only to b→ sμ^+μ^- transition.
In Ref. <cit.>, a particular NP scenario affecting the muon is considered, and the NP WCs are constrained by fitting the full data set of all 254 observables, as well as by restricting the fit to 24 LFUV observables. In the present study, we consider the values obtained from the full data set; the three prominent 1D NP scenarios and the eight D > 1 scenarios are summarized in Tables <ref> and <ref>.
§ PHYSICAL OBSERVABLES
In Section <ref>, we emphasized that the LFU, defined by
R_K^∗ = ℬ(B→ K^∗μ^+μ^-)/ℬ(B→ K^∗e^+e^-),
is an important observable in establishing the NP present in these FCNC decays. The purpose here is to see if there exist some observables that could be used to segregate the effects of the above mentioned NP scenarios. To do so, we considered the polarization of the final state meson and defined the following ratios:
R^τℓ=ℬ(B→ Mτ^+τ^-)/ℬ(B→ Mℓ^+ℓ^-) R^τ=ℬ(B→ Mτ^+τ^-)/ℬ(B→ M^'τ^+τ^-),
where M=K^*,K^*_L,K^*_T, with the subscript L(T) designating the longitudinal (transverse) polarization of the final state vector meson. In R^τ the mesons in the numerator and denominator differ in their polarizations. It is important to emphasize that the ratios remain the same whether the lighter lepton is an electron or a muon; however, we present here the results taking the light lepton to be the muon. The different combinations give
ℛ_K^*^τμ = ℬ(B→ K^*τ^+τ^-)/ℬ(B→ K^*μ^+μ^-); ℛ_K^*τ_L^τμ=ℬ(B→ K^*_Lτ^+τ^-)/ℬ(B→ K^*μ^+μ^-); ℛ_K^*τ_T^τμ=ℬ(B→ K^*_Tτ^+τ^-)/ℬ(B→ K^*μ^+μ^-);
ℛ_K^*μ_L^τμ = ℬ(B→ K^*τ^+τ^-)/ℬ(B→ K^*_Lμ^+μ^-); ℛ_K^*μ_T^τμ=ℬ(B→ K^*τ^+τ^-)/ℬ(B→ K^*_Tμ^+μ^-); ℛ_K^*_LL^τμ=ℬ(B→ K^*_Lτ^+τ^-)/ℬ(B→ K^*_Lμ^+μ^-);
ℛ_K^*_TT^τμ = ℬ(B→ K^*_Tτ^+τ^-)/ℬ(B→ K^*_Tμ^+μ^-); ℛ_K^*_LT^τμ=ℬ(B→ K^*_Lτ^+τ^-)/ℬ(B→ K^*_Tμ^+μ^-); ℛ_K^*_TL^τμ=ℬ(B→ K^*_Tτ^+τ^-)/ℬ(B→ K^*_Lμ^+μ^-);
ℛ_K^*_L^τ = ℬ(B→ K^*_Lτ^+τ^-)/ℬ(B→ K^*τ^+τ^-); ℛ_K^*_T^τ=ℬ(B→ K^*_Tτ^+τ^-)/ℬ(B→ K^*τ^+τ^-); ℛ_K^*_LT^τ=ℬ(B→ K^*_Lτ^+τ^-)/ℬ(B→ K^*_Tτ^+τ^-).
The SM values of the branching ratios ℬ(B→ (K^*,K^*_L,T)ℓ^+ℓ^-) appearing in the above ratios can be written as
ℬ(B→(K^*, K^*_L, T)ℓ^+ℓ^-)= G_F^2 |V_tbV^*_ts|^2/2^11π^5m_B^3𝒟_j^ℓ(s),
where j represents K^*, K^*_L, K^*_T and ℓ = μ,τ. The values of 𝒟_j^ℓ(s), in the 14≤ s≤ s_maxGeV^2 bin, are appended in Table <ref>. By using these values one can easily find the SM values of the ratios given in Eq. (<ref>), and these are tabulated in Table <ref>.
As we are dealing with the NP scenarios that affect the muon and tau modes, therefore, the unpolarized and polarized branching ratios, after addition of NP, in terms of the new WCs can be written as
ℬ(B→(K^*, K^*_L, T)μ^+μ^-(τ^+τ^-))= G_F^2 |V_tbV^*_ts|^2/2^11π^5m_B^3𝒩_j^μ(τ)(s).
The expressions of 𝒩_K^*^μ(τ), 𝒩_K^*_L^μ(τ) and 𝒩_K^*_T^μ(τ), after integration over s in the 14≤ s≤ s_maxGeV^2 bin, can be written in terms of new WCs which are given in Eq. (<ref>). Writing
𝒩_K^*^μ ≃ 𝒟_K^*^μ+𝒩_K^*^'μ, 𝒩_K^*_L^μ≃𝒟_K^*_L^μ+𝒩_K^*_L^'μ, 𝒩_K^*_T^μ≃𝒟_K^*_T^μ+𝒩_K^*_T^'μ,
𝒩_K^*^τ ≃ 𝒟_K^*^τ+𝒩_K^*^'τ, 𝒩_K^*_L^τ≃𝒟_K^*_L^τ+𝒩_K^*_L^'τ, 𝒩_K^*_T^τ≃𝒟_K^*_T^τ+𝒩_K^*_T^'τ,
with
𝒩_K^*^'μ ≃ [2.7 C_10'^μ2-4. C_10^μ C_10'^μ+17. C_10'^μ+2.7 C_10^μ2+22. C_7^μ2+2.7 C_9'^μ2+2.7 C_9^μ2-23. C_10^μ-38. C_7^μ+15. C_7^μ C_9'^μ-14. C_9'^μ-11. C_7^μ C_9^μ-4. C_9'^μ C_9^μ+19. C_9^μ]×10^4,
𝒩_K^*_L^'μ ≃ [0.95 C_10'^μ2-1.9 C_10^μ C_10'^μ+8.2 C_10'^μ+0.95 C_10^μ2+8.6 C_7^μ2+0.95 C_9'^μ2+0.95 C_9^μ2-8.2 C_10^μ-20. C_7^μ+5.7 C_7^μ C_9'^μ-6.5 C_9'^μ-5.7 C_7^μ C_9^μ-1.9 C_9'^μ C_9^μ+6.5 C_9^μ]×10^4,
𝒩_K^*_T^'μ ≃ [1.4 C_10'^μ2-2.8 C_10^μ C_10'^μ+12. C_10'^μ+1.4 C_10^μ2+13. C_7^μ2+1.8 C_9'^μ2+1.8 C_9^μ2-12. C_10^μ-18. C_7^μ+9.6 C_7^μ C_9'^μ-7.4 C_9'^μ-5.1 C_7^μ C_9^μ-2.1 C_9'^μ C_9^μ+12. C_9^μ]×10^4,
𝒩_K^*^'τ ≃ [0.52 C_10'^τ2-1.05 C_10^τ C_10'^τ+4.48 C_10'^τ+0.52 C_10^τ2+14.40 C_7^τ2+1.80 C_9'^τ2+1.80 C_9^τ2-45.31 C_10^τ-25.53 C_7^τ+10.14 C_7^τ C_9'^τ-9.44 C_9'^τ-7.31 C_7^τ C_9^τ-2.70 C_9'^τ C_9^τ+12.51 C_9^τ]×10^4,
𝒩_K^*_L^'τ ≃ [0.35 C_10'^τ2-0.69 C_10^τ C_10'^τ+2.98 C_10'^τ+0.35 C_10^τ2+5.71 C_7^τ2-4.31 C_9'^τ2+4.31 C_9^τ2-2.98 C_10^τ-12.96 C_7^τ+3.80 C_7^τ C_9'^τ-4.31 C_9'^τ-3.80 C_7^τ C_9^τ-1.26 C_9'^τ C_9^τ+4.31 C_9^τ]×10^4,
𝒩_K^*_T^'τ ≃ [1.78 C_10'^τ2-0.36 C_10^τ C_10'^τ+1.51 C_10'^τ+1.78 C_10^τ2+8.70 C_7^τ2+1.17 C_9'^τ2+1.17 C_9^τ2-15.54 C_10^τ-12.57 C_7^τ+6.34 C_7^τ C_9'^τ-5.12 C_9'^τ-3.51 C_7^τ C_9^τ-1.43 C_9'^τ C_9^τ+8.20 C_9^τ]×10^4,
and 𝒟^ℓ_j is given in Table <ref>.
Using Eqs. (<ref>) and (<ref>), and after some manipulation, we are able to rewrite the LFUV ratios in terms of two components as follows:
ℛ_K^*^τμ = [ℛ_K^*^τμ SM+ℛ_K^*^τμ']; ℛ_K^*τ_L^τμ=[ℛ_K^*τ_L^τμ SM+ℛ_K^*τ_L^τμ'];
ℛ_K^*τ_T^τμ=[ℛ_K^*τ_T^τμ SM+ℛ_K^*τ_T^τμ'];
ℛ^τμ_K^*μ_L = [ℛ_K^*μ_L^τμ SM+ℛ_K^*μ_L^τμ']; ℛ^τμ_K^*μ_T=[ℛ_K^*μ_T^τμ SM+ℛ_K^*μ_T^τμ'];
ℛ_K^*_LL^τμ=[ℛ_K^*_LL^τμ SM+ℛ_K^*_LL^τμ'];
ℛ^τμ_K^*_TT = [ℛ^τμ SM_K^*_TT+ℛ^τμ'_K^*_TT]; ℛ^τμ_K^*_LT=[ℛ_K^*_LT^τμ SM+ℛ_K^*_LT^τμ'];
ℛ_K^*_TL^τμ=[ℛ_K^*_TL^τμ SM+ℛ_K^*_TL^τμ'];
ℛ_K^*_L^τ = [ℛ_K^*_L^τ SM+ℛ_K^*_L^τ']; ℛ_K^*_T^τ=[ℛ_K^*_T^τ SM+ℛ_K^*_T^τ']; ℛ_K^*_TL^τ=[ℛ_K^*_TL^τ SM+ℛ_K^*_TL^τ'].
These expressions are written in such a way that the first term contains purely the SM contributions, whereas the second term encapsulates the NP as well as the SM contributions, denoted as ℛ_i^τμ' and ℛ_i^τ', where i=K^*, K^*_L, K^*_T, K^*_LT, K^*_TL, K^*_LL, K^*_TT. The latter contributions to the LFUV ratios appearing in Eq. (<ref>) become:
ℛ_K^*_(α)^τμ(τ) NP=𝒟_K^*_(α)^μ𝒩_K^*_(α)^'τ-𝒟_K^*_(α)^τ𝒩_K^*_(α)^'μ/𝒟_K^*_(α)^μ(𝒟_K^*_(α)^μ+𝒩_K^*_(α)^'μ); ℛ_K^*_(αβ)^τμ(τ) NP=𝒟_K^*_(β)^μ𝒩_K^*_(α)^'τ-𝒟_K^*_(α)^τ𝒩_K^*_(α)^'μ/𝒟_K^*_(β)^μ(𝒟_K^*_(β)^μ+𝒩_K^*_(β)^'μ) α,β=L, T
There are three prominent one-dimensional scenarios: (i) S-I: C_9^μ, (ii) S-II: C_9^μ= -C_10^μ and (iii) S-III: C_9^μ= -C_9^'^μ. The 1σ ranges of the new WCs in these scenarios are given in Table <ref>. For these scenarios, C_(9^('),10^('))^V=0, so from Eq. (<ref>) one can notice that C_(9^('),10^('))^μ=C_(9^('),10^('))^τ. Therefore, the set of expressions 𝒩_i^μ(τ) given in Eq. (<ref>) can, for the 1D scenarios, be written in the following general form
𝒩_j^'μ(τ) = 𝒜C_9^U+ℬ(C_9^U)^2.
The coefficients 𝒜 and ℬ contain the contributions from the SM WCs and the form factors. Using the numerical values of the various input parameters, these are calculated in Table <ref>. Here, we have also included the uncertainties coming from the form factors and other input parameters.
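Once 𝒜, ℬ and the SM quantities 𝒟 are read off from the corresponding tables, the 1D-scenario ratios follow directly from Eqs. (<ref>) and (<ref>). A minimal numerical sketch is given below; all coefficient values in the example call are placeholders and must be replaced by the entries of Tables <ref> and <ref>.

```python
import numpy as np

def lfuv_ratio(c9u, D_tau, A_tau, B_tau, D_mu, A_mu, B_mu):
    """R^{tau mu} = (D^tau + N'^tau) / (D^mu + N'^mu) for a 1D scenario,
    with N'^ell = A^ell * C9U + B^ell * C9U**2 as in the general form above.
    All D, A, B inputs must be taken from the corresponding tables; the
    numbers used in the example below are placeholders only."""
    num = D_tau + A_tau * c9u + B_tau * c9u ** 2
    den = D_mu + A_mu * c9u + B_mu * c9u ** 2
    return num / den

# Scan C_9^U over an illustrative 1-sigma range (placeholder interval)
c9_scan = np.linspace(-1.3, -0.9, 50)
ratios = lfuv_ratio(c9_scan,
                    D_tau=1.0, A_tau=1.0, B_tau=0.1,    # placeholder values
                    D_mu=2.5, A_mu=2.0, B_mu=0.2)       # placeholder values
print(ratios.min(), ratios.max())
```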
Similarly, for scenarios D>1, one can express 𝒩_j^'τ in terms of the NP WCs as
𝒩_j^'τ = 𝒜^τ C_XX^τ+ℬ^τ(C_XX^τ)^2,
where C_XX^τ=C_9^U for scenarios V, VI, VII, and VIII, C_XX^τ=C_10^U for S-IX, S-X, C_XX^τ=C_10^'^U with 𝒜^τ replaced by -𝒜^τ for S-XI, and C_XX^τ=C_10^'^U-C_10^U with 𝒜^τ replaced by -𝒜^τ for S-XIII. The numerical values of 𝒜^τ and ℬ^τ are given in Table <ref>.
Similarly, for 𝒩^'μ_i in D>1 NP scenarios, we have used the Eq. (<ref>) with the conditions given in Table <ref>.
Finally, by using the constraints of NP WCs, summarized in Table <ref>, we have calculated the numerical values of ℛ^τμ(τ)_i, in the 14≤ s≤ s_maxGeV^2 bin in different NP scenarios, and registered them in Tables <ref> and <ref>.
§ PHENOMENOLOGICAL ANALYSIS
In this section, we present the phenomenological analysis of the twelve potential physical observables given in Eq. (<ref>). For R^τμ_K^∗, the SM prediction in the s∈ [14, 19] GeV^2 bin is 0.41± 0.01. Unlike the well measured R^μ e_K^∗, experimental measurements of R^τμ_K^∗ are currently missing due to the intricacy of reconstructing tauons in the final state. However, in a similar study of the flavor changing charged current process governed by b→ c τν_τ, the LFU ratios R_D(D^*) were measured using proton-proton (pp) collision data corresponding to an integrated luminosity of 2 fb^-1 collected by the LHCb experiment during 2015-16 <cit.>. Therefore, quite optimistically, in the future we will be able to measure these ratios involving τ leptons in the final state, and could scrutinize the SM further in the τ-μ sector.
Figs. <ref>, <ref> and <ref> show the s≡ q^2 profiles of R^τμ (τ)_i, where i=K^*, K^*_L, K^*_T, K^*_LT, K^*_TL, K^*_LL and K^*_TT, for D=1. Before starting the analysis, it is useful to mention that the grey band in all these figures shows the SM values, the color bands show the variation of the observables for the new WCs, and the width of the bands corresponds to the uncertainties in the values of the physical observables due to the different input parameters, particularly the form factors.
* Fig. <ref> (left) illustrates the dependence of the ratio R^τμ_K^∗ on the new WCs as a function of s. It can be noted that the second scenario (S-II) (C^U_9=-C^U_10), drawn in blue, is masked by the uncertainties coming from the SM, whereas S-I and S-III predict values lower than that of the SM. The results of these scenarios are well distinguished in almost the whole s range, particularly in the s∈ [18,19] GeV^2 bin, where the results of S-III are significantly lower than even those of S-I. This is in line with the results given in Table <ref>, where the values of R^τμ_K^∗ for S-I and S-III, ∼ (0.33 - 0.35), are well separated from the S-II result, i.e., (0.40-0.41). To make this more visible, after integrating over s, the variations of R^τμ_K^∗ with C^NP_9μ are drawn in Fig. <ref> (right), where we can easily see that the results in the three scenarios are well separated from each other.
* The polarized LFUV ratios ℛ_K^*μ_L,T^τμ, where K^*μ_L,T represents the polarized vector meson when we have muons in the final state, are presented in the first line of Fig. <ref>. The corresponding SM results of these ratios are given in Table <ref>, where the maximum value is 1.105^+0.056_-0.037, shown by the gray band in Fig. <ref>. We can see that the NP contributions arising in all three scenarios interfere destructively with the SM contributions, hence decreasing the corresponding observable values. This can also be noticed from Table <ref>, where the maximum suppression occurs for S-III in the ℛ_K^*μ_T^τμ case. Since these LFUV ratios are the ratios of the full rate to the longitudinally or transversely polarized rate, we can expect ℛ_K^*_L^SM+ℛ_K^*_T^SM > 1, and this can be seen in Fig. <ref>.
* In the second row of Fig. <ref> we present the ratios ℛ_K^*τ_L,T^τμ=ℬ(B→ K^*_L,Tτ^+τ^-)/ℬ(B→ K^*μ^+μ^-). With a particular polarization of the K^* in the numerator and the different impact of NP on μ and τ, the value of ℛ_K^*τ_L,T^τμ is expected to be less than one over the whole s region, as is evident from the second row of Fig. <ref>. Also, the trends of the longitudinally and transversely polarized LFUV ratios are opposite to each other. Once again, the range of S-II is masked by the uncertainty in the SM values, but the results of S-I and S-III are distinguishable from the SM, especially for ℛ_K^*τ_T^τμ.
* In Eq. (<ref>), the ratios R^τμ_K^*_LT, R^τμ_K^*_TL, R_K^*_LL and R_K^*_TT can be regarded as polarized lepton flavor universality violating (PLFV) ratios, since a particular polarization of the final-state meson appears in the numerator and in the denominator. It can be seen from the last row of Fig. <ref> that the observables R^τμ_K^*_LT and R^τμ_K^*_TL follow the same trends as ℛ_K^*τ_L^τμ and ℛ_K^*τ_T^τμ, respectively.
In line with the LFUV ratio ℛ_K^*^SM=ℬ(B→ K^*τ^+τ^-)/ℬ(B→ K^*μ^+μ^-), we expect that in the SM R_K^*_LL+R_K^*_TT≈ 1, because having the same polarization in the numerator and denominator does not change the total probability; this can be seen in the first row of Fig. <ref> at any value of s. We can also observe from Table <ref> that this observable has good discriminatory power, and that S-III can be distinguished from the SM and the other scenarios through R_K^*_LL in the high-s range, i.e., the s∈[17, 19] GeV^2 bin.
* Contrary to the LFUV ratios, the R_K^*_(L,T)^τ defined in Eq. (<ref>) correspond to the case where, for each polarization of the K^* meson, the final-state leptons are tauons only; therefore, these are simply the helicity fractions. They are plotted in Fig. <ref> (last two rows). Since the NP contributes equally to the numerator and denominator, we have ℛ_K^*_L^τ+ℛ_K^*_T^τ≃1. This can be observed from Fig. <ref> and from the values tabulated in Table <ref>. From these plots, one can notice that scenario S-I is well separated from scenarios S-II and S-III, and the variation in the values of these ratios due to new physics can also be seen from Table <ref>.
It is worth mentioning that, along with the variations of ℛ_K^*(τμ)_i^τ,μ, where i represents the particular polarization of the vector meson in the numerator and denominator, we have also analyzed the correlation between ℛ_K^*(τ,μ)_i^τμ and ℛ_K^*^τμ and have shown the results in Figs. <ref> and <ref> (left insets). The behaviour of each observable with the new WCs, together with its total magnitude over the 1σ range, is also given in the right inset of the corresponding panel, with the SM uncertainties plotted as the gray band. From these plots, one can not only correlate these physical observables but also discriminate between the three NP scenarios considered here.
The profiles of the above-mentioned observables for the D>1 cases, i.e., for the scenarios listed in Table <ref>, are given in Figs. <ref>, <ref>, <ref> and <ref>. Also, the values after integrating over s are given in Table <ref>.
The first plot in Fig. <ref> shows the 1σ variations of R_K^*^τμ as a function of s for the different NP scenarios. The color coding of the various scenarios is indicated in the top-right inset by single-color bar plots covering the same 1σ range. Here we can see that all the NP scenarios are distinguishable from the SM and from each other, particularly S-V and S-XIII, whose respective maximum values are 0.71 and 0.63 (cf. the first row of Table <ref>).
The second plot of Fig. <ref> (top right panel) shows the density plot for R_K^*^τμ, whereas in the second row of Fig. <ref> we plot the density profiles of (R_K^*_L^τ, R_K^*_LT^τ) and (R_K^*_T^τ, R_K^*_TL^τ) over the available parametric space of the different NP scenarios. Here the x- and y-axes span the available parametric space of the WCs for all considered NP scenarios. In these plots, the variation in color corresponds to the variation in the magnitude of the observables, as indicated in the first plot of Fig. <ref> and also in Figs. <ref> and <ref> by the multicolored bars. These density plots will be helpful for extracting the precise parametric space of a particular NP scenario once the said observables are measured precisely in the future.
To make the analysis clearer, the variations of R_K^*μ_L^τμ and R_K^*μ_T^τμ are shown in the first row of Fig. <ref>. It can be seen that for S-V, S-VII, S-X, S-XI and S-XIII the predicted values are greater than the SM predictions, while for S-VI and S-VIII the values are smaller. Once again, scenario S-V is the one showing the largest deviations from the SM results.
From the second row of Fig. <ref> we can see that R_K^*τ_L^τμ and R_K^*τ_T^τμ have the same profiles in s as R_K^*_LL^τμ and R_K^*_TT^τμ, drawn in the first row of Fig. <ref>. The values of R_K^*τ_L^τμ and R_K^*_LL^τμ decrease with increasing s, whereas R_K^*τ_T^τμ and R_K^*_TT^τμ increase with s. These values are sufficiently large to be measured in ongoing experiments such as LHCb.
In the last row of Fig. <ref>, we draw the ratios R_K^*_LT^τμ and R_K^*_TL^τμ as functions of s. The value of the first increases with s, whereas the second decreases by the same amount, making their sum equal to 1. The maximum value occurs for scenario S-V, whereas the minimum occurs for S-XIII.
Fig. <ref> shows the ratios R^τ_K^*_L,T and R^τ_K^*_(LT,TL) in the last two rows. It is observed that, except for S-VI and S-VIII, all other scenarios are masked by the SM uncertainties. These two, however, also overlap and cannot be distinguished from one another. In the insets of Fig. <ref> and Fig. <ref>, the correlation plots for the different LFU-violating ratios are drawn. These allow us to see how the different R^τμ_i correlate with R^τμ_K^*, making them useful for future experimental studies.
Finally, Fig. <ref> shows the effect of the 1σ ranges of the new WCs, along with the SM predictions, in bar-chart form. It can be seen from these plots that the LFUV ratios R^τμ_K^*μ_L, R^τμ_K^*μ_T, R^τμ_K^*τ_T, R^τμ_K^*_LL, R^τμ_K^*_TT and R^τμ_K^*_TL show significant deviations from the SM results, making them useful in the hunt for NP. The behaviour of R^τμ_K^*τ_L and R^τμ_K^*_LT is somewhat similar. For other observables, such as R^τ_K^*_T and R^τ_K^*_LT, only S-VI and S-VIII lie outside the SM uncertainty band, while for R^τ_K^*_LT and R^τ_K^*_L the maximum deviations from the SM results arise for S-V, S-VI and S-VIII. It can be noticed that the ratios R^μ_K^*_L, R^τμ_K^*τ_T and R^τμ_K^*_TT are less masked by the SM uncertainties and are also useful for disentangling the NP arising in the different beyond-SM scenarios.
To summarize: the experimental data on the decays of B mesons have revealed discrepancies from the predictions of the SM, particularly in processes involving τ and μ leptons in the final state. To delve into these discrepancies, a variety of physical observables related to lepton universality, R^τμ_i, where the index i represents the different final-state meson polarizations, i.e., i=K^*, K^*_L, K^*_T, K^*_LT, K^*_TL, K^*_LL, K^*_TT,
are studied. Each of these observables provides unique insights into the decay processes and their potential deviations from SM predictions. To illustrate the impact of NP on the magnitude of these observables, we calculated their numerical values within the q^2=14-s_max bin and displayed them through bar plots. Our findings reveal that these polarized observables are not only sensitive to the values of the NP WCs but are also a useful tool for distinguishing among the various NP scenarios.
Therefore, the above analysis demonstrates that accurate measurements of the polarized and unpolarized ratios considered in this study will not only provide insights into potential NP but will also help disentangle the tension among different beyond-SM scenarios.
§ ACKNOWLEDGMENTS
I.A. acknowledges the kind hospitality and financial support of CERN theory division during his stay at CERN in the summer of 2024.
Hiller:2014ula
G. Hiller and M. Schmaltz,
JHEP 02, 055 (2015)
doi:10.1007/JHEP02(2015)055
[arXiv:1411.4773 [hep-ph]].
Bordone:2016gaq
M. Bordone, G. Isidori and A. Pattori,
Eur. Phys. J. C 76, no.8, 440 (2016)
doi:10.1140/epjc/s10052-016-4274-7
[arXiv:1605.07633 [hep-ph]].
LHCb:2020lmf
R. Aaij et al. [LHCb],
Phys. Rev. Lett. 125, no.1, 011802 (2020)
doi:10.1103/PhysRevLett.125.011802
[arXiv:2003.04831 [hep-ex]].
LHCb:2017avl
R. Aaij et al. [LHCb],
JHEP 08, 055 (2017)
doi:10.1007/JHEP08(2017)055
[arXiv:1705.05802 [hep-ex]].
Belle:2019oag
A. Abdesselam et al. [Belle],
Phys. Rev. Lett. 126, no.16, 161801 (2021)
doi:10.1103/PhysRevLett.126.161801
[arXiv:1904.02440 [hep-ex]].
LHCb:2021trn
R. Aaij et al. [LHCb],
Nature Phys. 18, no.3, 277-282 (2022)
doi:10.1038/s41567-021-01478-8
[arXiv:2103.11769 [hep-ex]].
LHCb:2022qnv
R. Aaij et al. [LHCb],
Phys. Rev. Lett. 131, no.5, 051803 (2023)
doi:10.1103/PhysRevLett.131.051803
[arXiv:2212.09152 [hep-ex]].
LHCb:2022vje
R. Aaij et al. [LHCb],
Phys. Rev. D 108, no.3, 032002 (2023)
doi:10.1103/PhysRevD.108.032002
[arXiv:2212.09153 [hep-ex]].
Isidori:2022bzw
G. Isidori, D. Lancierini, S. Nabeebaccus and R. Zwicky,
JHEP 10, 146 (2022)
doi:10.1007/JHEP10(2022)146
[arXiv:2205.08635 [hep-ph]].
Nabeebaccus:2022pje
S. Nabeebaccus and R. Zwicky,
PoS CKM2021, 071 (2023)
doi:10.22323/1.411.0071
[arXiv:2209.09585 [hep-ph]].
SinghChundawat:2022ldm
N. R. Singh Chundawat,
Phys. Rev. D 107, no.5, 055004 (2023)
doi:10.1103/PhysRevD.107.055004
[arXiv:2212.01229 [hep-ph]].
Alok:2023yzg
A. K. Alok, N. R. Singh Chundawat and A. Mandal,
Phys. Lett. B 847, 138289 (2023)
doi:10.1016/j.physletb.2023.138289
[arXiv:2303.16606 [hep-ph]].
Alok:2024cyq
A. K. Alok, N. R. S. Chundawat, J. Kumar, A. Mandal and U. Tamponi,
[arXiv:2405.18488 [hep-ph]].
Descotes-Genon:2015uva
S. Descotes-Genon, L. Hofer, J. Matias and J. Virto,
JHEP 06, 092 (2016)
doi:10.1007/JHEP06(2016)092
[arXiv:1510.04239 [hep-ph]].
Capdevila:2017bsm
B. Capdevila, A. Crivellin, S. Descotes-Genon, J. Matias and J. Virto,
JHEP 01, 093 (2018)
doi:10.1007/JHEP01(2018)093
[arXiv:1704.05340 [hep-ph]].
Alguero:2018nvb
M. Algueró, B. Capdevila, S. Descotes-Genon, P. Masjuan and J. Matias,
Phys. Rev. D 99, no.7, 075017 (2019)
doi:10.1103/PhysRevD.99.075017
[arXiv:1809.08447 [hep-ph]].
Alguero:2019ptt
M. Algueró, B. Capdevila, A. Crivellin, S. Descotes-Genon, P. Masjuan, J. Matias, M. Novoa Brunet and J. Virto,
Eur. Phys. J. C 79, no.8, 714 (2019)
doi:10.1140/epjc/s10052-019-7216-3
[arXiv:1903.09578 [hep-ph]].
Geng:2021nhg
L. S. Geng, B. Grinstein, S. Jäger, S. Y. Li, J. Martin Camalich and R. X. Shi,
Phys. Rev. D 104, no.3, 035029 (2021)
doi:10.1103/PhysRevD.104.035029
[arXiv:2103.12738 [hep-ph]].
Altmannshofer:2021qrr
W. Altmannshofer and P. Stangl,
Eur. Phys. J. C 81, no.10, 952 (2021)
doi:10.1140/epjc/s10052-021-09725-1
[arXiv:2103.13370 [hep-ph]].
Hurth:2020ehu
T. Hurth, F. Mahmoudi and S. Neshatpour,
Phys. Rev. D 103, 095020 (2021)
doi:10.1103/PhysRevD.103.095020
[arXiv:2012.12207 [hep-ph]].
Alok:2019ufo
A. K. Alok, A. Dighe, S. Gangal and D. Kumar,
JHEP 06, 089 (2019)
doi:10.1007/JHEP06(2019)089
[arXiv:1903.09617 [hep-ph]].
Ciuchini:2020gvn
M. Ciuchini, M. Fedele, E. Franco, A. Paul, L. Silvestrini and M. Valli,
Phys. Rev. D 103, no.1, 015030 (2021)
doi:10.1103/PhysRevD.103.015030
[arXiv:2011.01212 [hep-ph]].
Datta:2019zca
A. Datta, J. Kumar and D. London,
Phys. Lett. B 797, 134858 (2019)
doi:10.1016/j.physletb.2019.134858
[arXiv:1903.10086 [hep-ph]].
Hurth:2020rzx
T. Hurth, F. Mahmoudi and S. Neshatpour,
Phys. Rev. D 102, no.5, 055001 (2020)
doi:10.1103/PhysRevD.102.055001
[arXiv:2006.04213 [hep-ph]].
Alguero:2021anc
M. Algueró, B. Capdevila, S. Descotes-Genon, J. Matias and M. Novoa-Brunet,
Eur. Phys. J. C 82, no.4, 326 (2022)
doi:10.1140/epjc/s10052-022-10231-1
[arXiv:2104.08921 [hep-ph]].
Chen:2024hln
C. Chen [LHCb],
[arXiv:2405.08953 [hep-ex]].
Grinstein:1987vj
B. Grinstein, R. P. Springer and M. B. Wise,
Phys. Lett. B 202, 138-144 (1988)
doi:10.1016/0370-2693(88)90868-4
Buchalla:1995vs
G. Buchalla, A. J. Buras and M. E. Lautenbacher,
Rev. Mod. Phys. 68, 1125-1144 (1996)
doi:10.1103/RevModPhys.68.1125
[arXiv:hep-ph/9512380 [hep-ph]].
Beneke:2001at
M. Beneke, T. Feldmann and D. Seidel,
Nucl. Phys. B 612 (2001), 25-58
doi:10.1016/S0550-3213(01)00366-2
[arXiv:hep-ph/0106067 [hep-ph]].
Greub:2008cy
C. Greub, V. Pilipp and C. Schupbach,
JHEP 12 (2008), 040
doi:10.1088/1126-6708/2008/12/040
[arXiv:0810.4077 [hep-ph]].
Bharucha:2015bzk
A. Bharucha, D. M. Straub and R. Zwicky,
“B→ Vℓ^+ℓ^- in the Standard Model from light-cone sum rules,”
JHEP 08, 098 (2016)
doi:10.1007/JHEP08(2016)098
[arXiv:1503.05534 [hep-ph]].
Alok:2017sui
A. K. Alok, B. Bhattacharya, A. Datta, D. Kumar, J. Kumar and D. London,
Phys. Rev. D 96, no.9, 095009 (2017)
doi:10.1103/PhysRevD.96.095009
[arXiv:1704.07397 [hep-ph]].
http://arxiv.org/abs/2409.03725v1 | 20240905173036 | Hardware-Assisted Parameterized Circuit Execution | [Abhi D. Rajagopala, Akel Hashim, Neelay Fruitwala, Gang Huang, Yilun Xu, Jordan Hines, Irfan Siddiqi, Katherine Klymko, Kasra Nowrouzi] | quant-ph | [quant-ph]
Applied Math and Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Applied Math and Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Quantum Nanoelectronics Laboratory, Department of Physics, University of California at Berkeley, Berkeley, CA 94720, USA
Accelerator Technology and Applied Physics Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Accelerator Technology and Applied Physics Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Accelerator Technology and Applied Physics Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Department of Physics, University of California at Berkeley, Berkeley, CA 94720, USA
Quantum Nanoelectronics Laboratory, Department of Physics, University of California at Berkeley, Berkeley, CA 94720, USA
Applied Math and Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
National Energy Research Scientific Computing Center, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
Applied Math and Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
§ ABSTRACT
Standard compilers for quantum circuits decompose arbitrary single-qubit gates into a sequence of physical X_π/2 pulses and virtual-Z phase gates. Consequently, many circuit classes implement different logic operations but have an equivalent structure of physical pulses that only differ by changes in virtual phases. When many structurally-equivalent circuits need to be measured, generating sequences for each circuit is unnecessary and cumbersome, since compiling and loading sequences onto classical control hardware is a primary bottleneck in quantum circuit execution. In this work, we develop a hardware-assisted protocol for executing parameterized circuits on our FPGA-based control hardware, . This protocol relies on a hardware-software co-design technique in which software identifies structural equivalency in circuits and “peels” off the relevant parameterized angles to reduce the overall waveform compilation time. The hardware architecture then performs real-time “stitching” of the parameters in the circuit to measure circuits that implement a different overall logical operation. This work demonstrates significant speed ups in the total execution time for several different classes of quantum circuits.
Hardware-Assisted Parameterized Circuit Execution
Kasra Nowrouzi
September 9, 2024
=================================================
§ INTRODUCTION
A quantum processing unit (QPU) is defined by its topology (i.e., qubit-to-qubit connectivity) and its native gate set. A gate set consists of a set of native state preparations (usually chosen to be |0⟩ for all qubits), a set of native single-qubit gates for all qubits, a set of native two-qubit gates between all qubit pairs, and a set of native positive operator-valued measures (POVMs; native measurements are usually chosen to be in the computational basis). Quantum compilers decompose all quantum circuits (using a set of ordered instructions) according to the QPU's topology and gate set. Most quantum circuits are designed with a restricted set of multi-qubit gates (e.g., using only CX or CZ gates). However, generally single-qubit gates in a circuit are allowed to be arbitrary. In order to implement arbitrary rotations in 𝖲𝖴(2), quantum compilers typically decompose single-qubit gates in terms of discrete physical X_π/2 pulses (implemented via resonant Rabi-driven pulses) and arbitrary virtual-Z phase gates <cit.>, which are performed in software as a change the phase of the subsequent physical pulse. Thus, arbitrary single-qubit gates are often termed U_3 gates, as they are decomposed as unitaries that are parameterized by three distinct phases:
U_3(ϕ, θ, λ) = Z_ϕ - π/2 X_π/2 Z_π - θ X_π/2 Z_λ - π/2 .
This ZXZXZ-decomposition reduces the time needed to implement arbitrary single-qubit gates (only a single X_π/2 needs to be calibrated per qubit), but comes at the cost of requiring two physical pulses per single-qubit gate (for most gates). Importantly, in this manner, all single-qubit gates are parameterized by three phases, such that implementing different single-qubit gates equates to changing the phases of the physical pulses, without needing to change any of the physical parameters of the X_π/2 pulses themselves (e.g., amplitude, frequency, pulse envelope, etc.).
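To make the decomposition concrete, the following Python sketch expands a U_3 gate into the pulse-level sequence of Eq. (<ref>). The instruction names ("virtual_z", "x90") are illustrative placeholders rather than the actual control-system mnemonics; the point is that two gates with different angles share an identical physical-pulse skeleton and differ only in their virtual phases.

import numpy as np

def u3_to_zxzxz(phi, theta, lam):
    # Execution-order sequence for U_3(phi, theta, lam) following Eq. (1):
    # the rightmost operator Z_{lam - pi/2} acts first on the qubit.
    return [
        ("virtual_z", lam - np.pi / 2),
        ("x90", None),
        ("virtual_z", np.pi - theta),
        ("x90", None),
        ("virtual_z", phi - np.pi / 2),
    ]

g1 = u3_to_zxzxz(0.3, 1.1, -0.7)
g2 = u3_to_zxzxz(2.0, 0.4, 0.9)
skeleton = lambda seq: [name for name, _ in seq]
assert skeleton(g1) == skeleton(g2)  # same physical structure, different phases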
Many classes of quantum circuits are designed to be structurally-equivalent — i.e., they contain the same structure of physical pulses, but the virtual phases in the single-qubit gates might differ. In some cases, structurally-equivalent circuits implement a different overall logic operation, as is the case for many variational quantum algorithms <cit.>, such as the variational quantum eigensolver <cit.>, the quantum approximate optimization algorithm <cit.>, and the circuits used for quantum machine learning <cit.>. On the other hand, some structurally-equivalent circuits are instead designed to be logically-equivalent, as is the case for noise tailoring methods such as randomized compiling (RC) <cit.>, Pauli frame randomization <cit.>, and equivalent circuit averaging <cit.>. Similarly, many circuits used for quantum characterization, verification, validation (QCVV) are by design structurally-equivalent (for a given circuit depth). For example, state tomography, quantum process tomography <cit.>, gate set tomography (GST) <cit.>, randomized benchmarking (RB) <cit.>, cycle benchmarking (CB) <cit.>, and many others.
All of the aforementioned protocols require generating, compiling, and measuring a large (∼100 – 10000) number of circuits. The naive strategy for doing so is to compile and measure sequences for each circuit independently, creating a large bottleneck on the classical side of the control hardware. However, in many cases, this is both unnecessary and time-consuming, since the waveform for structurally-equivalent circuits has already been loaded onto the hardware. In this work, we introduce a co-designed hardware-software technique that addresses the classical-bound bottlenecks of parameterized circuit execution. The technique involves two protocols, Read-Identify-Peel (RIP) and Stitch with Deft scheduling, which apply universally to any set of structurally-equivalent quantum circuits to improve their execution efficiency. Our approach identifies circuits with equivalent pulse structures, selectively compiles unique circuits (one from each batch of structurally-equivalent circuits), peels and stores the variable parameters for each circuit, and re-attaches them at run time during execution on the control system. This technique reduces the compilation time by a constant factor and adds minimal runtime overhead on resource-constrained control systems. We demonstrate significant temporal gains in the overall run time for several commonly-used protocols, including RC, RB, CB, and GST.
§ PARAMETERIZED CIRCUIT EXECUTION
In this work, we design a hardware-assisted parameterized circuit execution (PCE) protocol by introducing two techniques, Read-Identify-Peel (RIP) and Stitch with Deft Scheduling, to improve the execution efficiency for classes of structurally-equivalent circuits. As shown in Fig. <ref>, the process is a hardware-software co-designed protocol involving a general-purpose computer, referred to as the Host Computer, an embedded software computer, referred to as Software on ARM, and a hardware design on the FPGA that runs on the control system. The software, written in Python, identifies structurally-equivalent circuits, separates the phase parameters of single-qubit gates, and orchestrates the circuit execution; the hardware performs the parameterized execution in real time. The control system in this work uses the AMD (formerly Xilinx) Zynq UltraScale+ RFSoC ZCU216 evaluation kit, consisting of ARM processors and an FPGA. The deft scheduler runs on the ARM processor, and the Stitch design, written in Verilog/SystemVerilog, is implemented on the FPGA. The design integrates with the QubiC 2.0 control system <cit.>, an open-source FPGA-based quantum control system developed at Lawrence Berkeley National Laboratory.
§.§ Read-Identify-Peel (RIP)
RIP is a software process written in Python that runs on a general-purpose computer (i.e., the Host Computer). Figure <ref> shows the process that prepares the circuits for parameterized execution and provides the necessary information to the control system's deft scheduler and the Stitch process. The RIP technique is generic and can be adapted (with minimal changes) to any circuit abstraction. Currently, the process integrates with the native control system gate format, QubiC, and has been tested with circuits written/compiled in <cit.>, <cit.>, <cit.>, <cit.>, and the native QubiC gate abstraction. As shown in Fig. <ref>a, the abstraction-specific compiler (e.g., a compiler for any of the supported circuit frameworks) compiles the circuit into native single- and two-qubit gates, where the single-qubit gates are decomposed according to Eq. <ref>. The transpiler converts these compiled circuits into instructions, which form the input to the RIP stage.
The Read-Identify method identifies the structural equivalency in a batch of circuits. The process reads each circuit in the list and creates a set of graphs for each circuit. Each graph consists of a root node for the target qubits and the gates associated with them in the same timing order. After constructing the graph for a circuit, the process compares it against the graphs of the previously read circuits for structural equivalency. If the graph does not match any prior graph, the circuit is unique. Each unique circuit is added as a new sublist in the list of circuit indices and marked for circuit modification. If there is equivalency, the circuit index is appended to the sublist of the equivalent circuit. After reading all the circuits in the batch, the process has identified the structurally-equivalent circuits as a list of index sublists, where each sublist contains all the structurally-equivalent circuits and its first index denotes the unique circuit. The flattened list of lists provides the circuit execution order for the deft scheduler. The computational complexity of finding equivalency is 𝒪(n^2), where n is the number of circuits. Depending on the algorithm, similar circuits can be next to each other (for RC, RB, and CB circuits) or in a random order in the batch (in the case of GST). The read-identify process automatically identifies the structural equivalency in both scenarios.
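As an illustration of this step, the sketch below groups circuits by a per-qubit structural key rather than the pairwise graph comparison described above; the instruction format (dictionaries with "qubit", "op", and "phase" fields) is our own simplification, not the QubiC representation.

from collections import defaultdict

def structural_key(circuit):
    # Ordered operation sequence per qubit, ignoring virtual-Z phase values.
    per_qubit = defaultdict(list)
    for instr in circuit:
        per_qubit[instr["qubit"]].append(instr["op"])
    return tuple(sorted((q, tuple(ops)) for q, ops in per_qubit.items()))

def group_equivalent(circuits):
    # Returns a list of index sublists; the first index of each sublist is the
    # unique circuit to be compiled, the remaining ones reuse its waveform.
    groups, order = {}, []
    for i, circuit in enumerate(circuits):
        key = structural_key(circuit)
        if key not in groups:
            groups[key] = [i]
            order.append(groups[key])
        else:
            groups[key].append(i)
    return order

c0 = [{"qubit": 0, "op": "virtual_z", "phase": 0.1}, {"qubit": 0, "op": "x90"}]
c1 = [{"qubit": 0, "op": "virtual_z", "phase": 2.7}, {"qubit": 0, "op": "x90"}]
print(group_equivalent([c0, c1]))  # [[0, 1]]: same structure, different phase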
The Peel method extracts the parameters from the circuits. The process uses the graphs from the Read-Identify step to extract the parameters from each circuit. The extraction identifies the graph nodes carrying the desired parameters (e.g., virtual-Z phases) and creates a dictionary keyed by circuit and by target physical qubit. The entire dictionary is then binarized and transmitted to the deft scheduler. The binarization is a performance optimization and provides up to a 15× speed-up compared to transmitting a non-binarized dictionary.
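A minimal sketch of the peel and binarization steps is given below; the dictionary layout and the 32-bit float packing are assumptions made for illustration, since the encoding actually deployed on the control system may differ.

import struct
from collections import defaultdict

def peel_phases(circuit):
    # Collect the virtual-Z phases of one circuit, keyed by target qubit,
    # preserving execution order.
    phases = defaultdict(list)
    for instr in circuit:
        if instr["op"] == "virtual_z":
            phases[instr["qubit"]].append(instr["phase"])
    return dict(phases)

def binarize(params):
    # Pack {qubit: [phases]} into a flat byte string: a (qubit, count) header
    # followed by the phases as little-endian 32-bit floats.
    blob = bytearray()
    for qubit, values in sorted(params.items()):
        blob += struct.pack("<II", qubit, len(values))
        blob += struct.pack(f"<{len(values)}f", *values)
    return bytes(blob)

circuit = [{"qubit": 0, "op": "virtual_z", "phase": 0.1}, {"qubit": 0, "op": "x90"}]
print(len(binarize(peel_phases(circuit))))  # 12 bytes: one header + one phase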
Modification of circuits is the final method in the RIP process. The unique circuits marked during the Read-Identify phase are modified to request the phases in real time. This step replaces the parameter instructions from which the peel method has extracted the phases with a phase-request instruction. These instructions are specific to the distributed processor and request the phase from the Stitch module in the FPGA. In these terms, the Stitch module acts as a Function Processor <cit.>.
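The modification step can be sketched as the replacement below, where "request_phase" stands in for the actual phase-request instruction issued to the distributed processor:

def modify_for_pce(circuit):
    # Replace each phase-carrying virtual-Z instruction with a placeholder that
    # fetches its value from the Stitch module at run time.
    modified = []
    for instr in circuit:
        if instr["op"] == "virtual_z":
            modified.append({"qubit": instr["qubit"], "op": "request_phase"})
        else:
            modified.append(dict(instr))
    return modified

example = [{"qubit": 1, "op": "virtual_z", "phase": 1.57}, {"qubit": 1, "op": "x90"}]
print(modify_for_pce(example))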
§.§ Stitch
The Stitch process re-attaches the parameters from the RIP process in the FPGA. The Stitch module consists of three subcomponents: the memory controller, parameter memory, and the stitch logic. The memory controller interfaces with the ARM processor for the read/write of the parameters, the parameter memory holds the parameter values, and the stitch logic interacts with the distributed processor for parameterized circuit execution. The Stitch modules are highly modularized and extensible for several qubits. The current design is written in a combination of Verilog and System Verilog, and runs at the clock speed of 500 MHz, supporting parameterization of up to eight physical qubits.
The control system uses the Zynq UltraScale+ RF-SoC ZCU216 evaluation kit as the hardware platform. The development board consists of ARM processors and an FPGA running in different frequency domains. The ARM processor runs at up to 1.33 GHz and connects to the extensible parameter memory on the FPGA, running at 500 MHz (a design choice), via an AXI bus running at 100 MHz (the Full Power Domain bus). The memory controller is a cross-clock-domain module interfacing the single AXI bus with the parallel parameter-memory interface. Within the controller lies a single-stage switch that translates the AXI memory map to the parameter memory map. The current mapping is from a 32-bit AXI address to a 14-bit parameter-memory address; of the 14 bits, the first 3 select one of the eight parallel memories and the remaining 11 address its 2^11 locations.
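The address split can be illustrated as follows; the choice of which end of the 14-bit address carries the memory-select bits is our assumption for the sketch and may differ in the RTL:

def decode_param_address(addr14):
    # Split a 14-bit parameter-memory address into (memory_select, word_offset):
    # 3 bits choose one of the eight per-qubit memories, 11 bits index one of
    # its 2048 parameter locations.
    assert 0 <= addr14 < 2 ** 14
    memory_select = (addr14 >> 11) & 0b111
    word_offset = addr14 & 0x7FF
    return memory_select, word_offset

print(decode_param_address(0b01000000001010))  # (2, 10)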
The parameter memory is an FPGA Block RAM (BRAM) containing parallel memories for each physical qubit (eight memories in the current implementation). Each parallel memory is 8 kilobytes and holds up to 2048 32-bit parameters per circuit. Each memory is a true dual-port (TDP) design, with one interface to the controller and the other to the stitch logic. In a normal parameterized circuit execution, the memory controller interface writes the parameter to the memory, and the stitch logic reads from the memory. The rest of the read and write bus, i.e., read from memory controller and write from stitch logic, are only used for debugging.
The stitch logic block is the heart of the hardware-assisted parameterization and interacts with the parameter memories and the control system's distributed processor. The current design hosts eight parallel interfaces to the memories and eight interfaces to the processor. The processor interface requests the parameters, and the stitch logic pulls them from the parameter memory and provides them back to the distributed processor. Since the request pulls are sequential, the logic uses prefetching to improve performance and completes a parameter request within 4 ns. Further, each quantum circuit typically runs N times (shots), and the stitch logic tracks the number of parameters in a circuit and repeats them N times. The architecture also supports the existing mid-circuit measurement functionality: whether the control processor is requesting a parameter or a mid-circuit measurement result is distinguished by a dedicated field in the request interface. Additionally, the logic can repeat a partial set of the parameters and switch to a different set. Control codes set by the deft scheduler before circuit execution facilitate these functionalities.
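The behaviour of the stitch logic can be summarized by the following software model; the real implementation is Verilog running at 500 MHz with a 4 ns prefetch, and the handling of mid-circuit-measurement requests and partial parameter sets is omitted here:

class StitchModel:
    # Software model of the per-qubit stitch logic: serve the stored parameters
    # in order and wrap around after the last one, so the same circuit can be
    # repeated for N shots without reloading the memory.

    def __init__(self, parameters):
        self.parameters = list(parameters)  # written by the memory controller
        self.cursor = 0

    def serve_request(self):
        value = self.parameters[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.parameters)
        return value

stitch = StitchModel([0.10, 1.57, 3.04])
print([stitch.serve_request() for _ in range(6)])  # 3-phase circuit, 2 shots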
§.§ Deft Scheduler
Deft Scheduling orchestrates the circuit execution by interacting with the RIP output and the Stitch layer. The scheduler is Python software running on the ARM processor of the control hardware. It integrates with the existing software API infrastructure by adding methods to the existing class, which allows users to access the PCE functionality through the existing API infrastructure. On the control side, the scheduler adds new methods (low-level memory drivers) for circuit and parameter loading; it reuses the load-definition methods for writing the envelope and frequency parameters, the run-circuit method to trigger the circuit run, and the get-data methods for collecting the data.
The deft scheduler receives a dictionary of parameters and the modified unique circuits from the host computer via a remote procedure call (RPC). The scheduler de-binarizes the parameter dictionary to recover the circuit order and the parameters. It takes each index from the circuit order and loads its parameters into the parameter memory in the FPGA; it loads the circuit itself only if the index belongs to the unique-circuit list. Once the parameters and circuits are loaded, the scheduler continues the standard software operations by loading the definitions (controlled by user flags), running the circuits, receiving the data from the FPGA, and transferring the data to the host computer.
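The orchestration loop can be sketched as follows; the driver method names (load_parameters, load_circuit, run, get_data) are placeholders for the corresponding low-level driver calls rather than the actual QubiC API:

def deft_schedule(circuit_order, unique_circuits, peeled_params, hw):
    # circuit_order: flattened execution order from Read-Identify.
    # unique_circuits: {index: modified circuit} for the first index of each group.
    # peeled_params: {index: binarized parameter blob} for every circuit.
    results = []
    for index in circuit_order:
        hw.load_parameters(peeled_params[index])   # always refresh the phases
        if index in unique_circuits:               # load the waveform once per group
            hw.load_circuit(unique_circuits[index])
        hw.run()
        results.append(hw.get_data())
    return results

class DummyHardware:
    # Stand-in for the control-hardware driver, used only to exercise the loop.
    def load_parameters(self, blob): pass
    def load_circuit(self, circuit): pass
    def run(self): pass
    def get_data(self): return {"counts": {}}

order = [0, 1, 2]
print(len(deft_schedule(order, {0: ["..."]}, {i: b"" for i in order}, DummyHardware())))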
§ TIME PROFILING
The PCE protocol should reduce the execution time of quantum circuits. As seen in Sec. <ref>, a quantum circuit execution consists of multiple steps over different computational domains, namely the host computer and the control system. Understanding the efficacy of PCE requires detailed time profiling of each step in the execution process. We therefore added a time-profiling infrastructure to measure the time spent at the different stages.
§.§ Time Profiling Infrastructure
The time profiling presented here occurs at three layers: the application, the host computer software, and the control system software. Each layer has multiple classical computation stages; in some cases, each stage has sub-stages of operations. During a circuit execution, these stages and sub-stages may run for varying iterations. For example, to run a single circuit for 100 shots, the compilation and the circuit load stage run once, whereas the run circuit stage has 100 iterations (1 per shot). The profiler captures the time spent on all the stages and sub-stages at different layers for varying iterations, and collects them into a Python dictionary. For ease of use, the profiling infrastructure integrates with the control software layer and allows the user to control the time profiling for all layers from the application layer.
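A minimal sketch of such a profiler, accumulating wall-clock time per named stage into a dictionary, is shown below (the stage names mirror those listed in the following paragraphs):

import time
from collections import defaultdict
from contextlib import contextmanager

class StageProfiler:
    # Accumulate wall-clock time and call counts per named stage or sub-stage.
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start
            self.counts[name] += 1

profiler = StageProfiler()
with profiler.stage("pre-compile"):
    with profiler.stage("pre-compile/transpile"):
        time.sleep(0.01)  # placeholder for real work
print(dict(profiler.totals))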
Application Layer: this layer is the front end of the circuit execution. The experiments in this manuscript run as a Jupyter Notebook on an Intel i9-11900K desktop processor. Typically, this layer is responsible for circuit creation, pre-compilation, parameterization and control software initiation, and result display. The different profiling stages in this layer are as follows:
* Pre-compile creates or reads circuits, compiles them, and transpiles them to the native control-system gate format. This stage refers to the process described in Section <ref> and Fig. <ref>(a). Even though this is a compilation stage, the name `Pre-Compile' is chosen to differentiate it from the compile stage in the host-computer software layer, which converts the circuits into pulse definitions. The pre-compile stage has the following sub-stages:
* Get circuit provides the circuit by generating or reading a pre-generated circuit from the file.
* Transpile converts the circuit shown in Eq. <ref> into the native control software gate format.
* RIP is the software parameterization stage identifying the structural equivalency, extracting the phases, and modifying the circuit for PCE.
* Active is an optional stage to convert a circuit from a passive reset (500 μ s in current QPU) to an active reset circuit.
* Build Run transfers the control from the application layer to the host computer software layer and receives the quantum data from the control system.
* Total is the complete execution time from the start of circuit creation to the display of the result for an experiment.
Host Computer Software Layer: the application layer initiates this layer for circuit execution. Like the application layer, this layer also runs on the desktop as a Python module. This layer initiates the circuit execution by compiling and assembling the circuit and transferring the output to the software running on the control hardware by acting as a client in a remote procedure call (RPC). After the circuit execution, it reads the data from the RPC server running on the control hardware and passes it to the application layer. This layer has the following stages and sub-stages:
* Compile converts the circuit from the native control software gate format into control hardware-specific assembly instructions.
* Assemble converts the assembly language instructions into a machine code.
* RunAll on Host covers the time to run the circuits and get the result data.
* Run on Host measures the run time of the circuits from the host computer.
* Data Sort converts the raw IQ data into a distribution of bit strings.
* Client/Server is a post-processing calculation for measuring the time taken for data transmission between the RPC client and server.
Control Software Layer: the control software layer runs on the ARM processor of the control hardware platform (AMD ZCU216 RFSoC) as a Python module. The control software receives the circuits and the execution parameters from the host computer software layer via the RPC protocol. The control software interacts with the control hardware FPGA to load all the circuit elements, run the circuits, and collect the data. This process has the following stages:
* Load Batch transfers all the circuit parameters from the ARM processor to different memories in the FPGA. This stage has the following sub-stages:
* Load circuit transfers the quantum circuits from the ARM processor to the command memory in FPGA.
* Load definition is a multi-stage loading of circuit parameters such as signal envelope and frequencies.
* Load env. transfers the signal envelope.
* Load freq. transfers the frequency parameters.
* Load zero clears the command buffer.
* Load para transfers the parameters peeled in RIP to the parameter memory in FPGA.
* Run Batch starts with the run trigger to start the circuit execution and ends with getting the quantum data.
* Start Run measures the run time of the circuit on the FPGA.
* Get data is the time to read the quantum measurement data from the FPGA.
* Stitch is a post-processing measurement of the time taken by the stitch module to serve the stitch requests inside the FPGA.
§.§ Randomized Compiling
To benchmark the runtime improvements of PCE for classes of equivalent circuits, we profile the total execution time of random circuits at varying depths (from 1 – 100, defined by the number of two-qubit gate cycles) and widths (2 – 8 qubits) measured with randomized compiling (RC), shown in <ref>. We measure each circuit under RC with N=20 randomizations (50 shots/randomization) and with N=1000 randomizations (1 shot/randomization). The latter case is called the fully randomly compiled (FRC) limit <cit.>, since each randomization is measured a single time. We generate RC circuits using <cit.> for both the software and parameterized protocols. The software implementation follows the existing software infrastructure to generate sequences and upload them to the control hardware for each randomization. The PCE implementation follows the RIP and Stitch protocols. The experiment details for the different RC cases are listed in Appendices <ref> and <ref>.
In <ref>(a) and (b), we plot the profiling results for the software and parameterized RC runs with N=20 randomizations, respectively. In <ref>(c) and (d), we plot the profiling results for the software and parameterized RC runs with N=1000 randomizations, respectively. We observe that parameterized RC provides a ∼3× and ∼5× speedup over software RC for N = 20 and N = 1000 randomizations, respectively. For software RC, the main temporal bottleneck is the compile and assemble stages, due to the need to compile many different circuits down to the low-level machine code. For parameterized RC, the main temporal bottleneck is pre-compile, stemming from the compilation of circuits into the native gates.
§ VALIDATION, BENCHMARKING, AND CHARACTERIZATION
The PCE protocol applies to many classes of circuits and is primarily beneficial in the regime in which a large number of structurally-equivalent circuits need to be measured. A prototypical example of this is benchmarking and characterization circuits. For example, the circuits to perform quantum process tomography (QPT) are identical, except for a change in preparation and measurement basis. Basis rotations can be expressed in terms of combinations of a single physical gate (e.g., X_π/2) and virtual Z gates; in this case, many QPT circuits will have the same structure of physical pulses which differ only by a change in virtual phase. Here, we validate the performance of our PCE protocol against the standard protocols for randomized benchmarking (RB) <cit.>, cycle benchmarking (CB) <cit.>, and gate set tomography (GST) <cit.>.
§.§ Randomized Benchmarking
A depth-m RB circuit consists of m randomly sampled Clifford gates followed by a single inversion gate that rotates the system back to the starting state. For single-qubit RB, all single-qubit Cliffords can be decomposed into native gates and virtual phases via <ref>. Thus, a depth-m single-qubit RB circuit consists of 2(m + 1) physical X_π/2 gates and 3(m + 1) virtual Z gates. This makes RB an ideal candidate for PCE, since all waveforms for a given circuit depth are structurally equivalent. In <ref>, we plot single-qubit RB results measured using the standard “software”-based procedure, whereby all circuits are independently sequenced and measured, and compare the results to the same circuits executed using PCE. We observe perfect agreement between the two (up to the uncertainty), with a process infidelity (i.e., error per Clifford) of 1.7(2) × 10^-3 and 1.5(1) × 10^-3 for the software and PCE results, respectively.
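The counting argument behind this structural equivalence can be restated in a few lines of Python (a restatement of the pulse budget above, not part of the actual tooling):

def rb_pulse_budget(m):
    # m random Cliffords plus one inversion gate, each expanded with the
    # ZXZXZ rule: 2 physical X90 pulses and 3 virtual-Z gates per gate.
    n_gates = m + 1
    return {"x90_pulses": 2 * n_gates, "virtual_z_gates": 3 * n_gates}

print(rb_pulse_budget(100))  # {'x90_pulses': 202, 'virtual_z_gates': 303}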
In Figs. <ref>(c) and (d), we profile the time it takes to perform RB on the host and control systems, respectively, using the experiment parameters described in Appendix <ref>. We observe that PCE provides a 27× speedup for the compile stage and a 13× speedup for the assemble stage on the host computer. On the control side, we observe a 7× speedup for loading the circuits, but we incur the additional cost of loading the parameters onto the FPGA. Altogether, we observe a 4× speedup on the classical side of performing RB and a 2.5× speedup for the overall run time, as shown in <ref>(e) (here, the quantum measurement time is the same in both cases, since the total number of shots is the same). Also shown in <ref>(e) is a comparison of the total execution time for software-based and PCE-based RB when the number of qubits is scaled from 1 to 8. The speedup in <ref>(e) is lower compared to <ref>(c) and (d) for two reasons. First, the data read-back time, which is constant for both software and PCE, adds 22 seconds, which is about 10% of the overall classical time, as given in Table <ref>. Second, the circuit runtime with a passive reset of 500 ns accounts for 23% of the runtime. We observe that while the classical cost of both the software and PCE protocols scales linearly in the number of qubits, this cost grows much more rapidly for the software-based protocol than for PCE, demonstrating that PCE gives us a larger relative speedup as the number of qubits grows.
It should be noted that PCE might not provide drastic improvements for all protocols. For example, for two-qubit RB, one must decompose each two-qubit Clifford gate into native gates, which does not always result in the same number of native operations (any two-qubit unitary can be expressed with as few as one native two-qubit gate or as many as three native two-qubit gates). Therefore, not all circuits at a given circuit depth will be structurally equivalent, limiting the effectiveness of the protocol.
§.§ Cycle Benchmarking
A depth m CB circuit consists of m interleaved cycles of n random Pauli gates followed by an n-qubit gate cycle, as well as a single cycle of Paulis at the end of the sequence which rotates the system back to the Pauli eigenstate in which it started. Thus, all CB circuits of a given depth are structurally-equivalent and can be implemented with PCE, since the single-qubit Pauli gates are decomposed according to <ref> and the n-qubit gate cycle is decomposed in a manner which depends on the gates in the cycle. Here, our interleaved gate is a two-qubit CZ gate, a native gate in our system, and is directly compiled down to the pulse definition of the gate. In Figs. <ref>(a) – (b), we plot the exponential decays for CB measured using the standard “software”-based procedure, whereby all circuits are independently sequenced and measured, and compare the results to the same circuits executed using PCE. In <ref>(c) – (d), we plot the individual Pauli infidelities (which define the different bases in which the system is prepared and measured), as well as the average process infidelity; we find that the software and PCE results are in perfect agreement (up to the uncertainty), with a (dressed) CZ process infidelity of 2.1(1) × 10^-2 and 2.19(9) × 10^-2 for the software and PCE results, respectively.
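The structural skeleton of such a circuit can be sketched as below; because every random Pauli is expanded via the ZXZXZ rule and the interleaved CZ cycle is fixed, all depth-m CB circuits share one pulse structure and only the virtual phases differ:

import random

def cb_circuit_skeleton(m, qubits=(0, 1)):
    # m cycles of (random Pauli layer, interleaved CZ) plus a closing Pauli layer.
    layers = []
    for _ in range(m):
        layers.append(("pauli_layer", tuple(random.choice("IXYZ") for _ in qubits)))
        layers.append(("cz", qubits))
    layers.append(("pauli_layer", tuple(random.choice("IXYZ") for _ in qubits)))
    return layers

print(len(cb_circuit_skeleton(4)))  # 2 * 4 + 1 = 9 layers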
In Figs. <ref>(c) and (d), we profile the time it takes to perform CB on the host and control systems, based on the experiment parameters described in Appendix <ref>. We observe that PCE provides a 252× speedup for the compile stage and a 187× speedup for the assemble stage on the host computer. These values are greater than for RB because CB typically requires more circuits than RB (due to the need to prepare and measure different bases). On the control side, we observe a 100× speedup for loading the circuits, but we have the additional cost of loading the parameters onto the FPGA. Altogether, we observe a 5× speedup on the classical side of performing CB and a 1.8× speedup for the overall run time, as shown in <ref>(g). As with RB, CB is also affected by the data read-back, which is 28% of the overall classical time, as given in Table <ref>, and by the circuit runtime, which constitutes 44% of the total execution time. Moreover, similar to RB, the classical run-time improvement grows larger as we scale the number of qubits from 2 – 8.
§.§ Gate Set Tomography
Tomographic reconstruction methods, such as QPT and GST, scale exponentially in the number of qubits and are thus typically a very costly form of gate characterization. While PCE cannot change the poor scaling of tomography, it can improve their runtime performance, since there is a high degree of degeneracy in circuit structures, particularly for long-sequence GST. To demonstrate this, we perform two-qubit GST with and without PCE (Appendix <ref> describes the experiment parameters), and summarize some of the relevant performance metrics in Table <ref>. We observe that the estimated entanglement infidelities for all of the gates in the gate set (except Y_π/2 on Q2) are equal up to 10^-3 (10^-2) for the single-qubit gates (two-qubit CZ gate). However, we observe that the model violation (N_σ) is significantly lower for PCE (411) than for the standard case (2672), suggesting that the GST model for the PCE data is more trustworthy. This could be, in part, due to the fact that PCE reduces the total execution time (see Table <ref>), and thus would be less susceptible to any kind of parameter drift during the execution of the circuits.
In Figs. <ref>(a) and (b), we profile the time it takes to perform GST on the host and control systems, respectively. We observe that PCE provides an 89× speedup for the compile stage and a 37× speedup for the assemble stage on the host computer. On the control side, we observe a 22× speedup for loading the circuits, but we have the additional cost of loading the parameters onto the FPGA. Altogether, we observe a 6.86× speedup on the classical side of performing GST, even though, as shown in Table <ref>, the classical time accounts for only about 13.34% of the total GST execution time.
§ CONCLUSIONS
In this work, we develop a protocol for implementing parameterized circuits on our FPGA-based control hardware, QubiC. Our protocol, PCE, incorporates both software and hardware components for identifying structurally-equivalent circuits, determining the parameterized rotation phases and storing them in memory on the FPGA, and stitching these phases back into a physical waveform loaded onto the FPGA in order to measure different circuits. We demonstrate significant time savings for a number of different benchmarking and characterization protocols, highlighted in Table <ref>. Depending on the protocol, we observe a reduction of the classical time by 64% to 80% and a speedup of between 2.55× and 6.86×.
PCE can be applied to many different classes of circuits, and the examples shown here are only representative of typical workflows in QCVV. However, many other classes of circuits, for example variational circuits <cit.>, are parameterizable, and future work could explore further applications that could benefit from PCE. Additionally, calibrating quantum gates often requires sweeping a pulse definition over many different pulse parameters, and adapting PCE to gate calibration could lead to significant time savings in processor bring-up. Finally, we believe that PCE is an important example of how classical control hardware can be utilized to reduce the overhead in quantum workflows, and, in addition to related co-designed protocols developed for <cit.>, we plan to explore more ways in which classical resources can be utilized to improve the efficiency of quantum computations.
§ ACKNOWLEDGMENTS
The majority of this work was supported by the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract No. DE-AC02-05CH11231. Y.X., I.S., and G.H. acknowledge financial support for the primary development of the hardware from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research Quantum Testbed Program under Contract No. DE-AC02-05CH11231 and the Quantum Testbed Pathfinder Program.
§ AUTHOR CONTRIBUTIONS
A.D.R. designed the RIP and Stitch method and performed the experiments. A.D.R. and A.H. performed the data analysis. N.F., Y.X., and G.H. developed the classical control hardware used in this work. J.H. assisted in the design and analysis of the GST data. A.D.R. and A.H. wrote the manuscript with input from all coauthors. I.S., K.K., and K.N. supervised all work.
§ COMPETING INTERESTS
The hardware-assisted parameterized circuit execution presented in this work is protected under the U.S. patent application no. 63/651,049 (patent pending), filed by Lawrence Berkeley National Laboratory on behalf of the following inventors: A.D.R., N.F., A.H., K.N., G.H., Y.X., and I.S.
§ DATA AVAILABILITY
All data are available from the corresponding author upon reasonable request.
§ EXPERIMENT PARAMETERS
Below are the experiment parameters used for profiling in Sections <ref> and <ref>. Each circuit uses a delay of 500ns per shot to ensure starting at the ground state (passive reset).
§.§ Randomized Compiling - RC 20
* depths : [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
* widths : [(0,1), (0,1,2), (0,1,2,3), (0,1,2,3,4),
(0,1,2,3,4,5), (0,1,2,3,4,5,6), (0,1,2,3,4,5,6,7)]
* randomization per depth and width : 20
* Total number of circuits :
randomization*depth*width 20*11*7 = 1540
* shots per depth and width :
50 per randomization = 50×20 = 1000
§.§ RC Fully Randomly Compiled - RC FRC
* depths : [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
* widths : [(0,1), (0,1,2), (0,1,2,3), (0,1,2,3,4),
(0,1,2,3,4,5), (0,1,2,3,4,5,6), (0,1,2,3,4,5,6,7)]
* randomization per depth and width : 1000
* Total number of circuits
randomization*depth*width = 1000*11*7 = 7700
* shots per depth and width
1 per randomization = 1×1000 = 1000
§.§ Cycle Benchmarking - CB
* widths : [(0,1), (0,1,2,3), (0,1,2,3,4,5),
(0,1,2,3,4,5,6,7)]
* randomization : [[4, 16, 64], [4, 8, 32], [2, 4, 8],
[2, 4, 8]]
* number of circuits per width : [540, 660, 960, 1080]
* Total number of circuits : 540+640+960+1080 = 3240
* shots per circuit : 100
§.§ Randomized Benchmarking - RB
* widths : [(0),(0,1), (0,1,2), (0,1,2,3), (0,1,2,3,4),
(0,1,2,3,4,5), (0,1,2,3,4,5,6), (0,1,2,3,4,5,6,7)]
* randomization : [(16,128,384), (16,96,384),
(16,64,256), (16,64,192), (8,64,192), (8,32,160),
(4,32,160), (4,32,128)]
* number of circuits per width :
(randomization * number of circuit per randomization) + read circuits = 3×30 + 2 = 90 + 2 = 92
* Total number of circuits : circuit per width * width = 92*8 = 736
* shots per circuit : 100
§.§ Gate Set Tomography - GST
* depths : [2, 4, 8, 16, 32, 64, 128]
* widths : [(0,1)]
* Total number of circuits : 19488
* shots per circuit : 1000
http://arxiv.org/abs/2409.03463v1 | 20240905121907 | Characterizing Massive Activations of Attention Mechanism in Graph Neural Networks | [Lorenzo Bini, Marco Sorbi, Stephane Marchand-Maillet] | cs.LG | [cs.LG, cs.AI]
§ ABSTRACT
Graph Neural Networks (GNNs) have become increasingly popular for effectively modeling data with graph structures. Recently, attention mechanisms have been integrated into GNNs to improve their ability to capture complex patterns. This paper presents the first comprehensive study revealing a critical, unexplored consequence of this integration: the emergence of Massive Activations (MAs) within attention layers. We introduce a novel method for detecting and analyzing MAs, focusing on edge features in different graph transformer architectures. Our study assesses various GNN models using benchmark datasets, including ZINC, TOX21, and PROTEINS. Key contributions include (1) establishing the direct link between attention mechanisms and the generation of MAs in GNNs, (2) developing a robust definition and detection method for MAs based on activation ratio distributions, and (3) introducing the Explicit Bias Term (EBT) as a potential countermeasure and exploring it as an adversarial framework to assess model robustness based on the presence or absence of MAs. Our findings highlight the prevalence and impact of attention-induced MAs across different architectures, such as GraphTransformer, GraphiT, and SAN. The study reveals the complex interplay between attention mechanisms, model architecture, dataset characteristics, and the emergence of MAs, providing crucial insights for developing more robust and reliable graph models.
Code: https://github.com/msorbi/gnn-ma
§ INTRODUCTION
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning representations of graph-structured data, demonstrating remarkable success across various applications such as social network analysis <cit.>, recommendation systems <cit.> and molecular biology <cit.>. Central to the recent advancements in GNNs is the integration of attention mechanisms, which enable the models to focus on the most relevant parts of the input graph, thereby enhancing their ability to capture intricate patterns and dependencies.
Despite the substantial progress, the phenomenon of Massive Activations (MAs) within attention layers has not been thoroughly explored in the context of GNNs. MAs, characterized by exceedingly large activation values, can significantly impact the stability and performance of neural networks. In particular, understanding and mitigating MAs in GNNs is crucial for ensuring robust and reliable model behavior, especially when dealing with complex and large-scale graphs.
In this paper, we aim to bridge this gap by systematically investigating the occurrence and implications of MAs in attention-based GNNs. We focus on edge features in graph transformers, a state-of-the-art GNN architecture, and analyze how these features contribute to the emergence of MAs. Our study reveals that certain graph structures and edge configurations are more prone to inducing MAs, which in turn affects the overall performance and interpretability of the models.
To address these challenges, we propose a novel methodology for detecting and analyzing MAs in GNNs. Our approach involves a comprehensive evaluation of various GNN architectures, including GraphTransformer <cit.>, GraphiT <cit.>, and SAN <cit.>, across multiple benchmark datasets, such as ZINC <cit.>, TOX21 <cit.>, and OGBN-PROTEINS <cit.>, which differ in their downstream tasks, namely graph regression, multi-label graph classification, and multi-label node classification. We introduce specific criteria for identifying MAs and conduct extensive ablation studies to elucidate the role of edge features in this context.
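For orientation, a ratio-based detector in the spirit of the criteria used for LLMs and ViTs is sketched below; the thresholds are illustrative assumptions, and the precise criteria adopted in this work are stated in the characterization section.

import torch

def find_massive_activations(attn_output, abs_min=100.0, ratio_min=1000.0):
    # Flag entries whose magnitude is large in absolute terms AND much larger
    # than the median magnitude of the attention-layer output. The thresholds
    # here are illustrative, not the values adopted in this paper.
    mags = attn_output.abs()
    median = mags.median()
    mask = (mags > abs_min) & (mags > ratio_min * median)
    return [(int(i), int(j), float(mags[i, j])) for i, j in mask.nonzero(as_tuple=False)]

acts = torch.randn(64, 16)
acts[3, 5] = 1000.0  # inject one outlier for demonstration
print(find_massive_activations(acts))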
This study represents the first comprehensive investigation of MAs in GNNs, laying the groundwork for future research. Our findings suggest that the scope of MAs analysis can be expanded to include a wider range of architectures and the evaluation of state-of-the-art attack methods, ultimately enhancing our understanding of MAs' influence on GNN performance and robustness. This is crucial for developing more robust and reliable graph transformer models, especially given the increasing popularity and widespread adoption of transformers in various applications today.
Our contributions are threefold:
* We provide the first systematic study on MAs in attention-based GNNs, highlighting their prevalence and impact on model performance.
* We propose a robust detection methodology for MAs, accompanied by detailed experimental protocols and ablation studies.
* We introduce the Explicit Bias Term (EBT) as a potential countermeasure for MAs, and we exploit it in an adversarial framework, called the Explicit Bias Attack, to demonstrate the effectiveness of MAs in compromising GNN robustness.
Through this work, we aim to shed light on a critical yet understudied aspect of attention-based GNNs, offering valuable insights for the development of more resilient and interpretable graph-based models. All datasets and code are publicly available on our GitHub repository.
§ RELATED WORKS
GNNs have become effective instruments for studying and extracting insights from graph-structured data, with uses spanning fields like fraud detection <cit.>, traffic prediction <cit.> and recommendation systems <cit.>. The evolution of GNNs has been marked by significant advancements in their architectures and learning mechanisms, with a recent focus on incorporating attention mechanisms to enhance their expressive power and performance. The introduction of attention in GNNs was largely inspired by the success of transformers in natural language processing <cit.>. Graph Attention Networks (GATs) <cit.> were among the first to incorporate self-attention into GNNs, allowing nodes to attend differently to their neighbors based on learned attention weights. This innovation significantly improved the model's ability to capture complex relationships within graph structures.
Building upon the success of GATs, several variants and extensions have been proposed. GraphiT <cit.> introduced a generalization of transformer architectures to graph-structured data, incorporating positional encodings and leveraging the power of multi-head attention mechanisms. Similarly, the Structure-Aware Network (SAN) <cit.> proposed a novel attention mechanism that explicitly considers the structural properties of graphs, leading to improved performance on various graph-based tasks.
Recent studies on Large Language Models (LLMs) and Vision Transformers (ViTs) have revealed the presence of MAs within their internal states, specifically in the attention layer's output <cit.>. This phenomenon prompted investigations into the role of these activations in model behavior, performance, and potential vulnerabilities. Similar observations were made in Vision Transformers (ViTs) <cit.>, suggesting that the presence of MAs might be a common feature in transformer-based architectures across different domains. These findings have led to a growing interest in understanding the implications of MAs for model interpretability, robustness, and potential vulnerabilities to adversarial attacks.
The study of internal representations in deep learning models has been a topic of significant interest in the machine learning community. Works such as <cit.> have explored the interpretability of neural networks by analyzing activation patterns and their relationships to input features and model decisions. However, the specific phenomenon of MAs in GNNs has remained largely unexplored until now, representing a crucial gap in our understanding of these models.
The intersection of adversarial attacks and GNNs is another relevant area of study that relates to the investigation of MAs. Previous work has explored various attack strategies on graph data <cit.>, including topology attacks, feature attacks, adversarial training and hybrid approaches. However, the potential vulnerabilities introduced by MAs represent a novel direction for research in this field. Understanding how MAs might be exploited or manipulated by adversarial inputs could lead to the development of more robust GNN architectures.
However, in the broader context of neural network analysis, techniques for probing and interpreting model internals have been developed. Methods such as feature visualization <cit.> and network dissection <cit.> have provided insights into the functions of individual neurons and layers in convolutional neural networks. Adapting and extending these techniques to analyze MAs in GNNs could provide valuable insights into their role and impact in possible future works.
Finally, the study of attention mechanisms in various neural network architectures has also yielded insights that may be relevant to understanding MAs in GNNs. Work on attention flow <cit.> and attention head importance <cit.> in transformer models has shown that not all attention heads contribute equally to model performance, and some may even be pruned without significant loss of accuracy. These findings raise questions about whether similar patterns might exist in graph transformer models and how they might relate to the presence of MAs.
§ TERMINOLOGY OF MASSIVE ACTIVATIONS IN GNNS
Building upon the work on MAs in LLMs <cit.>, we extend this investigation to GNNs, focusing specifically on graph transformer architectures. Our study encompasses various models, including GraphTransformer (GT) <cit.>, GraphiT <cit.>, and Structure-Aware Network (SAN) <cit.>, applied to diverse task datasets such as ZINC, TOX21, and OGBN-PROTEINS (see Supplementary Material for details on models' configurations and datasets' composition). This comprehensive approach allows us to examine the generality of MAs across different attention-based GNN architectures.
§.§ Characterization of Massive Activations
MAs in GNNs refer to specific activation values that exhibit unusually high magnitudes compared to the typical activations within a layer. These activations are defined by the following criteria, where the value of an activation always refers to its absolute value:
Magnitude Threshold: An activation is classified as massive if its value exceeds a predetermined threshold. This threshold is typically set to a value that is significantly higher than the average activation value within the layer, ensuring that only the most extreme activations are considered.
Relative Threshold: In the paper by <cit.>, MAs were defined as at least 1,000 times larger than the median activation value within the layer. This relative threshold criterion helped differentiate MAs from regular high activations that might occur due to normal variations in the data or model parameters.
The formal definition was represented as:
MAs = {a | a > 100 and a > 1000 ×median(𝐀)}
where 𝐀 represents the set of activation values in a given layer.
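As a minimal illustration, this fixed-threshold criterion can be implemented in a few lines; the sketch below assumes PyTorch and a generic activation tensor, and is not the exact code used in our experiments.
import torch

def massive_activation_mask(activations: torch.Tensor) -> torch.Tensor:
    # activations: hidden activations of a given layer (any shape)
    abs_act = activations.abs()
    median = abs_act.median()
    # "massive" if the magnitude exceeds 100 and 1000x the layer median
    return (abs_act > 100) & (abs_act > 1000 * median)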
However, in contrast to previous studies that employed a fixed relative threshold, our approach adopts a more rigorous method. We estimate MAs by comparing the distributions of activation ratios between a base, untrained model with Xavier weight initializations <cit.>, and a fully trained model. This method ensures a more precise identification of MAs based on empirical data rather than an arbitrary fixed threshold. In this way, the untrained model serves as a reference for identifying unusual activations that emerge during training.
§.§.§ Detection Methodology
For both the base and trained models, we detected the MAs following a systematic procedure:
Normalization: We normalized the activation values within each layer, dividing them by the edge median of the layer, to account for variations in scale between different layers and models. This normalization step ensures a consistent basis for comparison. The choice of dividing by the edge median is motivated by the large number of MAs present: almost every edge in the layers exhibiting MAs holds at least one MA, as shown in <Ref>. This is probably because attention is computed only between pairs of adjacent nodes, in contrast to LLMs where it is computed among all pairs of tokens; as a result, the model tends to spread MAs across almost all edges to make them "available" to the whole graph. Indeed, <Ref> indicates that MAs are a common phenomenon across different models and datasets, that they are not confined to specific layers but are distributed throughout the model architecture, and that MAs are an inherent characteristic of the attention-based mechanism in graph transformers and related architectures, not strictly dependent on the choice of dataset.
Batch Analysis: We analyzed the activations on a batch-by-batch basis, minimizing the batch size, to have suitable isolation between the MAs and to ensure that the detection of MAs is not influenced by outliers in specific samples.
For each activation, we computed the ratio of its magnitude to the edge median:
ratio(activation) = abs(activation)/median(abs(edge_activations))
and activations whose ratio exceeds the threshold are flagged as massive.
Then, we considered the maximum ratio of each batch to detect those containing MAs.
Layer-wise Aggregation: We performed this analysis across multiple layers of the model to identify patterns and layers that are more prone to exhibiting MAs. This layer-wise aggregation helps in understanding the hierarchical nature of MAs within the model.
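The steps above can be summarized by the following sketch (PyTorch assumed; tensor layouts, names, and batching are illustrative rather than our actual implementation).
import torch

def batch_max_ratio(edge_activations: torch.Tensor) -> float:
    # edge_activations: [num_edges, hidden_dim] output of one attention layer
    abs_act = edge_activations.abs()
    # normalization: divide by the edge median of the layer
    ratios = abs_act / abs_act.median()
    # batch analysis: keep the maximum ratio observed in this batch
    return ratios.max().item()

def layerwise_max_ratios(per_layer_edge_activations):
    # layer-wise aggregation across the layers of the model
    return [batch_max_ratio(acts) for acts in per_layer_edge_activations]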
<Ref> reports the analysis results. The batch ratios increase significantly in the trained transformers with respect to the base ones, often even exceeding the threshold of 1000 defined by previous works <cit.>, demonstrating the presence of MAs in graph transformers as well.
§ METHODOLOGY AND OBSERVATIONS
Focusing on edge features, we first analyzed the ratio defined in <Ref>, taking the maximum for every batch, across the layers of each selected model and dataset, and visually compared the outcomes to the value ranges obtained using the same model in a base state (with its parameters randomly initialized, without training) to verify the appearance of MAs. The graphical comparison, reported in <Ref>, shows ratios above the base range in most of the trained models, indicating MAs.
To better characterize MAs, we studied their distribution using the Kolmogorov–Smirnov statistic <cit.>. We found that a gamma distribution approximates the negative logarithm of the activations' magnitudes, as well as of their ratios, well. <Ref> shows this approximation for a base model layer.
We point out that, according to the existing definition, items to the left of -3 are MAs.
We then compared the distributions of the log-values between the base and trained models. <Ref> illustrates this comparison, highlighting a significant shift in the distribution of the trained model compared to the base model. Moreover, this shift underscores the emergence of MAs during the training process, affirming that the threshold around -log(ratio) = -3 (i.e., a ratio of 1000 or higher) effectively captures these significant activations, though sometimes it appears to be slightly shifted to the right as in <Ref>.
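A sketch of this distributional comparison is given below (SciPy assumed; the histogramming and fitting choices are simplified with respect to the figures).
import numpy as np
from scipy import stats

def gamma_ks(ratios: np.ndarray):
    # negative log10 of the activation ratios; MAs lie to the left of -3
    neg_log = -np.log10(ratios)
    # fit a (shifted) gamma distribution to the sample
    shape, loc, scale = stats.gamma.fit(neg_log)
    # Kolmogorov-Smirnov statistic between the sample and the fitted gamma
    ks_stat, _ = stats.kstest(neg_log, "gamma", args=(shape, loc, scale))
    return (shape, loc, scale), ks_stat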
When MAs appear, we have observed two possible phenomena:
* Many massive activation values are added on the left-hand side of the distribution, preventing a good approximation (<Ref>).
* A few values appear on the left-hand side of the distribution, as spikes, humps, or out-of-distribution values, which may or may not deteriorate the approximation, as shown in <Ref>.
For example, the histogram in <Ref> shows the base model with untrained weights (only Xavier initialization). The gamma approximation fits the sample histogram well, with a low Kolmogorov–Smirnov (KS) statistic of 0.020, indicating an excellent fit.
<Ref> shows that the distribution of the trained model exhibits a significant shift due to a big hump appearing on the left side, representing extreme activation ratios (MAs). Indeed, the gamma approximation does not fit well, with a higher KS statistic of 0.168, indicating a poor match caused by the presence of MAs.
Moreover, in the histogram of <Ref> the trained model's distribution exhibits a clear spike on the left side at -log(ratio) = -3, corresponding to a ratio of 1000. This separation indicates the distinction between basic and massive activation regimes. The gamma distribution does not fit well in this case, because the spike prevents a good approximation; the KS statistic of 0.027 highlights the model's shift due to training.
<Ref> also shows the trained model's distribution, with a noticeable hump on the left side indicating MAs. The gamma approximation fits better than in <Ref>, with a KS statistic of 0.019, but still indicates the presence of MAs in the trained model, meaning that MAs have been added on the left-hand side of the distribution.
§.§ Insights and Implications
From <Ref> and <Ref> we can highlight the following points.
* Dataset Influence:
* The ZINC and OGBN-PROTEINS datasets consistently show higher activation values across all models compared to TOX21, suggesting that the nature of these datasets significantly influences the emergence of MAs, even though many MAs still emerge from GT on TOX21.
* Model Architecture:
* Different GNN models exhibit varying levels of MAs. For instance, GraphTransformer and GraphiT tend to show more pronounced MAs than SAN, indicating that model architecture plays a crucial role.
* Impact of Attention Bias:
* Previous works suspect that MAs act as a learned bias, showing that they disappear when an explicit bias is introduced at the attention layer. This holds for LLMs and ViTs, and for our GNNs as well, as shown in <Ref>, where the presence of MAs is affected by the introduction of the Explicit Bias Term in the attention. <Ref> and the text below suggest that MAs are intrinsic to the models' functioning, being anti-correlated with the learned bias.
The consistent observation of MAs in edge features, across various GNN models and datasets, points to a fundamental characteristic of how these models process relational information.
Inspired by recent advancements in addressing bias instability in LLMs <cit.>, we introduced an Explicit Bias Term (EBT) into our graph transformer models. This bias term is discovered to counteract the emergence of MAs by stabilizing the activation magnitudes during the attention computation. The EBT is computed as follows:
b_e = Q k e'
b_v = softmax(A_e) v' ,
where k, e, v ∈ℝ^d are the key, edge, and node value bias terms (one per each attention head), and A_e is the edge attention output. b_e and b_v represent the edge and node bias terms and are added to the edge and node attention outputs, respectively. By incorporating EBT into the edge and node attention computations, and adding bias in the linear projections of the attention inputs, we regulated the distribution of activation values, thus mitigating the occurrence of MAs.
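The sketch below illustrates the idea in a simplified, single-head form (PyTorch assumed); the shapes and the way the bias vectors enter are a schematic reading of the equations above, not the exact formulation used in our models.
import torch
import torch.nn as nn

class ExplicitBiasTerm(nn.Module):
    # Learnable key/edge/value bias vectors (one set per attention head).
    def __init__(self, d: int):
        super().__init__()
        self.k_bias = nn.Parameter(torch.zeros(d))  # key bias term
        self.e_bias = nn.Parameter(torch.zeros(d))  # edge bias term
        self.v_bias = nn.Parameter(torch.zeros(d))  # node value bias term

    def forward(self, Q: torch.Tensor, edge_attn_out: torch.Tensor):
        # Q: [num_edges, d] queries gathered on edges
        # edge_attn_out: [num_edges, d] edge attention output A_e
        b_e = (Q * self.k_bias) * self.e_bias                      # edge bias term
        b_v = torch.softmax(edge_attn_out, dim=-1) * self.v_bias   # node value bias term
        return b_e, b_v  # added to the edge and node attention outputs, respectively
In practice, biases are also enabled in the linear projections of the attention inputs, as described above.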
As shown in <Ref>, the introduction of these bias terms significantly reduces the frequency and magnitude of MAs, bringing the activation ratios closer to those observed in the base models. The effect of EBT is evident across all the different datasets: whether the dataset is ZINC, TOX21, or OGBN-PROTEINS, the activation ratios are brought closer to the baseline levels observed in the untrained models. This consistency underscores the general applicability of EBT in various contexts and downstream tasks. Moreover, <Ref> shows that EBT mitigates MAs across different layers of the models. This is crucial as it indicates that EBT's effect is not limited to specific parts of the network but extends throughout the entire architecture. For example, GraphTransformer on ZINC without EBT shows MAs frequently exceeding 10^4, whereas with EBT applied these ratios are significantly reduced, aligning more closely with the base model's range.
<Ref> shows that EBT does not systematically influence the test loss equally across different models and datasets. We have considered the test loss metric to keep the approach general, making it extendable to different downstream tasks. This ensures that the proposed method can be applied broadly across various applications of graph transformers.
Although the test loss remains relatively unchanged with the introduction of EBT, its presence helps in mitigating the occurrence of MAs, as evidenced by the reduction in extreme activation values observed in earlier figures. By analyzing these results, it becomes evident that while EBT does not drastically alter the test performance, it plays a crucial role in controlling activation anomalies, thereby contributing to the robustness and reliability of graph transformer models.
In the next section, we will demonstrate how attacking the model with and without MAs can directly impact the robustness of the architectures. This will provide deeper insights into the robustness of graph transformers in the presence of MAs, suggesting their potential pitfalls.
§ EXPLICIT BIAS ATTACK
The study of adversarial attacks on GNNs has become increasingly important as these models are deployed in critical applications. While various attack strategies have been explored <cit.>, the vulnerability introduced by MAs remains largely unexplored. Understanding how MAs can be exploited by adversaries is crucial for developing more robust GNN architectures and their downstream tasks. In this section, we propose the Explicit Bias Attack, a gradient-based method designed to exploit MAs and assess model robustness. Our approach is inspired by gradient ascent attacks previously applied to image classifiers <cit.> and adapted for graph data <cit.>. By analyzing the effectiveness of gradient ascent attack with and without the presence of EBT and MAs, we aim to provide insights into the role of these activations in model fragility.
Therefore, inspired by the previous section, we exploited EBT as computed in <Ref> and <Ref> to analyze the importance of MAs for a gradient ascent attack at test time, where noise (added to the input feature embedding) is learned to directly maximize the loss function.
The effectiveness of an attack is evaluated by comparing the average test loss before and after the attack (i.e., with random and optimized noise, with the same standard deviations, respectively), using a gain defined as
attack gain = (optimized noise loss - random noise loss) / random noise loss
thus a higher gain means a more dangerous attack.
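A minimal sketch of the gradient-ascent noise attack and of the gain computation is shown below (PyTorch assumed; the model call, loss, and data handling are simplified, and the rescaling step is one way to keep the optimized noise at the same standard deviation as the random one).
import torch

def attack_gain(model, x, y, loss_fn, std=0.03, epochs=1000, lr=1e-2):
    # x: input feature embedding, y: targets
    noise = std * torch.randn_like(x)
    random_noise_loss = loss_fn(model(x + noise), y).item()
    noise.requires_grad_(True)
    for _ in range(epochs):
        loss = loss_fn(model(x + noise), y)
        grad, = torch.autograd.grad(loss, noise)
        with torch.no_grad():
            noise += lr * grad                           # gradient ascent on the loss
            noise *= std / noise.std().clamp_min(1e-12)  # keep the same standard deviation
    optimized_noise_loss = loss_fn(model(x + noise.detach()), y).item()
    return (optimized_noise_loss - random_noise_loss) / random_noise_loss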
We focus on GraphTransformer (GT) with TOX21 because the presence of MAs in each layer - as shown by <Ref> - highlights the effect of MAs on the attack, and we compare the power of this method with and without the use of EBT, which suppresses the model's MAs.
<Ref> shows a consistent increase in gain in the presence of MAs, using noise with standard deviation values of 0.01, 0.03, and 0.1 (the input feature embedding has a standard deviation of about 0.9), optimized for 1000 epochs on the test set. <Ref> highlights that MAs can be dangerous for the robustness of a model and can potentially be exploited by attacks.
These results indicate that a gradient ascent attack is effective in degrading model performance, especially in the presence of MAs. However, the introduction of explicit bias, consistent with the reduction of MAs, can significantly mitigate the impact of the attack, leading to more robust models. This highlights the importance of considering bias in designing defenses against these types of adversarial attacks, to prevent them from exploiting the presence of MAs.
In future work, to enable us to comprehensively assess the correlation between model robustness/fragility and the presence of MAs, we intend to delve deeper into different graph attack configurations while targeting MAs. This will offer a richer understanding of how these vulnerabilities can be mitigated, in favor of more reliable models.
§ CONCLUSION AND FUTURE WORK
This paper presents the first comprehensive study of MAs in attention-based GNNs. We have introduced a novel methodology for detecting and analyzing MAs, focusing on edge features in various graph transformer architectures across multiple benchmark datasets. Our findings reveal that MAs are prevalent across different models and datasets, and demonstrate that they could be effectively leveraged by adversaries to degrade the performance of GNNs.
We showed that the introduction of Explicit Bias Terms (EBT) can effectively mitigate the occurrence of MAs, leading to more stable activation distributions. However, our results also showed that this mitigation does not always translate to improved test performance, highlighting the complex role of MAs in GNNs' behavior.
Furthermore, we introduced the Explicit Bias Attack, a gradient-ascent adversarial framework, that demonstrates how MAs, if not mitigated by EBT, can expose models to vulnerabilities in their tasks. This further points out the importance of considering these activations in the context of model robustness.
Future research will expand this analysis to a wider range of architectures and advanced attack methods, further clarifying the influence of MAs on GNN performance and robustness, and potentially leading to more interpretable and stable graph-based models. Specifically, future research could explore:
* Customized Adversarial MAs: Developing more adversarial techniques to regulate and attack these activations to enhance model stability and performance, like injecting fake MAs or exploiting state-of-the-art graph attack methods.
* Downstream-driven MAs: Leveraging MAs for specific downstream tasks, investigating how to harness these significant activations to improve models and their interpretability on specific assignments such as link prediction or drug design.
* Comparative Analysis: Extending the study to additional models and datasets to generalize the findings further and uncover broader patterns.
These insights provide a deeper understanding of the internal mechanisms of attention-based GNNs and pave the way for improvements in graph learning models. By addressing the challenges and opportunities presented by MAs, we can work towards developing more robust, interpretable, and effective GNN architectures for a wide range of applications.
§ REPRODUCIBILITY CHECKLIST
This paper:
* Includes a conceptual outline and/or pseudocode description of AI methods introduced: partial.
* Clearly delineates statements that are opinions, hypotheses, and speculation from objective facts and results: yes.
* Provides well-marked pedagogical references for less-familiar readers to gain the background necessary to replicate the paper: yes.
Does this paper make theoretical contributions? no.
Does this paper rely on one or more datasets? yes.
* A motivation is given for why the experiments are conducted on the selected datasets: yes.
* All novel datasets introduced in this paper are included in a data appendix: NA.
* All novel datasets introduced in this paper will be made publicly available upon publication of the paper with a license that allows free usage for research purposes: NA.
* All datasets drawn from the existing literature (potentially including authors’ own previously published work) are accompanied by appropriate citations: yes.
* All datasets drawn from the existing literature (potentially including authors’ own previously published work) are publicly available: yes.
* All datasets that are not publicly available are described in detail, with explanation why publicly available alternatives are not scientifically satisficing: NA.
Does this paper include computational experiments? yes.
* Any code required for pre-processing data is included in the appendix: yes.
* All source code required for conducting and analyzing the experiments is included in a code appendix: yes.
* All source code required for conducting and analyzing the experiments will be made publicly available upon publication of the paper with a license that allows free usage for research purposes: yes.
* All source code implementing new methods have comments detailing the implementation, with references to the paper where each step comes from: partial.
* If an algorithm depends on randomness, then the method used for setting seeds is described in a way sufficient to allow replication of results: yes.
* This paper specifies the computing infrastructure used for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating system; names and versions of relevant software libraries and frameworks: no.
* This paper formally describes evaluation metrics used and explains the motivation for choosing these metrics: yes.
* This paper states the number of algorithm runs used to compute each reported result: no.
* Analysis of experiments goes beyond single-dimensional summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information: no.
* The significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank): partial.
* This paper lists all final (hyper-)parameters used for each model/algorithm in the paper’s experiments: yes.
* This paper states the number and range of values tried per (hyper-) parameter during development of the paper, along with the criterion used for selecting the final parameter setting: NA.
|
http://arxiv.org/abs/2409.03006v1 | 20240904180150 | Black hole singularity resolution from unitarity | [
"Steffen Gielen",
"Lucía Menéndez-Pidal"
] | hep-th | [
"hep-th",
"gr-qc"
] |
School of Mathematical and Physical Sciences, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, United Kingdom ([email protected])
Departamento de Física Teórica, Universidad Complutense de Madrid, Parque de Ciencias 1, 28040 Madrid, Spain ([email protected])
§ ABSTRACT
We study the quantum dynamics of an interior planar AdS (anti–de Sitter) black hole, requiring unitarity in the natural time coordinate conjugate to the cosmological “constant of motion” appearing in unimodular gravity. Both the classical singularity and the horizon are replaced by a non-singular highly quantum region; semiclassical notions of spacetime evolution are only valid in an intermediate region. For the singularity, our results should be applicable to general black holes: unitarity in unimodular time always implies singularity resolution.
Black hole singularity resolution from unitarity
Lucía Menéndez-Pidal
September 9, 2024
================================================
Introduction.— The fate of classical singularities is one of the most important questions for any theory of quantum gravity; indeed, the incompleteness of classical relativity formalised in the Penrose–Hawking singularity theorems <cit.> is often cited as a main motivation for why gravity must be quantum. The most important singularities of direct relevance to our Universe are at the Big Bang and at the centre of black holes. In a first approximation, these situations can be represented through idealised, spatially homogeneous geometries whose high degree of symmetry allows for a quantum description at least at an effective level. One can then ask in such simple models what happens to the classical singularity.
In the context of homogeneous cosmology, the question of whether singularities are resolved through quantum effects (in what is usually called quantum cosmology) does not have a clear answer, since it depends both on the criteria for singularity resolution and on the precise definition of the model including the choice of quantum state <cit.>. Nevertheless, one can make general statements if the quantum theory is required to be unitary with respect to a given choice of time <cit.>: one would expect singularities to be resolved if they are only a finite “time” away for that specific notion of time, since singular evolution seems incompatible with the requirement of a global time translation operator. Inevitably, such a general result means that the property of singularity resolution depends on the choice of clock <cit.>, and signifies a general clash between general covariance and the quantum notion of unitarity <cit.>, in itself a somewhat controversial topic in quantum gravity.
Here we note that the results of Ref. <cit.> extend straightforwardly to the study of black hole singularities, in particular for the planar AdS black holes studied in Ref. <cit.> and related previous work in the context of AdS/CFT <cit.>. The interior metric studied in these works is of Kasner type, with a single anisotropy variable, and dynamically equivalent to a flat homogeneous, isotropic cosmology with a massless scalar field. The cosmological constant is taken to be negative and fixed, but reinterpreting it as a constant of motion as suggested by unimodular gravity <cit.> turns it into another global degree of freedom. The black hole interior is then classically identical to the cosmology studied in <cit.>, and its canonical quantisation can be studied along the same lines. If the Schrödinger-like time coordinate conjugate to the cosmological constant is used to define evolution, and one requires the theory to be unitary, all singularities are resolved <cit.>.
The notion of “singularity” in this context does not only apply to curvature singularities; a singularity is any point at which the classical evolution terminates, and where non-classical behaviour is necessarily required for a unitary quantum theory. These can be coordinate singularities <cit.>. In our black hole model, at the horizon the spatial volume goes to zero and the classical solution cannot continue. Quantum unitarity with respect to the preferred clock (here unimodular time) would then require the horizon to be similarly replaced by a highly quantum region in which classical evolution is not valid. This specific conclusion is sensitive to the particular choice of foliation which becomes singular at the horizon. However, the conclusion regarding black hole singularities is more generally valid since the singularity is only a finite time away for many possible observers. The Belinski–Khalatnikov–Lifshitz (BKL) conjecture states that approach to a generic singularity is described by Kasner-like dynamics, like the example studied here; this suggests that for many possible clock choices, the classical singularity would need to be replaced by well-defined quantum evolution leading to the emergence of a white hole. Our results demonstrate that black hole singularities either lead to a failure of quantum unitarity (in unimodular time), or to a new scenario for a quantum transition of a black hole to a white hole.
Quantum theory of black hole interior.— The classical action for general relativity with cosmological constant Λ, including the Gibbons–Hawking–York boundary term, is
S=1/κ∫ d^4 x√(-g)(1/2R-Λ)-1/κ∫ d^3 x√(q)K
where g_μν is the spacetime metric, R the Ricci scalar, q_ij the boundary metric and K the extrinsic curvature; κ=8π G where G is Newton's constant.
The interior planar black hole (Kasner) metric studied in Ref. <cit.> and previous papers <cit.> is given by
ds^2 = -N^2 dr^2 + v^(2/3)(e^(4k/3) dt^2 + e^(-2k/3)(dx^2 + dy^2))
where N, v and k are functions of r only. Thought of as a radial coordinate outside the horizon, in the interior r is timelike and hence this metric is spatially homogeneous. It corresponds to a locally rotationally symmetric Bianchi I model with metric written in the Misner parametrisation (see, e.g., Ref. <cit.>). One important feature of this parametrisation is that the anisotropy variables (here a single one, k) behave as free massless scalar fields in a flat isotropic geometry, as we will see explicitly.
Substituting the metric ansatz (<ref>) into (<ref>) reduces the action to
S = 1/κ∫ dr (((k')^2 v^2 - (v')^2)/(3Nv) - Λ N v)
where ' denotes derivative with respect to r. We have implicitly assumed that the overall numerical factor arising from performing the integration over t, x and y has been set to one by a choice of coordinates.
Legendre transform leads to a Hamiltonian
ℋ = (√(3)/2) N v(π_k^2/v^2-π_v^2+Λ) .
where we have made the unit choice κ=2/√(3) to obtain a simpler form.
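Explicitly, the momenta conjugate to k and v that follow from the reduced action are
π_k = 2k'v/(3κ N) , π_v = -2v'/(3κ N v) ,
so that the Legendre transform ℋ = π_k k' + π_v v' - L gives
ℋ = (3κ/4) N v(π_k^2/v^2 - π_v^2) + (1/κ)Λ N v ;
the choice κ=2/√3 makes the two prefactors coincide, 3κ/4 = 1/κ = √3/2, yielding the compact form above.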
The resulting Hamiltonian constraint <cit.> 𝒞 = π_k^2/v^2 - π_v^2 + Λ≈ 0
is exactly the one studied in Ref. <cit.>, where it was interpreted as describing a flat homogeneous, isotropic cosmology with a free massless scalar field and a perfect fluid (playing the rôle of dark energy). In this interpretation, Λ is no longer a parameter but a (conserved) momentum conjugate to a time variable T. This assumption can be justified by promoting Λ in (<ref>) to a dynamical variable and adding a term Λ T' to the action. The resulting action is then the reduction of the Henneaux–Teitelboim action for unimodular gravity <cit.>
S_ HT=1/κ∫ d^4 x[√(-g) (1/2R-Λ)+Λ∂_μ𝒯^μ]
with suitable boundary terms to a spatially homogeneous geometry. See also Ref. <cit.> for a more general perspective on “deconstantisation” applied to other quantities; we only need the statement that Λ is an integration constant, familiar from unimodular gravity.
The classical solutions in the time T are found to be
v(T) = √(-π_k^2/Λ+4Λ(T-T_0)^2) ,
k(T) = 1/2log|(π_k-2Λ(T-T_0))/(π_k+2Λ(T-T_0))| + k_0
where T_0 and k_0 are integration constants. The metric <ref> has singularities (v→ 0) for T_-=T_0-π_k/2Λ and T_+=T_0+π_k/2Λ. The Kretschmann scalar
R_μνξηR^μνξη=8Λ^2/3(1+2π_k^2/(2Λ(T-T_0)+π_k )^2)
diverges for T→ T_- (black hole singularity, k→ +∞) but is finite for T=T_+ (black hole horizon, k→ -∞).
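Explicitly, at the two zeros of v(T) one has 2Λ(T_±-T_0) = ±π_k. At T = T_- the denominator (2Λ(T-T_0)+π_k)^2 vanishes and the Kretschmann scalar diverges, signalling a curvature singularity, while at T = T_+ one finds
R_μνξηR^μνξη = 8Λ^2/3 (1 + 2π_k^2/(4π_k^2)) = 4Λ^2 ,
a finite value, consistent with T_+ being only a coordinate (horizon) singularity at which v→ 0.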
The constraint (<ref>) can be written as 𝒞=g^ABπ_A π_B+Λ, making the dynamical problem equivalent to relativistic particle motion in a configuration space (minisuperspace) parametrised by v and k and with a flat metric
g_AB=[ v^2 0 0 -1 ] .
This minisuperspace is equivalent to the Rindler wedge, a portion of full (1+1) dimensional Minkowski spacetime with boundary at v=0. This viewpoint suggests a natural operator ordering in canonical quantisation <cit.>, leading to the Wheeler–DeWitt equation
(+Λ)Ψ:=(-1/v^2∂_k^2+∂_v^2+1/v∂_v+Λ)Ψ=0
where is the Laplace–Beltrami operator for g_AB. Eq. (<ref>) is covariant with respect to variable transformations of v and k and, because g_AB is flat, with respect to lapse redefinitions which act as conformal transformations on g_AB<cit.>.
The general solution to Eq. (<ref>) can be straightforwardly given as
Ψ(v,k,Λ) = ∫ dp/2π e^ipk (α(p,Λ) J_ip(√(Λ)v) + β(p,Λ) J_-ip(√(Λ)v))
where α(p,Λ) and β(p,Λ) are so far arbitrary and J_ν(x) is a Bessel function of the first kind. Eq. (<ref>) gives the general solution as a function of Λ, a dynamical variable in our setup; for Λ<0 it is less ambiguous to pass from J_±ip(√(Λ)v) to the modified Bessel functions K_ip(√(-Λ)v) and I_ip(√(-Λ)v) <cit.>. The appearance of Bessel function solutions for the black hole interior is familiar from other contexts <cit.>. Fourier transform converts the wavefunction given as a function of Λ into a time-dependent wavefunction dependent on T, the conjugate to Λ.
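To see where the Bessel functions come from, substituting the separable ansatz Ψ = e^ipk f(v) into the Wheeler–DeWitt equation above gives
f''(v) + f'(v)/v + (Λ + p^2/v^2) f(v) = 0 ,
which for Λ<0 is the modified Bessel equation of imaginary order ν = ip in the argument √(-Λ)v, solved by I_ip(√(-Λ)v) and K_ip(√(-Λ)v); for Λ>0 one obtains instead the ordinary Bessel functions J_±ip(√(Λ)v).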
Asking whether the resulting quantum theory is unitary with respect to evolution in unimodular time T[To see why T is unimodular time, note the global factor N v=√(-g) in the Hamiltonian ℋ. For ℋ to be given just by the constraint 𝒞, we must choose N so that √(-g) is constant.] is equivalent to asking whether is self-adjoint in a suitable inner product. The most natural choice of inner product is induced by g_AB,
⟨Ψ|Φ⟩ = ∫_0^∞ dv v ∫ dk Ψ^* Φ .
It is easy to see is then not self-adjoint, as expected for a space with boundaries which can be reached by classical solutions in a finite amount of time. Here this corresponds to both the black hole horizon and the singularity being only a finite (unimodular) time away. Self-adjoint extensions can be defined by restricting wavefunctions to a subspace satisfying the boundary condition <cit.>lim_v→ 0∫ dk v(Ψ^*∂Φ/∂ v-Φ∂Ψ^*/∂ v)=0 ,
thus obtaining a unitary quantum theory. Such a boundary condition can be seen as reflection from the singularity, similar to what is proposed in Ref. <cit.>.
We are interested in Λ<0 solutions, which are the analogue of bound states. Normalised solutions to the Wheeler–DeWitt equation <ref> and the boundary condition <ref> can be expressed as
Ψ(v, k, T) = ∫ dp/2π e^ipk ∑_n∈ℤ e^iΛ_n^p T √(-2Λ_n^p sinh(pπ)/(pπ))
α(p,Λ_n^p) K_ip(√(-Λ_n^p) v) ,
where ∫ dp/2π ∑_n∈ℤ α(p,Λ_n^p)^2 = 1 and
Λ_n^p =- e^-(2n+1)π/p+θ(p)
is a discrete set of allowed negative Λ values. Here the free function θ(p) parametrises different self-adjoint extensions of ; in a sense, each choice of θ(p) defines a different theory. Qualitative features explored in the following are not expected to be sensitive to this choice, as shown in a similar model in loop quantum cosmology <cit.>. For simplicity, we choose θ(p)=π/|p| below.
Eq. <ref> represents the quantum states of a planar AdS black hole interior as superpositions of different values of momentum p and cosmological constant Λ_n^p. Crucially, because of the reflecting boundary condition the allowed bound states are modified Bessel functions of the second kind, behaving near v=0 as
K_ip(√(-Λ_n^p) v) ∼ Γ(-ip)/2 e^ip log(√(-Λ_n^p) v/2) + c.c. ,
i.e., as superpositions of positive and negative p modes with equal magnitude. Since changing the sign of p is equivalent to time reversal, swapping the roles of horizon and singularity, or switching between classical black-hole and white-hole solutions (see Eq. (<ref>) and below; this corresponds to the parameter π_k in the classical solution), none of these bound states can correspond to a single semiclassical trajectory that ends in a singularity. The necessary superpositions of black-hole and white-hole solutions then lead to singularity resolution in this theory.
Singularity resolution.— To see this explicitly, we numerically study the evolution of a semiclassical (Gaussian) state
α(p,Λ_n^p)=√(𝒩) e^-(p-p_0)^2/2σ_p^2-(Λ_n^p-Λ_0)^2/2σ_Λ^2
with free parameters p_0, Λ_0, σ_p^2, and σ_Λ^2 and a normalisation factor 𝒩. For a semiclassical interpretation we need σ_p≪ p_0, σ_Λ≪|Λ_0| and p_0≫ 1. The latter condition then also guarantees that the allowed discrete Λ_n^p values are reasonably close together.
Our main observable is the volume v(T). Expectation values and moments of v in our state take the form
⟨ v^α(T)⟩ = 𝒩∫_0^∞ dv v^(α+1)∫ dp/2π∑_n,n' 2√(Λ_n^p Λ_n'^p) sinh(|p|π)/(|p|π)
× e^-(p-p_0)^2/σ_p^2 e^-(Λ_n^p-Λ_0)^2/2σ_Λ^2-(Λ_n'^p-Λ_0)^2/2σ_Λ^2+i(Λ_n^p-Λ_n'^p)T
× K_i|p|(√(-Λ_n^p)v) K_i|p|(√(-Λ_n'^p)v) ;
the v integral in Eq. (<ref>) can be done analytically, leaving the sums and the p integral for numerical evaluation. In practice, due to the Gaussians inside the integral the contributions from |p-p_0|≫σ_p and |Λ_n^p-Λ_0|≫σ_Λ are very small and we can replace the infinite p integral and sums over n and n' by finite sums by introducing cutoffs.
The expectation value ⟨ v(T)⟩ can be compared with classical solutions given in Eq. (<ref>) where π_k and Λ are replaced by the average of p and Λ in our chosen states; due to the discrete spacing of possible Λ values these averages are not exactly equal (but close) to p_0 and Λ_0.
In Fig. <ref> we show the quantum expectation value ⟨ v(T)⟩ and fluctuations Δ v=√(⟨ v(T)^2⟩ - ⟨ v(T)⟩^2) for such a state (with two different sets of parameters). We can see that, as expected from the general discussion, for small |T| the expectation value stays close to its corresponding classical solution (<ref>), but it departs strongly near the horizon or singularity where the interference between an ingoing (black hole) and outgoing (white hole) solution becomes important. There is a finite minimal value for v and in this sense, both the black hole singularity and the horizon are replaced by quantum “bounces”. When the expectation value closely follows the classical curve, the variance is small, indicating that the state is indeed semiclassical, but at the bounces the variance grows, indicating strong quantum fluctuations where the state is reflected. As required by unitarity, all expectation values and higher moments are globally defined, not just for the finite T interval in which the classical solution is valid. Taken at face value the resulting quantum solution describes cycles of local expansion and contraction, corresponding to a sequence of black hole/white hole interiors passing from horizon to singularity and back. We see that over longer timescales the variances grow, suggesting a spreading in the state and eventual breakdown of the semiclassical picture. While all the specific features displayed here depend on the chosen parameters in the state, the qualitative behaviour showing disappearance of the classical horizon and singularity seems universal, resulting from the reflecting boundary condition (<ref>).
Discussion.— It has long been stated that a quantum theory of black hole dynamics that is required to be unitary must deviate strongly from semiclassical expectations. Usually this is discussed in the context of unitarity of black hole formation and evaporation, leading to the famous issue of information loss <cit.>. Preserving unitarity together with a few other “reasonable” assumptions can lead to disasters such as a firewall at the horizon <cit.>, or more generally the conclusion that there is no simple semiclassical resolution of the paradox. What we are discussing here is different; we studied a simple quantum model of the black hole interior, in which the gravitational degrees of freedom (in the truncation to a Kasner metric) are quantised according to the Wheeler–DeWitt equation. In this setting too, there is a clash between unitarity and consistency with semiclassical physics: unitarity means globally well-defined time evolution, which is incompatible with singularities or even the appearance of a coordinate singularity at the horizon. It is of course a tricky issue to define unitarity in a fundamentally timeless setting such as canonical quantum gravity. The key ingredient in our discussion was to use unimodular gravity and its preferred choice of time coordinate, which can be implemented through auxiliary fields as in Ref. <cit.>. With respect to this clock, both the horizon and the singularity are only a finite time away, as they would be for an infalling observer. Unitarity forces us to replace both the horizon and the singularity with highly quantum bounce regions. Unitarity with respect to a different notion of clock would generally lead to different conclusions, showing a clash between unitarity and general covariance <cit.>. Our general conclusions should be valid for any standard of time such that the singularity or horizon are only a finite time away. It would be interesting to construct other analytically accessible examples.
The work of Ref. <cit.> studied a number of possible clocks in the same interior black hole spacetime. For instance, the anisotropy parameter k is classically monotonic, and hence a good standard of time. With respect to this clock and other clocks such as v (or log v) and π_v, no deviation from semiclassical physics was found in Ref. <cit.>. These observations are fully consistent with the results of Ref. <cit.> for the k and v clocks, and with the general conjecture of Ref. <cit.>, where different clocks were classed as “slow” and “fast”. Unitarity with respect to a fast clock, which runs to ±∞ at a singularity, does not require any deviation from semiclassical physics. In the black hole model studied here, k, v and π_v are all fast. Similar behaviour is also found, for example, for a massless scalar field clock in homogeneous Wheeler–DeWitt cosmology <cit.>. However, such clocks do not describe the experience of local observers; classical singularities are troublesome exactly because one can reach them in finite time. When such a slow clock is studied, on approach to the singularity one must either give up unitarity or find generic resolution of all singularities, and possibly horizons.
The metric form (<ref>) corresponds to a simple model of a black hole with planar symmetry, but only a slight extension – adding a second possible anisotropy variable in the Misner parametrisation – turns it into the general Kasner form describing, according to the BKL conjecture, successive periods during the generic approach to a spacelike singularity, even for more complicated or realistic black holes (as discussed in the related work of Ref. <cit.>). Since this second anisotropy variable again acts as a massless scalar field in an isotropic Universe, the results illustrated here would be expected to hold more generically for singularities. For horizons, the general picture is less clear since the model studied here sees the horizon as a coordinate singularity, and the black hole metric at a horizon would in general be more complicated. Already in the case of the usual (A)dS-Schwarzschild black hole, the positive curvature of constant time slices would contribute at the horizon and potentially change the conclusions. In general, it is at least in principle always possible to construct a model with a notion of time that stays regular at the horizon, so that collapse from an asymptotically flat (or asymptotically AdS or de Sitter) region could be described as a unitary process. While all of these alternative constructions will change the interpretation of what happens at the horizon, in the theory we have defined unitary quantum dynamics will always necessarily replace the (universal) singularity by non-singular evolution into a white hole: either unitarity fails, or there is no black hole singularity.
Acknowledgments.— The work of SG is funded by the Royal Society through the University Research Fellowship Renewal URF\R\221005. LMP is supported by the Leverhulme Trust.
singtheorem S. W. Hawking and G. F. R. Ellis, The large scale structure of space-time (Cambridge University Press, 1973).
QSingRes
V. Husain and O. Winkler, “Singularity resolution in quantum gravity,” Phys. Rev. D69 (2004) 084016, gr-qc/0312094;
A. Ashtekar, “Singularity resolution in loop quantum cosmology: A brief overview,” J. Phys. Conf. Ser.189 (2009) 012003, 0812.4703;
C. Kiefer, “On the avoidance of classical singularities in quantum cosmology,” J. Phys. Conf. Ser.222 (2010) 012049.
GotayDemaret
M. J. Gotay and J. Demaret, “Quantum cosmological singularities,” Phys. Rev. D28 (1983), 2402–2413.
SteffenLuciapapers
S. Gielen and L. Menéndez-Pidal, “Singularity resolution depends on the clock,” Class. Quant. Grav.37 (2020) 205018, 2005.05357;
“Unitarity, clock dependence and quantum recollapse in quantum cosmology,” Class. Quant. Grav.39 (2022) 075011,
2109.02660;
L. Menéndez-Pidal,
The problem of time in quantum cosmology
(PhD Thesis, University of Nottingham, 2022),
2211.09173.
Essay
S. Gielen and L. Menéndez-Pidal,
“Unitarity and quantum resolution of gravitational singularities,” Int. J. Mod. Phys. D31 (2022) 2241005,
2205.15387.
HartnollWdW
S. A. Hartnoll,
“Wheeler–DeWitt states of the AdS–Schwarzschild interior,” JHEP01 (2023) 066,
2208.04348.
Frenkel
J. P. S. Lemos,
“Two-dimensional black holes and planar general relativity,” Class. Quant. Grav.12 (1995) 1081–1086,
gr-qc/9407024;
E. Witten,
“Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys.2 (1998) 505–532,
hep-th/9803131;
A. Frenkel, S. A. Hartnoll, J. Kruthoff and Z. D. Shi,
“Holographic flows from CFT to the Kasner universe,” JHEP08 (2020) 003,
2004.01192;
N. Bilic and J. C. Fabris,
“Thermodynamics of AdS planar black holes and holography,” JHEP11 (2022) 013,
2208.00711.
unimodular J. L. Anderson and D. Finkelstein, “Cosmological constant and fundamental length,” Am. J. Phys. 39 (1971) 901–904; J. J. van der Bij, H. van Dam and Y. J. Ng, “The Exchange of Massless Spin Two Particles,” Physica 116A (1982) 307–320; W. G. Unruh, “Unimodular theory of canonical quantum gravity,” Phys. Rev. D 40 (1989) 1048–1052; W. Buchmüller and N. Dragon, “Einstein Gravity From Restricted Coordinate Invariance,” Phys. Lett. B 207 (1988) 292-294; “Gauge Fixing and the Cosmological Constant,” Phys. Lett. B 223 (1989) 313–317.
GrybTheb S. Gryb and K. P. Y. Thébault, “Superpositions of the cosmological constant allow for singularity resolution and unitary evolution in quantum cosmology,” Phys. Lett. B784 (2018) 324–329, 1801.05782;
“Bouncing Unitary Cosmology I: Mini-Superspace General Solution,” Class. Quant. Grav.36 (2019) 035009, 1801.05789;
“Bouncing Unitary Cosmology II: Mini-Superspace Phenomenology,” Class. Quant. Grav.36 (2019) 035010, 1801.05826.
bojobook M. Bojowald, Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity
(Cambridge University Press, 2010).
HenneauxTeitelboim
M. Henneaux and C. Teitelboim,
“The cosmological constant and general covariance,” Phys. Lett. B222 (1989) 195–199.
Joao
J. Magueijo,
“Cosmological time and the constants of nature,” Phys. Lett. B820 (2021) 136487,
2104.11529.
HawkingPage
S. W. Hawking and D. N. Page,
“Operator ordering and the flatness of the universe,” Nucl. Phys. B264 (1986), 185–196.
MalcolmBH
M. J. Perry,
“No Future in Black Holes,”2106.03715.
Halliwell
J. J. Halliwell,
“Derivation of the Wheeler–DeWitt equation from a path integral for minisuperspace models,” Phys. Rev. D38 (1988) 2468–2481.
AnnihtoNothing
M. Bouhmadi-López, S. Brahma, C. Y. Chen, P. Chen and D. h. Yeom,
“ Annihilation-to-nothing: a quantum gravitational boundary condition for the Schwarzschild black hole,” JCAP11 (2020) 002,
1911.02129.
Pawlowski2012
T. Pawłowski and A. Ashtekar, “Positive cosmological constant in loop quantum cosmology,”Phys. Rev. D85 (2012) 064001, 1112.0360.
HawkingBH
S. W. Hawking,
“Breakdown of predictability in gravitational collapse,” Phys. Rev. D14 (1976) 2460–2473.
Firewall
A. Almheiri, D. Marolf, J. Polchinski and J. Sully,
“Black holes: complementarity or firewalls?,” JHEP02 (2013) 062,
1207.3123.
AshtekarSingh
A. Ashtekar and P. Singh,
“Loop quantum cosmology: a status report,” Class. Quant. Grav.28 (2011) 213001,
1108.0893.
|
http://arxiv.org/abs/2409.02830v1 | 20240904155253 | Towards a Scalable and Efficient PGAS-based Distributed OpenMP | [
"Baodi Shan",
"Mauricio Araya-Polo",
"Barbara Chapman"
] | cs.DC | [
"cs.DC",
"cs.PF"
] |
PGAS-based Distributed OpenMP
Shan et al.
Stony Brook University, Stony Brook NY 11794, USA
{baodi.shan,barbara.chapman}@stonybrook.edu
TotalEnergies EP Research & Technology US, LLC, Houston TX 77002, USA
Towards a Scalable and Efficient PGAS-based Distributed OpenMP
Baodi Shan1 Mauricio Araya-Polo2 Barbara Chapman1
==============================================================
§ ABSTRACT
MPI+X has been the de facto standard for distributed memory parallel programming. It is widely used primarily as an explicit two-sided communication model, which often leads to complex and error-prone code. Alternatively, PGAS model utilizes efficient one-sided communication and more intuitive communication primitives.
In this paper, we present a novel approach that integrates PGAS concepts into the OpenMP programming model, leveraging the LLVM compiler infrastructure and the GASNet-EX communication library.
Our model addresses the complexity associated with traditional MPI+OpenMP programming models while ensuring excellent performance and scalability.
We evaluate our approach using a set of micro-benchmarks and application kernels on two distinct platforms: Ookami from Stony Brook University and NERSC Perlmutter. The results demonstrate that DiOMP achieves superior bandwidth and lower latency compared to MPI+OpenMP, with up to 25% higher bandwidth and up to 45% lower latency.
DiOMP offers a promising alternative to the traditional MPI+OpenMP hybrid programming model, towards providing a more productive and efficient way to develop high-performance parallel applications for distributed memory systems.
§ INTRODUCTION
HPC systems continue to grow in size and complexity, pushing legacy programming models to their limits. Developers of numerical simulation applications must adapt to this reality. Fortunately, alternative programming models and productivity frameworks are available and continually evolving to provide necessary support. Currently and for most of the last decade, MPI+X is the mainstream paradigm for distributed cluster programming models, where X can be OpenMP, OpenACC, CUDA, RAJA or Kokkos, etc <cit.>. However, there is an increasing need for alternatives to MPI+X that are more flexible and less complex. One such alternative is the PGAS (Partitioned Global Address Space) programming model, which is gaining momentum. Notable PGAS models such as UPC++, OpenSHMEM, and Legion and languages such as Chapel are reaching larger developer audiences.
OpenMP is rapidly evolving from a traditional CPU-based and shared-memory programming model to one that includes task-based programming and accelerator-based offloading capabilities. Therefore, we aim to leverage the power of PGAS to extend OpenMP to operate in distributed environments. To that end, we propose the PGAS-based Distributed OpenMP (DiOMP). DiOMP's main contributions are:
Enhanced Scalability and Improved Performance:
DiOMP boosts performance and scalability for distributed applications by allowing efficient data sharing across nodes without the overhead of traditional message-passing.
Simplified Communication in the PGAS Model:
DiOMP exploits the PGAS model's direct operations on global memory addresses, which reduces the complexities of message matching and buffer management commonly found in MPI. In the PGAS framework, communication operations such as reading and writing remote data are conducted directly via global addresses, without the need for additional management of communication domains.
Simplified Memory Management:
By extending native OpenMP statements, such as , this model simplifies the allocation and management of memory. Compared to MPI RMA, this approach avoids the complexities and overhead associated with creating and destroying MPI windows.
Excellent Extensibility through Activate Message:
Active Messages is a communication mechanism that reduces latency and overhead by directly executing a handler function upon message arrival, ensuring efficient and immediate processing. This guarantees the extensibility of DiOMP; in the current version of DiOMP, synchronization primitives such as locking are implemented using Active Messages. In future versions, Active Messages will play a crucial role in handling task dependencies within DiOMP by allowing for dynamic and responsive communication patterns.
§ BACKGROUND
§.§ OpenMP
OpenMP <cit.> is one the main standard for shared-memory parallelism in HPC. It provides a straightforward and flexible interface for developers to create parallel applications by exploiting the capabilities of multi-core processors and shared memory systems.
Current versions of OpenMP support the task-based programming model<cit.>. For instance, OpenMP 4.0 introduced task dependencies, allowing programmers to specify dependencies between tasks and enabling the runtime system to automatically manage the execution order based on these dependencies.
With the introduction of version 4.0, OpenMP also expanded its capabilities to include device offloading<cit.>, enabling code execution on accelerators without requiring users to develop device-specific kernels using vendor-specific APIs<cit.>.
§.§ The PGAS Model
PGAS stands for Partitioned Global Address Space programming model.
In contrast to the message-passing model (MPI), the PGAS programming model <cit.> utilizes a globally accessible memory space that is divided among the basic units distributed across one or more nodes.
PGAS models offer a uniform view of distributed memory objects and enable high-performance access to remote memory through direct operations such as reads (gets) and writes (puts). Point-to-point communication in the PGAS model is one-sided, requiring active participation only from the initiating unit.
This decouples communication and synchronization, allowing the target unit's computation to continue uninterrupted during data exchanges.
Many distributed and parallel computing programming languages and libraries feature the PGAS model, including OpenSHMEM, Legion, UPC++, DASH, Chapel, and OpenUH Co-Array Fortran. Among the programming languages and libraries that have adopted PGAS, some use MPI as their communication framework, such as DASH, while others utilize UCX, such as OpenSHMEM. However, the de facto communication standard targeted by portable PGAS systems is the GASNet API. Current and historical GASNet clients include: UPC++ <cit.>, Cray Chapel <cit.>, Legion <cit.>, OpenUH Co-Array Fortran <cit.>, OpenSHMEM Reference implementation <cit.>, Omni XcalableMP <cit.>, and several miscellaneous projects.
§.§ Related Work
The idea of executing OpenMP programs within distributed architectures has been extensively explored in scholarly research. The concept of Remote OpenMP offloading, as introduced by Patel and Doerfert <cit.>, together with subsequent enhancements <cit.> and practical implementations, has demonstrated considerable promise for facilitating OpenMP target offloading to remote devices. Nonetheless, as noted in reference <cit.>, the scalability of such remote offloading is sub-par when compared with conventional hybrid MPI+OpenMP methodologies. In a similar line of analysis, the OpenMP Cluster developed by Yviquel et al. <cit.>, which also focuses on OpenMP target offloading, conceptualizes remote nodes as a computational resource for OpenMP targets. Another path to distributed directive-based programming is the combination of XMP and YML <cit.>.
§ DESIGN OF PGAS-BASED DISTRIBUTED OPENMP
PGAS-based Distributed OpenMP is developed based on LLVM/OpenMP and utilizes GASNet-EX as the underlying communication middleware. In this section, we will sequentially introduce the memory management model of our PGAS-based approach, point-to-point communication, collective communication, and the synchronization mechanisms, as well as the role and future potential of GASNet-EX Active Message in these mechanisms.
§.§ Memory Management
In the PGAS layer, we use process (rank) as the main unit for memory management and communication. The memory region of each rank is divided into private memory and global memory, adhering to the PGAS paradigm guidelines. The memory management model is illustrated in <ref>.
Due to the segment constraints imposed by the communication middleware GASNet-EX, any memory involved in communication must reside within the previously attached GASNet-EX segment. To address this requirement, we introduce aligned global memory and unaligned global memory. The allocation of aligned global memory requires the involvement of all ranks, with each rank acquiring an equal amount of global memory, which is then placed at the front of its respective segment.
The segments attached by GASNet-EX do not support address alignment, meaning that GASNet-EX cannot guarantee identical address ranges across different ranks' segments. Therefore, PGAS-based distributed OpenMP uses virtual address alignment.
Virtual address alignment operates as follows: although the actual memory addresses assigned to each rank during the allocation of aligned global memory may differ, the runtime system maintains a specific mapping that provides a virtually aligned address space. Thus, when a rank intends to transfer data to other ranks, it can simply utilize its own memory address to obtain the corresponding memory addresses of the other ranks.
Non-aligned global memory, in contrast, is global memory that can be created by an individual rank or a subset of ranks. This type of memory is allocated at the end of the segment in a limited manner. This memory does not receive virtual address mapping, as the process is invisible to ranks that do not participate in this portion of the memory allocation. Non-aligned global memory is particularly suitable for storing and retrieving specific or temporary data.
Whether using aligned or non-aligned global memory, developers utilizing DiOMP can easily allocate global memory by invoking the standard OpenMP allocation function. DiOMP is equipped with specially designed allocators for allocating data in the global space. In addition to supporting the standard C allocation function from OpenMP, we have also provided a C++ allocation function with template support, enabling developers to allocate memory for a specific data type or data structure.
§.§ Point-to-Point and Collective Communication
DiOMP incorporates two fundamental communication paradigms: point-to-point and collective communication. These paradigms enhance data exchange and synchronization across different ranks, facilitating efficient parallel computing.
Listing (lst:rma): Point-to-point APIs for PGAS-based Distributed OpenMP
void ompx_get(void *dst, int rank, void *src, size_t nbytes);
void ompx_put(int rank, void *dst, void *src, size_t nbytes);
Point-to-point communication leverages one-sided communication primitives, namely ompx_get and ompx_put [The model and framework proposed in this paper are currently limited to the proof of concept stage, and the function names are provisional.]. This method enables ranks to directly access each other's memory without needing explicit coordination, thus reducing synchronization overhead and allowing computation and communication to overlap. These operations utilize the virtual address alignment mechanism to seamlessly map between local and remote memory spaces. <ref> shows the APIs for point-to-point communication in DiOMP.
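To illustrate how these calls might be used, the following sketch performs a simple ring-style exchange. Only ompx_put and ompx_get are taken from the listing above; the allocation, rank-query, and barrier helpers are hypothetical placeholders for the corresponding DiOMP primitives.

#include <stddef.h>

void ompx_put(int rank, void *dst, void *src, size_t nbytes);
void ompx_get(void *dst, int rank, void *src, size_t nbytes);

/* Hypothetical helpers standing in for DiOMP's allocation and
 * synchronization primitives; the names are placeholders. */
void *diomp_alloc_aligned(size_t nbytes);
void  diomp_barrier(void);
int   diomp_rank(void);
int   diomp_size(void);

#define N 1024

void ring_exchange(void)
{
    /* Each rank owns a send and a receive area in aligned global memory. */
    double *buf  = (double *)diomp_alloc_aligned(2 * N * sizeof(double));
    double *send = buf;
    double *recv = buf + N;

    int me    = diomp_rank();
    int right = (me + 1) % diomp_size();

    /* One-sided: deposit our block into the right neighbour's recv area.
     * Virtual address alignment lets us name the remote location with
     * our own local pointer. */
    ompx_put(right, recv, send, N * sizeof(double));

    diomp_barrier();   /* ensure the transfer is complete and visible */
}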
Collective communication, on the other hand, requires all ranks to participate in data exchange or synchronization. DiOMP supports various collective operations like barrier, broadcast, and reduction, which are optimized based on the network topology and hardware capabilities. These operations help in the efficient distribution and aggregation of data, supporting common parallel programming patterns.
Together, these communication strategies provide a robust framework in PGAS-based Distributed OpenMP.
§.§ Synchronization Mechanisms and Active Messages
DiOMP, built on GASNet-EX, offers a variety of synchronization mechanisms, including a barrier, an RMA-completion wait (waitRMA), and a lock. Among these, the implementations of the barrier and waitRMA are based on the native interfaces of GASNet-EX, while the lock utilizes the Active Message mechanism of GASNet-EX. We will use the lock as a case study to demonstrate the significant role that Active Messages play in our model.
The primary function of the lock is to ensure that a specific rank has exclusive access to the shared memory space of a target rank by establishing a lock. This process is facilitated by several dedicated GASNet-EX active message handlers.
When one rank (source rank) wants to lock another rank (target rank), it starts by sending an active message. The source rank then waits for a reply to see if it got the lock. Meanwhile, the target rank checks this request and manages a list of all ranks waiting for a lock, along with a lock status indicator.
If no other rank is waiting for a lock and the target rank is not locked, the target rank will lock itself and inform the source rank that it has successfully obtained the lock through a reply active message. If the target rank is already locked or there are other ranks waiting, the source rank is added to the waiting list. The source rank must then wait its turn until it is at the front of the list and the target rank is unlocked.
Each active message handler in GASNet-EX possesses a unique token, which means the rank queue stores these tokens, each embodying information about its corresponding source rank. This mechanism ensures that every request is uniquely identified and correctly processed.
In cases where the lock cannot be immediately granted, the target rank does not idle. Instead, it monitors the rank queue and only responds once the locking rank issues an unlock active message. This efficient management prevents unnecessary delays and optimizes resource use.
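The control flow described above can be summarised by the following sketch of the target-side handlers. The queue type, handler names, and reply helper are simplified stand-ins; in the real implementation these map onto GASNet-EX active-message handlers and tokens.

#include <stdbool.h>

#define MAX_WAITERS 256

/* Simplified per-rank lock state (illustrative). */
typedef struct { int tokens[MAX_WAITERS]; int head, tail; } queue_t;

static bool    locked = false;
static queue_t waiters;           /* tokens of ranks waiting for the lock */

/* Hypothetical helpers hiding the GASNet-EX token/reply mechanics. */
void am_reply_lock_granted(int token);
bool queue_empty(const queue_t *q);
void queue_push(queue_t *q, int token);
int  queue_pop(queue_t *q);

/* Runs on the target rank when a lock-request active message arrives. */
void on_lock_request(int token)
{
    if (!locked && queue_empty(&waiters)) {
        locked = true;
        am_reply_lock_granted(token);   /* requester may proceed immediately */
    } else {
        queue_push(&waiters, token);    /* requester must wait its turn */
    }
}

/* Runs on the target rank when the current holder sends an unlock message. */
void on_unlock(void)
{
    if (queue_empty(&waiters))
        locked = false;                 /* nobody else is waiting */
    else
        am_reply_lock_granted(queue_pop(&waiters));  /* hand over to the next rank */
}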
<ref> illustrates the process where rank0 and rank2 simultaneously initiate lock requests and put data on rank1.
Building upon this, we have also introduced a hierarchical locking function, an extension of the basic lock that adds thread-level locking.
This function implements both thread-level and process-level locking, making it extremely useful in mixed thread-and-process programming scenarios, such as when inter-rank communication occurs within an OpenMP parallel region.
In the future, we plan to further expand the role of active message within DiOMP, particularly in handling OpenMP task dependencies. Active message is expected to play a crucial role in this context.
§ EVALUATION
§.§ Experimental Setup
The experiments were conducted on the Ookami system at Stony Brook University and the Perlmutter supercomputer at Lawrence Berkeley National Laboratory. Refer to <ref> for the hardware and software specifications of the systems.
We performed micro-benchmarks on both systems and tested weak scaling matrix multiplication and strong scaling Minimod <cit.> benchmark on Ookami.
§.§ Micro-benchmarks
We conducted micro benchmark tests on Ookami and Perlmutter platforms to evaluate the performance of DiOMP in terms of bandwidth and latency.
The bandwidth tests using large message sizes showed that DiOMP achieved higher peak bandwidth and sustained higher throughput compared to MPI on both platforms (<ref> and <ref>). As the message size increases, DiOMP-based implementation achieves peak bandwidth earlier than MPI. This can be attributed to the efficient utilization of the underlying interconnect through the GASNet-EX communication layer.
The latency tests using small message sizes demonstrated that DiOMP consistently achieves lower latency than MPI on both Ookami and Perlmutter (<ref> and <ref>). The reduction in latency is up to 45%. The lower latency of DiOMP is a result of its lightweight one-sided communication model, which eliminates the overhead associated with explicit message matching and synchronization in MPI. Notice that on Perlmutter the performance of mpi_put and mpi_get is consistent across runs but differs between the two operations; this gap has been reported previously <cit.>.
These findings suggest that DiOMP is a promising alternative for high-performance inter-node communication in parallel applications.
§.§ Weak Scaling - Matrix Multiplication
We subsequently evaluate the ring exchange communication pattern using a mini-application that implements Cannon's algorithm to perform square matrix multiplication, resulting in the product C = A × B. Both the MPI version and the DiOMP version of the mini-app incorporate an additional block stripe for matrix B, enabling the overlap of computation and communication. In this mini-app, as the number of ranks increases, the size of the matrix and the volume of data transferred also increase.
In this test, the matrix size is 500 × 500 × (number of ranks), so the computational load grows linearly with the rank count. Due to the ring communication pattern employed, the communication volume grows quadratically. <ref> presents the results of matrix multiplication on the Ookami system using both DiOMP and MPI+OpenMP.
§.§ Strong Scaling - Minimod
Minimod <cit.> is a proxy application designed to simulate the propagation of waves through subsurface models by solving the wave equation in its finite difference discretized form. In this study, we utilize one of the kernels included in Minimod, specifically the acoustic isotropic propagator in a constant-density domain <cit.>.
Minimod supports multi-device OpenMP offloading using offload regions encapsulated within OpenMP tasks and exhibits strong-scaling characteristics <cit.>. We ported the multi-GPU version of Minimod to versions using MPI+OpenMP and DiOMP. In these versions, the original GPU device numbers are treated as ranks, with data exchanges being handled through PGAS or MPI. Remarkably, the MPI+OpenMP version requires significantly more lines of code (LoC) for communication than the DiOMP version, as shown in <ref> and <ref>.
In <ref>, since MPI uses two-sided communication, both the sender and receiver need to be involved in the data transmission process; to minimize the waiting time, we set up arrays on both sides of the transmission to keep the exchanged information synchronized.
In <ref>, since DiOMP uses windowless one-sided communication, the data sender only needs to put the data to the target rank. A synchronization call then waits until all data has been received before the code below it executes.
The specific values can be referenced in <ref>.
For tests in <ref>, the grid size is 1000^3 and 1000 time steps. We conducted evaluations on the Ookami system using 1 to 32 nodes. <ref> shows the results of Minimod running on Ookami using both DiOMP and MPI+OpenMP versions. We observed excellent strong scalability.
It is clear that in the majority of cases, DiOMP demonstrated either comparable or superior performance to MPI+OpenMP.
§ CONCLUSION AND FUTURE WORK
In conclusion, this paper introduces DiOMP, an extension of OpenMP utilizing the PGAS distributed model. DiOMP leverages LLVM/OpenMP and GASNet-EX to offer a portable, scalable, and high-performance solution for parallel programming across diverse architectures.
We hope that DiOMP can become an important extension of OpenMP and eventually become part of the OpenMP specification. Based on the current experimental results, DiOMP achieves competitive performance against the legacy MPI+X approach. The PGAS-based Distributed OpenMP model has the potential to replace the traditional MPI+OpenMP hybrid programming approach in many scenarios.
Looking ahead, we aim to further expand the usability of DiOMP, particularly with respect to OpenMP target offloading, including support for accelerators like GPUs, and managing OpenMP task dependencies through active message. We also intend to apply the PGAS-based Distributed OpenMP model to real-world scientific applications and study its productivity and performance in comparison with other PGAS approaches and the MPI+OpenMP hybrid model.
§ ACKNOWLEDGEMENTS
We would like to thank TotalEnergies E&P Research and Technologies US for their support of this work. Our gratitude also extends to Alice Koniges from the University of Hawaii for providing access to the NERSC Perlmutter system.
Additionally, we acknowledge to thank Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the innovative high-performance Ookami computing system, which was made possible by a $5M National Science Foundation grant (#1927880). This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
|
http://arxiv.org/abs/2409.03237v1 | 20240905043702 | Robust Q-Learning under Corrupted Rewards | [
"Sreejeet Maity",
"Aritra Mitra"
] | cs.LG | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC",
"stat.ML"
] |
Robust Q-Learning under Corrupted Rewards
Sreejeet Maity   Aritra Mitra
September 5, 2024
==========================================
[1]The authors are with the Department of Electrical and Computer Engineering, North Carolina State University. Email: {smaity2, amitra2}@ncsu.edu.
§ ABSTRACT Recently, there has been a surge of interest in analyzing the non-asymptotic behavior of model-free reinforcement learning algorithms. However, the performance of such algorithms in non-ideal environments, such as in the presence of corrupted rewards, is poorly understood. Motivated by this gap, we investigate the robustness of the celebrated Q-learning algorithm to a strong-contamination attack model, where an adversary can arbitrarily perturb a small fraction of the observed rewards. We start by proving that such an attack can cause the vanilla Q-learning algorithm to incur arbitrarily large errors. We then develop a novel robust synchronous Q-learning algorithm that uses historical reward data to construct robust empirical Bellman operators at each time step. Finally, we prove a finite-time convergence rate for our algorithm that matches known state-of-the-art bounds (in the absence of attacks) up to a small inevitable O(ε) error term that scales with the adversarial corruption fraction ε. Notably, our results continue to hold even when the true reward distributions have infinite support, provided they admit bounded second moments.
§ INTRODUCTION
We study reinforcement learning (RL) within a Markov decision process (MDP) setting where an agent sequentially interacts with an environment to maximize a long-term cumulative value function. To achieve this goal without knowledge of the dynamics of the MDP, the agent plays an action at each time, receives a reward in the form of feedback from the environment, and then transitions to a new state. This process then repeats itself. Using the sequence of observed rewards, the agent learns to take “better" actions over time. In this context, one of the most widely studied model-free RL algorithms is the celebrated Q-learning algorithm of Watkins and Dayan <cit.> that iteratively estimates the optimal state-action value function associated with the MDP. Viewing Q-learning through the lens of stochastic approximation (SA) theory, a rich body of work has investigated the asymptotic performance of this algorithm, i.e., its behavior as the number of iterations goes to infinity <cit.>. More recently, there has been a shift of interest towards providing a finer non-asymptotic/finite-time analysis of different RL algorithms <cit.>. That said, the literature discussed above exclusively pertains to scenarios where the reward feedback received from the environment is always accurate, i.e., it is generated by the true reward distribution of the underlying MDP. However, such an idealistic assumption is unlikely to hold when one seeks to deploy RL algorithms in real-world harsh environments. As such, since feedback in the form of rewards is crucial for the overall decision-making process, our main goal in this paper is to investigate the following questions:
Q1. Are existing model-free RL algorithms robust to perturbations in the rewards?
Q2. Can we obtain reliable value-function estimates under corrupted rewards?
Contributions. In response to the aforementioned questions, the main contributions of this work are as follows.
∙ Problem Formulation. To systematically investigate the question of robustness of RL algorithms, we consider a strong-contamination reward attack model where an adversary with complete knowledge of the MDP and the learner's observations is allowed to arbitrarily perturb ε-fraction of the rewards acquired over time. This attack model is directly inspired from the robust statistics literature <cit.>, where a small fraction of the data points in a static data set can be corrupted by an adversary.
∙ Vulnerability of vanilla Q-Learning. In Theorem <ref>, we prove that even under a weaker attack model than the one described above, the basic Q-learning algorithm converges to the fixed point of a perturbed Bellman operator, where the perturbation is due to reward contamination. By constructing an explicit example, we next establish in Theorem <ref> that this incorrect fixed point can be arbitrarily far away from the optimal Q function. Furthermore, this continues to be the case even when the corruption fraction ε is small.
∙ Robust Q-Learning Algorithm. Motivated by the above findings, we develop a novel robust Q-learning algorithm in the synchronous sampling setting <cit.>, i.e., when a generative model provides independent samples for all state-action pairs. The key idea in our algorithm is to use historical data of rewards for each state-action pair to construct a robust empirical Bellman operator in each iteration. To do so, we leverage the recently developed robust trimmed mean-estimator in <cit.>, along with a novel dynamic thresholding technique.
∙ Near-optimal Rates. Providing a rigorous finite-time convergence analysis of our algorithm is non-trivial since one needs to simultaneously contend with the benign randomness introduced by sampling and deliberate arbitrary adversarial injections. Nonetheless, in Theorem <ref>, we establish a high-probability ℓ_∞-error-rate of Õ(1/√(T))+O(√(ε)), where T is the number of iterations. In the absence of corruption (i.e., ε=0), this rate matches those in recent works <cit.>. Furthermore, we also provide an informal argument that the additive O(√(ε)) term appears to be inevitable.
Finally, we note that our proposed approach does not require the true reward distributions to be bounded or “light-tailed", i.e., we do not need to make any assumptions of sub-Gaussianity on the reward models. Instead, we only need the reward distributions to admit finite second moments. Thus, in principle, even the true reward samples (i.e., the inliers) can come from heavy-tailed distributions. This makes it non-trivial to tell apart clean reward data from corrupted rewards. Nonetheless, the approach developed in this paper overcomes this challenge.
Related Work. The synchronous Q-learning setting we consider here has been extensively studied in several works <cit.>. More recently, its finite-time performance has been characterized in <cit.>. Asynchronous versions of Q-learning have also been analyzed in <cit.>.
Reward corruption models similar to what we consider here have been studied in the multi-armed bandits (MAB) literature <cit.>, where an attacker with a finite budget can corrupt the rewards seen by the learner in a small fraction of the rounds. However, our setup, algorithm, and proof technique fundamentally differ from the MAB framework. That said, our main result in Theorem <ref> has the same flavor as the findings in the above bandit papers: one can recover the performance bounds in the absence of corruptions up to an additive term that captures the attacker's budget; in our case, this is the additive O(√(ε)) term. Finally, some recent works have studied perturbed versions of Markovian stochastic approximation algorithms <cit.>, where the perturbations are due to communication constraints (e.g., delays and compression). Unlike the structured perturbations in these papers, the perturbations injected by the adversary in our work can be arbitrary.
§ BACKGROUND AND PROBLEM FORMULATION
MDP Model. We consider a Markov Decision Process (MDP) denoted by ℳ=(𝒮, 𝒜, P, R, γ), where 𝒮 is a finite state space, 𝒜 is a finite action space, P is a set of Markov transition kernels, R is a reward function, and γ∈ (0,1) is the discount factor. Upon playing action a at state s, the state of the MDP transitions to s' with probability P(s'|s,a), and a scalar stochastic immediate reward r(s,a) with distribution R(s,a) is observed. We define R(s,a) ≜𝔼[r(s,a)] to be the expected value of the random variable r(s,a), and assume that the (noisy) rewards are bounded, i.e., ∃r̅≥ 1 such that |r(s,a)| ≤r̅, ∀ (s,a) ∈S×A.[The assumption of bounded rewards is made here only to convey the key ideas in their simplest form. Later, in Section <ref>, we will see how our techniques naturally allow for reward distributions with finite second moments that can have potentially unbounded support.] We consider deterministic policies π: 𝒮→𝒜 that map states to actions. The “goodness" of a policy π is captured by a γ-discounted infinite-horizon cumulative reward V_π: S↦ℝ given by:
V_π(s) = 𝔼[∑_t=0^∞γ^t r(s_t,a_t) | s_0 = s],
where s_t is the state at time t, a_t = π(s_t) is the action played at time t, and the expectation is taken w.r.t. the randomness in the states and rewards. We will refer to V_π(s) as the value function corresponding to state s. Given this premise, the goal of the learner is to find a policy π that maximizes V_π(s) simultaneously for all states s ∈S. It is well known that such a deterministic optimal policy π^* does exist <cit.>. Let us now briefly discuss how such a policy can be obtained when the MDP is known. To that end, we define the state-action value function Q_π:S×A↦ℝ as follows:
Q_π(s, a) = 𝔼[∑_t=0^∞γ^t r(s_t,a_t) | (s_0, a_0) = (s, a)].
Now let Q^* = Q_π^* denote the optimal state-action value function. Then, Q^* is the unique fixed point of the Bellman operator 𝒯^*: ℝ^|S| × |A|→ℝ^|S| × |A| given by:
(𝒯^*Q)(s,a) = R(s,a) + γ𝔼_s' ∼P(· | s,a)[max_a' ∈𝒜 Q(s',a')].
In other words, T^* (Q^*) = Q^*. The Bellman operator satisfies the following contraction property ∀ Q_1, Q_2 ∈ℝ^|S| × |A|:
‖𝒯^* (Q_1) - 𝒯^* (Q_2)‖_∞≤γ‖ Q_1 - Q_2‖_∞.
An immediate consequence of the above property is that the iterative update rule Q_t+1 = T^*(Q_t) guarantees exponentially fast convergence to Q^*. This is precisely the idea used in dynamic programming <cit.>. In our RL setting, however, the dynamics of the MDP are unknown, rendering the above idea infeasible. We now discuss a synchronous version <cit.> of the seminal Q-learning algorithm that finds Q^* without knowledge of the MDP dynamics.
Synchronous Q-learning. The synchronous Q-learning algorithm operates in iterations t=0, 1, …, where in each iteration t, the learner maintains an estimate Q_t of Q^*. The synchronous setting assumes the existence of a generative model such that in each iteration t, the learner gets to observe the following objects for each state-action pair (s,a) ∈S×A: (i) a new state s_t(s,a) drawn independently from P(·| s,a); and (ii) a stochastic reward r_t(s,a) drawn independently from R(s,a). Using this information, for each (s,a) ∈S×A, the learner updates Q_t(s,a) as follows:
Q_t+1(s,a) = (1- α_t)Q_t(s,a) + α_t (T_tQ_t)(s,a),
where {α_t} is a suitable step-size sequence, and T_t:ℝ^|S| × |A|→ℝ^|S| × |A| is an empirical Bellman operator constructed from observations at iteration t, and defined as
(T_tQ)(s,a) ≜ r_t(s,a) + γmax_a'∈𝒜Q(s_t(s,a),a'), ∀ Q ∈ℝ^|S| × |A|,
where r_t(s,a) ∼R(s,a), and s_t(s,a) ∼P(·| s,a). The term “synchronous" arises from the fact that in each iteration t, the learner gets to observe independent data samples for every state-action pair. As such, every component of the vector Q_t can be updated at iteration t, i.e., synchronously. Viewing synchronous Q-learning as a stochastic approximation (SA) scheme, one can show that under suitable assumptions on the step-size sequence {α_t}, Q_t → Q^* with probability 1 <cit.>. Our goal in this paper is to provide a finite-time analysis of synchronous Q-learning when the observed rewards can be potentially corrupted by an omniscient adversary. We formally describe our attack model below.
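Before introducing the attack model, the update rule in Eq. (<ref>) can be made concrete with the following minimal sketch; the state and action space sizes are arbitrary, a fixed step size is used for brevity, and the two sampling routines are placeholders for the generative model.

#define NS 10   /* |S|, illustrative */
#define NA  4   /* |A|, illustrative */

/* Placeholders for the generative model: s' ~ P(.|s,a) and r ~ R(s,a). */
int    sample_next_state(int s, int a);
double sample_reward(int s, int a);

/* One synchronous Q-learning iteration: every (s,a) pair is updated
 * using the previous iterate Q. */
void q_learning_step(double Q[NS][NA], double alpha, double gamma)
{
    double Qnew[NS][NA];
    for (int s = 0; s < NS; s++)
        for (int a = 0; a < NA; a++) {
            int    sp   = sample_next_state(s, a);
            double r    = sample_reward(s, a);
            double best = Q[sp][0];                 /* max over a' of Q(s', a') */
            for (int ap = 1; ap < NA; ap++)
                if (Q[sp][ap] > best) best = Q[sp][ap];
            Qnew[s][a] = (1.0 - alpha) * Q[s][a] + alpha * (r + gamma * best);
        }
    for (int s = 0; s < NS; s++)
        for (int a = 0; a < NA; a++)
            Q[s][a] = Qnew[s][a];
}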
Strong-Contamination Reward Attack Model. We consider an adversary that has complete knowledge of the MDP model and the observations of the learner in every iteration. Using this information, in each iteration t, the adversary can perturb the entire reward set {r_t(s,a)}_(s,a) ∈S×A arbitrarily. However, to make the problem meaningful, we will associate an attack budget ε∈ [0, 1/2) with the attacker: for each t ∈ℕ, the attacker is allowed to corrupt the reward sets in at most ε-fraction of the first t iterations. Our attack model is directly inspired by the strong contamination model from the robust statistics literature <cit.> where an adversary is allowed to arbitrarily perturb at most ε-fraction of the data points in a set; in our context, the reward observations constitute the data. We note that our model above is more powerful than the classical Huber attack model <cit.> where each data point can be corrupted with probability ε. In particular, the Huber model would imply that ε t reward sets are corrupted only on an average, for each t∈ℕ. In the sequel, we will use y_t(s,a) to denote the observed reward for state-action pair (s,a) in iteration t. In an iteration t where there is no corruption, y_t(s,a)=r_t(s,a), ∀ (s,a) ∈S×A.
Given the strong-contamination reward attack model and a failure probability δ∈ (0,1), our goal is to generate a robust estimate Q_t of the optimal value function Q^* such that with probability at least 1-δ, the ℓ_∞-error ‖Q_t - Q^* ‖_∞ is bounded from above by an error-function e(t,ε) that has optimal dependence on the number of iterations t and corruption fraction ε.
In the next section, we show that the vanilla synchronous Q-learning algorithm fails to achieve this goal. In Section <ref>, we then proceed to develop our proposed robust algorithm.
§ MOTIVATION
In this section, we will show that even with a small attack budget ε, an adversary can cause the vanilla Q-learning algorithm to converge to a point in ℝ^|𝒮|×|𝒜| arbitrarily far away from Q^*. To do so, it suffices to consider the Huber attack model <cit.>. Accordingly, in each iteration t, we toss a biased coin with probability of heads 1-ε. If the coin lands heads, the observed reward y_t(s,a) is drawn from the true reward distribution R(s,a), ∀ (s,a) ∈S×A. If it lands tails, y_t(s,a) is drawn from an arbitrary distribution C(s,a), ∀ (s,a) ∈S×A. Concretely, y_t(s,a) ∼ (1-ε) R(s,a) + εC(s,a). Under this Huber attack model, suppose the perturbed expected reward for (s,a) is given by R̃_c(s,a) = 𝔼[y_t(s,a)]. We have the following simple result.
Consider the vanilla synchronous Q-learning algorithm in Eq. (<ref>) with rewards perturbed based on the Huber attack model described above. If the step-size sequence satisfies α_t ∈ (0,1) with ∑_t=1^∞α_t = ∞ and ∑_t=1^∞α_t^2 < ∞, then with probability 1, Q_t →Q̃^*_c, where Q̃^*_c is the unique fixed point of the perturbed Bellman operator 𝒯̃^*_c: ℝ^|S| × |A|→ℝ^|S| × |A| defined by
(𝒯̃^*_c Q)(s,a) = R̃_c(s,a) + γ𝔼_s' ∼P(· | s,a)[max_a' ∈𝒜 Q(s',a')].
It suffices to simply note that under the Huber attack model, we end up running the synchronous Q-learning algorithm on a new MDP (𝒮, 𝒜, P, R̃_c, γ) that differs from the original MDP only in its reward function, where R̃_c is the perturbed reward function defined earlier. The claim then follows directly by appealing to the asymptotic convergence of synchronous Q-learning based on SA theory <cit.>.
Theorem <ref> tells us that under the Huber model, an adversary can bias the iterates generated by vanilla Q-learning towards the fixed point Q̃^*_c=𝒯̃^*_c(Q̃^*_c) of a perturbed Bellman operator 𝒯̃^*_c. However, it does not provide an explicit lower bound on the gap ‖Q̃^*_c -Q^* ‖_∞. Our next result reveals that this gap can be arbitrarily large.
There exists an MDP with finite state and action spaces for which the gap
‖Q̃^*_c -Q^* ‖_∞ can be arbitrarily large under the Huber attack model.
We will prove this result by constructing an explicit example that is inspired from <cit.>. To that end, consider the MDP in Fig. <ref>. We first describe the reward model without attacks. In state s =1, we get a deterministic reward d if a = L and -d if a = R, where d > 0 is a positive constant. For states s ∈{2,3}, irrespective of the chosen action, the reward is deterministically 1. Similarly, for states s ∈{4,5}, the reward is deterministically 0. Now consider a Huber attack model where the attacker only perturbs rewards in state s=1 as follows: for state-action pair (1, L) (resp., (1, R)), the observed reward is d (resp., -d) with probability 1-ε and -C (resp., C) with probability ε. Here, C > 0 is an attack signal that we will design carefully later. This immediately yields the following perturbed reward functions: R̃_c(1,L) = (1-ε)d-ε C and R̃_c(1,R)= -(1-ε)d+ε C. We now proceed to compute Q^* and Q̃^*_c. First, observe that given our choice of attack and MDP model, Q^* and Q̃^*_c only differ at state 1. Now using the Bellman optimality operators in (<ref>) and (<ref>), it is easy to verify that Q^*(s,a)=0 for s∈{4,5}, a ∈{L,R}, and Q^*(s,a)=(1-γ p)^-1 for s∈{2,3}, a ∈{L,R}. Next, one can similarly check that
Q^*(1,L) = d + β, Q̃^*_c(1,L) = (1-ε) d - ε C + β
Q^*(1,R) = -d + β, Q̃^*_c(1,R) = - (1-ε) d + ε C + β,
where β = p γ (1-γ p)^-1. Now pick the corruption signal C as ((2-ε)d + κ) ε^-1, where κ >0 is a tunable parameter. With this choice, we have Q̃^*_c(1,L)= -d - κ + β and Q̃^*_c(1,R)= d + κ + β, yielding ‖Q̃_c^* - Q^* ‖_∞ = max_a ∈𝒜|Q̃_c^*(1,a) - Q^*(1,a) | = 2d + κ. The claim of the theorem then follows from noting that κ can be chosen arbitrarily by the attacker. Additionally, since d, κ > 0, it is not hard to see that under the attack constructed above, the optimal action at state 1 also gets flipped: without the attack, the optimal action is L; under the attack, it is R.
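As a quick numerical sanity check of the algebra above, the following snippet evaluates the clean and corrupted fixed-point values at state 1; the parameter choices are arbitrary illustrative values.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Arbitrary illustrative values for d, kappa, eps, gamma, p. */
    double d = 1.0, kappa = 10.0, eps = 0.05, gamma = 0.9, p = 0.5;
    double beta = p * gamma / (1.0 - gamma * p);
    double C    = ((2.0 - eps) * d + kappa) / eps;   /* attack signal */

    double q_clean   = d + beta;                           /* Q*(1, L)              */
    double q_corrupt = (1.0 - eps) * d - eps * C + beta;   /* corrupted fixed point */

    printf("gap at (1, L): %g (expected 2d + kappa = %g)\n",
           fabs(q_corrupt - q_clean), 2.0 * d + kappa);
    return 0;
}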
Collectively, Theorems <ref> and <ref> reveal the vulnerability of the vanilla Q-learning algorithm to the Huber attack model. These findings directly motivate the robust algorithm we will develop in the next section.
§ ε-ROBUST Q-LEARNING ALGORITHM
Motivated by the vulnerability of vanilla Q-learning to reward-corruption attacks - as revealed in the previous section - we will develop a novel robust variant of the model-free synchronous Q-learning algorithm in this section. The basic idea behind our approach will be to use historical reward data collected for each state-action pair to construct a robust empirical Bellman operator in each iteration. To achieve this objective, we will drawn upon tools from the robust statistics literature; in particular, our algorithm will employ the univariate trimmed mean estimator developed in <cit.> for robust mean estimation in the face of a strong-contamination model. Let us first briefly describe this estimator, and then explain how it can be used for our purpose.
∙ Univariate Trimmed-Mean Estimator. The robust mean estimator in <cit.> - outlined as Algorithm <ref> - takes as input a confidence parameter δ, a known corruption fraction ε, and a corrupted data set generated as follows. Consider a clean data set comprising M independent copies of a scalar random variable Z with mean μ_Z and variance σ^2_Z. An adversary corrupts at most ε M of these copies, and the resulting corrupted data set is then made available to the estimator. The estimation process involves splitting the corrupted data set into two equal parts, denoted by Z_1, …, Z_M/2, Z̃_1, …Z̃_M/2. One of the parts is used to compute certain quantile levels for filtering out extreme values (line 2 of Algorithm <ref>). The robust estimate μ̂_Z of μ_Z is obtained by averaging the data points in the other part, with those data points falling outside the estimated quantile levels truncated prior to averaging (line 3 of Algorithm <ref>). In what follows, we will write the output of Algorithm <ref> on inputs ({Z_i}_i ∈ [M], ε, δ) succinctly as its trimmed-mean estimate.
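A simplified sketch of the estimator is given below. The split-and-truncate structure follows the description above, but the exact trimming level used in <cit.> involves additional constants; the level chosen here is purely illustrative.

#include <stdlib.h>
#include <math.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double clamp(double x, double lo, double hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Trimmed-mean sketch: data[0..m-1] may contain an eps-fraction of
 * corrupted points; delta is the failure probability. */
double trimmed_mean(const double *data, int m, double eps, double delta)
{
    if (m < 2) return (m == 1) ? data[0] : 0.0;

    /* First half: estimate quantile levels for filtering. */
    int half = m / 2;
    double *sorted = (double *)malloc((size_t)half * sizeof(double));
    if (!sorted) return 0.0;
    for (int i = 0; i < half; i++) sorted[i] = data[i];
    qsort(sorted, (size_t)half, sizeof(double), cmp_double);

    double level = eps + log(4.0 / delta) / (double)m;   /* illustrative level */
    int k = (int)(level * half);
    if (k > (half - 1) / 2) k = (half - 1) / 2;           /* keep lo <= hi */
    double lo = sorted[k];
    double hi = sorted[half - 1 - k];
    free(sorted);

    /* Second half: average with points outside [lo, hi] truncated. */
    double sum = 0.0;
    for (int i = half; i < m; i++) sum += clamp(data[i], lo, hi);
    return sum / (double)(m - half);
}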
The following result characterizes the performance of Algorithm <ref>, and will be invoked by us in our subsequent analysis.
<cit.>
Consider the trimmed mean estimator in Algorithm <ref>. Suppose ε∈ [0,1/16), and let δ∈ (0,1) be such that δ≥ 4 e^-M/2. Then, there exists an universal constant C, such that with probability at least 1-δ,
|μ̂_Z - μ_Z| ≤ Cσ_Z(√(ε)+√(log(4/δ)/M)).
We now describe how the estimator in Algorithm <ref> can be used to generate robust estimates of the optimal Q function.
∙ ε-Robust Q-Learning Algorithm. Our proposed robust Q-learning algorithm - outlined in Algorithm <ref> - starts with an initial Q-function estimate Q_0, and takes as input a corruption fraction ε, and a failure probability δ∈ (0,1). In each iteration t, for each (s,a) ∈S×A, the learner gets to observe s_t(s,a) drawn independently from P(·| s,a) (as in the standard synchronous Q-learning setting), and a potentially corrupted version of r_t(s,a) ∼R(s,a) denoted by y_t(s,a). Here, the corruption adheres to the strong contamination reward attack model specified in Section <ref>. Our strategy to safeguard against adversarial reward contamination is twofold: reward-filtering and thresholding. We describe these ideas below.
Reward-Filtering: First, instead of directly using y_t(s,a) to update Q_t(s,a), we instead compute a robust estimate of the expected reward R(s,a) for state-action pair (s,a) by using the univariate trimmed mean estimator in Algorithm <ref>. This is done by invoking Algorithm <ref> with (i) data set {y_k(s,a)}_0 ≤ k ≤ t comprising of the history of reward observations for (s,a), (ii) the corruption fraction ε, and (iii) a finer (relative to δ) confidence parameter δ_1=δ/(2 |S| |A| T). We denote the operation carried out by Algorithm <ref> succinctly by using the function in line 6 of Algorithm <ref>. Let the output of this operation be denoted r̃_t(s,a). While it is tempting to use r̃_t(s,a) to update Q_t(s,a), one needs to exercise caution as we explain next.
Reward-Thresholding. From Theorem <ref>, a couple of points are worth noting about the guarantee afforded by Algorithm <ref>. First, the guarantee holds only if the number of data samples exceeds T_lim, where
T_lim≜2 log(4/δ_1).
Second, the guarantee is not deterministic; instead, it only holds with high probability. For technical reasons that will become apparent later in our analysis, we need the sequence {Q_t} of iterates generated by our algorithm to be uniformly bounded deterministically. However, the above discussion suggests that simply using the output of the function to perform updates will not suffice to achieve this goal. As such, we employ a second layer of safety by carefully defining a threshold function as follows:
G_t=
2r̅, 0 ≤ t ≤ T_lim
Cr̅(√(log(4/δ_1)/t)+ √(ε)) + r̅, t ≥ T_lim+1
where C is the universal constant in Eq. (<ref>).
Fig. 1: Schematic diagram of the ε-Robust Q-Learning algorithm at time instant t ∈ [T].
Whenever the output of the function exceeds G_t in magnitude, we control it via the thresholding operation in line 8 of Algorithm <ref>. This ensures boundedness of iterates. When adequate data samples have not been collected, i.e., t≤ T_lim, we cannot rely on the statistical bound from Theorem <ref>; hence, we use the crude bound on the rewards to design G_t. However, once enough samples have in fact been collected, we would expect the output of to concentrate around R(s,a) with high-probability. This motivates the choice of G_t for t ≥ T_lim+1 based on Eq. (<ref>). As we argue later in Lemma <ref>, the above choice of G_t ensures that for t ≥ T_lim+1, the condition in line 7 of Algorithm <ref> gets violated with high probability, and r̃_t(s,a) remains the output of the function instead of the more conservative estimate in line 8. This turns out to be crucial for achieving tight rates. After filtering and thresholding, Q_t(s,a) is updated as
Q_t+1(s,a) = (1- α)Q_t(s,a) + α(r̃_t(s,a) + γmax_a'∈𝒜Q_t(s_t(s,a),a')).
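Putting the filtering, thresholding, and update steps together, one iteration of the algorithm can be sketched as follows. The constants r̅ and C below are illustrative placeholders (C is not known explicitly in practice), the sampling and observation routines stand in for the generative model and the possibly corrupted reward stream, clipping to ±G_t is used as a stand-in for the thresholding operation in line 8, and trimmed_mean refers to the sketch given earlier.

#include <math.h>

#define NS   10          /* |S|, illustrative            */
#define NA    4          /* |A|, illustrative            */
#define TMAX 100000      /* horizon T, illustrative      */
#define RBAR 1.0         /* reward bound r̅, illustrative */
#define CC   1.0         /* stand-in for the universal constant C of Eq. (7) */

/* From the trimmed-mean sketch above. */
double trimmed_mean(const double *data, int m, double eps, double delta);
/* Placeholders: generative-model sampling and the observed (possibly corrupted) reward. */
int    sample_next_state(int s, int a);
double observe_reward(int s, int a);

static double hist[NS][NA][TMAX];          /* reward history per (s,a) pair */

static double threshold(int t, double eps, double delta1)      /* G_t */
{
    double t_lim = 2.0 * log(4.0 / delta1);
    if (t <= t_lim)
        return 2.0 * RBAR;
    return CC * RBAR * (sqrt(log(4.0 / delta1) / (double)t) + sqrt(eps)) + RBAR;
}

/* One iteration of the robust synchronous update at time t. */
void robust_q_step(double Q[NS][NA], int t,
                   double alpha, double gamma, double eps, double delta1)
{
    double Qnew[NS][NA];
    for (int s = 0; s < NS; s++)
        for (int a = 0; a < NA; a++) {
            hist[s][a][t] = observe_reward(s, a);                     /* y_t(s,a) */
            double r = trimmed_mean(hist[s][a], t + 1, eps, delta1);  /* line 6   */
            double g = threshold(t, eps, delta1);
            if (fabs(r) > g)                   /* line 7: estimate too large       */
                r = (r > 0.0 ? g : -g);        /* clipping stands in for line 8    */
            int sp = sample_next_state(s, a);
            double best = Q[sp][0];
            for (int ap = 1; ap < NA; ap++)
                if (Q[sp][ap] > best) best = Q[sp][ap];
            Qnew[s][a] = (1.0 - alpha) * Q[s][a] + alpha * (r + gamma * best);
        }
    for (int s = 0; s < NS; s++)
        for (int a = 0; a < NA; a++)
            Q[s][a] = Qnew[s][a];
}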
This completes the description of Algorithm <ref>. To state our main result concerning its finite-time performance, let us define d_t ≜‖ Q_t - Q^*‖_∞. Our main result is as follows.
(Robust Q-learning bound) Suppose the corruption fraction satisfies ε∈ [0,1/16). Then, given any δ∈ (0,1), the output of Algorithm <ref> with step-size α = log T/(1-γ)T satisfies the following bound with probability at least 1-δ:
d_T ≤d_0/T + O( r̅/(1-γ)^5/2log T/√(T)√(log(|𝒮||𝒜|T/δ)) + r̅√(ε)/1-γ).
We defer the proof of this result to Section <ref>.
Discussion. To parse the guarantee in Theorem <ref>, let us simplify the bound in the theorem to obtain
d_T ≤d_0/T+Õ( r̅/(1-γ)^5/2√(T))_T_1 + O( r̅√(ε)/1-γ)_T_2.
From the above display, we note that our proposed Robust Q-Learning algorithm yields a high-probability ℓ_∞ sample-complexity bound that features two dominant terms: term T_1 captures the behavior of the algorithm in the absence of attacks, while term T_2 captures the effect of reward corruption. The dependence of term T_1 on both 1/(1-γ) and the number of iterations T exactly matches that in the recent works <cit.> and <cit.>. Thus, in the absence of attacks, our result is consistent with prior art. In the presence of strong-contamination attacks, our algorithm guarantees convergence to Q^* up to an additive error that scales as O(r̅√(ε)). We conjecture that such a term is unavoidable. To see why, consider a trivial MDP that comprises just one state s and one action a. In this case, the problem of learning Q^* essentially boils down to estimating the mean R(s,a) using a set of noisy and corrupted data points. With this reduction, we can now invoke fundamental lower bounds from the robust mean-estimation literature which tell us that an error of Ω(σ√(ε)) is unavoidable <cit.> when the clean noise samples have variance σ^2. Since the noisy rewards in our setting come from the interval [-r̅, r̅], r̅^2 is precisely the variance-proxy for our setting. While the above argument can be formalized, we omit it here due to space constraints. To sum up, our work provides the first near-optimal guarantee for Q-learning under a strong reward corruption model.
Notice that our proposed algorithm uses knowledge of the bound r̅ on the rewards to design the dynamic threshold in Eq. (<ref>). At this stage, one might consider a much simpler algorithm that simply ignores rewards that have magnitude larger than r̅. Unfortunately, such a strategy is not guaranteed to work when the true reward distributions have infinite support. However, in Section <ref>, we will explain that with minor modifications to Algorithm <ref>, one can continue to handle reward distributions with infinite support, under the assumption of bounded second moments.
§ PROOF OF THEOREM <REF>
In this section, we will provide a detailed proof of our main result, namely Theorem <ref>. There are two sources of error in our update rule: one due to the randomness (i.e., noise) that originates from the sampling process, and the other due to reward-poisoning by the adversary. As such, the first step in our analysis is to set up an error-decomposition that clearly separates out the statistical terms in our update rule from those originating due to adversarial contamination. To that end, with some simple algebra, observe that our proposed Robust Q-learning update rule in Eq. (<ref>) takes the following form:
Q_t+1(s,a) = (1- α)Q_t(s,a)
+ α(R(s,a) +γ𝔼_s' ∼ P(· | s,a)[max_a' ∈𝒜 Q_t(s',a')])_(𝒯^* Q_t)(s,a) + α [r̃_t(s,a)- R(s,a)]_ℰ_t(s,a)
+ α(γmax_a'∈𝒜Q_t(s_t(s,a),a')- γ𝔼_s' ∼ P(· | s,a)[max_a' ∈𝒜 Q_t(s',a')])_𝒟_t(s,a),
where we used the definition of the Bellman operator in Eq. (<ref>). Stacking up the components Q_t(s,a), ℰ_t(s,a), and 𝒟_t(s,a) for each state-action pair (s,a) ∈𝒮×𝒜 into vectors Q_t, ℰ_t, and 𝒟_t ∈ℝ^|𝒮| |𝒜|, respectively, the display in Eq. (<ref>) takes the following compact form:
Q_t+1 = (1-α)Q_t + α(𝒯^* (Q_t) +ℰ_t+𝒟_t).
Now subtracting Q^* from both sides of Eq. (<ref>), using 𝒯^* (Q^*) =Q^*, and rolling out the resulting equation yields:
Q_t+1-Q^* = (1-α)^t+1(Q_0-Q^*)
+ ∑_k=0^tα (1-α)^t-k (𝒯^* (Q_k) - 𝒯^* (Q^*))+∑_k=0^tα (1-α)^t-kℰ_k
+ ∑_k=0^tα (1-α)^t-k𝒟_k.
Defining d_t ≜‖ Q_t - Q^*‖_∞, ∀ t ≥ 0, and taking the infinity norm on both sides of Eq. (<ref>), we obtain:
d_t+1≤ (1-α)^t+1d_0 + γ∑_k=0^tα (1-α)^t-k d_k + A_1,t + A_2,t,
where
A_1,t≜‖∑_k=0^tα (1-α)^t-k𝒟_k‖_∞, A_2,t≜‖∑_k=0^tα (1-α)^t-kℰ_k‖_∞.
To arrive at Eq. (<ref>), we used the contraction property of the Bellman operator in Eq. (<ref>) to infer that
‖𝒯^* (Q_k) - 𝒯^* (Q^*)‖_∞≤γ‖ Q_k - Q^*‖_∞≤γ d_k. We have thus proved the following lemma.
(Error-Decomposition)
The robust update rule in Eq. (<ref>) satisfies the error-bound in Eq. (<ref>) for all t≥ 0.
The remainder of our analysis will focus on providing bounds for the terms A_1,t and A_2,t featuring in Eq. (<ref>).
∙ Bounding the Effect of Noise. We first turn our attention to bounding the term A_1,t in Eq. (<ref>). To that end, let ℱ_t denote the σ-algebra generated by {Q_k}_0 ≤ k ≤ t. Now for each (s,a) ∈𝒮×𝒜, since s_t(s,a) is generated independently from P(·|s,a), and Q_t is ℱ_t-adapted, it is easy to see that 𝔼[𝒟_t(s,a)|ℱ_t]=0. Thus, ∑_k=0^tα (1-α)^t-k𝒟_k has a martingale structure that we can hope to exploit by appealing to the Azuma-Hoeffding bound for martingales. However, this requires care, as we describe next. To apply Azuma-Hoeffding in its standard form, we need to argue that the martingale difference sequence {D_t(s,a)} is uniformly bounded with probability 1. This difference sequence features Q_t, which, in turn, contains the filtered reward r̃_t(s,a); see Eq. (<ref>). Since r̃_t(s,a) depends on the sequence of observations {y_k(s,a)}_0 ≤ k ≤ t, and some of these observations can be arbitrarily corrupted by the adversary, we need to carefully justify the boundedness of 𝒟_t(s,a). Unfortunately, Theorem <ref> cannot help us in this regard since it only provides a bound that holds with high-probability; however, we seek a bound that holds almost surely. As revealed by our next result, this is precisely where the additional robustness afforded by the trimming step in line 8 of Algorithm <ref> will play a crucial role. To simplify the analysis, we will assume without loss of generality that Q_0=0.
(Boundedness of Iterates) The following is true for the iterates generated by Algorithm <ref>:
| Q_t(s,a) |≤3Cr̅/1-γ, ∀ (s,a) ∈𝒮×𝒜, ∀ t ≥ 0,
where C is the universal constant in Eq. (<ref>).
We will prove this result via induction. Fix any (s,a) ∈S×A. Since Q_0=0, the claim of the lemma holds trivially at t=0. This completes the base case of induction. Now suppose the bound claimed in the lemma holds for all 0 ≤ k ≤ t. To show that it also holds at iteration t+1, we first need an estimate on |r̃_t(s,a)|. From lines 7 and 8 of Algorithm <ref>, it is evident that |r̃_t(s,a)| ≤ G_t, where G_t is the trimming radius in Eq. (<ref>). Thus, to control |r̃_t(s,a)|, it suffices to bound G_t. Now from the definition of T_lim in Eq. (<ref>), and the fact that ε∈ (0,1), it is easy to see from Eq. (<ref>) that G_t ≤ 3Cr̅, ∀ t ≥ T_lim+1; here, we used C ≥ 1. Since G_t = 2r̅ for 0 ≤ t ≤ T_lim, we conclude |r̃_t(s,a)| ≤ 3Cr̅, ∀ t ≥ 0. To proceed with the induction step, let us now observe from Eq. (<ref>) that
| Q_t+1(s,a) |≤ (1 - α) | Q_t(s,a) | + α|r̃_t(s,a) + γmax_a'∈𝒜Q_t(s_t(s,a),a')|
(a)≤(1 - α) | Q_t(s,a) | + α(|r̃_t(s,a)| + γmax_a'∈𝒜| Q_t(s_t(s,a),a')|)
(b)≤ (1 -α) 3Cr̅/1-γ + α( 3Cr̅ + γ3Cr̅/1-γ)
= (1 -α) 3Cr̅/1-γ + α3Cr̅/1-γ = 3Cr̅/1-γ,
where for (a), we used the triangle inequality, and for (b), we used the induction hypothesis in tandem with the previously established fact that |r̃_t(s,a)| ≤ 3Cr̅, ∀ t ≥ 0. This completes the induction step and the proof.
Armed with Lemma <ref>, we can now appeal to the Azuma-Hoeffding Theorem for martingales to control the noise-induced term A_1,t. To do so, we note that a sequence of random variables Z_1, Z_2, Z_3, … is called a martingale difference sequence w.r.t. some filtration G_t if, for each t, it holds that 𝔼[Z_t| G_t-1]=0. To keep the paper self-contained, we recall the following version of Azuma-Hoeffding from <cit.>.
(Azuma-Hoeffding)
Let Z_1, Z_2, Z_3, … be a martingale difference sequence with |Z_i| ≤ c_i for all i ∈ℕ, where each c_i is a positive real. Then, for all λ≥ 0:
ℙ(|∑_i=1^n Z_i| ≥λ) ≤ 2e^-λ^2/2∑_i=1^nc_i^2.
We have the following result that controls the effect of noise in our update rule.
(Noise Bound) With probability at least 1- δ/2, the following bound holds simultaneously ∀ t ∈ [T]:
‖∑_k=0^tα (1-α)^t-k𝒟_k‖_∞≤6Cr̅γ/1-γ√(2αlog(4|𝒮||𝒜|T/δ)),
where 𝒟_k is as defined in Eq. (<ref>).
Let us fix a state-action pair (s,a) ∈S×A, and a time-step t ∈ [T]. We have already argued that 𝔼[𝒟_t(s,a)|ℱ_t]=0, i.e., {D_k(s,a)} is a martingale difference sequence. Furthermore, from Lemma <ref>, we have
|𝒟_t(s,a)| = |γmax_a' ∈𝒜 Q_t(s_t(s,a), a') - γ𝔼_s' ∼ P(· | s, a)max_a' ∈𝒜 Q_t(s', a') |
≤γmax_a' ∈𝒜| Q_t(s_t(s,a), a') | + γ𝔼_s' ∼ P(· | s, a)max_a' ∈𝒜| Q_t(s', a') |
≤6Cr̅γ/1-γ≜Δ.
Noting that {α (1-α)^t-kD_k(s,a)} is also a martingale difference sequence, and applying Lemma <ref>, we note that the following bound holds with probability at least 1-δ_2:
|∑_k=0^tα(1-α)^t-k𝒟_k(s,a)| ≤Δ√( 2 α^2 log(2/δ_2) ∑_k=0^t (1-α)^2(t-k))
≤Δ√( 2 α^2 log(2/δ_2) ∑_k=0^t (1-α)^(t-k))
≤Δ√( 2 α^2 log(2/δ_2) ∑_i=0^∞ (1-α)^i)
= Δ√( 2 αlog(2/δ_2)),
where for the second inequality, we used (1-α) < 1. Using the above fact, and union-bounding over all (s,a) ∈S×A, we conclude that the following bound holds with probability at least 1-δ_2|S| |A|:
‖∑_k=0^tα (1-α)^t-k𝒟_k‖_∞ = max_(s,a) ∈𝒮×𝒜|∑_k=0^tα (1-α)^t-k𝒟_k(s,a)|
≤Δ√( 2 αlog(2/δ_2)).
Union-bounding over all t∈ [T], we conclude that the bound in Eq. (<ref>) holds simultaneously for all t ∈ [T] with probability at least 1-δ_2 |𝒮||𝒜| T. The claim of the lemma now follows by setting δ_2 = δ/(2|𝒮||𝒜| T).
With the above developments, we now have a handle over the term A_1,t in the main error bound of Eq. (<ref>).
∙ Bounding the Effect of Adversarial Contamination. The effect of adversarial contamination gets manifested in the term A_2,t of Eq. (<ref>). The following key result helps us control this term.
(Adversarial Corruption Bound) Suppose ε∈ [0,1/16). Then, with probability at least 1- δ/2, the following bound holds simultaneously ∀ t ∈ [T]:
‖∑_k=0^tα (1-α)^t-kℰ_k‖_∞≤ 8 α Cr̅√(T log(8|𝒮||𝒜|T/δ))+Cr̅√(ε),
where ℰ_k is as defined in Eq. (<ref>).
We will split our analysis into two separate cases.
Case I. Consider first the case when t ∈ [T_lim]. Fix any (s,a) ∈S×A. Recalling that ℰ_t(s,a) = r̃_t(s,a)- R(s,a), we have
|E_t(s,a)| ≤ |r̃_t(s,a)| + |R(s,a)| ≤ G_t + r̅≤ 3r̅,
where we used |r̃_t(s,a)| ≤ G_t, ∀ t ≥ 0 (based on lines 7 and 8 of Algorithm <ref>), and Eq. (<ref>). We also used the fact that since the uncorrupted reward r_t(s,a) ∈ [-r̅, r̅], the expected value of r_t(s,a), namely R(s,a), can have magnitude no larger than r̅. Thus, for t ∈ [T_lim], we have
|∑_k=0^tα(1-α)^t-kℰ_k(s,a)| ≤∑_k=0^tα(1-α)^t-k|ℰ_k(s,a)|
≤ 3r̅α∑_k=0^t(1-α)^t-k
≤ 3r̅α∑_k=0^T_lim(1-α)^t-k
≤ 3r̅α T_lim.
Since the above argument applies identically to every (s,a) ∈S×A, we conclude that for every t ∈ [T_lim],
‖∑_k=0^tα (1-α)^t-kℰ_k‖_∞ = max_(s,a) ∈𝒮×𝒜|∑_k=0^tα (1-α)^t-kℰ_k(s,a)|
≤ 3r̅α T_lim.
Note that the above bound holds deterministically.
Case II. Now suppose T_lim+1 ≤ t ≤ T, and fix a (s,a) ∈S×A as before. To control E_t(s,a) in this case, we wish to use the bound from robust mean estimation in Theorem <ref>. To that end, we make the following observations. First, under our synchronous sampling model, the reward samples {r_k(s,a)}_0 ≤ k ≤ t are i.i.d. random variables with mean R(s,a). Second, since r_t(s,a) ∈ [-r̅, r̅], ∀ t≥ 0, the variance of these samples is at most r̅^2. Third, under the assumptions on our attack model, at most ε-fraction of the samples in {r_k(s,a)}_0 ≤ k ≤ t can be corrupted by the adversary. Thus, invoking Theorem <ref>, we note that the output r̃_t(s,a) of the trimmed-mean step in line 6 of Algorithm <ref> satisfies the following bound with probability at least 1-δ_1:
|r̃_t(s,a) - R(s,a)| ≤ Cr̅(√(log(4/δ_1)/t)+ √(ε))_(*).
Now consider an event X where the above bound holds simultaneously for all (s,a) ∈S×A, and for all T_lim+1 ≤ t ≤ T. Union-bounding over all state-action pairs and time-steps in the above interval, we note that event X has measure at least 1- δ_1 |S| |A| (T-T_lim) > 1 - δ_1 |S| |A| T. For the remainder of the analysis, we will condition on the event X. Since |R(s,a)| ≤r̅, observe that given our choice of G_t in Eq. (<ref>), the output r̃_t(s,a) of the trimmed-mean step satisfies |r̃_t(s,a)| ≤ G_t on event X. Thus, the condition in line 7 of Algorithm <ref> gets violated and line 8 gets bypassed. In other words, r̃_t(s,a) remains as in line 6 of Algorithm <ref>, and we obtain that |E_t(s,a)| is bounded from above by (*) on event X, implying
|∑_k=T_lim+1^tα (1-α)^t-kℰ_k(s,a)|≤∑_k=T_lim+1^tα (1-α)^t-k|ℰ_k(s,a) |
(a)≤ C ∑_k=T_lim+1^tα (1-α)^t-k( √(log(4/δ_1)/k) + √(ε))
≤α C √(log(4/δ_1))∑_k=T_lim+1^t1/√(k) + α C ∑_k=0^∞ (1-α)^k√(ε)
(b)≤2α C √(T log(4/δ_1)) + C√(ε).
For (a), we used |E_t(s,a)| ≤ (*) on event X, and for (b), we used a standard trick of bounding a sum by an integral to control the first term in the inequality. Combined with our analysis for Case 1, we then have on the event X:
|∑_k=0^tα(1-α)^t-kℰ_k(s,a)| ≤|∑_k=0^T_limα(1-α)^t-kℰ_k(s,a)|
+|∑_k=T_lim+1^tα(1-α)^t-kℰ_k(s,a)|
≤ 3αr̅ T_lim + 2α Cr̅√(T log(4/δ_1)) + Cr̅√(ε)
(a)≤ 3αr̅√(T_lim)√(T) + 2α Cr̅√(T log(4/δ_1)) + Cr̅√(ε)
(b)≤8 α Cr̅√(T log(4/δ_1)) + Cr̅√(ε),
where for (a), we used T_lim≤ T, and for (b), we used the expression for T_lim in Eq. (<ref>), and simplified using C≥ 1. We conclude that on the event X, the bound in Eq. (<ref>) applies simultaneously to all t ∈ [T], and all (s,a) ∈S×A. The claim of the lemma now follows immediately by picking δ_1 = δ/(2 |S| |A| T).
We are now ready to complete the proof of Theorem <ref>.
(Proof of Theorem <ref>) Our claim is that with probability at least 1-δ, the following bound holds simultaneously ∀ t∈ [T]∪{0}:
d_t ≤ (1-α(1-γ))^t d_0 + W/1-γ, where
W ≜6Cr̅γ/1-γ√(2αlog(4|𝒮||𝒜|T/δ)) + 8 α Cr̅√(T log(8|𝒮||𝒜|T/δ))+Cr̅√(ε).
We remind the reader here that d_t = ‖ Q_t - Q^* ‖_∞. We will prove the above claim via induction. Trivially, the induction claim holds deterministically for t=0. Now from Lemmas <ref> and <ref>, let us note that there exists an event - say Y - of measure at least 1-δ on which A_1,t+A_2,t≤ W, ∀ t ∈ [T], where A_1,t and A_2,t are as defined in Eq. (<ref>). We will condition on the “good" event Y for the remainder of the analysis. As our induction hypothesis, suppose on the event Y, the claim in Eq. (<ref>) holds for all k ∈ [t]. To argue that it also holds for iteration t+1, we invoke the error decomposition in Lemma <ref> to obtain:
d_t+1 ≤ (1-α)^t+1d_0 + γ∑_k=0^tα (1-α)^t-k d_k + A_1,t + A_2,t
≤ (1-α)^t+1 d_0 + αγ∑_k=0^t (1-α)^t-k(1-α(1-γ))^k d_0_(**)
+ αγ W/1-γ∑_k=0^t (1-α)^t-k + W
≤ (1-α)^t+1d_0 + (**) + αγ W/1-γ∑_k=0^∞ (1-α)^k + W
= (1-α)^t+1d_0 + (**) + γ W/1-γ + W
= (1-α)^t+1d_0 + (**) + W/1-γ,
where for the second inequality, we used the induction hypothesis in tandem with the fact that on the event Y, A_1,t+A_2,t≤ W, ∀ t ∈ [T]. Now observe that
(**) = αγ (1-α)^t d_0 ∑_k=0^t(1+ αγ/(1-α))^k
=(1-α)^t+1d_0 [ (1+ αγ/(1-α))^t+1-1 ]
=(1-α (1-γ))^t+1d_0 - (1-α)^t+1d_0.
Combining the above display with Eq. (<ref>) establishes the induction claim for iteration t+1. We have essentially argued that the bound in Eq. (<ref>) holds for all t∈ [T] on the event Y. The claim we set out to prove follows by noting that Y has measure at least 1-δ. All that remains now is to simplify the bound in Eq. (<ref>) with t=T and α set to log T/(1-γ)T. Using 1-x ≤exp(-x), ∀ x ≥ 0, we obtain (1-α(1-γ))^T d_0 = d_0/T. Furthermore, a bit of simple algebra reveals that
W/1-γ = O( r̅/(1-γ)^5/2log T/√(T)√(log(|𝒮||𝒜|T/δ)) + r̅√(ε)/1-γ).
This completes the proof of Theorem <ref>.
§ TACKLING UNBOUNDED REWARD DISTRIBUTIONS
In this section, we will significantly relax the assumption of the rewards being bounded. In fact, we will not even require the true reward distributions to be sub-Gaussian. Let us now formalize the reward model. In the absence of corruptions, as before, an agent gets to observe a noisy reward r(s,a) ∼R(s,a) upon playing action a in state s. Moreover, R(s,a) ≜𝔼_r(s,a) ∼R(s,a)[r(s,a)], and 𝔼_r(s,a) ∼R(s,a)[(r(s,a)- R(s,a))^2] ≤σ^2. In words, playing action a in state s generates a noisy reward random variable r(s,a) with mean R(s,a) and variance upper-bounded by σ^2. The key departure from the model in Section <ref> is that we no longer require r(s,a) to be bounded; this naturally rules out the possibility of using trivial thresholding algorithms that ignore rewards falling outside a certain interval since the reward distribution R(s,a) can potentially have an infinite support. Moreover, since we only require finiteness of the second moment, the reward distributions can be heavy-tailed. This makes it particularly challenging to distinguish between a corrupted reward sample and a clean reward sample drawn from the tail of the distribution. Nonetheless, with little to no modifications to our developments thus far, we can handle the challenging setting described above. To make this precise, we make the basic assumption that the means of all reward functions are uniformly bounded[Note that this should not be confused with the noisy reward realizations being uniformly bounded.]; without such an assumption, one cannot even guarantee the finiteness of the state-action value functions. Accordingly, let B ≥ 1 be such that |R(s,a)| ≤ B, ∀ (s,a) ∈S×A. If we re-define r̅ as max{B, σ}, and plug in this definition of r̅ into the threshold in Eq. (<ref>), then with no further modifications to the algorithm or analysis, our main result in Theorem <ref> goes through. This follows by simply revisiting the argument in Case II of Lemma <ref>. The main message here is that even in the absence of corruptions, one can employ our proposed algorithm to handle heavy-tailed rewards, i.e., the additional robustness to heavy-tailed rewards comes essentially for free with our approach.
§ NUMERICAL RESULTS
§ CONCLUSION
We studied model-free synchronous Q-learning under a strong-contamination reward attack model, and developed a robust algorithm that provides near-optimal guarantees. There are several immediate directions that are part of our ongoing work: establishing fundamental lower bounds, considering the effect of asynchronous sampling, and studying the function approximation setting. In addition, the design of our algorithm requires knowledge of an upper bound on the means and variances of the reward distributions. It would be interesting to see if one can continue to achieve the bounds in this paper without such knowledge; this appears to be quite non-trivial.
|
http://arxiv.org/abs/2409.03720v1 | 20240905172405 | Confidential Computing Transparency | [
"Ceren Kocaoğullar",
"Tina Marjanov",
"Ivan Petrov",
"Ben Laurie",
"Al Cutter",
"Christoph Kern",
"Alice Hutchings",
"Alastair R. Beresford"
] | cs.CR | [
"cs.CR"
] |
Confidential Computing Transparency
Ceren Kocaoğullar^1, Tina Marjanov^1, Ivan Petrov^2, Ben Laurie^2, Al Cutter^2, Christoph Kern^2, Alice Hutchings^1, Alastair R. Beresford^1
^1University of Cambridge    ^2Google
5th September 2024
===================================
§ ABSTRACT
Confidential Computing is a security paradigm designed to protect data in-use by leveraging hardware-based Trusted Execution Environments (TEEs).
While TEEs offer significant security benefits, the need for user trust remains a challenge, as attestation alone cannot guarantee the absence of vulnerabilities or backdoors.
To address this, we propose a Confidential Computing Transparency framework with progressive levels of transparency.
This framework goes beyond current measures like open-source code and audits by incorporating accountability for reviewers and robust technical safeguards, creating a comprehensive trust chain.
Our tiered approach provides a practical pathway to achieving transparency in complex, real-world systems.
Through a user study with 400 participants, we demonstrate that higher levels of transparency are associated with increased user comfort, particularly for sensitive data types.
§ INTRODUCTION
Confidential Computing is a security approach that aims to protect data while it is in-use.
This relatively new approach extends the long-standing ability of protecting data at-rest using encryption, and in-transit, using cryptographic protocols, such as TLS.
Confidential Computing uses hardware-based Trusted Execution Environments (TEEs) to achieve this level of privacy.
Several TEE implementations are available from prominent hardware providers, such as Intel SGX <cit.> and TDX <cit.>, AMD SEV-SNP <cit.>, and Arm CCA <cit.>.
Commercial cloud platforms, including Google Cloud <cit.>, Microsoft Azure <cit.>, and Amazon Web Services <cit.> have made Confidential Computing widely available.
Using TEEs provides Confidential Computing distinctive properties compared to other privacy-preserving computation techniques like homomorphic encryption and secure multiparty computation.
Specifically, TEEs provide confidentiality for both data and code in execution through techniques like hardware isolation and memory encryption.
TEEs also protect the integrity of data and code throughout execution.
With these unique security and privacy properties, Confidential Computing serves as a versatile tool that can be used independently or complement other privacy methods, such as homomorphic encryption.
Therefore, Confidential Computing is an essential component for building a foundation for comprehensive private computation.
To provide Confidential Computing, a TEE must be attestable <cit.>.
In the realm of Confidential Computing, attestation refers to gaining information about the authenticity, integrity and certain runtime properties of a TEE through validating a hardware-signed attestation.
The attestation evidence contains relevant measurements of the Trusted Computing Base (TCB) <cit.>, which is the collection of firmware and software that the security of a system depends on <cit.>.
However, while attestation verifies that the intended software is running on an intended TEE, it does not guarantee the absence of vulnerabilities or backdoors within a Confidential Computing system.
Therefore, unless the user builds and installs the entire software stack that runs on top of the TEE by themselves, and the hardware is correct and reviewable, attestation alone does not provide enough evidence to the user that they can trust a Confidential Computing system (<ref>).
To address this trust gap, TEE providers are progressively adopting transparency measures.
Critical software components are increasingly open-source, including AMD SEV firmware <cit.>, Intel TDX modules <cit.>, and ARM Trusted Firmware <cit.>.
The AWS Nitro architecture also underwent a recent audit, the findings of which have been made public <cit.>.
While open-source code and professional audits contribute to reduced need for user trust, they are not sufficient on their own.
Additional transparency mechanisms are essential to hold those who write or review the source code accountable.
These mechanisms should also allow users to practically verify that a specific version of a piece of software source code matches its corresponding binary.
Such transparency mechanisms provide users with information about trustworthiness of a binary.
Attestation can serve as the final link in the trust chain, allowing users to verify that the trusted binary runs on a certain machine.
Although transparency does not ensure security, as evidenced by significant vulnerabilities found in open-source software <cit.>, it offers a substantial improvement over the alternative of naive trust.
Transparency allows for community scrutiny, which typically leads to quicker detection and resolution of issues, thereby promoting greater accountability.
Transparency is not limited to source code:
Certificate Transparency is now the standard method for monitoring TLS certification <cit.>, inspiring similar initiatives such as Key Transparency <cit.> and Binary Transparency <cit.>.
In this paper, we propose a transparency framework for Confidential Computing systems which systematises the landscape of possible solutions and considers a diverse range of possible methods, including open-source code, reproducible builds and provenance or endorsement statements by first- and third-party and community reviews processes.
These methods are underpinned by digital certificates and verifiable transparency logs to permit remote attestation of these steps to users.
Our framework supports a hierarchy of transparency where each level builds on the previous one (<ref>).
This layered approach supports a clear pathway of improvement for Confidential Computing systems to follow as time and technology allows.
While the focus of this paper is on Confidential Computing Transparency, the proposed ideas may also be applied to hardware and binary transparency in a wider context.
It is essential to understand how these advancements impact user trust.
To explore this, we conducted a user study targeting non-expert computer users who were recruited via Prolific.
We presented 400 participants with information on how data is processed by a variety of Confidential Computing systems (<ref>) to assess perceptions of trustworthiness when processing personal information.
The study focused on two key aspects: (1) how different levels of transparency affect end-user trust in Confidential Computing, and (2) what types of data end-users are willing to share with Confidential Computing systems at different transparency levels.
Our results indicate that end-users of computer systems largely recognise the benefits of increased transparency, and the trust they placed in Confidential Computing systems correlated positively with the levels assigned by our framework.
We also found that greater transparency is especially valued when it concerns data that users consider sensitive.
Our main contributions are summarised as follows:
* We establish key concepts related to transparency in Confidential Computing, covering its definition, significance, scope, beneficiaries and facilitators (<ref>).
* We propose the Confidential Computing Transparency framework, bridging technical security in TEEs and user trust through reviewer accountability and a robust trust chain (<ref>).
Our framework offers tiered transparency levels, adaptable for complex needs and limitations of real-world systems.
* We conduct the first user study to empirically study end-user trust in Confidential Computing with 400 participants (<ref>).
Results show that increased transparency positively impacts trust, especially for sensitive data.
§ DEFINING TRANSPARENCY
In this section, we answer five important questions about transparency in Confidential Computing: what is it and why is it important (<ref>), what is subject to transparency (<ref>), who stands to benefit from it (<ref>), and who facilitates it (<ref>).
We defer answering the question of how transparency can be achieved to <ref>.
§.§ Necessity of transparency
Attestation is a process that provides insights into the trustworthiness of a TEE.
It often involves a hardware-signed proof about some information about the origin and current state of a TEE <cit.>.
There are three types of claims that may be derived from this signed proof:
* Authenticity:
Attestation allows users to verify the origin of the TEE and the workload running inside it.
It involves verifying that the TEE has been produced by the expected manufacturer, and that it runs the expected workload.
* Integrity:
Attestation can also provide users with claims about whether the software components, such as the firmware, bootloader, operating system, and workload, have been tampered with.
* Runtime status: Attestation can additionally provide partial insights into the runtime status of the current state of the TEE, such as the execution mode.
Although attestation plays a pivotal role in building trust in the authenticity and integrity of a TEE <cit.>, it has limitations in providing comprehensive security assurance.
Most significantly, attestation cannot guarantee the absence of vulnerabilities or backdoors within the attested components.
For example, if firmware or hardware in the Trusted Computing Base (TCB) of the TEE contains a hidden backdoor granting unauthorised access to user data, attestation may still succeed.
This is because the backdoor does not compromise the measurable authenticity or integrity of the TEE, and the runtime-related data included in the attestation evidence is restricted in scope.
A transparent approach with thorough review processes is crucial for uncovering vulnerabilities or backdoors.
Such reviews can include inspecting the source code of critical binaries, evaluating the tools used in generating the binaries from the source code, and conducting comprehensive assessments of the system's architecture (<ref>).
Embracing this kind of transparency can mitigate hidden threats, empower users and stakeholders to make informed decisions, and ultimately provide a high level of assurance about the security of a Confidential Computing system.
§.§ Scope of transparency
When discussing transparency in Confidential Computing, it is crucial to extend our perspective beyond the TEE.
Communication channels with a TEE, where users load private data and code, must be secured, for example, by using a cryptographic protocol.
Similarly, when multiple TEEs, such as those on a CPU and a GPU, collaborate to process sensitive data, the communication between them must also be secure.
Therefore, we consider the entire end-to-end Confidential Computing systems rather than focusing only on TEEs.
In an end-to-end Confidential Computing system, we classify the components that could jeopardise the confidentiality or integrity of user data if they are compromised or malicious as sensitive components.
These include the TEE TCB for security and integrity (e.g. the components that process plaintext user data, generate or store keys), as well as endpoints and protocols used in TEE-to-user or TEE-to-TEE communication.
While transparency should ideally apply to all sensitive components, not all benefit equally.
We identify three prerequisites for benefiting from transparency in this specific context:
* Reviewability:
It should be possible to gain meaningful information about the security of a component through inspecting it.
This goal is inherently subjective and must be considered on a case-by-case basis.
For example, the ideal approach to achieve reviewability for a software component is granting reviewers access to the source code.
However, even with source code access, nuances arise <cit.>; access to the documentation, architecture, reference manuals, or expert guidance might impact the reviewability of a large and complicated code base.
The same applies to granting access to the commit history versus a snapshot of source code.
In situations where providing source code access is not feasible, other forms of reviewability can also be achieved, for example by executing a binary within a confined environment and analysing its behaviour.
While the highest level of reviewability should always be the primary goal, practical constraints may call for a best-effort approach.
* Certifiability:
It should be possible to generate and publish a digitally signed statement for a reviewed component.
The ability to provide a validation of a review is crucial for maintaining the integrity and transparency of the review process, enhancing the quality of reviews, aiding in compliance with legal and regulatory requirements, providing evidence in disputes, etc.
Certifiability can take different forms.
For instance, one can produce a signed statement for some source code that has been reviewed.
On the other hand, when dealing with a machine learning model that is pre-trained but undergoes frequent fine-tuning for different users, certifying the model's architecture rather than its weights might be a suitable option.
* Attestability:
While transparency is necessary for addressing the limitations of attestation in providing security assurance, attestability itself serves as a prerequisite for transparency.
This paradoxical need stems from the fact that attestation provides authenticity and integrity verification (<ref>).
If a component lacks attestability, users cannot confirm whether a specific instance corresponds to a reviewed version of a component, and certifiers cannot be unequivocally held accountable for their assessments.
As a result, if the component is not attestable, making it transparent cannot enhance its security assurance in an externally verifiable way.
We call the sensitive components that meet all three criteria as transparency-enhanced sensitive components.
Hardware transparency
A notable category of sensitive components that may not fit the transparency-enhanced description is hardware.
While hardware designs can be open-sourced, like OpenTitan <cit.>, reviewed, and even formally verified <cit.>, there appears to be no scalable method for verifying that a given piece of hardware matches a particular design with the same technical guarantees that software offers.
Methods like supply chain auditing and monitoring can provide some level of insight into hardware integrity.
Apple's Private Cloud Compute counters targeted hardware attacks by using high-resolution imaging in the manufacturing chain, and auditing hardware at the data centers under third-party oversight <cit.>.
Another approach can be inspecting random hardware samples to verify that they correctly correspond to the design.
However, these methods fall short of the certainty provided by reproducible software builds.
As a result, reviewability and certifiability of hardware components present significant challenges.
The same applies to attestability, as current attestation protocols mainly focus on software elements like firmware and drivers, offering little or no assurance about hardware.
These challenges are so fundamental that they hinder the user's ability to assess whether the hardware can support Confidential Computing; let alone its correctness.
Given these complexities, in this paper we focus on the software components of Confidential Computing systems.
Nevertheless, if the above challenges are resolved, our transparency framework can be used for hardware as well.
Until then, the security properties gained by the transparency framework we propose rest on the correctness of the Confidential Computing hardware.
Non-sensitive components
Addressing the transparency of non-sensitive components, which do not jeopardise the confidentiality or integrity of user data if compromised, requires establishing a well-defined trust boundary.
This trust boundary essentially assumes the role of a sensitive component, safeguarding the integrity and confidentiality of data.
Transparency can then be used to externally validate that this trust boundary prevents the non-sensitive components from accessing sensitive memory regions or otherwise compromising confidentiality and integrity of data.
Assurance in this context can be achieved through various means, such as hardware isolation or memory encryption.
If the trust boundary is transparency-enhanced, then non-sensitive components do not need to be included in the transparency discussions.
Otherwise, non-sensitive components should also be treated as sensitive and be included in the transparency efforts.
§.§ Agents of transparency
A Confidential Computing system achieves transparency when its users can directly review its sensitive components, or delegate this responsibility to others.
The reviewing entity must remain accountable and directly answerable to the users for their evaluations and decisions.
This accountability is achieved by issuing review certificates, which uniquely identify the reviewed component and bear the signature of the reviewer.
Therefore, we refer to the accountable reviewers who serve as the agents of transparency as the certifiers.
Traits: Each certifier has traits relating to their methodology and motivation.
In terms of methodology, certifiers can be either reporting or alerting.
A certifier's methodology affects the content of the certificate they issue.
Reporting certifiers adopt a methodology that involves uncovering both the strengths and shortcomings of the component under review.
They issue certificates that are similar to security assessment reports, which include the scope and findings of the review.
The number and content of reporting certificates, as well as the credibility of the certifiers all play into signalling trust about a component.
Users may choose their own trusted reviewers, similar to the open-source Rust code review system <cit.>.
Alternatively, rather than relying on a fixed set of trusted parties users might form a web of trust, in the same way as <cit.>, another Rust review system.
Also similar to <cit.>, users can define policies to ensure that their trusted reviewers assess the code according to specific criteria.
To provide guidance to reporting certifiers or achieve some coherence in structure among different reporting certifiers, the code owner may provide a signed public review plan as well.
Alerting certifiers have a methodology focused on finding bugs and vulnerabilities in the component under review.
They issue certificates detailing the discovered bug or vulnerability, including information such as the vulnerability type, root cause, impact description, and public references–similar to CVE records <cit.>.
These certificates are useful for urging the code owner to fix the issues and holding them accountable if they fail to do so.
Once the bug or vulnerability is fixed, an alerting certifier must assess the fix and issue a follow-up certificate that includes their opinion on the solution.
The number and content of alerting certificates, credibility of the certifiers, the promptness and effectiveness of the code owner's response to these certificates may serve as trust signals.
In terms of motivation, certifiers can either be independent, or they may be affiliated to the code owner.
Independent certifiers do not have vested interests in the code owner's outcomes and are not responsible to them.
They may have a greater commitment to ensuring the component's quality and security.
However, they may also be driven by self-interest, such as an independent researcher aiming to publish a paper about a vulnerability in the reviewed component, an open-source contributor who wants to gain experience, or individuals affiliated with competing organisations who aim to highlight weaknesses in the code owner’s product.
Similarly, a bug bounty hunter, motivated by financial rewards, seeks to find bugs without bearing any responsibilities toward or having vested interests in the code owner.
Affiliated certifiers share a formal or informal association with the code owner.
This affiliation may arise from contractual agreements, employment, partnerships, or having financial ties with the code owner.
While the affiliation introduces potential impact on their assessments, they are incentivised to maintain high standards due to accountability for their certificates and the risk to their reputation.
A structured reputation system tracking review quality, consistency, and peer feedback can further encourage thorough and truthful assessments.
Categories:
We categorise certifiers into three groups based on their access methods to the reviewed components, namely first-party, third-party, and community certifiers.
As Figure <ref> shows, the categories of reviewers are orthogonal to their traits.
First-party certifiers are internal to the entity responsible for producing the reviewed component, which we refer as the code owner.
These certifiers are usually either included in the component's development or they oversee it.
They should have the authority to directly make or otherwise invoke changes to the reviewed component if they notice a problem with it.
All first-party certifiers are affiliated, and most likely reporting in methodology.
However, there are also examples of alerting first-party certifiers.
For example, first-party security research teams like Google Project Zero <cit.>, Microsoft Security Response Center (MSRC) <cit.>, and IBM X-Force <cit.> search for and publicise vulnerabilities in their own closed-source <cit.> or open-source <cit.> software.
Third-party certifiers are granted exclusive access to the components for review.
The ideal third-party certifiers are independent experts.
This may include regulators, and to some degree, customer companies.
For instance, a cloud computing platform purchasing hardware might seek to review the firmware and drivers.
Similarly, a company outsourcing software development might want to review the components to ensure the quality of the work.
Customer companies might not be fully independent due to intricate business dynamics, such as financial investments, loyalty due to long-term partnerships, concerns about the performance impact on purchased components, etc.
Third-party certifiers may also be affiliated, such as auditors or consultants paid by the code owner.
It is worth noting that third-party certifiers might be obligated to create their certificates in such a way that does not expose any confidential information about the reviewed component.
Community certifiers have access to the code through open-sourcing.
These certifiers are individuals or groups of volunteers from the broader community, often with expertise to review the relevant components.
This includes academics, independent researchers, bug bounty hunters, and external adopters who want to use the open-source code in their own projects.
Notable community certification programmes are <cit.> and <cit.>, collaborative code review projects for verifying security and reliability of Rust dependencies.
Professional research teams such as Google Project Zero <cit.>, MSRC <cit.>, and IBM X-Force <cit.> also act as alerting community certifiers when reviewing code produced by other code owners <cit.>.
§.§ Beneficiaries of transparency
We identify four types of users who may benefit from transparency:
End users have their data processed within a Confidential Computing system, whether on a remote resource, such as a cloud infrastructure, or their own devices.
These users' primary concern revolves around ensuring the privacy and integrity of their data.
Application developers execute code within Confidential Computing systems, where their priority may be protecting the confidentiality of their code, their customers' data, or both.
This category of users includes the application developers using cloud-based services such as Google Cloud Confidential Computing <cit.>, Microsoft Azure Confidential Computing <cit.>, and AWS Nitro <cit.>.
It also includes users who leverage hardware of end-devices to execute their code, be it an operating system, a hypervisor, or other lower-level components.
Application providers execute third-party code, i.e. code they have not authored, within Confidential Computing systems.
Their primary concern is ensuring the security and trustworthiness of the code they run.
Platform providers include cloud providers, software-as-a-service providers, CPU manufacturers as platform providers for executing code in enclaves, Android with Private Compute Core <cit.> or Android Virtualisation Framework (AVF) <cit.>, and other entities offering infrastructure or services for Confidential Computing.
Their main objective is to establish and maintain a trustworthy Confidential Computing system for end users and application developers, ensuring the integrity and security of the systems and services they provide.
§ CONFIDENTIAL COMPUTING TRANSPARENCY
Following the reasoning and discussions we provide in <ref>, we define Confidential Computing Transparency as follows.
Transparency in Confidential Computing is the practice of allowing certifiers (as categorised in <ref>) to examine the security properties of the sensitive, reviewable, certifiable, and attestable (as defined in <ref>) components of a Confidential Computing system.
These certifiers should be able to certify their comprehensive assessments in a manner that is publicly verifiable.
In this section, we present a framework to answer the question of how this transparency can be achieved in Confidential Computing.
Our framework acknowledges the diverse transparency needs, limitations, and risk profiles of various system components.
For instance, some transparency-enhanced sensitive components might not allow open-sourcing or third-party certification due to IP restrictions or proprietary information.
This complexity is more pronounced in real-world systems involving multiple stakeholders and components from different vendors.
For example, even a setup that consists of a single TEE can include microcontroller units from different vendors, each with dedicated firmware and drivers.
To address complex requirements and challenges, our framework adopts a layered approach, defining three transparency levels, each incorporating a certifier category described in <ref> and a set of trust-building blocks.
As shown in Figure <ref>, each level builds on the previous one by adding more trust-building blocks and integrating an additional certifier category.
Based on needs and limitations, system designers can apply different transparency levels to different components, though we strongly advise aiming for the highest transparency level achievable.
§.§ Level 1 (L1)
Requirements
The foundational level of transparency is achieved through engaging first-party certifiers as accountable reviewers.
This level relies on the following trust-building blocks to ensure transparency:
Endorsement statement is a digitally signed statement by affiliated and reporting first-party certifiers, including a unique identifier of a binary (e.g. hash), as defined and used by Google's open-source project Oak <cit.>.
It can also include the time of issuance and the validity period.
By issuing an endorsement statement for a specific binary, the certifier confirms that the binary was generated from specific source code and endorses its use in production for the validity period.
The certifier also implicitly vouches for additional checks that are not explicitly captured in the endorsement statement, such as ensuring that the build and release pipelines work correctly.
After the validity period, the certifier may issue a new statement or passively revoke the existing one by taking no action (see <ref>).
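As an illustration, the following Python sketch builds and checks a hypothetical endorsement statement signed with an Ed25519 key (using the pyca/cryptography library); the field names, serialisation, and validity handling are our own assumptions rather than the format actually used by Oak.

import json, hashlib
from datetime import datetime, timedelta, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def endorse(binary, signing_key, validity_days=30):
    # Build and sign a hypothetical endorsement statement for a binary.
    issued = datetime.now(timezone.utc)
    statement = {
        "binary_sha256": hashlib.sha256(binary).hexdigest(),
        "issued": issued.isoformat(),
        "expires": (issued + timedelta(days=validity_days)).isoformat(),
    }
    payload = json.dumps(statement, sort_keys=True).encode()
    return statement, signing_key.sign(payload)

def check_endorsement(statement, signature, public_key, binary):
    # Client-side checks: signature, binary hash, and validity period.
    payload = json.dumps(statement, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)   # raises on a bad signature
    except Exception:
        return False
    now = datetime.now(timezone.utc)
    return (statement["binary_sha256"] == hashlib.sha256(binary).hexdigest()
            and datetime.fromisoformat(statement["issued"]) <= now
            and now <= datetime.fromisoformat(statement["expires"]))

# Usage sketch
key = Ed25519PrivateKey.generate()
stmt, sig = endorse(b"released binary bytes", key)
print(check_endorsement(stmt, sig, key.public_key(), b"released binary bytes"))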
Verifiable transparency log is defined as a publicly available append-only ledger maintained on a trusted infrastructure by an independent party, separate from the code owner or the certifiers <cit.>.
Transparency logs do not inherently guarantee truthful and accurate operations, so additional measures are required to ensure their integrity and consistency, as we discuss in <ref>.
Process
At this transparency level, the review and certification must happen prior to the binary's release.
There is one exception to this rule: if first-party researchers identify a vulnerability in the binary after its release, they may issue alerting certificates.
The first transparency level is achieved through the following steps.
* First-party certifiers review the security of the source code associated with the sensitive components, and approve it for release.
* The code owner builds a binary from the source code.
To ensure the integrity of the build process, it is essential that both the code owner and the first-party certifiers have established trust in the build toolchain and infrastructure used.
This trust is ideally built through direct review of the tools and processes.
If direct review is not feasible, it may be reasonable to implicitly trust open-source tools, or tools widely used across the company and maintained by a dedicated team.
The code owner can then release the binary, e.g. by distributing it to cloud servers, end devices, or publishing it to an app store.
*
First-party certifiers generate an endorsement statement that they sign with their private signing keys (separately or using a multisignature scheme <cit.>).
They publish this endorsement statement on a transparency log.
*
If first-party researchers discover a vulnerability after the binary's release, they generate a signed alerting certificate as described in <ref>, and publish it on the transparency log.
*
After receiving a binary, the user must verify that the transparency log contains a valid endorsement statement.
This involves: (I) confirming the endorsement's binary hash[If the binary runs on a server that the user accesses as a client (as opposed to running on the user's end-device), the user must obtain the attestation evidence of the binary to retrieve the correct hash value.] and validity period, and (II) verifying the endorsement's signature(s) using the certifiers' public keys, unless relying on third-party monitors (<ref>).
In Binary Transparency, this verification is typically done by an auditor, usually represented as client-side software <cit.>.
Inclusion proofs can be used for efficiently verifying that an endorsement statement is added to the transparency log (<ref>).
The user (or auditor) must also periodically check the log for any alerting certificates, which may prompt them to discontinue using the binary.
Discussion
At this transparency level (L1), beneficiaries cannot directly see the attested components, but generating and publishing endorsement statements holds the code owner accountable for their released binaries.
This ensures the code owner cannot disown a particular release.
Furthermore, this transparency level ensures that any insider or outsider attacks on the released binary do not go unnoticed, as new transparency log entries or their absence are detectable.
This achieves `Signature Transparency', a concept implemented by Sigstore's Rekor <cit.> and Sigsum <cit.>.
L1 also guards against covert targeted attacks, where an attacker serves a specific user a targeted malicious binary by allowing users to verify they are served the same binary as everyone else.
However, this system is not immune to more overt, coordinated attacks.
Even if the transparency log operations are monitored for correctness as described in <ref>, an adversary can manipulate the timing of population updates.
For instance, the adversary might delay visibility of transparency log updates for certain IPs or intentionally cause out-of-band checks for update availability to fail for specific IPs.
This way, the adversary can potentially create split-view release streams with `good' and `bad' binary versions.
Even so, this transparency level can alert vigilant observers to anomalous activities like unusual release patterns or duplicate version numbers, serving as a deterrent and early warning mechanism against targeted attacks.
As all first-party certifiers are affiliated (see <ref>), reviewers in this transparency level are limited to the first and second quadrants of the certifier diagram (Figure <ref>).
In other words, there is no variety of motivation among certifiers at this transparency level.
§.§ Level 2 (L2)
Requirements
The second transparency level is achieved by introducing third-party certifiers as accountable certifiers.
First-party certifiers also participate at this level, performing reviews and issuing certificates similarly to how they operate in L1.
This ensures that code owners remain accountable for the binaries they release.
In addition to the trust-building blocks used in the first transparency level, the second level introduces the following ones.
Reproducible builds refers to a collection of tools and techniques that are used to ensure that every build for the same source code consistently generates the same bit-exact binary output <cit.>.
Provenance statement includes all the necessary configuration details for building the source code into a specific binary, such as what toolchain, commands and flags to use <cit.>.
It is provided by the code owner to certifiers alongside the component under review.
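For concreteness, a provenance statement could look roughly like the following Python structure, loosely modelled on SLSA-style provenance; all field names and values here are hypothetical placeholders rather than a prescribed schema.

provenance = {
    "subject_sha256": "<hash of the built binary>",
    "source": {
        "repository": "https://example.org/vendor/firmware.git",  # hypothetical repo
        "commit": "a1b2c3d",
    },
    "build": {
        "toolchain": "clang-17",             # exact compiler/toolchain version
        "command": ["make", "release"],
        "flags": ["-O2"],
        "builder": "self-hosted CI runner",  # infrastructure that ran the build
    },
}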
Process
The build, release, and first-party review process of the binary happens as described in L1 (<ref>).
To attain L2 transparency, the following additional steps must be followed.
At this transparency level, the review and certification can happen before or after the binary's release.
* Third-party certifiers review the source code of the sensitive components.
The source code must be reproducibly buildable to allow each certifier to build it by themselves, and verify that the source code and the resulting binary is the same as the ones reviewed and built by the other certifiers (see <ref> for the alternative).
*
Each certifier uses the provenance statement to learn the build configurations for the component and independently compiles the source code into a binary on trusted infrastructure.
Certifiers must establish trust in the build toolchain instructed by the provenance statement, which can be achieved by applying transparency principles to the toolchain itself (see <ref>).
*
Each third-party certifier generates a signed certificate (either reporting or alerting as described in <ref>) for the self-built binary, and publishes it on a transparency log.
*
Once the binary is received on an end-device (or the attestation evidence if the binary runs on a server), the user should check if a valid endorsement statement for it exists on a transparency log, as in L1 Step <ref>.
Unlike endorsement statements, the existence and validity of certificates from third-party certifiers are not by themselves enough to provide trust to the beneficiaries; the content of the certificates also matters (<ref>).
Discussion
The second transparency level (L2) significantly improves trust by incorporating both affiliated and independent third-party reviewers, unlocking the third quadrant and expanding the diversity of reviewer motivations (Figure <ref>).
From a practical standpoint, incorporating third-party certifiers may introduce potential delays in the review process, particularly in dynamic production environments where code updates are frequent.
This creates a trust dilemma, especially in scenarios where a vulnerability is detected.
The code owner may need to decide whether to quickly implement a patch without waiting for certification, or follow the full review process and delay until certification is received.
To mitigate these time-related challenges, one potential solution might be implementing a transparent source control mechanism to allow certifiers to review only the code changes.
An alternative approach is issuing certificates after the binary is released.
Although this might mean users are not immediately assured that third-party certifiers have vetted the binary processing their data, this mechanism disincentivises the code owner from releasing malicious or subpar code.
To implement this approach, the code owner should include an explicit promise in the endorsement statement at Step <ref> that there will be a third-party audit certificate to follow.
This promise may also specify a timeline for certification, e.g. .
Having a signed explicit promise like this eliminates ambiguity and allows for more automated verification of the promise at Step <ref>.
Making the source code reproducibly buildable can be challenging, or the reviewers might find it impractical to build the binary themselves.
An alternative approach is using a trusted builder (see <ref>).
However, this option comes with certain caveats, such as the need for the certifiers and beneficiaries of transparency to trust the builder.
§.§ Level 3 (L3)
Requirements
The third and highest level of transparency employs community certifiers as accountable reviewers.
As in L2, community certifiers do not replace the first-party and third-party certifiers from the previous levels.
L3 must include first-party certifiers, and ideally, it should also incorporate third-party certifiers.
In addition to the trust-building blocks used by the other levels, this transparency level introduces a new one:
Open sourcing in this context refers to making the source code of a component publicly accessible on a platform, which the user does not need to trust.
Process
In this level, each community certifier follows the same process described in L2 (<ref>).
Discussion
This level of transparency significantly improves openness and accountability, driven by principles of public accessibility and community involvement.
It allows anyone with the necessary skills and time to review the source code.
It is important to recognise that open-sourcing, while a powerful tool for transparency, does not guarantee that all security issues will be detected immediately, as evidenced by past vulnerabilities in large-scale open-source projects with extensive communities including Heartbleed <cit.>, POODLE <cit.>, Log4Shell <cit.>, and Shellshock <cit.>.
Similar to L2 (<ref>), this transparency level may not always guarantee real-time verification.
Nonetheless, open-sourcing remains a crucial step towards minimising the need for user trust.
Moreover, unlike traditional open-sourcing, this transparency level requires reviewers to certify their assessments, serving as an additional trust signal (<ref>).
Availability of source code substantially improves the ability of independent alerting certifiers to analyze code for defects (<ref>), fully unlocking the last and fourth quadrant of the certifier diagram (Figure <ref>).
To incentivise such certifiers, the code owner may set up bug bounty programmes.
The code owner may also set up coordinated vulnerability disclosure mechanisms to allow the alerting reviewers to report issues to the code owner and giving it some time before issuing a public certificate.
In this case, if the issue is fixed before it is disclosed to the public, the certifiers may still issue their certificate after a patch has been released, similar to how first-party security research teams file CVE records after patch releases.
§.§ Revocation
Revocation is an essential part of the transparency framework, as the majority of software is ultimately revoked.
There are two main reasons to revoke an endorsement statement or a binary:
* The code owner or a certifier finds a vulnerability in the endorsed binary.
* A monitor, witness (<ref>), or auditor (<ref>) identifies an anomaly in the transparency log or endorsement statements.
In both of these cases, there are decisions to make about who will make the revocation decision and how will the revocation be carried out.
When the reason <ref> applies, and the code owner or a certifier discovers a vulnerability, the code owner can revoke the endorsement statement.
One way to achieve this is through passive revocation by issuing short-lived endorsement statements and simply not issuing a new endorsement statement for the affected binary. This allows users to detect the issue and stop using the binary.
Alternatively, code owner can actively revoke a statement by publishing its unique identifier on a publicly accessible certificate revocation list (CRL) <cit.>.
For both revocation reasons <ref> and <ref> above, users can independently choose to stop using a binary based on information they receive from auditors, monitors, or witnesses.
One way to implement this is through a policy in the client-side auditor software <cit.>, similar to the policies in Rust <cit.>.
This policy can alert the client about inconsistencies on a transparency log or the existence of alerting certificates about a binary.
Alternatively, monitors or witnesses can collectively issue global revocation statements, which they can then publish on CRLs.
§ ADDITIONAL CONSIDERATIONS
In this section, we describe optimisations and other modifications that can be applied to our transparency framework to accommodate different needs and constraints.
§.§ Monitoring transparency logs
Publicly available transparency logs do not inherently ensure truthful and accurate operations.
Monitors can watch logs for correct behavior, such as ensuring the log is append-only, hashes are valid, and the log does not present split views to different observers <cit.>.
To avoid split views, monitors can share their current and previous views of transparency logs via gossiping <cit.>, gaining a comprehensive understanding of the log's state and preventing the propagation of inaccuracies.
Approaches like collective signatures from witnesses have been proposed to mitigate potential attacks on gossiping <cit.>.
Even if the transparency log behaves correctly, the mere presence of an endorsement statement in a log does not confirm its validity; it only means that the information is discoverable.
Without validating the endorsement statement, the trust relies on the expectation that someone will eventually detect any inaccuracies and raise an alarm.
So far, we have described this validity check as a client-side responsibility (see L1 Step <ref> and L2/L3 Step <ref>), which may be impractical due to resource constraints, technical expertise, or the large volume of data in transparency logs.
Additionally, clients would need certifier public keys, which can be challenging to manage.
Beneficiaries of transparency can instead delegate this validation responsibility to monitors, who can verify the correctness of statements and certificate signatures within the transparency log <cit.>.
Ideally, the code owner should offer an open-source monitoring mechanism, enabling individuals to scrutinise its source code and deploy them on their trusted machines.
The trustworthiness of the monitor code itself can be increased by using our transparency framework.
§.§ Inclusion proofs
When a user receives a binary or attestation evidence for a binary, they should verify the presence of a corresponding valid endorsement statement in a transparency log (see L1 Step <ref> and L2/L3 Step <ref>).
This can be efficiently done using inclusion proofs, similar to how they are used in Certificate Transparency to verify the inclusion of a certificate on a transparency log <cit.>.
To be able to have inclusion proofs, the transparency log is constructed as a Merkle tree <cit.>, where the tree head is signed by the log operator <cit.>.
An inclusion proof contains the shortest list of hashes needed to compute the tree head given a leaf node.
With an inclusion proof and the hash of an endorsement statement (as the leaf node), if the client can compute the tree head hash and also verify the signature of the tree head, this confirms that the endorsement statement has indeed been included in the transparency log (conditional upon the client having the correct public key to verify the signature).
To keep inclusion proofs scalable, the log operator periodically issues a signed commitment to the state of the log, sometimes called a checkpoint, which can be used instead of the original tree head <cit.>.
This approach reduces the verification complexity from linear to logarithmic in the number of certificates.
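A minimal Python sketch of the client-side computation is shown below. It assumes RFC 6962-style hashing (a 0x00 prefix for leaves and 0x01 for interior nodes) and, for readability, a proof that carries an explicit left/right flag with each sibling hash; real logs instead derive the sides from the leaf index and tree size, and the recomputed root must additionally be checked against a signed tree head or checkpoint.

import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def leaf_hash(entry):
    return _h(b"\x00" + entry)                          # RFC 6962 leaf prefix

def verify_inclusion(entry, proof, expected_root):
    # proof: list of (sibling_hash, side) pairs from the leaf level up to the
    # root, where side is "left" if the sibling sits left of the running hash.
    running = leaf_hash(entry)
    for sibling, side in proof:
        if side == "left":
            running = _h(b"\x01" + sibling + running)   # RFC 6962 node prefix
        else:
            running = _h(b"\x01" + running + sibling)
    return running == expected_root

# Tiny two-leaf tree as a usage example.
a, b = leaf_hash(b"endorsement-1"), leaf_hash(b"endorsement-2")
root = _h(b"\x01" + a + b)
print(verify_inclusion(b"endorsement-2", [(a, "left")], root))   # True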
Certificate Transparency uses inclusion promises, signed commitments from log maintainers guaranteeing future entry addition, to address merge delays.
Unlike inclusion proofs, these promises require full trust in the log and additional monitoring <cit.>.
In Confidential Computing Transparency, where some latency at publish time is likely tolerable, we do not recommend inclusion promises to be used.
Alternative transparency log designs like Sunlight <cit.> can help reduce merge delays.
§.§ Trusted builders
In L2 and L3 (<ref> and <ref>), the certifiers are responsible for building the binary to ensure the binary they issue a certificate for is generated from the source code they have reviewed.
Alternatively, a trusted builder, a dedicated tool for building binaries, can take on this task.
Using a trusted builder presents an important tradeoff: the certifiers are relieved from the responsibility of building the binary themselves, but they need to trust the builder.
Despite this tradeoff, a trusted builder can be helpful especially in L3, where community reviewers may lack resources.
Moreover, using a trusted builder removes the requirement for the source code to be reproducibly buildable, which can be a difficult task for the code owner to achieve and maintain.
However, ideally, the source code should still be reproducibly buildable to allow independent verification.
A key difference with using a builder is that for each build, the trusted builder should generate a signed provenance statement and post it on a transparency log.
This statement is a verifiable and attributable claim that the binary was built honestly on a trusted, uncompromised platform, allowing the certifiers to gain insight into the build environment.
This is especially important since the code does not have to be reproducibly buildable, and the certifiers might not be able to build the binary themselves to compare it with the binary that the trusted builder produced.
To establish trust in a builder, users must first trust the build toolchain.
This can be achieved by using an open-source build toolchain and applying the transparency principles to it (also see <ref>).
Second, the toolchain must also operate on trusted infrastructure, ideally controlled by an actor unlikely to collude with the code owner.
For instance, Google's open-source Confidential Computing project Oak <cit.> has a trusted builder that uses the open-source SLSA build stack <cit.> and runs on GitHub, which is currently owned by Microsoft.
The level of assurance can be further increased by involving multiple independent builders.
For example, Oak runs a trusted builder instance on Google Cloud in addition to GitHub.
Another way of increasing the trustworthiness of a builder is to run the build toolchain’s TCB inside a TEE that is different from the one under scrutiny.
Using a hardened build platform with strong isolation and protection for the provenance signing key can also increase trust on the builder.
This idea is used in SLSA V1.0 as well <cit.>.
If a trusted builder is used in L2 or L3, the release process for the binary is revised as follows:
* The code owner initiates a build of the source code using a trusted builder.
* The trusted builder generates a signed provenance statement about the binary and the build process, publishes it on a transparency log, builds the code into a binary and returns it to the code owner.
If there are multiple trusted builders, then all builders do the same.
* The code owner verifies the signature of the provenance statement, and confirms that the information it contains about the build process is correct.
If there are multiple trusted builders, the code owner does these checks for each build.
If the binary is reproducibly buildable, the code owner also checks that all binaries are identical.
* If the checks about the build are successful, the code owner generates an endorsement statement about the binary.
If the code is not reproducibly buildable and there are multiple trusted builders, the code owner generates an endorsement statement for each unique binary.
These endorsement statements include a unique pointer to the provenance statement generated and published by the trusted builder in Step <ref>.
Following this build and release process, the third-party or community certifier in L2 or L3 review the source code.
If the source code is not reproducibly buildable, Step <ref> of L2 and L3 is replaced by the certifier verifying that a certain binary correctly corresponds to the reviewed source code.
To do so, the certifier gets the endorsement statement for that particular binary version from the transparency log, and checks that the signatures on both the endorsement and provenance statements are valid.
The certifier also checks that the endorsement statement includes the correct binary hash.
If the source code is reproducibly buildable, the certifier can additionally build it on its own trusted infrastructure, as in Step <ref> of L2 and L3.
§.§ Automated certifiers
A certifier can be replaced by automated processes, introducing a fourth certifier category, which we call automated certifiers.
An automated certifier can be constructed by building a transparent mechanism that generates formal proofs of security properties of the component.
The transparency of the generation mechanism gives assurance to the beneficiaries (<ref>) that the proofs are generated correctly.
This alternative certifier type can be used for achieving transparency in a similar way in L2 and L3, also using a trusted builder as we describe in <ref>.
The level of transparency provided depends on the assurance of the generated proof.
This alternative certifier type can both be seen as an optimisation, since automated proofs may work more efficiently than human reviewers, and as a privacy enhancement, as the code owner can potentially generate proofs that do not reveal proprietary information, e.g. by using zero-knowledge proofs <cit.>.
§ USER STUDY
We conducted a user study to examine how varying levels of transparency within the Confidential Computing Transparency framework affect end users' perceptions of trust and confidentiality.
Given that end users often view certain types of data as more sensitive than others <cit.>, we anticipated that these perceptions might influence their preferences for transparency.
Therefore, we included different data types as a factor in our study.
§.§ Research questions and method
We aim to answer two main research questions:
* How do different transparency levels of the Confidential Computing Transparency framework shape end users’ sense of trust?
* What types of personal data are end users willing to share at different transparency levels?
In the study, we first introduced participants to Confidential Computing through a script presented as both an animated and narrated video[<https://www.dropbox.com/scl/fi/9swys6bkypg5n16mgrjvm/transparency_short.mp4?rlkey=ws2o90lw3v5zr3jrhxqwycrvk st=zvytmn3x dl=0>] and text (Appendix <ref>).
The script described Confidential Computing as a system that protects user data from app developers, emphasising that its trustworthiness depends on secure design and implementation.
This was illustrated with a vault analogy, emphasising that trust depends on confidence that the keypad does not secretly record PINs or have hidden mechanisms to bypass security.
The script also introduces the concept of transparency through reviews and certification, noting that these certifiers can be held accountable if their assessments prove faulty.
Participants' comprehension of key concepts was assessed with three true or false questions.
In the core phase of the study, we presented the participants with a series of imaginary virtual assistant apps running on Confidential Computing systems.
Each app required access to a different type of personal data, which we categorised into (1) social-economic, (2) lifestyle-behaviour, (3) tracking, (4) financial, (5) authenticating, and (6) medical-health data, drawing from the work of Chua et al. <cit.>.
We selected virtual assistant apps, as they represent a single app type that may require access to any of the six chosen data types.
Each app was randomly assigned a specific transparency level L1, L2, or L3, or no transparency at all.
We carefully described each transparency level using neutral language and labelled all certifier types as experts to avoid implying any hierarchy.
Exact descriptions are provided in Appendix <ref>.
While our framework incorporates first-party certifiers across all transparency levels, we omitted this detail from the study to maintain clear distinctions between levels, avoid potential confusion, and not imply any hierarchy.
For each app, we also included an animated GIF illustrating the corresponding transparency level, taken from the informative video.
Participants rated their comfort level using each app, and their confidence in the app's ability to protect their data.
In the final phase of the study, participants rated their comfort level using a virtual assistant app reviewed under different transparency options (L1, L2, L3 and no transparency), regardless of data type.
They also evaluated their level of concern if their data were leaked or tampered with across various data types, without reference to transparency.
This question aimed to assess how privacy concerns are tied to perceptions of risk associated with specific data types.
We also collected information on participants' openness to adopting new technologies and their current habits of using virtual assistant apps to determine how general distrust or familiarity might influence privacy concerns.
We provide the full study script as well as a more detailed analysis and a breakdown of demographics in the Appendices <ref> and <ref>.
Additionally, we make the user study artefacts publicly available (see Appendix <ref>).
We ran the survey using Qualtrics and recruited participants via the Prolific academic platform <cit.>, as recommended by relevant research <cit.>.
We obtained ethics approval from the University of Cambridge review board (see Appendix <ref> for additional information).
We recruited 400 participants in small batches between 1 August 2024 and 20 August 2024.
The survey took on average ∼12 minutes, the participants were compensated at a fixed rate of £3 or ∼£15/h.
Participants that did not complete the study or were identified as bots were excluded and automatically replaced by Prolific.
Those who answered any of the comprehension questions incorrectly more than twice were removed from the study.
This filtered out random clickers, as the remaining participants demonstrated a clear understanding of at least one question.
Our final analysis after the removals was performed on 369 participants.
Each participant answered questions about the six data type / transparency level combinations of the app, resulting in 2,214 main observations.
§.§ Quantitative results
To address <ref>, we evaluate two key metrics: (I) participants' comfort levels in the core phase of the study, aggregated across all data types (see Figure <ref>), and (II) comfort with the virtual assistant app under given transparency options, irrespective of data type, which we gather from the last phase of the study (see Figure <ref>).
We find a similar distinct pattern in both cases, indicating a strong effect of transparency on comfort levels.
Participants reported the lowest comfort levels in the absence of transparency, a trend particularly pronounced when no specific data type was involved.
A significant increase in comfort was observed when any form of transparency was introduced.
Among the transparency levels, participants expressed the highest comfort with L2, followed by L3 and L1.
A broader variation in comfort levels was observed when participants were exposed to different data types, revealing a multimodal distribution, particularly for L1 and L3.
This variation is largely driven by considerable differences in comfort across various data and transparency combinations.
Conversely, when data types were not specified (see Figure <ref>), the variability in comfort levels decreased, with L2 and L3 yielding almost indistinguishable results.
We explore these trends further in <ref>.
We formally confirm the significant effect of transparency on participants' comfort through an ANOVA analysis, and we reject the null hypothesis that all transparency group means are equal (ANOVA, F = 159.992, p < 0.001, N = 2,214).
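For readers unfamiliar with the test, the one-way ANOVA reported here can be computed as in the following sketch; the ratings below are synthetic placeholders, not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical 5-point comfort ratings grouped by transparency level.
none_ = rng.integers(1, 6, size=100)
l1 = rng.integers(2, 6, size=100)
l2 = rng.integers(3, 6, size=100)
l3 = rng.integers(3, 6, size=100)
f_stat, p_value = stats.f_oneway(none_, l1, l2, l3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")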
We additionally examine the effect of data type on participants' comfort regardless of the transparency level.
Our findings on participants' baseline concern about different data types largely align with Chua et al. <cit.>.
Participants express highest concern (and lowest comfort with sharing) about their financial and authentication data, followed by tracking and medical data.
They are least concerned about, and most willing to share, socioeconomic and lifestyle-behavior data.
Upon establishing a baseline, we investigate interaction effects between transparency and data type to answer <ref> (see Figure <ref>).
While L2 and L3 seem similar on aggregate level, more nuanced patterns emerge across data types.
Specifically, participants show a preference for L3 in apps handling financial and tracking data; two categories that evoke higher levels of baseline concern.
In contrast, L2 is favoured for apps involving lifestyle, social, medical, and, to a lesser extent, authentication data.
Despite these observed trends, the interaction effects are not statistically significant (ANOVA, F = 0.801, p = 0.6, N = 2,214).
Additionally, the consistent gap between least and most preferred transparency options across data types suggests a uniform increase in comfort from the baseline.
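The interaction test can be sketched in the same way by fitting an ordinary least squares model with transparency, data type, and their interaction, and reporting a type-II ANOVA table. As above, the data frame is a synthetic placeholder and the column names are assumptions for illustration only.

# Sketch of the transparency x data-type interaction test (synthetic placeholder data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "transparency": ["none", "L1", "L2", "L3"] * 6,
    "data_type":    ["financial"] * 12 + ["lifestyle"] * 12,
    "comfort":      [20, 55, 75, 80, 25, 50, 72, 78,
                     15, 60, 70, 82, 40, 62, 85, 78,
                     35, 58, 88, 76, 42, 60, 84, 75],
})
model = smf.ols("comfort ~ C(transparency) * C(data_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction term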
Demographic analysis reveals several correlations in comfort levels for data sharing.
There are weak correlations with gender and age: women express slightly higher comfort levels than men, and younger participants tend to be more comfortable sharing data compared to older individuals.
There is also a positive correlation between participants' willingness to explore new technology and their willingness to share data.
Our analysis covers four main regions: the Americas (144 participants), Europe (143), the Middle East and Africa (66), and Asia and the Pacific (14).
While region does not significantly affect overall comfort levels, two patterns emerge.
First, European and American participants exhibit higher baseline concern for their data, particularly regarding lifestyle, social, tracking, and medical data.
Second, these participants also report lower comfort levels with no transparency and L1.
However, the comfort levels for L2 and L3 are largely consistent for all regions.
Education and work field do not yield statistically significant patterns, but some trends are notable.
High school or trade school graduates tend to prefer L2 transparency, while university graduates favour L3.
Field-related patterns reveal that economists (54 participants) are indifferent to transparency levels.
Participants in arts/humanities (34), social sciences (28), STEM (108), and trades (21) prefer increasing transparency, with comfort levels rising from L1 to L3.
Health workers (34) and those in `other' fields (71) uniquely favour L2 transparency, being the only groups whose comfort levels decrease from L2 to L3.
§.§ User perceptions
In addition to quantitative data, we aimed to gain a deeper understanding of participants' perceptions and beliefs about the transparency levels.
To achieve this, we included an open-ended question asking participants to elaborate on their rationale behind their choices, allowing them to discuss any or all of the transparency levels freely.
We manually analysed the answers to the open-ended question by first removing non-informative answers that do not address the question (e.g.“I was honest”, “Personal experience”) or do not provide reasoning relating to any specific transparency level (e.g. “All seem very safe”, “I would rather trust my data to machines than to people”).
We started with 369 answers; after removing 185 empty or non-informative answers, we manually annotated the remaining 184 answers to extract the main themes.
We used an iterative inductive approach, allowing themes to emerge from the data until saturation was achieved and no new themes were identified.
Finally, two annotators independently classified the participants' answers into the theme classes.
The Cohen's κ agreement score was 0.82 (almost perfect agreement), with any disagreements later discussed and resolved.
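For completeness, the agreement score can be computed as in the short sketch below, which assumes the two annotators' theme labels are available as parallel lists; the label values shown are placeholders, not our annotation data.

# Sketch: Cohen's kappa between two annotators' theme labels (placeholder labels).
from sklearn.metrics import cohen_kappa_score

annotator_a = ["objectivity", "expertise", "diversity", "objectivity"]
annotator_b = ["objectivity", "expertise", "objectivity", "objectivity"]
print(f"Cohen's kappa = {cohen_kappa_score(annotator_a, annotator_b):.2f}")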
Participant responses supported the quantitative results, expressing a strong dislike for no transparency (49 answers), believing that unreviewed systems can have vulnerabilities or that the app developer might hide something.
No answers defended unreviewed systems; however, several responses noted that reviewing is imperfect because humans make mistakes.
While the consensus is clear for no transparency, the same cannot be said for L1, L2 and L3.
The main theme emerging from the data is that of objectivity and impartiality (90 answers).
Specifically, participants believe first-party certifiers cannot be objective and trusted due to conflict of interest, possible pressure from their employer, self-interest and bias.
External certifiers, on the other hand, are considered more neutral and trustworthy because they are not perceived as being influenced by pressure from the app developer.
Many participants express such views for both community and third-party certifiers (70); however, a smaller portion believes that only community certifiers can be truly independent (20).
Participants expressed a range of concerns, sometimes conflicting, in addition to their focus on objectivity.
Some participants believe first-party certifiers have a better understanding of the app, giving them an advantage in the reviewing process (4 answers).
On the other hand, others feel that third-party certifiers are more precise, and are true experts in the field specialised in reviewing (9).
Similarly, some participants exclusively addressed third-party certifiers as experts, or perceived them to have better and more exclusive access to the sensitive components, especially when compared to community certifiers (16).
In contrast, some participants are sceptical about third-party certifiers, believing they might still try to satisfy or be pressured by the app developer (10), albeit in smaller numbers compared to first-party certifiers.
Regarding community certifiers, some participants recognised the value of more diverse perspectives, `out of the box' thinking and potential to offer unique solutions (10 answers).
Additionally, a number of participants valued the increased likelihood of spotting problems due to more reviewers (17).
In direct contrast, some participants felt uncomfortable with having the code available for anyone to see, as this might include malicious actors (16). This potentially explains the small dip in users' aggregate comfort for L3.
Finally, we identified a number of misconceptions among participants.
Although the introduction script (see <ref> and Appendix <ref>) and our comprehension questions stated otherwise, a significant number of participants still believed that certifiers have access to participants' personal data or are able to tamper with it (26).
§ RELATED WORK
The security of our HTTPS connections relies on domains sending their X.509 certificates to browsers, which then verify if the certificate is issued by an authorised Certificate Authority (CA).
This mechanism inherently trusts CAs, creating a risk where compromised CAs could issue false certificates, leading to man-in-the-middle attacks <cit.>.
To counter this, Google launched Certificate Transparency in 2013 <cit.>, which is currently mandatory in all major browsers <cit.>.
Certificate Transparency allows public verification through three steps: CAs submit data to a transparency log, receive a signed inclusion proof, and incorporate it into the final certificate <cit.>.
Monitors (<ref>) check these logs for anomalies and trigger revocation if needed.
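To make the inclusion-proof step concrete, the sketch below recomputes a Merkle tree root from a leaf and an audit path using the RFC 6962 hashing conventions (a 0x00 prefix for leaf hashes, 0x01 for interior nodes). It is a simplified illustration only: a real Certificate Transparency client derives the left/right ordering from the leaf index and tree size rather than taking it as input.

# Simplified sketch of RFC 6962-style inclusion-proof verification.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(leaf: bytes) -> bytes:
    return sha256(b"\x00" + leaf)          # RFC 6962 leaf prefix

def node_hash(left: bytes, right: bytes) -> bytes:
    return sha256(b"\x01" + left + right)  # RFC 6962 interior-node prefix

def verify_inclusion(leaf: bytes, audit_path, expected_root: bytes) -> bool:
    """audit_path: list of (side, sibling_hash); side is 'L' if the sibling is on the left."""
    h = leaf_hash(leaf)
    for side, sibling in audit_path:
        h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
    return h == expected_root

# Toy example with a two-leaf tree: root = node_hash(leaf_hash(a), leaf_hash(b)).
a, b = b"cert-entry-a", b"cert-entry-b"
root = node_hash(leaf_hash(a), leaf_hash(b))
print(verify_inclusion(a, [("R", leaf_hash(b))], root))  # True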
In end-to-end encrypted communication, a similar issue arises with key directory servers.
CONIKS <cit.>, inspired by Certificate Transparency, first introduced the concept of Key Transparency to ensure users receive the correct keys while protecting privacy.
Key Transparency has since evolved and found real-world applications including Google's open-source Key Transparency project <cit.>, along with implementations by WhatsApp and Apple <cit.>.
Binary Transparency <cit.> addresses similar concerns in software supply chains to ensure accountability for software originating from central repositories and to counter targeted attacks.
It is actively used in Google Pixel binaries <cit.> and Go modules <cit.>.
Sigstore <cit.> is a collaborative project automating logging, signing, and monitoring for Binary Transparency. Similarly, Contour <cit.>, MADT <cit.>, and CHAINIAC <cit.> focus on software integrity through these methods.
Supply-chain Levels for Software Artifacts (SLSA) <cit.> is a framework incorporating Binary Transparency concepts to counter software supply chain threats.
It uses a tiered structure like ours, and can complement our approach to enhance the resilience of Confidential Computing systems.
Our L2 and L3 align with SLSA v0.1 L4, emphasising reproducible builds and a two-person review requirement through affiliated reporting first-party certifiers (Figure <ref>, quadrant II).
Implementing our framework's L2 and L3 with hardened trusted builders (<ref>) achieves SLSA v1.0 L3.
Therefore, with specific implementation choices, our transparency framework can provide high levels of supply-chain security.
§ CONCLUSIONS
This paper introduces the Confidential Computing Transparency framework bridging the gap between the technical security offered by Trusted Execution Environments (TEEs) and user trust (<ref>).
Our framework offers progressive transparency levels involving first- and third-party, as well as open-source reviewers.
It extends beyond existing industry practices by adding reviewer accountability and a robust trust chain with verifiable transparency logs, signed statements, and reproducible builds.
The tiered approach offers a practical solution for implementing trustworthy Confidential Computing systems across diverse scenarios.
We also address key questions about transparency in Confidential Computing, including its definition, importance, scope, beneficiaries, and facilitators (<ref>).
Our user study highlights the critical role of transparency in building user trust (<ref>), with participants favouring higher transparency over unreviewed systems.
However, misconceptions about open sourcing underline the need for clear communication and education.
While focused on Confidential Computing due to the unique attestability of TEEs (<ref>), our transparency framework applies to various Binary Transparency scenarios.
For binaries in remote TEEs, attestation ensures integrity, while in controlled environments users can directly validate binaries via signature checks and transparency log inclusion proofs.
Our framework, while described through software, is also applicable to hardware within our transparency scope (<ref>).
Our levelled transparency framework allows system designers to tailor transparency based on needs and limitations.
For example, an open-source SPDM <cit.> driver may easily align with L3, while an IP-sensitive compiler might be more suitable for L2.
We recommend the highest feasible transparency level, while minimising the transparency surface by reducing sensitive components (<ref>).
§ APPENDIX
§.§ Ethical considerations
Ethics approval was obtained from the University of Cambridge review board prior to research. Given that our study involved human participants, several important ethical considerations were addressed:
* Informed consent: We obtained informed consent from all users, with the option to withdraw from the study at any point or not answer any or all questions. We also provided direct contact with the researchers for any questions or concerns.
* Confidentiality and anonymity: The data was anonymised and as such no single individual can be identified. We collected minimal demographics information, with all data analysed on an aggregate level.
* Data Security: All data was stored in electronic form on encrypted disks or a secure server. Only researchers involved in the project have direct access to raw data. Any publicly released data is anonymised and aggregated.
* Compensation: The participants were compensated for their time at the rate slightly greater than the UK living wage per hour. The payment was made directly to the participant using the Prolific platform.
* Risks to participants or researchers: We did not foresee any risks or harms to either participants or the researchers involved in the study, or the greater public.
§.§ User study script
§.§ Welcome screen
Thank you for participating in this study by the University of Cambridge. Through this study we hope to learn more about people's preferences surrounding data sharing in different transparency scenarios. You will receive a payment of £3 for participating. The study is expected to take approximately 10 minutes.
Your participation is confidential and your identity will not be stored with your data. You will not be asked to provide any personally identifying information. Your answers will be used only for research purposes and in ways that will not reveal who you are. The results will be reported in aggregate form only, and cannot be identified individually. Any information that could identify you will be removed before data is shared with other researchers or results are made public. We will keep the raw data for the duration of the study and delete it by April 2025. For reproducibility, we may release a public appendix containing some aggregate and processed data.
You must be at least 18 years old to participate. By participating in this study, you consent to the data being used for this purpose. Your participation in this research is entirely voluntary and you have the right to withdraw consent at any time by closing your browser. For any questions during or after the study, please contact Ceren Kocaogullar on [email protected] and Tina Marjanov on [email protected].
§.§ Consent
If you do not wish to participate in this study, please return your submission on Prolific by selecting the `Stop without Completing'. Are you happy to continue? (Note: Participant chooses between “Continue with the study” and “Stop without completing”.)
§.§ Background information
Please watch the following informational video.
If you cannot watch it, click on the button below to see the text.
You can proceed after the video finishes playing.
§.§.§ Instructional script (written and video)
In this study, you will be presented with a number of imaginary apps.
These apps run on a Confidential Computing system that aims to protect your data in such a way that even the app developer should not be able to see your data.
However, this Confidential Computing system can only be trusted if it has been designed and built securely.
Imagine this system like a safe where your data is stored.
The safe has a keypad, where you can enter your PIN to lock and unlock it.
In order to trust the safe with protecting your data, you should trust that the keypad does not secretly record your PIN or have a special mechanism that opens the safe.
Sometimes, the Confidential Computing system can be checked by reviewers who certify that the product is safe to use.
If their certification turns out to be faulty the reviewers can be held accountable.
The reviewers can be:
* experts working for the app developer company
* experts who are granted exclusive access to the code for reviewing it. They may include consultants and auditors hired by the developer company, as well as regulatory authorities.
* experts from the broader software engineering community. The system is made publicly available for reviewing by academics, independent researchers, individuals who get rewards for finding bugs, or anyone who is interested in reviewing the code.
Or the system may not be reviewed at all.
Remember: reviewers only look at the Confidential Computing system and do not see users’ personal data.
§.§ Comprehension
Please answer the following questions. (Note: The answers are multiple choice between True / False)
* A Confidential Computing system claiming to be secure is always secure, regardless of how it was built.
* Reviewers who certify the system's safety can be held accountable if their certification is faulty.
* Reviewers get access to the full system, including users' personal data.
§.§ Instructions
You will now be presented with a number of different multi-purpose virtual assistant apps that help you with your everyday tasks.
To operate efficiently, each app requires access to different types of your data.
Each app runs on the previously described Confidential Computing system.
The apps have been developed by a startup and are just about to be released.
For each of the scenarios, you will be asked about your comfort and confidence using the app under the specified conditions.
§.§ Core study
(Note: Six versions of the following screen are presented to the participants, varying the data types and transparency level descriptions as listed after the questions.)
Virtual assistant app requires permission from you to access your [data type]. This includes [examples of the data type].
In order to assess if the Confidential Computing system is designed and built securely, the system was [transparency level description].
(Note: Participants answer each question using a slider ranging from 0 to 100.)
* How comfortable would you feel using this app?
* How confident would you feel about this app’s ability to keep your data safe?
§.§ Additional questions
Note: Participants answer each question using a slider ranging from 0 to 100.
* How would you best describe your approach to purchasing or experimenting with new technology?
* How likely are you to use a multi-purpose virtual assistant app (e.g. Siri, Alexa, Google Assistant)?
* How concerned are you about other people getting access to your data without your consent?
* Imagine your data was leaked or tampered with. How worried would you be for each of the data types listed below?
* Regardless of the data type, how comfortable would you feel with the virtual assistant app being reviewed with each of the following options?
Note: The following is an open-ended question with a text box for the response.
* Please explain how you assigned the scores to the review options.
§.§ Demographics
Note: We collect the following information: age, gender, level of schooling, field of work/study, and country in which the participants lived the longest.
We omit the full demographics questions in the interest of space.
§.§ End of survey and payment
Thank you for taking part in this study.
Please click on the `Finish' button below to be redirected back to Prolific and register your submission.
(Note: Upon clicking the `Finish' button, the participants are redirected to Prolific platform and payment is made.)
§.§ User study demographics
F.P. Israel
Sterrewacht Leiden, P.O. Box 9513, 2300 RA Leiden, the Netherlands
Max-Planck-Institut für Radioastronomie,
Auf dem Hügel 69, 53121 Bonn, Germany
Aix Marseille Université, CNRS, LAM, Marseille, F-13388, France
This paper summarizes all presently available J_upp≥5
and accompanying measurements of galaxy centers
including new J=6-5 and observations of eleven
galaxies with the Atacama Pathfinder EXperiment (APEX) telescope and
also Herschel high-J measurements of both species in five galaxies.
The observed J=6-5/J=1-0 integrated temperature ratios
range from 0.10 to 0.45 in matching beams. Multi-aperture data
indicate that the emission of (6-5) is more centrally
concentrated than that of (6-5). The intensities of (6-5)
suggest a correlation with those of HCO^+ but not with those of
HCN. The new data are essential in refining and constraining the
parameters of the observed galaxy center molecular gas in a simple
two-phase model to approximate its complex multi-phase structure.
In all galaxies except the Seyfert galaxy NGC 1068, high-J emission
from the center is dominated by a dense (n∼10^5) and
relatively cool (20-60 K) high-pressure gas. In contrast, the
low-J lines are dominated by low-pressure gas of a moderate density
(n∼10^3) and more elevated temperature (60-150 K) in most
galaxies. The three exceptions with significant high-pressure gas
contributions to the low-J emission are all associated with active
central star formation.
Central molecular zones in galaxies:
^13CO(6-5) and molecular gas conditions in bright nearby galaxies
F.P. Israel1
R. Güsten2
A. Lundgren3
Received ????; accepted ????
===================================================================================================
§ INTRODUCTION
Late-type galaxies frequently contain conspicuous central
concentrations of molecular gas. These may play an important role in
galaxy evolution when they serve as the reservoirs that feed
super-massive black holes, circumnuclear star formation, and massive
gas outflows. Various ladder surveys, in particular those conducted
with the Herschel Space Observatory, unambiguously point to the
simultaneous presence of both low-pressure and high-pressure gas in
these reservoirs (Mashian 2015, Kamenetzky et al. 2016, 2017,
Lu 2017, Crocker et al. 2019), requiring a multi-phase
analysis.
Table <ref> illustrates how different molecular line
transitions, in principle, can be used to determine molecular gas
temperatures and densities. The values in that table were, however,
calculated ignoring radiative trapping and assuming optically thin
emission, whereas the , HCN, and HCO^+ transitions are
optically thick. Significant emission occurs well below the critical
density, at densities lower by one or two orders of magnitude
(cf. Shirley 2015). Nevertheless, the table provides useful upper
limits to the temperature and the density that can be deduced from a
transition-limited survey. For instance, J=2-1/J=1-0
intensity ratios distinguish kinetic temperatures increasingly poorly
above 20 K, and ladders up to J=4-3 fail to make meaningful
distinctions between temperatures above 100 K.
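A toy calculation illustrates the point: under LTE the fractional population of level J of a linear rotor is (2J+1) exp(-E_J/kT)/Z with E_J = B_0 J(J+1). The sketch below uses the CO rotational constant in temperature units (hB_0/k ≈ 2.77 K, centrifugal distortion neglected) and shows that the relative population of J=6 keeps rising steeply with temperature between 20 K and 150 K, whereas the J≤3 populations change much more slowly. This is an illustrative LTE calculation only, not the non-LTE modelling used later in this paper.

# Toy LTE illustration: fractional populations of CO rotational levels versus T_kin.
import numpy as np

B0_K = 2.77  # CO rotational constant in temperature units (h*B0/k), distortion neglected

def level_population(J, T, Jmax=60):
    """Fractional LTE population of level J at kinetic temperature T (K)."""
    Js = np.arange(Jmax + 1)
    E = B0_K * Js * (Js + 1)            # level energies in K
    g = 2 * Js + 1
    Z = np.sum(g * np.exp(-E / T))      # partition function
    return (2 * J + 1) * np.exp(-B0_K * J * (J + 1) / T) / Z

for T in (20, 60, 100, 150):
    n1, n3, n6 = (level_population(J, T) for J in (1, 3, 6))
    print(f"T = {T:3d} K: n(J=6)/n(J=1) = {n6/n1:.3f}, n(J=3)/n(J=1) = {n3/n1:.3f}")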
A complication is the degeneracy of optically thick ladders with
respect to the kinetic temperature, volume density and column density
(hence mass) of the gas. A striking illustration of their failure to
differentiate between even the very different environmental conditions
in NGC 6240 and Mrk 231 is provided by Meijerink
(2013). Likewise, Weiss (2007) found equally good fits to the
ladders of luminous galaxies but they could not resolve the
temperature-density ambiguity. Additional information preferably in
the form of optically thin emission from species such as is
required to alleviate the degeneracy (Bayet 2006, Israel 2020,
hereafter Paper I). Depending on the available data, two or three gas
phases can be modeled. For most purposes, a two-phase analysis
suffices as it can be made to fit most of the available observations
(e.g., the cases of M 82 and NGC 253 discussed by Loenen et al. 2010
and by Rosenberg 2014).
The presently available data on galaxy centers do not constrain the
relative amounts of low-pressure and high-pressure gas equally
well. In Paper I, we presented a systematical probe of the physical
condition of the molecular gas with ground-based surveys of both
and in transitions up to J_upper=4 and found densities
between 10^2 cm^-3 and ≥10^4 cm^-3 and temperatures
ranging from ∼30 K to ≥100 K. The elevated gas
temperatures, increased turbulence, and higher metallicities that
characterize galaxy centers cause a systematic overestimate of the
molecular hydrogen amounts by traditional methods. Instead, the
so-called X-factor relating CO intensities to H_2
column densities is an order of magnitude lower than the “standard”
value in galaxy centers, including the Milky Way center.
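As a purely numerical illustration of the scale involved (not part of the original analysis), the sketch below converts an integrated CO(1-0) intensity into an H_2 column density for the commonly quoted "standard" conversion factor of ∼2 × 10^20 cm^-2 (K km/s)^-1 and for a value an order of magnitude lower; the input intensity is an arbitrary example number.

# Toy illustration of how the assumed X-factor scales the inferred H2 column density.
X_STANDARD = 2.0e20            # "standard" disk value, cm^-2 (K km/s)^-1 (approximate)
X_CENTER = X_STANDARD / 10.0   # order-of-magnitude lower value suggested for galaxy centers

I_co = 100.0  # example integrated CO(1-0) intensity in K km/s (illustrative number)

for label, X in (("standard", X_STANDARD), ("center", X_CENTER)):
    N_H2 = X * I_co
    print(f"{label:8s}: N(H2) = {N_H2:.2e} cm^-2")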
These low-J transitions are particularly sensitive to molecular gas
of a modest density (cf. Table <ref>) and constrain the column
density and mass of the low-pressure gas relatively well, even though
the available line intensities usually do not fully constrain even a
two-phase model. In particular, they do not adequately sample the
temperatures and densities at the high-pressure end which are much
more sensitive to feedback from active-galaxy nuclei (AGN) and from
starburst activity. This requires additional surveys of the higher-J
transitions such as those provided by the Herschel Spectral and
Photometric Imaging Receiver (SPIRE) and Photoconductor Array Camera
and Spectrometer (PACS) that cover transitions J_upp≥ 4
in a large number of galaxies (e.g., Mashian 2015, Rosenberg
et al. 2015, Kamenetzky et al. 2016). Such observations were attempted
with the Herschel Heterodyne Instrument for the Far-Infrared (HIFI)
overlapping the few cases where SPIRE sensitivities did also allow the
determination of line fluxes.
High-frequency observations of extragalactic lines are also
feasible with ground-based equipment but only at the high-elevation
facilities in Hawaii and the Chilean Andes. High atmospheric opacities
prevent ground based observation of the J=4-3 and J=5-4
transitions. The line intensities, already low in almost all
galaxies, further decrease with increasing J level. This leaves the
J=6-5 line as the most practical choice to sample the
high-excitation gas in galaxy centers from the ground.
§ GALAXY SAMPLE
The sample considered here includes the few galaxies with Herschel
detections of in J_upp≥5 transitions. These concern
fluxes extracted from SPIRE spectra covering a great spectral range
with low spectral resolution and from targeted Heterodyne Instrument
for the Far-Infrared (HIFI) observations with much higher spectral
resolution resolving the line profiles. Although SPIRE detected many
galaxies in various transitions, the weak lines were
unambiguously detected only in the brightest galaxies on the celestial
sky mostly in guaranteed observing time.
The (6-5) line was readily observed from the ground, first in the
bright galaxies accessible from Hawaii, notably M 82, NGC 253, and
IC 342 (Harris 1991; Ward 2003, Seaquist 2006)
and in some red-shifted luminous galaxies from lower-elevation sites
(cf. Weiss 2007). Less luminous closer galaxies followed
(e.g., Bayet 2004, 2006), but the ground-based detection of the
(6-5) line in NGC 253 (Hailey-Dunsheath 2008) so far
stood alone. The development of sensitive high-frequency receivers for
use in the southern hemisphere provided the opportunity to change this
situation. Inspection of the J=6-5 data in the Herschel
archive yielded a limited number of galaxies bright enough to attempt
J=6-5 detection from the southern hemisphere without the
need for prohibitively long integration times. The sample selected for
new observations is listed in Table <ref>. It includes six
galaxies with starburst centers, three have AGN centers, one is the
Luminous InfraRed Galaxy merger (LIRG) NGC 6240, and one has a mixed
AGN-starburst center (NGC 1365). The two northern galaxies bright
enough to have literature mid-J intensities (M 82 and
IC 342) have been included for completeness sake.
§ OBSERVATIONS
The observations are part of two separate programs with the Atacama
Pathfinder EXperiment telescope (APEX; Güsten 2006) at the
Llano de Chajnantor high-elevation site in the Chilean Andes. The
first series of observations was carried out with the Carbon
Heterodyne Array of the MPIfR (CHAMP+) receiver in guaranteed
observing time between 2008 and 2012 (projects X-081.F-1002-2008,
E-085.B-0094B-2010, E-088.B-0075A.2011, and X-089.F-0007-2012). The
second series of observations was carried out with the Swedish-ESO PI
instrument for APEX (SEPIA) receiver in 2019 guaranteed observing time
and in 2021 regular European southern observatory (ESO) time (projects
E-0104.B-0034A-2019 and E-0108.C-0586A.2021). At the observing
frequencies of 661.067 GHz ((6-5)), 691.473 GHz ((6-5))
and 806.651/809.350 GHz ((7-6)/(2-1)) the APEX full-width
half maximum (FWHM) beam sizes are 9.3", 8.9" and 7.7" according
to the on-line data sheets. Calibration scans on Jupiter and Mars
yield efficiencies needed to transform antenna temperatures T_A
into main beam temperatures T_mb of effectively η_mb =
0.48, 0.52, and 0.48 with uncertainties of 0.02[
www.mpifr-bonn.mpg.de/4482182/champ_efficiencies_16-09-14.pdf,
www.apex-telescope.org/telescope/efficiency/]. The conversion
factor S/T_mb between main-beam brightness temperature T_mb and
flux density S is about 60 Jy/K.
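The calibration described above amounts to two simple scalings, illustrated in the short sketch below; the efficiency and Jy/K values are those quoted in this section, while the antenna temperature is an arbitrary example, not a measurement.

# Sketch of the calibration arithmetic: antenna temperature -> main-beam temperature -> flux density.
eta_mb = 0.52      # main-beam efficiency at 691 GHz (from the planet calibrations)
jy_per_k = 60.0    # approximate S/T_mb conversion at these frequencies

T_A_star = 0.10    # example antenna temperature in K (illustrative)
T_mb = T_A_star / eta_mb
S = T_mb * jy_per_k
print(f"T_mb = {T_mb:.3f} K, S = {S:.1f} Jy")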
§.§ CHAMP+ observations
CHAMP+ is a dual-band 2 × 7 element heterodyne array developed
by the Max Planck Institute für Radioastronomie (MPIfR) in Bonn (D),
the Stichting RuimteOnderzoek Nederland (SRON) in Groningen (NL), and
the Jet Propulsion Laboratory (JPL) in Pasadena (USA). It is a
principal investigator (PI) instrument operated for the APEX community
as a collaborative effort with MPIfR (Kasemann 2006, Güsten
2008). The array can be operated simultaneously in ALMA
(Atacama large millimeter/submillimeter array) bands 9 and 10, and we
used this property to obtain carbon J=2-1 (rest frequency 809
GHz) and J=7-6 (rest frequency 806 GHz) measurements
simultaneously with the Band 9 J=6-5 and line
measurements. Both sub-arrays have closely spaced pixels in a
hexagonal arrangement providing data sampling with half-beam spacing
in scanning mode. The backend is an autocorrelator array with a total
bandwidth of 32 GHz and 32768 spectral channels, subdivided into 32 IF
bands of 1 GHz and 1024 channels each. We used position-switching
with a throw of 900", well clear of the galaxy main
bodies. On-the-fly maps were obtained for all sources, mostly with
50" ×50" field-of-views. For the purposes of this paper, we
extracted single emission profiles by spatially binning all emission
within an area corresponding to the desired resolution. The Band 9
data were obtained with sky conditions varying from good (total system
temperature including sky T_sys = 840 K) to just acceptable
(T_sys = 1675 K). In Band 10, total system temperatures varied
from 2400 to 4000 K. The calibration is estimated to be accurate to
≤30%. This error is almost entirely governed by baseline
uncertainties. The emission from the observed galaxies typically
occupies about half of the 1200 window covered by the backend
and leaves limited room for accurate baseline definition. The baseline
errors are too large to derive reliable fluxes for the weak
emission.
§.§ SEPIA observations
SEPIA is a single-pixel heterodyne receiver with a cryostat
accommodating three ALMA-like receiver cartridges (Belitsky
2018), provided by the group for advanced receiver development (GARD)
at the Onsala space observatory (S). We used the SEPIA660 cartridge
(Baryshev 2015), which is a dual polarization 2SB receiver
installed and commissioned by the Groningen NOVA group (NL) during the
second half of 2018. The SEPIA660 receiver covers the window between
597 GHz and 725 GHz. It has two IF outputs per polarization, USB and
LSB, each covering 4-12 GHz, adding up a total of 32 GHz instantaneous
IF bandwidth. The central frequencies of the two side-bands are
separated by 16 GHz. The backend was an FFT spectrometer with a
spectral resolution of about 61 kHz (26 m/s) with 65536 channels per
every 4 GHz. For NGC 253 and NGC 4945 we obtained a five-point cross
on the central position; all other observations were single
pointings. The sky conditions mostly varied from very good (T_sys
= 500 - 700 K) to good (T_sys = 700 - 1100 K). Throughout the
observations, the baselines were quite stable and the (much) wider
velocity coverage allowed good baseline definition and subtraction.
§.§ Additional observations
In the following discussion, we also include the mid-J
observations of bright galaxies that already exist in the literature.
These concern spaceborne and ground-based detections of NGC 253
(Hailey-Dunsheath 2008, Pérez-Beaupuits 2018),
IC 342 (Rigopoulou 2013), NGC 3034 (M 82; Loenen 2010,
Panuzzo 2010, Kamenetzky 2012), NGC 5128 (Centaurus A;
Israel 2014) and NGC 4945 (Bellochi 2020).
§ RESULTS AND ANALYSIS
§.§ Extracted line intensities
The velocity-integrated and peak J=6-5 and line
intensities measured with the CHAMP+ and SEPIA660 receivers are listed
in Table <ref>. The table also lists the J=7-6 and
J=2-1 fluxes measured with the CHAMP+ receiver in parallel
with the J=6-5 observations. To facilitate comparison, we have
binned these higher frequency data to the somewhat lower spatial
resolution of the J=6-5 measurements. The errors listed are
stochastic errors; they do not include systematic errors due to
calibration etc. For NGC 1365 and NGC 1808 intensities were
available from both the CHAMP+ and the SEPIA660 receiver on APEX.
Comparison shows a good agreement between the results obtained with
the different receivers on different occasions. In spite of the
different observing techniques, there is likewise good agreement, for
the three galaxies observed with Herschel/SPIRE that were also
mapped with APEX/CHAMP+. In comparable apertures, the CHAMP+ fluxes
are 20 per cent higher for both NGC 1068 and NGC 1097 and 20 per cent
lower for NGC 1365. The galaxies NGC 660 and NGC 1808 were not
observed with SPIRE.
Five of the sample galaxies are relatively close to our Galaxy with
distances D=3.4-4.5 Mpc (Table <ref>). At microwave
frequencies, they are among the brightest galaxies in the sky. Three
of them (NGC 253, M 82, and NGC 4945) accordingly have (6-5)
intensities an order of magnitude higher than the other galaxies in the
survey, including those at the same distance (IC 342 and M 83). The
remaining eight galaxies have intensities similar to the latter
although they are at three to five times greater distance, thirty
times greater in the case of the ultra-luminous galaxy NGC 6240. A
similar pattern applies to the considerably weaker (6-5)
fluxes that required integration times up to several hours.
In ten of the thirteen galaxies of the sample, (6-5) intensities
were measured at different spatial resolutions. They either have
multiple pointed observations in the different SEPIA660, HIFI and
SPIRE apertures, or they were mapped, for instance with the CHAMP+
receiver, which allowed extraction of intensities binned to various
resolutions θ. The results for individual galaxies are shown in
Fig. <ref> together with fits to the data of the form
log(T_mbdv) = a log(θ) + b. In this formulation, a=0
corresponds to an extended source of constant surface brightness and
a=-2 corresponds to a point source. Intensities at the various
resolutions in the narrow range -0.75≥ a≥-1.2 (9" to 43")
are listed in Table <ref> together with the fit
coefficients a (slope) and b (intercept). The nearly edge-on
galaxy NGC 3034 (M 82) has a flat slope a=-0.56 caused by a relative
lack of (6-5) emission from the center (see Seaquist
2006, Loenen 2010) but the remainder has an average slope
a=-0.92. A similar inverse proportionality between
intensity and aperture in the central regions of galaxies was also
found, with larger dispersion, for low-J emission from the
molecule (Paper I) as well the HCN and HCO^+ molecules that trace
molecular gas with critical densities similar to those of the
(6-5) transition (Israel 2023, hereafter Paper II). Such behavior
is characteristic for the centrally peaked emission from galaxies
illustrated in Figure 3 of Paper I. No dependence on distance is
apparent. The average a=-0.9 of the nearby galaxies NGC 253,
NGC 3034, NGC 4945, and NGC 5236 only samples the central 0.7-0.9
kpc. The intensities of the more distant galaxies (except NGC 6240)
sample the significantly larger inner regions with diameters of
2.6-4.5 kpc, yet have the same average a=-0.9.
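The aperture dependence is a power law in log-log space, and the same relation is used further on to scale intensities between apertures. The sketch below fits the slope and intercept for one galaxy and then applies the average slope a = -0.9; the aperture-intensity pairs are placeholders, not values from Table <ref>.

# Sketch: fit log(integrated intensity) = a*log(aperture) + b and scale between apertures.
import numpy as np

theta = np.array([9.0, 22.0, 33.0, 43.0])            # aperture FWHM in arcsec (placeholders)
intensity = np.array([480.0, 210.0, 150.0, 115.0])   # K km/s (placeholders)

a, b = np.polyfit(np.log10(theta), np.log10(intensity), 1)
print(f"slope a = {a:.2f}, intercept b = {b:.2f}")

def extrapolate(I_obs, theta_obs, theta_target, slope=-0.9):
    """Scale an intensity between apertures assuming I ~ theta**slope."""
    return I_obs * (theta_target / theta_obs) ** slope

print(extrapolate(480.0, 9.0, 22.0))   # 9" intensity scaled to a 22" aperture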
§.§ J=6-5 / intensity ratios
The APEX survey provides direct determination of the J=6-5
^12CO/^13CO intensity ratio at a resolution of 9" in the
eleven galaxies listed in Table <ref>. The table includes the
corresponding J=3-2, J=2-1, and J=1-0 ratios in apertures
covering surface areas larger by factors of 2.4, 15, and 6.0,
respectively (Paper I). The J=1-0 to J=3-2 intensity ratios
are typically 10-15 in all galaxies except NGC 6240. In seven galaxies
the J=6-5 and J=1-0 ratios do not significantly differ. In the
other galaxies, the J=6-5 ratio exceeds the J=1-0 ratio by a
factor of up to two. Four galaxies have relatively high J=6-5 ratios
20. The intensity ratios of the LIRG NGC 6240 are the highest in
the sample. Papadopoulos (2014) suggest even higher ratios for
this galaxy. These and other literature ratios have, however, large
uncertainties because the very weak and broad lines are
highly sensitive to baseline errors. Measurements with Herschel
provide additional J=5-4 to J=8-7 intensity ratios for five more
galaxies at the substantially lower resolutions of 33"-43" listed
in Table <ref>, thereby sampling surface areas twelve to
twenty times larger than covered by the APEX beam. The
large-aperture / intensity ratios in the J_upp≥6
transitions in Table <ref> are two to three times higher. The
J=5-4 intensity ratio is transitional, with an in-between average of
∼16. The large-area Herschel J=6-5 ratios vary little,
ranging between 22 and 26, and are similar to the four highest J=6-5
ratios in the APEX sample. For NGC 253 and NGC 4945, J=6-5 ratios
are available in both apertures. In the fifteen times larger surface
area sampled by Herschel, the J=6-5 isotopologue ratio has increased
from the APEX value ∼13 to the almost twice higher value of
23. In these two galaxies, the intensities are thus much
weaker beyond the inner 200 pc and even more centrally peaked than the
intensities. At least half (six out of thirteen) of the
galaxies surveyed exhibit J_upp≥6 intensity ratios of 20-30 in
either small or large apertures. By analogy, this could also be the
case for the other galaxies in Table <ref> lacking Herschel
data.
We can take this a step further for NGC 253 and NGC 4945. After
subtraction of the contribution by the inner circumnuclear area
(APEX) to the larger central area (Herschel), the intensity ratio
in the residual 9"-33" zone of either galaxy becomes
(6-5)/(6-5)∼40, suggesting low optical depths also for
. Together with ratios in excess of 20-30 in transitions with
J_upp≥6 this is consistent with intrinsic []/[]
isotopologue abundances of about 40 (cf. Tang 2019, Viti
2020), as assumed in our earlier Papers I and II on
circumnuclear molecular gas.
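The subtraction used here is simple aperture arithmetic; the sketch below computes the isotopologue ratio in the 9"-33" annulus from beam-averaged intensities in the two apertures, weighting by aperture area and ignoring detailed beam shapes. The input intensities are placeholders chosen only to reproduce ratios of ∼13 (9") and ∼23 (33"); with these numbers the annulus ratio comes out near 40.

# Sketch: 12CO/13CO J=6-5 ratio in the 9"-33" annulus from the two aperture measurements.
def flux(mean_intensity, theta):
    """Total 'flux' proportional to mean intensity times aperture solid angle (~theta^2)."""
    return mean_intensity * theta**2

theta_in, theta_out = 9.0, 33.0
co_in, thirco_in = 390.0, 30.0      # 12CO, 13CO in the 9" aperture (placeholders, ratio ~13)
co_out, thirco_out = 81.5, 3.5      # 12CO, 13CO in the 33" aperture (placeholders, ratio ~23)

co_ring = flux(co_out, theta_out) - flux(co_in, theta_in)
thirco_ring = flux(thirco_out, theta_out) - flux(thirco_in, theta_in)
print(f"annulus 12CO/13CO ratio = {co_ring / thirco_ring:.0f}")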
§.§ Further line ratio comparisons
In these papers, we adopted a “standard” aperture of 22" for
intercomparison of the observed line intensities. For the galaxy
centers in this paper, we obtained such normalized (6-5)
intensities in a 22" aperture either by direct determination or
interpolation from Table <ref>, or by extrapolating the
observed 9" intensities from Table <ref> with the average
slope a=-0.9 just determined (NGC 613, IC 342, and NGC 2559). These
are directly comparable to the 22" J_upp≤4 line intensities
(Paper I) of the same galaxies. In Fig. <ref> (top) we show
relations between line ratios constructed from these data. We
also plotted transition intensity ratios in the 43" apertures of a
larger sample of twenty galaxies using the intensities compiled
by Kamenetzky (2016) and Paper I. Nine galaxies are common to
both samples. For comparison we added the nearby starburst dwarf
galaxy He2-10 (data from Bayet 2006) as well as both the inner
Milky Way and the Galactic Center (data from Fixsen 1999)
As noted in section 4.2, comparison of the data in
Tables <ref> and <ref> suggests a strong
aperture dependence of the ^12CO/^13CO ratio. The ^13CO(6-5)
intensities are not easily normalized to a 22" aperture because only
a single point per galaxy is observed, at a resolution of 9.5". In
order to limit the extrapolation as much as possible, we have
therefore adopted a common aperture of 11" and assumed identical
distributions for the ^12CO and ^13CO J=6-5 emission over this
small range. If instead the ^13CO(6-5) emission is point-like, its
normalized intensity is overestimated by ∼15%. For the
normalization of the ^13CO(1-0) intensity we assumed emission
aperture ratios identical to those in the J=2-1 transition (Paper
I). In a similar way we extrapolated the ^13CO(3-2) data from
14" to 11". In Figure <ref> (bottom) we show the relations of the
^12CO and ^13CO isotopologues to each other and to the J=6-5 and J=1-0
transitions.
§.§ Excitation of the (6-5) gas
Studies of the rotational ladders of galaxy centers have been
published by several authors (Bayet 2006, Weiss 2007,
Greve 2014, Rosenberg 2015, Mashian 2015, and
Kamenetzky 2016) to which we refer for further detail. In
these studies, the rotational ladders are primarily interpreted in
terms of excitation and the heating and cooling balance of the
gas. The SPIRE and PACS ladders (Rosenberg 2015, Mashian
2015) illustrate the great variety in overall shape. This
variety is already apparent in the line intensity ratios shown in the
top left and center panels of Fig.<ref>. The excitation
represented by the (6-5)/(1-0) and the (3-2)/(1-0) ratios varies
for the galaxies in our sample in a manner not related to galaxy
type. The excitation of the emission in these transitions increases
significantly with decreasing aperture size. The trend is continued
when line intensity ratios observed at even higher
resolution are plotted (Fig. <ref> top right). The two ratios
are weakly correlated (slope a=0.17±0.09). The systematic
displacements imply that the excitation of the central molecular gas
increases towards the galaxy nucleus.
The panels in the bottom row of Fig. <ref> show the relations
of the isotopological and intensities to each other
and to the J=6-5/J=1-0 transitions ratios. Although the LIRG
NGC 6240 was observed, we exclude it from most of the following
analysis as its extreme distance, surface area measured, and
luminosity class (see e.g., Greve 2009, Papadopoulos
2014) set it too far apart from the other galaxies in the sample. For
the other galaxies, there is a clear relation between the
and ladders (bottom left panel) but the intensity ratio of the
(6-5) and (1-0) transitions increases more rapidly for the optically
thin than for the optically thick . On the other hand,
the J=6-5 / isotopological intensity ratio is not
correlated with the intensity ratio of the J=6-5 and J=1-0
or transition that track the gas excitation nor is the
J=1-0 / isotopological intensity ratio (bottom center
and bottom right panels).
The critical density of (6-5) is similar to that of HCN(1-0) and
falls in between those of HCO^+(1-0) and HCO^+(3-2), (see
Table <ref>, so that a mutual comparison may be of
interest. Again, we normalize all line intensities by the (1-0)
intensity. Fig. <ref> explores the behavior of J=6-5
and lines as a function of the HCN(1-0), HCO^+(1-0) and
HCO^+(3-2) intensities (data from this paper and from Papers I and
II). In all panels, the most extreme (6-5)/(3-2) ratios belong
to NGC 6240 (high) and NGC 5055 (low). No correlation is apparent
between the HCN(1-0) and either (6-5) or (6-5) line
intensities (Fig. <ref>, leftmost panels) or HNC(1-0) (not
shown) despite their very similar critical densities. There is,
perhaps, a correlation between the (6-5) and HCO^+(1-0) line
emission (top center panel) and, more convincingly in spite of the few
data points available, between HCO^+(3-2) and (6-5) and even
(6-5) (rightmost top and bottom panels). This suggests that
HCO^+ is linked to the excitation of high-J and
(but see also Papadopoulos et al. 2010) and HCN is not. This is
consistent with the poor sensitivity of the HCN/CO intensity ratios to
both column density and fraction of dense gas noted by Priestley
(2024) in molecular cloud simulations and by Israel (2023) in
extragalactic multi-transition molecular line surveys. Although the
heating and cooling of extragalactic molecular gas can be determined
from the observed ladders, this is not so easily the case for
its physical parameters (temperature, density, and column density), as these
are highly degenerate. Single-gas-phase models in general do not
adequately explain even relatively uncomplicated extragalactic
ladders and models with two or more distinct gas components are needed
(e.g., Greve 2014, Mashian 2015, Kamenetzky
2016). This need had already been recognized in the analysis of
multiple low-J transitions of optically thick complemented by
optically thin transitions (for instance Israel 2001, 2005;
Bayet 2006).
§ MOLECULAR GAS PHYSICS REVEALED BY J=6-5 CO LINES
The objects in this paper were all included in the previously
published survey of the J=1-0, J=2-1, J=3-2 transitions of
and emission from galaxy centers whereas for half the sample
the J=4-3 transition was also measured (Paper I). The results
of that survey were evaluated with large-velocity-gradient (LVG)
models employing the RADEX radiative transfer code (Van der Tak
(2007). For the details of the analysis we refer to section 5
of Paper I. The LVG approximation efficiently solves the radiative
transfer equation in non-LTE environments and yields a first order
determination of the gas properties in a homogeneous medium. For each
case, the model provides an average description of all the molecular
gas in the aperture, thus lumping together gas of all temperatures and
densities. As the number of gas phases included is increased, the
models become ever more realistic. The RADEX analysis,
however, requires four input parameters per phase and per species
(kinetic temperature, H_2 density, molecular column density per
velocity interval, and relative molecular abundance). Even with
simplifying assumptions (such as identical kinetic temperature and
H_2 density for all species), the number of phases that can be
simultaneously modeled is severely limited by the number of independent
measurements. In the case of the low-J and
measurements presented in our previous work only two gas-phases can be
modeled. This allows a first, coarse separation of the dominant gas
components, such as dense or diffuse, and cool or warm. Although a
simplification of reality, this is nevertheless already a great
improvement on single-phase models that produce averages with little
physical meaning. A complication is that for each gas phase, the
observed line intensities are always subject to degeneracies between
temperature, density, and column density per velocity interval. These
degeneracies are not always clearly resolved by the limited number of
transitions providing independent line intensities especially when
finite errors are taken into account. Instead of a well-constrained
unique result, the two-phase modeling produced for each galaxy a
number of possible solutions. These form a well-defined and limited
range of physical parameters; examples are given in appendix C.1 of
Paper I. With only low-J transitions, the number of independent
measurements is usually sufficient only to marginally constrain the
seven parameters needed to describe two phases, producing tight
constraints for some parameters but leaving others practically
unconstrained. Fits with a cut-off at the J=3-2 or J=4-3
transitions tend to underestimate the parameters of the high-pressure
gas, in particular the density. Biased fit results for the
high-pressure gas in turn influence the fit parameters of the
low-pressure gas, especially the temperature.
This is borne out by the new measurements in the J=6-5
transition. For each galaxy the various physical parameter sets that
provide good fits to the J_upp≤3 intensity ratios, including
the “best” fits listed in Table C.2 of Paper I, fail to adequately
predict the line intensities observed in the J=6-5 transition. The
observed intensities generally exceed the model-predicted values
by factors of two or more.
For (6-5), the result is no better. Even in galaxies where
the predicted 22" model isotopologue intensity ratios are broadly
similar to the observed 9" APEX ratios this is only because the
individual and intensities are both off by the same
factor. Thus, the observations of the lower J=1-0, J=2-1, and
J=3-2 transitions alone do not sufficiently constrain the modeling
of gas physical parameters to also allow successful prediction of the
higher J=6-5 transition intensities. The new J=6-5 and
especially observations provide information on the physical
condition of the molecular gas not apparent in the lower-J
measurements.
This has major consequences for the modeling presented in Paper I. The
addition of two more intensities increases the number of independent
parameters to the number required to fully describe the two gas
phases. The number of possible parameter sets derived from the
J_upp≤3 analysis for each galaxy is drastically reduced. To
agree with the J_upp=6 data, at least one of the two model phases
needs to have a kinetic temperature of 60 K or higher, and at least
one of the two phases needs to have a density of 10^4 or
higher. Sets falling short of this criterion can be removed – this
includes all but a few of the sets that were earlier found to provide
possible solutions in the analysis described in Paper I. The
J_upp≤3 measurements very poorly distinguish temperatures and
densities much above these values (cf. the effective upper limits in
Table <ref>). High model temperatures and (column) densities
from the initial analysis need fine-tuning to fit the J=6-5 values
without compromising the low-J line ratios.
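The pruning described above can be written as a simple filter over candidate two-phase parameter sets. The sketch below is schematic only: in practice each candidate's line ratios come from RADEX runs, whereas here they are placeholder numbers attached to each set, and the observed values and tolerances are likewise illustrative.

# Schematic sketch of pruning two-phase parameter sets with the J=6-5 constraints.
# Each candidate carries model-predicted ratios; all numbers below are placeholders.
candidates = [
    # (T_low [K], n_low [cm^-3], T_high [K], n_high [cm^-3], predicted ratios)
    (60, 1e3, 30, 1e4, {"co65_co10": 0.08, "co65_13co65": 35.0}),
    (100, 3e3, 25, 1e5, {"co65_co10": 0.21, "co65_13co65": 14.0}),
    (150, 5e2, 60, 3e5, {"co65_co10": 0.30, "co65_13co65": 22.0}),
]

observed = {"co65_co10": 0.22, "co65_13co65": 15.0}
tolerance = {"co65_co10": 0.05, "co65_13co65": 4.0}   # adopted observational uncertainties

def acceptable(predicted):
    return all(abs(predicted[k] - observed[k]) <= tolerance[k] for k in observed)

surviving = [c for c in candidates if acceptable(c[4])]
for T_lo, n_lo, T_hi, n_hi, _ in surviving:
    print(f"kept: low phase T={T_lo} K, n={n_lo:.0e}; high phase T={T_hi} K, n={n_hi:.0e}")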
Still assuming a []/[] isotopic abundance ratio of 40
(Tang 2019, Viti 2020), we revised the two-phase
models of the sample galaxies from Paper I to accommodate the new
J=6-5 intensities. The newly determined parameters of the two
phases are summarized in columns 2, 3 and 4 of
Table <ref>. For each galaxy, two entries are given that
refer to the respective high-pressure (top) and the low-pressure phase
(bottom) identified in column 5. Columns 6 through 8 list the
fractions of the total (1-0) and (6-5) emission and the mass
associated with each gas phase. Finally, columns 9 through 11 show the
J=6-5 and model line ratios for the combined
emission from the two phases. These can be compared to the observed
(6-5)/(1-0) ratios listed in Table <ref> and the isotopologue
intensity ratios in Tables <ref> and <ref>. We did not
remodel NGC 6240 as it is incomparable to the other galaxies in terms
of distance, area covered, and luminosity class (see e.g., Greve
2009, Papadopoulos 2014).
The inclusion of the J=6-5 measurements thus results in fits that
are much more tightly constrained than those based on the
J_upp≤4 transitions only. This is largely due to the
intensities that render the isotopological intensity ratios
particularly sensitive to changes in the physical parameters. Most of
the solutions allowed by the analysis in Paper I are completely ruled
out by the present analysis. There remains a small residual
uncertainty due to the limited ability to distinguish between
temperatures above 200 K and densities well in excess of 10^5 cm^-3.
For two-phase models, much of the ambiguity previously present is
eliminated. Other parameter combinations are still possible but only
as long as they are close to those listed in Table <ref>. It
is, however, unfeasible to assign uncertainties to individual
parameters because of the trade-offs inherent in
degeneracies. Instead, we assign very roughly a factor of two
uncertainty to the overall result. An additional source of uncertainty
is the actual / abundance. Values as low as 30 and as
high as 70 have been published but most determinations settle around
40 which is the value we assumed. If in any galaxy the abundance is
different, this would lead to modestly different model parameters. We
note that such a situation seems to apply to luminous infrared
galaxies such as NGC 6240 with abundances of 100-200.
The two-phase model fits presented here provides a simplified but
robust picture of the molecular gas in the sample galaxies, especially
as concerns the division in gas of high and of low pressure. They are,
however, still a first approximation and not yet a fully realistic
description of that gas. Nevertheless, the agreement with similar
results derived independently by others is encouraging. From the
analysis of CS emission ladders in half a dozen galaxies Bayet
(2009), for instance, conclude to the general presence of two
high-pressure phases with kinetic temperatures all below 70 K, with
similar densities 0.4-1.6 × 10^5 for the dominant
cold high-pressure phase but with higher densities 2.5-40 times
10^5 for the more sparse warm high-pressure phase. The
galaxy NGC 253 also provides an interesting case for comparison,
because it has been comprehensively analyzed by Rosenberg
(2014) and Pérez-Beaupuits (2018), using all available
, , HCN, , and lines to fit three distinct
gas components. The results listed for the phases of NGC 253 in
Table <ref> are within a factor of two of the results for the
corresponding phases in these two analyses.
Compared to our earlier analysis by Paper I, the new results in
Table <ref> show either similar or moderately higher
temperatures and densities for the low-pressure gas forced by the new
high-pressure values. The high-pressure gas temperatures are also
roughly similar but the densities are revised up, in most cases
significantly, in order to reproduce the observed intensities of the
(6-5) and especially the (6-5) lines. Uncertainties are
much reduced.
The low-pressure gas is not very dense (500-3000 cm^-3) but tends to
be hot with kinetic temperatures from 60 K to 150 K. The high-pressure
gas is always very dense (0.5-1.0 × 10^5 cm^-3 or higher) and
significantly cooler with temperatures ranging from 20 K to 60 K.
Only in NGC 3034 (M 82) both gas phases have similar temperatures of
about 100 K.
Two of the twelve galaxies in Table <ref> stand out with gas
of a single phase responsible for essentially all of their CO line
emission. Hot, moderately dense low-pressure gas produces over 95%
of the emission from the center of NGC 1068 independent of transition
observed, but a small amount of cold, dense gas is still required to
explain the data. Having observed the lower transitions of HCN,
HCO^+ and CO isotopologues with the arcsec-sized SMA and NOEMA
beams, Krips (2011) obtained a very similar result. Almost all
of the emission arises from the gas within ∼ 150 pc from the
active Seyfert nucleus only resolved by interferometer arrays. The
circumnuclear gas in the starburst-dominated center of IC 342 is
also of limited extent but here it is the high-pressure gas that
provides almost all of the CO line emission from this nearby nucleus.
The cool and very dense gas in this reservoir produces typically
90% of the CO emission again independent of observed
transition. Only ten per cent of the gas in the IC 342 nuclear region
is moderately dense but rather hot which was also concluded by
Rigopoulou (2013), see also Montero-Castaño
(2006).
In two other galaxies, NGC 4945 and NGC 5236 (M 83), both warm and
modestly dense low-pressure gas and much more dense and colder
high-pressure gas contribute in roughly equal amounts to the J=1-0
CO groundstate emission. Similar to IC 342, these are relatively nearby
galaxies and the line measurements sample only the gas reservoirs in
the inner few hundred parsecs. In the likewise nearby galaxies
NGC 253 and NGC 3034 (M 82) the high-pressure gas is, however, only a
minor contributor to the groundstate CO emission as is also the case
in the other six galaxies.
Thus, the J=1-0 emission from all but three of the observed
galaxy centers primarily originates in low-pressure gas reservoirs.
More than 85 per cent of the groundstate emission from these galaxies
which dominates their molecular gas mass represents moderately
dense gas at kinetic temperatures above 60 K and reaching as high as
150 K. Both temperature and density suggest a heating mechanism other
than UV and are more compatible with mechanical heating from decaying
shocks and turbulent dissipation. Infrared spectroscopy with
Herschel and Spitzer already suggested this for NGC 1097
(Beirão 2012).
In all galaxies except NGC 1068 the situation is completely reversed
in the J=6-5 transition. More than 85 per cent of the (6-5)
emission in these galaxy centers comes from relatively cool but rather
dense molecular gas reservoirs in most cases representing a minor
fraction of the total mass.
The high-pressure gas has an optical depth of a few in the J=6-5
transition. Although it is radiatively important, its source of
excitation is not clear-cut as various mechanisms may compete as
discussed, for instance, by Rosenberg (2014) for the case of
NGC 253. If the high-J emission originates in thin outer
layers of molecular clouds, it could trace high-density gas excited by
external UV radiation. The low temperature, the angular extent of the
(6-5) emission and the apparent lack of accompanying
emission in NGC 253 and NGC 4945 are, however, more consistent with an
extended diffuse gas excited throughout its entire volume by
mechanical heating (see Loenen et al. 2008, Kazandjian et al. 2015,
Paper II).
§ CONCLUSION
In this paper we have summarized all fourteen presently available sets
of J_upp≥5 measurements of galaxies beyond the Local
Group. Together with the more abundant (6-5) measurements, they
yield thirteen J=6-5 / isotopologue ratios for
comparison with J_upp≤3 ratios established earlier. The
distances of the sample galaxies range from 3.5 to 21.5 Mpc. We also
observed the LIRG NGC 6240 at a distance of 116 Mpc but did not
include it in the analysis because of its discrepant nature. We have
determined (6-5) intensities in multiple apertures ranging from
9" to 43" in ten galaxies. On average, the surface brightness of
the galaxies in this sample is roughly inversely proportional to
aperture size, indicating centrally peaked emission. The (6-5)
emission reduced to 22" apertures is relatively bright with
velocity-integrated (6-5)/(1-0) brightness temperature ratios ranging
from 0.12 to 0.45. A wider sample of galaxies observed in a 43"
aperture yields on average significantly lower ratios of (6-5)/(1-0),
suggesting that the larger apertures include a higher fraction of
low-excitation gas and that molecular gas excitation increases towards
galaxy nuclei. Line intensities of (6-5) and (6-5) are
weakly correlated with those of HCO^+ and not at all with those of
HCN, although all three lines have similar (critical) densities and
presumably kinetic temperatures.
This paper not only covers all extragalactic (6-5)
measurements available to date but, by implication, also all available
intensity ratios J=6-5 / that can be compared to those
of the lower J_upp≤3 transitions. These ratios are the
emission-weighted average of a variety of molecular cloud types
ranging from dense and compact to tenuous and diffuse over relatively
large areas. In about a third of the galaxies observed, the
isotopologue intensity ratios vary little with transition, in the
remaining two thirds the J=6-5 ratio is notably higher than the
ratio in the lower transitions seen in somewhat larger apertures. In
four galaxies, ratios determined in the fifteen times larger
Herschel aperture increase with observed transition to much higher
values of typically 30. The increase to high values occurs around the
J=5-4 transition.
The actual and intensities in the J_upp≥6
transitions are not easily predicted from the J_upp≤3
transitions routinely available from ground based facilities. The
low-J and high-J lines originate in different and mostly unrelated
gas phases. Widely accessible line intensities in the J=1-0
through J=3-2 transitions fail to fully constrain these gas phases
even when they are accompanied by complementary
observations. We find from our two-phase RADEX models that additional
line intensities in the J=6-5 transition or higher
eliminate much of the degeneracy and consequent uncertainty in the
underlying physical parameters of the molecular gas. For the majority
of galaxies the models indicate that most of the observed J=6-5
and emission arises in a warm (T_kin ≥ 20 K) and
very dense (n_2 ≳ 10^5 cm^-3) gas. The observed J=6-5 CO
emission is important as a tracer of inner galaxy energetics but not
as a tracer of inner galaxy molecular gas mass.
We gratefully acknowledge the ESO APEX User support supplied by Carlos
de Breuck. We thank Enrica Bellocchi for supplying us with the
Herschel-SPIRE intensities of NGC 4945 in Table <ref>, and
Dimitra Rigopoulou for communicating the Herschel-SPIRE data for
IC 342 in advance of publication.
[]Baryshev, A. M., Hesper, R., Mena, F. P., and 24 co-authors, 2015, 577, A129
[]Bayet, E., Gerin, M., Phillips, T.G., & Contursi, A., 2004, 427, 45
[]Bayet, E., Gerin, M., Phillips, T.G., & Contursi, A., 2006, 467, 485
[]Bayet, E., Aladro, R., Martín, S., Vitim S., & Martín-Pintado, J., 2009, 707, 126
[]Beirão, P., Armus, L., Helou, G., and 39 co-authors, 2012, 751, 144
[]Belitsky, V., Lapkin, I., Fredrixon, M., and 31 co-authors, 2018, 612, A23
[]Bellocchi, E., Martín-Pintado, J., Güsten, T., and 8 co-authors , 2020, 642, A166
[]Carilli, C. L. & Walter, F. 2013, 51, 105
[]Chou, R.C.Y., Peck, A.B., Lim, J., and 6 co-authors, 2007, 670, 116
[]Crocker, A.F., Pellegrini, E., Smith, J.-D. T., and 16 co-authors, 2019, 887, 105
[]Fixsen, D.J., Bennett, C.L., & Mather, J.C., 1999, 526, 207
[]Greve, T.R., Papadopoulos, P.P., Gao, Y., & Radford, S.J.E., 2009, 692, 1432
[]Güsten, R., Nyman, L.A., Schilke, P., and 3 co-authors, 2006, 454, L13
[]Güsten, R., Baryshev, A., Bell, A., and 26 co-authors, 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7020, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series
[]Harris, A.I., Hills, R.E., Stutzki, J., and 3 co-authors, 1991, 382, L75
[]Hailey-Dunsheath, S., Nikola, T., Stacey, G.J., and 5 co-authors, 2008, 689, L109
[]Israel, F.P., & Baas, F., 2001, 371, 433
[]Israel, F.P., 2005, 438, 855
[]Israel, F.P., Güsten, R., Meijerink, R., and 8 co-authors, 2014, 562, A96
[]Israel, F.P., 2020, 635, A131 (Paper I)
[]Israel, F.P., 2023, 671, A59 (Paper II)
[]Jansen, D., 1995, Ph.D. thesis, Sterrewacht, Leiden University (NL)
[]Kamenetzky, J., Glenn, J., Rangwala, N., and 14 co-authors, 2012, 753, 70
[]Kamenetzky, J., Rangwala, N., Glenn, J., Maloney, P.R., & Conley, A., 2016, 829, 93
[]Kamenetzky, J., Rangwala, N., & Glenn, J., 2017, 471, 2917
[]Kasemann, C., Güsten, R., Heyminck, S., and 7 co-authors, 2006, Presented at the Society of Photo-Optical Instrumentation Engineers, Proc. S.P.I.E 6275
[]Kazandjian, M., Meijerink, R., Pelupessy, F.I., Israel, F.P., & Spaans M., 2015, 574, A127
[]Krips., M., Martín, S., Eckart, A., and 11 co-authors, 2011, 736, 37
[]Loenen, A.F., Spaans, M., Baan, W.A., & Meijerink, R., 2008, 488, L5
[]Loenen, A.F., van der Werf, P.P., Güesten, R., and 23 co-authors, 2010, 521, L21
[]Lu, N., Zhao, Y., Díaz-Santos, and 18 co-authors, 2017, 230, 1
[]Mashian, N., Sturm, E., Sternberg, A., and 16 co-authors, 2015, 802, 81
[]Meijerink, R., Spaans, M., & Israel, F.P., 2007, 461, 793
[]Meijerink, R., Kristensen, L.E., Weisz, A., and 28 co-authors, 2013, 762, L16
[]Montero-Castaño, M., Herrnstein, R.M., & Ho, P.T.P., 2006, 646, 919
[]Panuzzo, P. Rangwala, N., Rykala, A., and 60 co-authors, 2010, 518, L37
[]Papadopoulos, P.P., van der Werf, P.. Isaak, K., & Xilouris, E.M., 2010, 715, 775
[]Papadopoulos, P.P., Zhang, Zhi-Yu, Xilouris, E.M., and 6 co-authors, 2014, 788, 153
[]P'erez-Beaupuits, Güsten, R., Harris, A., and 5 co-authors, 2018, 860, 23
[]Piñol-Ferrer, N., Fathi, K., Lundgren, A., van de Ven, G., 2011, 414, 529
[]Priestley, F.D., Clark, P.C., Glover, S.C.O., and 4 co-authors, 2024, in press, arXiv:2406.06702
[]Rigopoulou, D., Hurley, P.D., Swinyard, B.M., and 9 co-authors, 2013, 434, 2051
[]Rosenberg, M.J.F., Kazandjian, M.V., van der Werf, P.P., Israel, F.P., Meijerink, R., and 3 co-authors, 2014. 564, A126
[]Rosenberg, M.J.F., van der Werf, P.P., Aalto, S., Armus, L., Charmandaris, V., and 25 co-authors, 2015, 801, 72
[]Schöier, F.L., van der Tak, F.F.S., van Dishoeck E.F., & Black, J.H. 2005, 432, 369, Leiden Atomic and Molecular Database
[]Shirley, Y.L., 2015, 127, 299
[]Seaquist, E.R., Lee, S.W., & Moriarty-Schieven, G.H., 2006, 638, 148
[]Tang, X.D., Henkel, C., Menten, K.M., and 16 co-authors, 2019, 629, A6
[]van der Tak, F.F.S., Black, J.H., Schöier, F.L., and 2 co-authors, 2007 468, 627
[]Viti, S., Fontani, F., & Jiménez-Serra, I., 2020, 497, 4333
[]Ward, J.S., Zmuidzinas, J., Harris, A.I., Isaak, K.G., 2003, 587, 171
[]Weiss, A, Downes, D., Walter, F., & Henkel, C., 2007, ASPC, 375, 25
|
http://arxiv.org/abs/2409.02225v1 | 20240903185335 | Unified Origin of Inflation, Baryon Asymmetry, and Neutrino Mass | [
"Ajay Kaladharan",
"Shaikh Saad"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
[E-mail:][email protected]
Department of Physics, Oklahoma State University, Stillwater, OK 74078, USA
[E-mail:][email protected]
Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland
§ ABSTRACT
In this work, we present a unified theoretical framework that simultaneously addresses some of the most intriguing puzzles in particle physics and cosmology, namely the origins of neutrino mass, baryon asymmetry, and cosmic inflation.
In our model, inflation is driven by a combination of the Standard Model Higgs, the type II seesaw Higgs responsible for neutrino mass generation, and the unified symmetry-breaking Higgs field. During inflation, non-zero values of the latter field ensure the absence of the monopole problem. The baryon asymmetry is generated through the Affleck-Dine mechanism, facilitated by the non-zero angular motion in the phase of a complex scalar field, which is part of the inflaton. We find that the successful parameter region for generating baryon asymmetry through a renormalizable term in the scalar potential requires a rather heavy type II seesaw triplet, with a mass well beyond the TeV scale. Inflationary observables, in particular, the spectral index is in excellent agree with experimental observation, whereas tensor-to scalar ratio is expected to be probed by the future LiteBIRD and CMB-S4 missions.
Unified Origin of Inflation, Baryon Asymmetry, and Neutrino Mass
Shaikh Saad
==================================================================
§ INTRODUCTION
Although the Standard Model (SM) of particle physics has been very successful in describing the fundamental laws of physics, it fails to incorporate, for example, the observed neutrino oscillation data and the matter-antimatter asymmetry in the universe. Resolving these issues requires going beyond the SM (BSM), and Grand Unified Theories (GUTs) <cit.> are the prime candidates for its ultraviolet (UV) completion. In this work, in particular, we focus on the minimal unified gauge group, namely SU(5) GUT. For neutrino mass generation, within SU(5) GUT, one of the simplest options is the type II seesaw mechanism <cit.> (numerous efforts have been made to attribute non-zero masses to neutrinos in SU(5) GUT. For instance, in Refs. <cit.>, neutrinos acquire their masses at the tree level, while in Refs. <cit.>, the masses emerge at the loop level).
Note that the generation of neutrino mass in this setup requires only a single generation of 15_H-dimensional Higgs representation. However, achieving a successful thermal leptogenesis demands a second copy of 15_H multiplet. From the point of view of minimality and predictivity, such an extension is not desirable since incorporating correct neutrino oscillation data requires only a single generation of 15_H multiplet.
As recently pointed out in Ref. <cit.>, one copy of a scalar weak triplet with hypercharge equal to one, employed for the type II seesaw, is sufficient to give rise to the correct baryon asymmetry utilizing the Affleck-Dine mechanism <cit.>. The prerequisites for realizing this mechanism are (i) that the triplet carries a charge under a global U_X(1) symmetry (e.g., X=B (baryon number) or L (lepton number)), (ii) the presence of a small term in the Lagrangian that breaks this global symmetry, and (iii) the triplet acquiring a displaced vacuum value in the early universe.
Interestingly, implementation of type II seesaw to give neutrinos non-zero masses guarantees that the triplet scalar carries a U_X(1) charge (typically the lepton number) due to its interactions with the SM fermions and the SM Higgs doublet. Moreover, certain mixed terms in the scalar potential breaks this U_X(1) symmetry, which is essential to generate Majorana neutrino masses. Remarkably, this same triplet scalar can also play a role in realizing cosmic inflation <cit.>, as a result, it can naturally get a large displaced vacuum value during inflationary phase <cit.>. Note that cosmic inflation, which refers to the rapid exponential expansion of the universe, is another major puzzle that remains unresolved within the SM of cosmology. The theory of inflation elegantly explains several observed features of the universe, such as the large-scale homogeneity and isotropy. GUT models also require inflation to washout superheavy monopoles and other topological defects generated during the subsequent symmetry breaking phases <cit.>. However, the exact mechanism driving this inflationary phase, as well as its connection to particle physics, remains one of the most significant open questions in cosmology.
In this work, we consider a minimal SU(5) GUT in which, the inflationary dynamics is governed by three fields: the GUT-breaking adjoint Higgs 24_H, the multiplet 15_H responsible for neutrino mass generation, and the SM Higgs contained in the fundamental representation 5_H. Specifically, we explore inflation with a non-minimal coupling to gravity (in the context of GUT, for earlier works, see, for example, Refs. <cit.>). Although the global B-L remains preserved in the original Georgi-Glashow <cit.> model of SU(5), in our framework, the generation of neutrino mass breaks this symmetry. This explicit breaking is controlled by a small term in the scalar potential, and the weak triplet within 15_H, being part of the inflaton, attains a displaced vacuum value very early in the universe, ensuring the genesis of baryon asymmetry through the Affleck-Dine mechanism. Intriguingly, the adjoint field that breaks the GUT symmetry also acquires non-zero field values during inflation, thereby evading the monopole problem—a major challenge in GUT models—without the need to introduce any additional fields. Moreover, inflationary observables are found to be in great agreement with experiments. Our proposed model, therefore, suggests a common origin for the generation of neutrino mass, matter-antimatter asymmetry, and cosmic inflation within an economical unified theoretical framework.
The key results of this work are briefly summarized here. In our setup, inflation is driven by three fields, whereas Ref. <cit.> considered two field scenario. Additionally, due to a unified approach, we have a few significant differences compared to the earlier works. First, unlike Ref. <cit.>, our model does not rely on higher-dimensional operators for generating the baryon asymmetry. Rather, a dimension four-term in the scalar potential (with coefficient λ), which explicitly breaks a global symmetry, plays a major role in producing the matter-antimatter asymmetry. Second, as a consequence of the renormalizable term, we find a rather narrow range for the mass of the type II seesaw triplet field, specifically, in the window 10^7 GeV≲ m_Δ≲ 10^10 GeV (depending on the value of λ), which is consistent with the observed baryon asymmetry in the universe. Lastly, as mentioned earlier, since the GUT-breaking Higgs participates in inflation and acquires non-zero field values during this period, our model intriguingly offers a novel resolution to the monopole problem.
This article is organized in the following way. In Sec. <ref>, we introduce the proposed model. Fermion mass generation and gauge coupling unification are discussed in Sec. <ref>. In Sec. <ref>, the inflationary dynamics, along with the generation of baryon asymmetry, are studied. In this same section, by considering relevant washout and perturbativity constraints, we show the parameter space where the correct order of baryon asymmetry can be produced. Finally, we conclude in Sec. <ref>.
§ MODEL
Proposal:– In this work, we consider a GUT model based on SU(5) gauge group. In this setup, the SM fermions are embedded within the 5_F and 10_F dimensional representations
5_F=[ d^C_1, d^C_2, d^C_3, e, -ν_e ]^T,
10_F=1/√(2)[ 0 u^C_3 -u^C_2 u_1 d_1; -u^C_3 0 u^C_1 u_2 d_2; u^C_2 -u^C_1 0 u_3 d_3; -u_1 -u_2 -u_3 0 e^C; -d_1 -d_2 -d_3 -e^C 0 ].
In the above, we have suppressed the family index, and mass generation of the charged fermion masses will be discussed later in the text.
Moreover, GUT symmetry is broken via the vacuum expectation value (VEV) of the adjoint 24_H representations. The SM Higgs doublet, which is embedded within a fundamental representation 5_H, spontaneously breaks the electroweak (EW) symmetry. The decomposition of these Higgs fields, along with their VEV structures, are as follows:
Φ≡ 24_H =ϕ_8(8,1,0)+ϕ_1(1,3,0)+ϕ_0(1,1,0)
+ϕ_3(3,2,-5/6)+ϕ_3(3,2,5/6),
ϕ≡ 5_H =H(1,2,1/2)+T(3,1,-1/3).
and,
⟨ 24_H ⟩ =v_24diag( -1, -1, -1, 3/2, 3/2 ),
⟨ 5_H ⟩ = (0 0 0 0 v_5/√(2))^T ,
with v_5=246 GeV.
As a result of the GUT symmetry breaking, the superheavy vector bosons receive the following masses:
M_X,Y=√(25/8) g_GUT v_24,
where, g_GUT is the unified gauge coupling constant. These gauge fields induce d=6 operators that are responsible for mediating proton decay. Accordingly, the lifetime of the protons can be estimated as <cit.>
τ_p∼16π^2 M^4_X/g^4_GUTm^5_p,
where m_p is the proton mass. Then, from the current proton decay bound of τ_p (p→ e^+π^0)> 2.4× 10^34 yrs <cit.>, we obtain a lower bound on the GUT scale M_X∼ M_GUT≳ 6× 10^15 GeV, where we have used g_GUT=0.6.
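As a rough numerical illustration of this estimate (a sketch only: the conversion of years to GeV^-1 uses ħ ≈ 6.58 × 10^-25 GeV s, and the order-one prefactor of the lifetime formula above is taken at face value):

import math
tau_p_yr   = 2.4e34                                  # Super-Kamiokande bound on p -> e+ pi0 (years)
tau_p_GeV  = tau_p_yr * 3.156e7 / 6.582e-25          # convert years -> seconds -> GeV^-1
g_GUT, m_p = 0.6, 0.938                              # unified coupling and proton mass (GeV)
# invert tau_p ~ 16 pi^2 M_X^4 / (g_GUT^4 m_p^5) for the heavy gauge-boson mass
M_X = (tau_p_GeV * g_GUT**4 * m_p**5 / (16.0 * math.pi**2))**0.25
print(f"M_X >~ {M_X:.1e} GeV")                       # ~5e15 GeV, of the same order as the quoted 6e15 GeV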
Note that with the particle contents given above, neutrinos remain massless. In this work, we adopt one of the simplest ways to give neutrinos a non-zero masses, namely via type-II seesaw <cit.>. This requires the addition of a 15_H dimensional Higgs representation, which has the following decomposition:
Δ≡ 15_H=Δ_1(1,3,1)+Δ_3(3,2,1/6)+Δ_6(6,1,-2/3).
Once the EW symmetry is broken, the weak triplet, Δ_1, receives an induced VEV and generates neutrino mass (as will be discussed later in the text).
Scalar potential:– With the above-mentioned Higgs fields, the complete renormalizable scalar potential takes the form
V =
-m^2_Φ Tr[Φ^2] + λ^Φ_1 Tr[Φ^2]^2 + λ^Φ_2 Tr[Φ^4] + μ_Φ Tr[Φ^3]
-m^2_ϕϕ^†ϕ + λ^ϕ_1 (ϕ^†ϕ)^2
+m^2_ΔΔ^†Δ + λ^Δ_1 (Δ^†Δ)^2 + λ^Δ_2 (Δ^†ΔΔ^†Δ)
+μ_1 ϕ^†Φϕ + λ^Φϕ_1 (ϕ^†ϕ) Tr[Φ^2]+ λ^Φϕ_2 (ϕ^†ϕΦΦ)
+μ_2 Δ^†ΦΔ
+ λ^ΦΔ_1 (Δ^†Δ) Tr[Φ^2]+ λ^ΦΔ_2 (Δ^†ΔΦΦ)
+ λ^ΦΔ_3 (Δ^†ΔΦΦ)^'
+λ^ϕΔ_1 (ϕ^†ϕ)(Δ^†Δ) +λ^ϕΔ_2 (ϕ^†ϕΔ^†Δ)
+{μ_3 ϕϕΔ^† +λ_1 ϕϕΔ^†Φ
+h.c.}.
To avoid cluttering, we have suppressed all group indices. The prime here denotes a different contraction compared to the unprimed one. The SM Higgs quartic coupling is identified as λ^ϕ_1=m^2_h/(2v_5^2).
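Numerically, this identification fixes the size of the SM quartic once the measured Higgs mass is supplied (m_h ≈ 125 GeV is an input assumed here, not quoted above):

m_h, v5 = 125.25, 246.0                   # GeV
lam_phi_1 = m_h**2 / (2.0 * v5**2)        # lambda^phi_1 = m_h^2 / (2 v_5^2)
print(f"lambda^phi_1 ~ {lam_phi_1:.3f}")  # about 0.13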
Since v_24≫ v_5≫ v_Δ, for all practical purposes, for the analysis of GUT symmetry breaking and computing the mass spectrum, it is sufficient to consider only non-zero v_24. The potential minimum where SU(5) is broken to the SM gauge group has the stationary condition,
m^2_Φ=(15λ^Φ_1 +7/2λ^Φ_2 )v^2_24 +3/4μ_Φ v_24.
Scalar mass spectrum:–
Utilizing the above relation, the physical masses of states arising from 5_H are given by
m^2_H =-m^2_ϕ+3/2μ_1v_24+(15/2λ^Φϕ_1+9/4λ^Φϕ_2)v_24^2,
m^2_T =-m^2_ϕ-μ_1v_24+(15/2λ^Φϕ_1+λ^Φϕ_2)v_24^2.
Doublet-triplet splitting is obtained by fixing
μ_1=2/3m^2_ϕ/v_24-(5 λ^Φϕ_1-3/2λ^Φϕ_2)v_24 .
The masses of the submultiplets of 24_H are
m^2_ϕ_0 =3/4μ_Φ v_24+(30λ^Φ_1+7λ^Φ_2)v_24^2,
m^2_ϕ_1 =15/4μ_Φ v_24+10λ^Φ_2 v_24^2,
m^2_ϕ_8 =-15/4μ_Φ v_24+5/2λ^Φ_2 v_24^2,
whereas, components of 15_H receive the following masses:
m^2_Δ_1 =m^2_Δ+ 3/2μ_2 v_24+(
15/2λ^ΦΔ_1
+9/4λ^ΦΔ_2
+9/4λ^ΦΔ_3
)v_24^2,
m^2_Δ_3 =m^2_Δ+ 1/4μ_2 v_24+(
15/2λ^ΦΔ_1
+13/8λ^ΦΔ_2
-3/2λ^ΦΔ_3
)v_24^2,
m^2_Δ_6 =m^2_Δ-μ_2 v_24+(
15/2λ^ΦΔ_1
+ λ^ΦΔ_2
+ λ^ΦΔ_3
)v_24^2.
In the subsequent sections, we examine the inflationary dynamics and check the consistency of the mass spectrum of the scalar fields as computed above with a successful generation of baryon asymmetry.
Global symmetry:
Note that in the Georgi-Glashow SU(5) model, although baryon number B and lepton number L are separately broken, B-L is still a global symmetry; therefore, B-L is conserved (hence, all nucleon decay operators conserve B-L). Consequently, neutrinos are massless. However, in extended scenarios, such as ours with Δ, this global B-L symmetry is broken. Consequently, neutrinos receive non-zero masses. For example, in the scalar potential, both the terms μ_3 and λ_1 violate it (however, Yukawa couplings respect this symmetry).
More specifically, the unbroken abelian global symmetry, U(1)_X, is identified as <cit.>
X=5(B-L)-4Y
such that X[5_F]=-3 and X[10_F]=+1. Therefore, the scalars carry the following charges under U(1)_X:
X[ϕ]=-2,
X[Δ]=+6,
X[Φ]=0.
From these charge assignments, one can see that the cubic term μ_3 and the quartic term λ_1 in the scalar potential Eq. (<ref>) explicitly break the global U(1)_X symmetry. As will be shown below, the latter term is responsible <cit.> for generating the matter-antimatter asymmetry in the universe. The cubic term, on the other hand, must be very small so as not to ruin <cit.> the predictability of the generated asymmetry during the inflation.
§ FERMION MASS GENERATION
Charged Fermion Sector:–
Among the scalar fields introduced in our model, only the fundamental Higgs can provide masses to the charged fermions. The relevant part of the Yukawa couplings reads
ℒ_Y⊃ Y_10^ab10_F^a10^b_F5_H+Y_5^ai 10^a_F 5^i_F 5^*_H.
Once the EW symmetry is spontaneously broken, charged fermions acquire the following masses:
M_U =√(2)v_5(Y^u+Y^u T),
M_E = v_5/2Y^d T,
M_D = v_5/2Y^d.
Since it predicts M_E=M_D^T at the GUT scale, this simplest setup fails to correctly reproduce the observed mass spectrum. Therefore, we introduce a pair of vectorlike fermions 10_F+10_F that allows additional Yukawa interactions of the form
ℒ_Y⊃ y^'10_F10_F5^*_H + (m_a + λ_a 24_H) 10_F 10_F^a,
where a=1-4. With these interactions, the modified mass matrices of the down-type quarks and the charged leptons are given by <cit.>
M_D=
[ (Y_5)^ijv_5/2 m_i-λ_i v_24/4; (Y_5)^4jv_5/2 m_4-λ_4 v_24/4 ]_4× 4,
M_E=
[ (Y_5^T)^ijv_5/2 (Y_5^T)^i4v_5/2; m_j- 3/2λ_j v_24 m_4-3/2λ_4 v_24 ]_4× 4.
Owing to the mixing with the vector-like fermions, the above mass matrices break the wrong mass relation, namely M_E=M_D^T, and a consistent fit to the charged fermion spectrum can be easily obtained.
Gauge coupling unification:
In the SU(5) setup, gauge coupling unification requires a few states to live below the GUT scale. Interestingly, within our scenario, a pair of vectorlike fermions employed to cure the wrong mass relations can greatly help in achieving coupling unification. As shown in Ref. <cit.>, the vectorlike quarks Q(3,2,1/6)⊂ 10_F having a mass in the range m_Q∼ 10^3-10^6 GeV can provide gauge coupling unification consistent with current proton decay limits from Super-Kamiokande. In this analysis, the weak triplet ϕ_1 and the color octet ϕ_8 from 24_H are assumed to live in the intermediate mass scale in between m_Q-M_GUT.
Neutrino Mass:–
In our framework, neutrino mass is generated by the type II seesaw. The corresponding Yukawa interaction is given by
ℒ_Y⊃1/2 Y^ij_ν5_F^i 5_F^j 15_H
⊃1/2χ_2 e^iθ_2{1/√(2)(Y_ν)_ij}ν^T_i C ν_j.
From the above Lagrangian, neutrinos receive the following masses:
m^ν_ij= v_Δ(Y_ν)_ij ,
where the induced VEV of the neutral component of the 15_H multiplet has the expression
v_Δ = -v^2_5/(2m^2_Δ_1) ( μ_3+ 3/√(2)λ_1 v_24 ), where we abbreviate μ ≡ μ_3+ 3/√(2)λ_1 v_24.
Recall, v_5=246 GeV and v_24∼ 10^16 GeV; therefore, if the second term dominates, the size of λ_1 must be somewhat small to replicate the neutrino mass scale. As discussed above, to predict the generated baryon asymmetry at the end of inflation, the cubic term is taken to be rather small. Therefore, within our setup, we have a one-to-one correspondence between λ_1 and v_Δ. Although these parameters μ_3 and λ_1 do not enter the expressions of the scalar masses, they are restricted from the EW-precision (upper bound) and neutrino mass constraints (lower bound due to perturbativity)
𝒪(1) GeV> v_Δ≳ 5× 10^-11 GeV.
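A small numerical sketch of this interplay (every input below is an illustrative choice, not the paper's benchmark; the cubic term μ_3 is neglected as discussed above):

import math
v5, v24 = 246.0, 1.0e16              # GeV; v24 set to a typical GUT-scale value
lam_1   = 1.0e-12                    # illustrative U(1)_X-breaking quartic
m_Delta = 1.0e9                      # GeV, illustrative triplet mass
mu      = 3.0 * lam_1 * v24 / math.sqrt(2.0)      # mu ~ 3 lambda_1 v24 / sqrt(2)
v_Delta = v5**2 * mu / (2.0 * m_Delta**2)         # magnitude of the induced triplet VEV
Y_nu    = 0.05e-9 / v_Delta                       # Yukawa needed for m_nu ~ 0.05 eV
print(f"v_Delta ~ {v_Delta:.1e} GeV, required Y_nu ~ {Y_nu:.2f}")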
§ INFLATIONARY DYNAMICS
Cosmic inflation addresses the horizon and flatness problems of standard Big Bang cosmology and explains the origin of structure formation in the observable universe. Moreover, the unwanted superheavy monopoles generated <cit.> during the phase transition
SU(5) → SU(3)_C × SU(2)_L × U(1)_Y
are also diluted away by inflation. Without inflation, these stable monopoles would overclose the universe. Our proposed scenario is particularly compelling because inflation is achieved using only the minimal set of fields introduced in the previous section.
The inflationary dynamics will be
induced by a combination of the SM Higgs H∈ 5_H, the weak triplet Δ_1∈ 15_H, and the GUT breaking field ϕ_0∈ 24_H. To realize inflation, one must guarantee flat directions in the scalar potential, which we achieve through these fields having non-minimal couplings to gravity that generically arise in curved spacetime <cit.>. These couplings flatten the scalar potential at large field values, ensuring that the slow-roll parameters are adequately satisfied and that the predicted observational signatures match current CMB measurements <cit.>. Inflation of this type has been considered, for example, in Refs. <cit.>.
For the consideration of inflation, the only relevant fields are the neutral components that can acquire VEVs, which we denote by
(1,2,1/2)⊃ϕ^0=1/√(2)ρ_1 e^iθ_1,
(1,3,1)⊃Δ^0=1/√(2)ρ_2 e^iθ_2,
(1,1,0)⊃Φ^0=ρ_3.
Then the Lagrangian, including the non-minimal couplings to gravity, can be written as (in the Jordan (𝒥) frame)
ℒ^𝒥/√(-g^𝒥) ⊃ - ( M^2_pl/2+ξ_ϕϕ^†_0ϕ_0+ξ_ΔΔ^†_0Δ_0 +ξ_Φ/2Φ^2_0 )ℛ^𝒥
-g^μν(D_μϕ^0)^†(D_μϕ^0)
-g^μν(D_μΔ^0)^†(D_μΔ^0)
-g^μν(D_μΦ^0)^†(D_μΦ^0)
-V_inf
+ℒ_Yukawa.
Here ξ_ϕ, ξ_Δ, ξ_Φ are real dimensionless (non-minimal) couplings which are taken to be positive, ℛ^𝒥 is the Ricci scalar in the Jordan
frame, and M_pl is the reduced
Planck mass. Moreover, with the fields defined in Eqs. (<ref>)-(<ref>), the scalar potential for inflation V_inf takes the following form:
V_inf =
-1/2 m^2_ϕ ρ_1^2 + (λ_ϕ/8) ρ_1^4
+1/2 m^2_Δ ρ_2^2 + (λ_Δ/8) ρ_2^4
-1/2 m^2_Φ ρ_3^2 + (λ_Φ/8) ρ_3^4
+ (λ_ϕΔ/4) ρ_1^2 ρ_2^2
+ (λ_Φϕ/4) ρ_1^2 ρ_3^2
+ (λ_ΦΔ/4) ρ_2^2 ρ_3^2
+ μ̂_1 ρ_1^2 ρ_3
+ μ̂_2 ρ_2^2 ρ_3
+ μ̂_4 ρ_3^3
+{λ e^i(2θ_1-θ_2)ρ_1^2ρ_2ρ_3+h.c.}
+{μ̂_3 e^i(2θ_1-θ_2)ρ_1^2ρ_2+h.c.},
where the combinations appearing here are defined as
λ_ϕ/8 ≡ λ^ϕ_1/2 , λ_Δ/8 ≡ λ^Δ_1/4+λ^Δ_2/4 , λ_Φ/8 ≡ λ^Φ_1/4+7λ^Φ_2/120 ,
λ_ϕΔ/4 ≡ λ^ϕΔ_1/4+λ^ϕΔ_2/4 , λ_Φϕ/4 ≡ λ^Φϕ_1/4+3λ^Φϕ_2/40 , λ_ΦΔ/4 ≡ λ^ΦΔ_1/4+3λ^ΦΔ_2/40+3λ^ΦΔ_3/40 ,
μ̂_1 ≡ √(3)μ_1/(4√(5)) , μ̂_2 ≡ √(3)μ_2/(4√(5)) , μ̂_4 ≡ μ_Φ/(4√(15)) ,
λ ≡ √(3)λ_1/(4√(10)) , μ̂_3 ≡ μ_3/(2√(2)) .
In this work, we will assume the cubic couplings to be sufficiently small (that can be easily arranged) so that they do not affect the inflation dynamics.
In this setup, the inflation phase starts when one or more of the modulus fields are displaced from their minimum, entering a large-field regime, i.e.,
ξ_ϕρ_1^2+ξ_Δρ_2^2+ξ_Φρ_3^2 ≫ M^2_pl. For the analysis of the inflationary dynamics, it is customary to work in the Einstein frame (ℰ), which is related to the Jordan frame through a local rescaling of the spacetime metric:
g^ℰ_μν=Ω^2(ρ_1,ρ_2,ρ_3)g^𝒥_μν,
where, Ω^2(ρ_1,ρ_2,ρ_3)=1+ξ_ϕρ_1^2+ξ_Δρ_2^2+ξ_Φρ_3^2/M_pl^2.
This Weyl transformation serves to both restore the minimal coupling to gravity and flatten the potential for large values of the modulus fields. However, in this Einstein frame, the metric, g^ℰ_μν, is no longer trivial due to the non-canonical form of the kinetic terms <cit.>. In the large-field regime (Ω^2≫ 1), the Einstein frame scalar potential can be written as
V^ℰ(ρ_i)=Ω^-4V^𝒥(ρ_i)
≈M_pl^4/8 (ξ_ϕρ_1^2+ξ_Δρ_2^2+ξ_Φρ_3^2)^-2{λ_ϕρ_1^4+λ_Δρ_2^4+λ_Φρ_3^4
+2λ_ϕΔρ_1^2ρ_2^2
+2λ_Φϕρ_1^2ρ_3^2
+2λ_ΦΔρ_2^2ρ_3^2+16λcos(θ_2)ρ_1^2ρ_2ρ_3}.
In the above potential, the only term that violates the global U(1)_X symmetry is the one with coefficient λ, which needs to be very small so that it does not spoil the inflationary trajectory. This term, however, plays a crucial role in generating baryon asymmetry. In this three-field inflationary scenario, hyper-valley condition for realizing inflation successfully demands <cit.>
A_ϕΔ>0, A_ϕΦ>0, A_ΦΔ>0 ,
where,
κ_ij ≡λ_ijξ_i-λ_iξ_j,
γ_ij=γ_ji ≡λ_iλ_j-λ_ij^2>0,
A_ij=A_ji ≡ξ_kγ_ij+λ_ikκ_ji+λ_jkκ_ij.
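These positivity requirements are straightforward to test numerically; the sketch below implements the definitions above for an illustrative set of couplings (not the paper's benchmark point):

import itertools
fields = ["phi", "Delta", "Phi"]
lam  = {"phi": 1.0, "Delta": 1.0, "Phi": 1.0}                          # diagonal quartics
lam2 = {frozenset(p): 0.1 for p in itertools.combinations(fields, 2)}  # mixed quartics
xi   = {"phi": 100.0, "Delta": 100.0, "Phi": 100.0}                    # non-minimal couplings

def l(i, j):      return lam2[frozenset((i, j))]
def kappa(i, j):  return l(i, j) * xi[i] - lam[i] * xi[j]              # kappa_ij
def gamma(i, j):  return lam[i] * lam[j] - l(i, j)**2                  # gamma_ij > 0 required

for i, j in itertools.combinations(fields, 2):
    k = [f for f in fields if f not in (i, j)][0]
    A = xi[k] * gamma(i, j) + l(i, k) * kappa(j, i) + l(j, k) * kappa(i, j)
    print(f"gamma_{i}{j} = {gamma(i, j):.2f}, A_{i}{j} = {A:.1f}")     # both positive here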
As we will show later with an example benchmark point, all these conditions can be satisfied for 𝒪(1) quartic couplings. For the analysis, we use the following parametrisation for the underlying field space:
ρ_1=φcosαsinβ, ρ_2=φsinαsinβ, ρ_3=φcosβ.
where, α and β are defined as,
cos^2 α=A_ΔΦ/A_ΔΦ+A_ϕΦ, cos^2 β=A_ϕΔ/A_ϕΔ+A_ϕΦ+A_ΔΦ.
The above parametrization leads to
Ω^2=1+ξφ^2/M_pl^2,
with
ξ=ξ_ϕcos^2αsin^2β+ξ_Δsin^2 αsin^2β+ξ_Φcos^2 β.
In terms of φ, the Lagrangian in the Jordan Frame is given by,
ℒ^𝒥/√(-g^𝒥) =-M_pl^2/2ℛ^𝒥 -ξ/2φ^2ℛ^𝒥 -1/2 g^μν∂_μφ∂_νφ
-1/2φ^2 sin^2 αsin^2 β g^μν∂_μθ_2 ∂_νθ_2-V(φ,θ_2),
where the potential is given by
V(φ,θ_2)=-1/2 m^2φ^2+(μ̃+2μ̂cosθ_2)φ^3+1/4λ̃φ^4+2λ^'cosθ_2 φ^4 ,
and we have defined the following quantities:
m^2=m^2_ϕcos^2αsin^2 β+m^2_Δsin^2 αsin^2 β+ m^2_Φcos^2 β,
μ̃=μ̂_1 cos^2αsin^2 βcosβ+μ̂_2sin^2 αsin^2 βcosβ+μ̂_4cos^3β,
μ̂=μ̂_3 cos^2 αsinαsin^3 β,
λ̃=λ_ϕ/2cos^4αsin^4 β+λ_Δ/2sin^4 αsin^4 β+λ_Φ/2cos^4 β
+λ_ϕΔcos^2αsin^2αsin^4 β + λ_ϕΦcos^2 αsin^2 βcos^2 β
+λ_ΔΦsin^2 αsin^2 βcos^2 β,
λ^'=λcos^2 αsinαsin^3 βcosβ .
Now reparametrizing φ in terms of canonically normalized field, which we denote by χ,
dχ/dφ=√(6ξ^2φ^2/M^2_pl+Ω^2)/Ω^2,
it allows us to write χ in terms of φ,
χ(φ)/M_pl=1/√(ξ) ( √(1+6ξ)sinh^-1 ( √(ξ+6ξ^2)φ/M_pl )
 -√(6ξ)sinh^-1 ( √(6ξ^2)φ/M_pl/√(1+ξφ^2/M^2_pl) ) ).
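A minimal numerical cross-check of this field redefinition (the value ξ = 100 is an arbitrary illustrative choice):

import numpy as np
from scipy.integrate import quad
xi, Mpl = 100.0, 1.0                                   # work in units of M_pl

def dchi_dphi(phi):                                    # integrand of the relation above
    Omega2 = 1.0 + xi * phi**2 / Mpl**2
    return np.sqrt(6.0 * xi**2 * phi**2 / Mpl**2 + Omega2) / Omega2

def chi_closed(phi):                                   # closed-form chi(phi)
    t1 = np.sqrt(1.0 + 6.0 * xi) * np.arcsinh(np.sqrt(xi + 6.0 * xi**2) * phi / Mpl)
    t2 = np.sqrt(6.0 * xi) * np.arcsinh(np.sqrt(6.0 * xi**2) * phi / Mpl
                                        / np.sqrt(1.0 + xi * phi**2 / Mpl**2))
    return Mpl * (t1 - t2) / np.sqrt(xi)

for phi in (0.01, 0.1, 1.0):
    print(phi, quad(dchi_dphi, 0.0, phi)[0], chi_closed(phi))   # the two columns agree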
Therefore, the Lagrangian in the Einstein frame takes the following form:
ℒ^ℰ/√(-g)= -M^2_pl/2R-1/2 g^μν∂_μχ∂_νχ
-1/2 f(χ)g^μν∂_μθ_2 ∂_νθ_2-U(χ,θ_2),
where
f(χ)=φ(χ)^2sin^2αsin^2 β/Ω^2,
U(χ,θ_2)=V(φ(χ),θ_2)/Ω^4.
From the above Lagrangian, the equations of motion become,
χ̈-1/2 f_,χθ̇_̇2̇^2+3Hχ̇+U_,χ=0,
θ̈_̈2̈+f_,χ/f(χ)θ̇_̇2̇χ̇+3Hθ̇_2+1/f(χ)U_,θ_2=0,
and the Hubble parameter is given by
H^2=1/3M^2_p ( 1/2 f(χ)θ̇_2^2+1/2χ̇^2+U(χ,θ_2) ).
Using the slow-roll approximation, one straightforwardly finds,
χ̇≈ -M_p U_,χ/√(3U) , θ̇_2 ≈ -M_p U_,θ_2/f(χ)√(3U).
We reparametrize τ=t H_0, where H_0=m_s/2 (m_s=3× 10^13 GeV is the Starobinsky mass scale <cit.>), and denote a derivative with respect to τ by a prime. We therefore get
χ^''-1/2f_,χθ_2^'^2+3H̃/M_plχ^'+U_,χ/H_0^2=0,
θ_2^''+f_,χ/f(χ)θ_2^'χ^'+3H̃/M_plθ_2^'+1/f(χ)H_0^2U_,θ_2=0,
where we have defined the reduced Hubble parameter as
H̃^2=1/3 ( 1/2χ^'^2+1/2 f(χ)θ_2^'^2+U/H_0^2 ).
§.§ Inflationary Observables
Since Eq. (<ref>) describes an approximate single-field inflationary setup, we can utilize it to analyze the evolution of the inflationary phase and associated predictions.
We choose the relevant parameters such that the inflationary trajectory is preserved and the dynamics of θ_2 negligibly affect inflation.
As discussed above, non-minimal couplings are responsible for flattening the potential in the large field limit, and the Affleck-Dine mechanism is realized via the motion of the dynamical field θ_2. The amount of the asymmetry generated is determined by the size of the non-trivial motion induced in θ_2 sourced by inflation. Since, during the inflationary phase, m, μ̃, μ̂≪φ and λ^' is small, the quartic coupling λ̃ dominates the inflationary dynamics. Then the scalar potential Eq. (<ref>), in this regime, can be written as
U(χ)≃3/4 m^2_s M^2_pl( 1-e^-√(2/3)χ/M_pl)^2,
which mirrors the Starobinsky form <cit.>, and therefore, inflationary observables are expected to be in full agreement with inflationary observations.
From the scalar potential Eq. <ref>, the slow-roll parameter is given by
ϵ≃3/4N^-2_*,
where N_* represents the e-foldings number from the horizon
exit to the end of the inflation, which is required to be in the range 50< N_*< 60. N_* is computed using
N_*≃ M^-2_pl∫^χ_*_χ_end dχV/dV/dχ.
Two of the observables, spectral index and the tensor-to-scalar ratio, are given by
n_s≃ 1-2N^-1_*,
r≃ 12 N^-2_*,
which must satisfy the Planck observations <cit.>:
n_s = 0.9649 ± 0.0042 (68% C.L.) ,
r < 0.036 (95% C.L.).
Using the approximate analytical formula given in Eq. (<ref>), one sees that within the viable range of the e-folding number, namely 50< N_*< 60, the spectral index is predicted to be 0.96≲ n_s≲ 0.9667, which is fully consistent with current measurements. On the other hand, from Eq. (<ref>), the tensor-to-scalar ratio is predicted to be in the range 3.3× 10^-3≲ r ≲ 4.8× 10^-3. Interestingly, r in this range can be probed by the upcoming LiteBIRD
telescope <cit.> and CMB-S4 experiment <cit.>. By solving the corresponding equations numerically, the predicted spectral index and the tensor-to-scalar ratio as a function of the e-folding number are depicted in Fig. <ref>.
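The approximate analytic expressions above translate into the following simple numerical ranges:

import numpy as np
N_star = np.arange(50, 61)            # e-foldings between horizon exit and end of inflation
n_s = 1.0 - 2.0 / N_star
r   = 12.0 / N_star**2
print(f"n_s in [{n_s.min():.4f}, {n_s.max():.4f}]")   # ~[0.9600, 0.9667]
print(f"r   in [{r.min():.1e}, {r.max():.1e}]")       # ~[3.3e-3, 4.8e-3]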
Furthermore, to match the measured scalar power spectrum amplitude at the horizon exit, we demand <cit.>
A_s ≃ V^3/(12π^2 M^6_pl |dV/dχ|^2) = 2.1× 10^-9
Fulfilling this condition requires
λ/ξ^2∼ 4.4× 10^-10,
which restricts the size of the λ coupling.
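As a rough consistency check of the quoted numbers (a sketch assuming the standard single-field relation A_s = V/(24 π^2 M_pl^4 ε), equivalent to the expression above, evaluated on the plateau with ε ≃ 3/(4N_*^2) and N_* ≈ 55):

import math
A_s, N_star, M_pl = 2.1e-9, 55.0, 2.435e18            # M_pl is the reduced Planck mass in GeV
# on the plateau V ~ (3/4) m_s^2 M_pl^2, so A_s = m_s^2 N_star^2 / (24 pi^2 M_pl^2)
m_s = math.sqrt(24.0 * math.pi**2 * A_s) * M_pl / N_star
print(f"m_s ~ {m_s:.1e} GeV")                          # ~3e13 GeV, as quoted for the Starobinsky scale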
§.§ Generation of Baryon Asymmetry
The observed baryon asymmetry of the universe is measured to be <cit.>
η_B^obs =n_B/s≃ 8.5× 10^-11,
where s=(2π^2/45)g_*T^3 is the entropy density and
n_B=-(28/79)n_X is the baryon number density of the universe. As aforementioned, in our scenario, the asymmetry n_X is generated through the Affleck-Dine mechanism <cit.>. This mechanism is based on inducing non-zero angular motion in the phase of a complex scalar field, which is charged under a global Abelian symmetry, such as U(1)_X within our framework. In the early universe, this complex scalar field attains a large initial field value and begins to oscillate once the Hubble parameter drops below its mass. If the scalar potential includes an explicit U(1)_X breaking term, this motion generates a net X charge asymmetry. Therefore, if this symmetry is associated with the global X=B or X=L symmetries, a baryon asymmetry can be established before the electroweak phase transition.
From the analysis performed above, the U(1)_X charge asymmetry is generated during inflation (Fig. <ref>), and its value at the end of inflation is determined as
n_X= Q_B-Lφ_end^2sin^2 αsin^2 βθ̇_2,
≈ -Q_L4φ_end^2M_plλ^'sinθ_end/√(3λ̃),
where slow roll approximation is used.
This net asymmetry will be transferred to the baryonic sector through
equilibrium sphaleron processes prior to the electroweak phase transition.
The field value at the end of inflation corresponds to χ≈ 0.67 M_pl, as can be seen from Fig. <ref>, which shows the evolution of the field χ during the inflation.
As mentioned above, we assume that among the terms (m, μ̃, μ̂, λ̃, λ^') in the potential Eq. (<ref>), the quartic term with λ̃ dominates the inflationary dynamics, whereas the asymmetry is generated due to the presence of the λ^' coupling (which has a negligible effect on inflation). The values of the quartic couplings and non-minimal couplings to gravity that lead to this inflationary trajectory are provided in Tab. <ref>.
To find the final value of the asymmetry, one needs to evolve the asymmetry after the end of inflation. The spontaneous symmetry breaking of the GUT field after the end of inflation will disrupt the inflationary trajectory described by Eq. (<ref>). After the end of the inflationary phase, the inflaton starts oscillating and eventually decays. During this time, we assume that the universe goes through a matter-like epoch. Therefore, the total baryon asymmetry generated during inflation, which is transferred to the SM thermal plasma during reheating, is given by:
n_X≃ n_X^end×( a_end/a_rh)^3= n_X^end×( H_rh/H_end)^2 .
For a typical parameter space, H_rh/H_end∼ 10^-2 <cit.>, which leads to
η_B ≡ n_B/s |_T=T_rh = η_B^obs ( n^end_X /(10^-16 M^3_pl) ) ( g_⋆/112.75 )^-1/4.
Here, to obtain a good approximate estimate, following Higgs inflation, we have used the reheating temperature T_rh≃ 2.2× 10^14 GeV <cit.> (see also Refs. <cit.>). A dedicated analysis of reheating, which may differ in our scenario due to the presence of additional scalars on top of the SM Higgs, is, however, beyond the scope of this work.
Moreover, in our analysis, not to violate unitarity, we consider ξ < 350 <cit.>.
Solution to the monopole problem: Typically, GUT models suffer from the overproduction of monopoles and require inflation to dilute their density. In SU(5) GUT, the adjoint Higgs breaks the GUT symmetry to the SM, and stable superheavy monopoles are formed at this stage. Typically, GUT singlets are introduced to realize inflation <cit.>. Our model, on the other hand, has the special feature that the monopole problem is naturally solved since the adjoint Higgs participates in inflation and already has non-zero field values during inflation. Therefore, the associated topological defects are inflated away. The model proposed in this work is highly attractive since no additional scalar fields (such as singlets) are required beyond the minimal set for the implementation of inflation.
Collider constraints:
Since the VEV of the triplet is expected to be somewhat small within our scenario, its primary decay is to SM leptons. Current LHC searches for doubly-charged scalars utilizing pp→Δ^++_1Δ^--_1→ℓ^+ℓ^+ℓ^-ℓ^- provide a lower limit on their masses of ≳ 800 GeV <cit.>. Future colliders can probe this mass up to about 4 TeV <cit.>.
Therefore, the neutrino mass, when combined with collider bounds and the requirement of suppressing washout effects, leaves us with the valid range 0.05 eV≲ v_Δ≲ 10^-5 GeV. As we will show below, a stronger bound on this VEV is obtained from the condition of preventing the generated asymmetry from being washed out.
Lepton Flavor Violation: In this setup, both the singly-charged (Δ^±_1) and doubly-charged (Δ^±±_1) scalars within the weak triplet lead to lepton flavor violating processes <cit.>. The most stringent constraint comes from the tree-level decay of Δ^±±_1 leading to μ→ 3e. On the other hand, μ→ eγ is generated at one-loop order via both Δ^±_1 and Δ^±±_1 fields. Low energy experiments probing such lepton flavor violating rare decays can put a limit as high as about 10^3-10^4 TeV on the triplet mass. The exact bounds on these masses largely depend on the size of the relevant Yukawa couplings. However, within our framework, washout constraints on the parameter space provide even stronger bounds than the lepton flavor violation.
Washout:
Owing to the large reheating temperature, the triplet Δ_1 will rapidly
thermalize at the beginning of the radiation era, and subsequently wash out the generated asymmetry. Therefore, we require that the LL↔ HH process is never in thermal equilibrium, which, when combined with the requirement of reproducing the correct neutrino mass scale, leads to m_Δ≲ 5× 10^11 GeV <cit.>. Moreover, ensuring that processes like LL ↔Δ_1 and HH ↔Δ_1 do not co-exist demands v_Δ m^1/2_Δ≲ 3.6× 10^-4 GeV^3/2 <cit.>.
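The quoted washout bounds are easy to scan; the sketch below checks a few illustrative (m_Δ, v_Δ) points against them (the Y_ν < 1 perturbativity cut is a simplifying assumption):

m_nu = 0.05e-9                                   # GeV, heaviest neutrino mass scale
def viable(m_Delta, v_Delta):
    no_LLHH_washout = m_Delta < 5e11                         # GeV
    no_coexistence  = v_Delta * m_Delta**0.5 < 3.6e-4        # GeV^(3/2)
    perturbative    = m_nu / v_Delta < 1.0                   # Y_nu = m_nu / v_Delta
    return no_LLHH_washout and no_coexistence and perturbative

for m_Delta in (1e7, 1e9, 1e11, 1e13):
    for v_Delta in (1e-10, 1e-8, 1e-6):
        print(f"m_Delta={m_Delta:.0e} GeV, v_Delta={v_Delta:.0e} GeV: {viable(m_Delta, v_Delta)}")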
We illustrate the constraints in Figure <ref> by plotting the quartic coupling λ versus the mass of the triplet, m_Δ, assuming μ̂ is negligible and considering λ^' = 10^-1λ, which holds for a typical parameter space. In this scenario, the lower bound λ≳𝒪(10^−14) comes from producing the observed baryon asymmetry. The λ^' term should be sufficiently small that it does not significantly alter the inflationary trajectory; therefore, we restrict ourselves to λ≲ 10^−9. Moreover, as aforementioned, the approximate collider bound on the weak triplet mass, m_Δ≳ 1 TeV, translates into v_Δ≲ 10^-5 GeV. However, we find that a more stringent constraint arises from preventing washout and generating the appropriate baryon asymmetry, which requires μ>10^-2 GeV; this translates into m_Δ≳ 10^7 GeV and thereby provides an upper bound on the VEV, v_Δ≲ 10^-7 GeV. For a fixed value of λ^', the upper limit on m_Δ is provided by the perturbativity constraint on the Yukawa couplings. After taking into account all these theoretical and experimental constraints, the remaining white space in Figure <ref> corresponds to a viable parameter space for generating the correct baryon asymmetry of the universe. As can be seen from Figure <ref>, the viable range for the mass of the type II seesaw triplet field is 10^7 GeV≲ m_Δ≲ 10^10 GeV, which is rather heavy and a very different window from that obtained in Ref. <cit.>. This difference owes to the fact that, unlike Ref. <cit.>, in this work we considered a renormalizable scalar potential: in our setup, the baryon asymmetry is generated by utilizing a dimension-four term in the Lagrangian.
§ CONCLUSIONS
The origins of neutrino mass and baryon asymmetry represent two of the most profound and unresolved questions in particle physics. Simultaneously, cosmic inflation—an essential mechanism for explaining several fundamental issues in the standard Big Bang model—remains a major enigma in cosmology. In this work, we have proposed a unified theoretical framework, in particular, a minimal SU(5) Grand Unified Theory, that simultaneously addresses the origins of cosmic inflation, baryon asymmetry, and neutrino mass generation. Our model integrates these phenomena into a coherent structure where inflation is governed by three essential fields: the GUT-breaking adjoint Higgs 24_H, the Standard Model Higgs residing in the fundamental representation 5_H, and the multiplet 15_H, which is crucial for neutrino mass generation. Specifically, we have adopted inflation with a non-minimal coupling to gravity. The weak triplet within 15_H, part of the inflaton, achieves a displaced vacuum value early in the universe, leading to baryon asymmetry via the Affleck-Dine mechanism. In this setup, we have found that the viable mass range for the type II seesaw triplet field is 10^7 GeV≲ m_Δ≲ 10^10 GeV, which depends on the value of the quartic coupling (λ_1)–the coefficient of a renormalizable term in the scalar potential responsible for explicitly breaking a global symmetry and for the genesis of baryon asymmetry. Additionally, the adjoint Higgs responsible for breaking the GUT symmetry acquires non-zero field values during inflation, which helps evade the monopole problem. Within our model, the inflationary observables, especially the spectral index, show excellent agreement with experimental data, while the tensor-to-scalar ratio is expected to be probed by the future LiteBIRD and CMB-S4 experiments. In summary, our unified approach addresses key challenges in particle physics and cosmology, providing a novel resolution to these issues.
§ ACKNOWLEDGEMENT
We thank Rabindra N. Mohapatra for discussion. The work of A.K. is supported by the U.S. Department of Energy under grant number DE-SC 0016013. S.S. acknowledges the Center for Theoretical Underground Physics and Related Areas (CETUP* 2024) and the Institute for Underground Science at Sanford Underground Research Facility (SURF) for providing a conducive environment for the finalization of this work. Some computing for this project was performed at the High Performance Computing Center at Oklahoma State University, supported in part through the National Science Foundation grant OAC-1531128.
style
|
http://arxiv.org/abs/2409.02966v1 | 20240904034839 | A classification of $C_{p^n}$-Tambara fields | [
"Noah Wisdom"
] | math.AT | [
"math.AT",
"math.GR"
] |
A classification of C_p^n-Tambara fields
Noah Wisdom
====================================================================================
§ ABSTRACT
Tambara functors arise in equivariant homotopy theory as the structure adherent to the homotopy groups of a coherently commutative equivariant ring spectrum. We show that if k is a field-like C_p^n-Tambara functor, then k is the coinduction of a field-like C_p^s-Tambara functor ℓ such that ℓ(C_p^s/e) is a field. If this field has characteristic other than p, we observe that ℓ must be a fixed-point Tambara functor, and if the characteristic is p, we determine all possible forms of ℓ through an analysis of the behavior of the Frobenius endomorphism and an application of Artin-Schreier theory.
§ INTRODUCTION
For G a finite group, G-Tambara functors are the basic objects of study in equivariant algebra. They arise in homotopy theory as the natural structure adherent to the homotopy groups of a G-𝔼_∞ ring spectrum, though they additionally arise through many important situations in commutative algebra. For example any finite Galois field extension gives rise to a Gal-Tambara functor, and the representation rings of G and its subgroups naturally have the structure of a Tambara functor.
Roughly speaking, the notion of a G-Tambara functor is obtained by abstracting the notion of a Galois extension with Galois group G. More precisely, in this setting, one has intermediate fields for each subgroup H ⊂ G which have residual Weyl group W_G H action, contravariant inclusions between intermediate fields, as well as covariant transfer and norm maps between intermediate fields, all satisfying formulae relating various compositions. In a G-Tambara functor, we ask merely for rings k(G/H) for each subgroup of G, and do not require that restriction maps are inclusions. Here we still have transfers, norms, and Weyl group actions, whose compositions satisfy similar formulae. A morphism of G-Tambara functors is a collection of ring maps, one for each level G/H, which commute with restrictions, norms, transfers, and Weyl group actions.
While G-Tambara functors are the equivariant algebra analogues to rings, Nakaoka <cit.> <cit.> has defined field-like Tambara functors as those nonzero k for which every morphism k →ℓ with ℓ≠ 0 is monic. In particular, Nakaoka defines an ideal of a Tambara functor and shows that every Nakaoka ideal is obtained as the collection of kernels at each level of a map of G-Tambara functors. Next, Nakaoka observes <cit.> that k is field-like if and only if k(G/e) has no nontrivial G-invariant ideals and all restriction maps in k are injective. Additionally, upcoming work of Schuchardt, Spitz, and the author <cit.> classify the algebraically closed (or Nullstellensatzian) fields in Tambara functors: they are precisely the coinductions of algebraically closed fields.
Fields play an important role in homotopy theory and higher algebra; the rings 𝔽_p are among the most fundamental objects, viewed as 𝔼_∞-ring spectra via the Eilenberg-MacLane construction. While this construction makes sense for any discrete ring, the most powerful computational tools of this form are usually obtained by feeding in a field. In equivariant homotopy theory, there is a similar Eilenberg-MacLane construction, although in the literature, computations are typically carried out with respect to the constant Tambara functors associated to fields (or the initial Tambara functor). These are indeed field-like Tambara functors, although they do not have the property that all of their Mackey functor modules are free! On the other hand, there are many other Tambara fields, for which there are relatively few computations in the literature, which do have the property that all of their Mackey functor modules are free (namely those which are coinduced from fields). We hope that the results of this article will serve as a source of inspiration for equivariant computations. For example, we pose the following question: what are the RO(C_p^n)-graded homotopy groups of all C_p^n-Tambara fields?
We aim to give a complete classification of the field-like C_p^n-Tambara functors, for C_p^n the cyclic group of order p^n. The impetus of this work is the following observation of David Chan and Ben Spitz <cit.>. They showed that if k is field-like, then k(G/e) is a product of copies of a field permuted transitively by the G-action. Despite the fact that this may be deduced relatively quickly from Nakaoka's results, it suggests that an enormous amount of structure on a Tambara functor is forced by the field-like condition. To capture the special case of the Chan-Spitz result for which k(G/e) is a field, we introduce the following definition.
Let k be a Tambara functor. If k(G/e) ≅ Fun(G/H,R) for some H-ring R and proper subgroup H ⊂ G, we call k separated. Otherwise we call k pure.
If a field-like Tambara functor k is separated, we may express this suggestively as k(G/e) ≅Coind_H^G R, where Coind_H^G is the coinduction functor from H-rings to G-rings, right adjoint to the restriction morphism. A similar right adjoint exists on the level of Tambara functors, also called coinduction and written Coind_H^G. To reduce clutter, we introduce the notation Coind_i^n for the coinduction from C_p^i to C_p^n (and Res_i^n for the restriction from C_p^n to C_p^i).
For C_p^n the cyclic order p^n group, if k is a field-like C_p^n-Tambara functor, then
k ≅Coind_i^n ℓ
for some pure C_p^i-Tambara functor ℓ.
This reduces the classification problem of C_p^n-Tambara fields to those which are pure, i.e., those whose level C_p^n/e is a field. If the characteristic of this field is prime to p, or the C_p^n-fixed point subfield is perfect, then the classification of such Tambara fields is straightforward.
Suppose k is a pure field-like C_p^n-Tambara functor. If char(k(C_p^n/e)) ≠ p, or k(C_p^n/e)^C_p^n is perfect, then k is canonically isomorphic to the fixed-point Tambara functor associated to k(C_p^n/e), i.e., the restriction maps determine isomorphisms k(C_p^n/C_p^i) → k(C_p^n/e)^C_p^i.
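For a concrete instance of the prime-to-p case (an illustration added here, not taken from the source): fix a prime ℓ ≠ p and let C_p^n act on the finite field 𝔽_q, q = ℓ^(p^n), as the Galois group Gal(𝔽_q/𝔽_ℓ) generated by the Frobenius x ↦ x^ℓ. The associated fixed-point Tambara functor has levels
k(C_p^n/C_p^i) = (𝔽_q)^C_p^i = 𝔽_q_i, where q_i = ℓ^(p^(n-i)),
with restrictions the field inclusions and with transfers and norms the Galois traces and norms; by the statement above, this is the unique pure field-like structure with this bottom level and action.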
In the case of characteristic p with k(C_p^n/e)^C_p^n nonperfect, it turns out we may still classify all possible structure. Roughly speaking, a pure field-like C_p^n-Tambara functor is obtained by choosing a descending collection of subrings of a field with C_p^n-action. The chief obstruction to an arbitrary collection of subrings forming a C_p^n-Tambara functor is that they must contain the appropriate norms and transfers. In particular, one may first show that all such subrings must be subfields which contain the image the C_p^n-fixed points under the nth iterate of the Frobenius endomorphism.
With this niceness condition, we describe how any pure C_p^n-Tambara field k of characteristic p may be constructed from suitably compatible pure C_p^n-1-Tambara and C_p-Tambara fields of characteristic p, respectively ℓ_t and ℓ_b (along with one additional minor piece of information). Recursively, this reduces the classification to pure C_p-Tambara fields of characteristic p. If the C_p action on the C_p/e level is trivial, this classification is straightforward, and if it is nontrivial, we utilize Artin-Schreier theory, culminating in the following proposition, which completes the classification.
Every pure C_p-Tambara field of characteristic p is obtained by choosing a sub-Tambara-field of k(C_p/e) according to one of the following two situations:
* Let C_p act trivially on a field 𝔽, set k(C_p/e) = 𝔽, and set k(C_p/C_p) to be any subfield of 𝔽 containing the image of the Frobenius endomorphism.
* Let 𝔽→𝔽[x]/(x^p-x-α) be any Artin-Schreier field extension. Set k(C_p/e) = 𝔽[x]/(x^p-x-α) (with C_p acting as the Galois group). If p is odd, choose k(C_p/C_p) to be any subfield of 𝔽 containing both α and the image of 𝔽 under the Frobenius endomorphism. If p is even, then k(C_p/C_p) must be 𝔽 (see the worked example following this list).
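As an illustration of case (2) (the specific fields below are chosen here for concreteness and are not taken from the source): let p be odd, take 𝔽 = 𝔽_p(s,t), and consider the Artin-Schreier extension E = 𝔽[x]/(x^p - x - t), on which C_p acts through the Galois group via x ↦ x+1. Setting k(C_p/e) = E, the proposition permits k(C_p/C_p) to be any subfield of 𝔽 containing α = t together with all p-th powers of elements of 𝔽; for example k(C_p/C_p) = 𝔽_p(s^p, t) is a valid choice lying strictly between 𝔽_p(s^p, t^p) and 𝔽. For p = 2 the same data would force k(C_2/C_2) = 𝔽.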
Interestingly, the possible structure depends on whether p is odd or p = 2. In particular, there are "fewer" C_p^n-Tambara fields when p = 2 than when p is odd.
In section 2, we review the necessary background on field-like Tambara functors and coinduction. Section 3 provides the reduction of the classification problem to pure Tambara functors. Finally, section 4 explains how to construct any pure C_p^n-Tambara functor from pure C_p-Tambara functors, and classifies all pure C_p-Tambara functors.
The author would like to thank his advisor, Mike Hill, for many deep and insightful conversations. Additionally, the author thanks David Chan for sharing the proof of Proposition <ref>, due to David Chan and Ben Spitz. Finally, the author thanks Haochen Cheng for helpful conversations.
§ RECOLLECTIONS ON TAMBARA FUNCTORS
For a complete introduction to Tambara functors, see <cit.>. Recall that, for G a finite group, a G-Tambara functor k is roughly the following data:
* Rings k(G/H) for each transitive G-set G/H. We say k(G/H) is in level G/H, and refer to k(G/e) (resp. k(G/G)) as the bottom level (resp. top).
* Ring maps k(G/H) → k(G/K) for every morphism of G-sets G/K → G/H.
* Multiplicative norm and additive transfer maps k(G/H) → k(G/K) for every morphism of G-sets G/H → G/K.
Note that the Weyl group W_H = N_H/H of H ⊂ G describes the automorphisms of the transitive G-set G/H, hence the rings k(G/H) all possesses Weyl group actions, which are intertwined by the restriction maps. The norm, transfer, and restriction maps are required to satisfy various formulae. One of these is the double coset formula, which we describe here under the assumption that G is abelian. For H ⊂ L, we have that the composition of the transfer T_L^H : k(G/H) → k(G/L) followed by restriction R_L^H : k(G/L) → k(G/H) is equal to the sum of the Weyl group orbits
R_L^H T_L^H = Σ_g ∈ L/H c_g
where c_g denotes the action on k(G/H) of the image of g under the map G → W_H. An analogous formula holds for the norm in place of the transfer, where the sum is replaced with a product.
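For orientation, the smallest nontrivial case reads as follows (a standard specialization, spelled out here): take G = C_p, H = e, and L = C_p, so that for x ∈ k(C_p/e)
R_C_p^e T_C_p^e (x) = Σ_g ∈ C_p g· x, and R_C_p^e N_C_p^e (x) = Π_g ∈ C_p g· x,
i.e., a transfer (resp. norm) from the bottom level restricts back to the sum (resp. product) over the C_p-orbit of x; in a fixed-point Tambara functor these are exactly the trace and norm of the C_p-action.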
Finally, given a Tambara functor k, we may identify it with the unique extension to a product-preserving functor from finite G-sets to rings; by product preserving, we mean k(G/H ⊔ G/K) ≅ k(G/H) × k(G/K). This perspective will be required for the discussion of coinduction below.
Let R be a ring with G-action. The fixed-point Tambara functor associated to R is the G-Tambara functor whose value at level G/H is R^H. Noting that all restriction maps are inclusions, transfers and norms are uniquely determined as the appropriate sums (resp. products) of orbits via the double coset formula. The fixed-point G-Tambara functor construction is functorial, and right adjoint to the functor k ↦ k(G/e) from G-Tambara functors to G-rings.
A nonzero G-Tambara functor k is called field-like, or a G-Tambara field, if every nonzero morphism with domain k is monic.
By this definition, any field-like Tambara functor k may be viewed as a subfunctor of the fixed-point Tambara functor associated to k(G/e). This is because the adjunction unit from k to this fixed-point functor is nonzero, hence monic (hence injective in all levels). By this fact, to specify a field-like Tambara functor, it is enough to specify a subring of each level of a fixed-point Tambara field R, which collectively are appropriately closed under taking transfers, norms, and restrictions.
A G-Tambara functor k is field-like if and only if all restriction maps are injective and k(G/e) has no G-invariant ideals (recalling W_e = G).
Directly from this, we may prove the following result of David Chan and Ben Spitz. While this is an elementary consequence of the statement that k(G/e) has no G-invariant ideals (in fact, it is equivalent to it), it greatly illuminates the structure of Tambara fields.
Let k be a field-like G-Tambara functor. Then k(G/e) is a product of copies of a field permuted transitively by the G-action.
Let m be a maximal ideal of k(G/e), and consider the G-set { gm | g ∈ G }. Since G acts transitively, it is isomorphic to G/H for some subgroup H ⊂ G. Now consider ∩_gH ∈ G/H gm. This is a G-invariant ideal, hence by <cit.> it must be the zero ideal. Writing 𝔽 = k(G/e)/gm (which does not depend on the choice of g), by the Chinese remainder theorem, k(G/e) ≅𝔽^|G/H|. Since G acts transitively on the gm, it transitively permutes the factors in the product.
This result suggests the following definition, with which we reinterpret Nakaoka's result.
A G-ring is field-like if it has no nontrivial G-invariant ideals. Equivalently, it is a product of fields permuted transitively by the G action.
A Tambara functor k is field-like if and only if all restriction maps are injective and k(G/e) is a field-like G-ring.
Without knowing Proposition <ref> or <cit.>, it is a priori possible for a field-like Tambara functor k with k(G/e) ≅ℤ/n to exist for some composite integer n. Fortunately, there is an intrinsic notion of characteristic of a G-Tambara functor, which by Proposition <ref> may be identified with the usual possibilities for characteristic of a field.
The characteristic of a Tambara functor k is the equivalence class determined by the following equivalence relation: k ∼ℓ if k ⊠ℓ≠ 0.
The characteristic of k may be identified with the characteristic of k(G/e).
Use the formula (k ⊠ℓ)(G/e) ≅ k(G/e) ⊗ℓ(G/e) and the fact that k(G/e) and ℓ(G/e) are finite products of fields.
There is likely interesting combinatorial structure captured by the box-product of field-like Tambara functors, although a more serious investigation falls outside the scope of this paper.
Finally, we review the coinduction functor. Given H ⊂ G, the coinduction Coind_H^G ℓ of an H-Tambara ℓ to a G-Tambara functor is obtained by precomposition with the restriction functor from finite G-sets to finite H-sets. This requires us to view ℓ as a functor on all finite G-sets, rather than merely the transitive ones. For G = C_p^n and ℓ a C_p^k-Tambara functor, we supply a pictoral description of the coinduction Coind_k^n ℓ below:
( Coind_k^n ℓ) ( C_p^n/C_p^n) ≅ℓ(C_p^k/C_p^k)
( Coind_k^n ℓ) ( C_p^n/C_p^n-1) ≅ℓ(C_p^k/C_p^k)^× p
⋮
( Coind_k^n ℓ) ( C_p^n/C_p^k+1) ≅ℓ(C_p^k/C_p^k)^× p^n-k-1
( Coind_k^n ℓ) ( C_p^n/C_p^k) ≅ℓ(C_p^k/C_p^k)^× p^n-k
( Coind_k^n ℓ) ( C_p^n/C_p^k-1) ≅ℓ(C_p^k/C_p^k-1)^× p^n-k
⋮
( Coind_k^n ℓ) ( C_p^n/e ) ≅ℓ(C_p^k/e)^× p^n-k.
One immediately observes using <cit.> that if ℓ is a field-like H-Tambara functor, then so is Coind_H^G ℓ. Coinduction is right adjoint to the restriction functor Res_H^G, which is given levelwise by precomposition with the coinduction functor from H-sets to G-sets <cit.>. Heuristically, one may view coinduction as "preserving the top level" and restriction as "preserving the bottom level". In particular, restriction does not in general preserve Tambara fields, although we have the following.
Suppose k is a pure G-Tambara field. Then for any subgroup H ⊂ G, Res_H^G k is a pure H-Tambara field.
There is also a coinduction functor from H-rings to G-rings, which is right adjoint to the restriction functor. It is given by R ↦ Fun(G/H,R), which we abbreviate by Coind_H^G R.
§ SEPARATED TAMBARA FIELDS
In this section we aim to reduce the classification of field-like C_p^n-Tambara functors to those whose bottom level C_p^n/e is a field. To describe Tambara fields of this form, we introduce the notion of a pure Tambara functor.
Let k be a Tambara functor. If k(G/e) ≅Coind_H^G R for some ring R and proper subgroup H ⊂ G, we call k separated. Otherwise we call k pure.
Let R be an H-ring. Then the fixed-point G-Tambara functor of the coinduced G-ring Coind_H^G R is isomorphic to the coinduction Coind_H^G of the fixed-point H-Tambara functor of R.
Since coinduction is right adjoint to restriction and the fixed-point construction is right adjoint to the "bottom-level" functor, it suffices to prove that the left adjoints commute, ie for all G-Tambara functors k, we have
( Res_H^G k ) (H/e) ≅Res_H^G (k(G/e)) .
Now the left-hand side is defined as k ( Coind_H^G (H/e) ) ≅ k(G/e) with H acting through restriction of the G-action. This is precisely Res_H^G (k(G/e)), as desired.
Let k be a G-Tambara functor with k(G/e) ≅ Coind_H^G R for some H-ring R, and suppose that the restriction map k(G/H) → k(G/e) is injective. Then k(G/H) ≅ Coind_H^G S for some subring S ⊂ R.
Let { x_gH } denote the set of idempotents corresponding to projection onto each factor R in level G/e. Note that this set is isomorphic to G/H. The double coset formula implies that the composition of the norm map from the bottom level G/e to level G/H with the restriction map to the bottom level sends each x_gH to itself (the product over the H-orbits). Therefore k(G/H) contains a sub-G-set of idempotents isomorphic to G/H, which induce the desired isomorphism.
Suppose k is a field-like G-Tambara functor and k(G/e) ≅ Coind_H^G 𝔽 for some H-field 𝔽. Then whenever L ⊂ H, k(G/L) ≅ Coind_H^G R for some subring R of 𝔽.
The restriction map k(G/H) → k(G/e) factors through k(G/L), hence the sub-G-set of idempotents of k(G/H) isomorphic to G/H is also contained in k(G/L)
Suppose G is abelian, k is any G-Tambara functor such that k(G/H) is a product of copies of some ring R permuted freely and transitively by the Weyl group G/H, and L is a subgroup of G containing H such that the restriction k(G/L) → k(G/H) is injective. Then the restriction map has image k(G/H)^L.
Choose an idempotent x ∈ k(G/H) corresponding to projection onto a factor R and choose r ∈ R arbitrary. The double coset formula implies that transferring rx up to k(G/L) and restricting the resulting element down to k(G/H) results in
r ( ∑_{g ∈ L/H} gx ).
Repeating this process through all choices of x and r ∈ R, we observe that the image of the restriction contains a collection of copies of R, embedded in k(G/H) via the diagonal embedding R → R^× L/H followed by any of the |G/L| inclusions R^× L/H↪ R^× G/H. Therefore the subring generated by the image is precisely the L-fixed points of R^× G/H.
The previous two lemmas show that any field-like G-Tambara functor "looks like" a coinduced one in any level G/L such that L either contains or is contained in some fixed subgroup H ⊂ G. So, we can only deduce that field-like Tambara functors are always coinduced for families of abelian groups for which the subgroup lattice is a well-ordered set. This is why we only obtain a classification of fields for groups C_p^n. The extent to which field-like Tambara functors are coinduced from pure ones for more general G remains open (though likely not difficult).
If k is a field-like C_p^n-Tambara functor, then k ≅Coind_s^n ℓ for some pure C_p^s-Tambara functor ℓ.
By Proposition <ref>, k(C_p^n/e) ≅ Coind_s^n 𝔽 for some C_p^s-field 𝔽. Composing the canonical map to the fixed-point Tambara functor of the C_p^n/e level with the isomorphism of Lemma <ref> supplies a map k → Coind_s^n 𝔽 which is manifestly an isomorphism in level C_p^n/e.
As rings, set ℓ(C_p^s/C_p^j) to be the subring R_j of 𝔽 appearing in Corollary <ref>, and identify k(C_p^n/C_p^j) with Coind_s^n ℓ(C_p^s/C_p^j). The C_p^s-equivariant restriction maps for ℓ are obtained as the restriction of the restriction maps for k to the eC_p^s factor (the proof of Corollary <ref> shows that this is well-defined). The norm and transfer maps are defined similarly, observing that the double coset formula along with injectivity of the restriction maps imply that the restriction of the norm (resp. transfer) in k(C_p^n/C_p^j) to the eC_p^s factor lands in the eC_p^s factor, for j ≤ s. The exponential and double coset formulae for k then become the double coset formulae for ℓ.
We may alternatively construct ℓ as follows. Note that Res_s^n k has an action of C_p^n/C_p^s arising from the free and transitive permutation of the C_p^s-orbits of the C_p^s-sets
Res_s^n ( Coind_s^n C_p^s/C_p^j )
which corresponds in each level to permuting the |C_p^n/C_p^s| factors ℓ(C_p^s/C_p^j) of k(C_p^n/C_p^j). We define ℓ as the subfunctor of Res_s^n k formed by the C_p^n/C_p^s-fixed points of this action.
Now ℓ is a pure field-like C_p^s-Tambara functor, and we may coinduce the canonical map
ℓ → ℓ(C_p^s/e)
to
Coind_s^n ℓ → Coind_s^n ℓ(C_p^s/e) ≅ Coind_s^n 𝔽.
Finally, the image of
k(C_p^n/C_p^i) → (Coind_s^n 𝔽)(C_p^n/C_p^i)
is precisely the image of (Coind_s^n ℓ)(C_p^n/C_p^i); when i ≤ s this is by construction of ℓ, and when i ≥ s this is by Lemma <ref>. Since k and Coind_s^n ℓ are both field-like, they are naturally isomorphic to their images in Coind_s^n 𝔽, hence to each other.
§ PURE TAMBARA FIELDS
In this section we aim to classify the pure Tambara fields. In many cases the double coset formula forces pure Tambara fields to be isomorphic to fixed-point Tambara functors; we start by collecting these results.
Suppose k is a pure C_p^n-Tambara functor of characteristic different from p. Then the canonical map from k to the fixed-point Tambara functor of k(C_p^n/e) is an isomorphism.
Consider the restriction of the transfer map k(C_p^n/e) → k(C_p^n/C_p^s) to the C_p^s-fixed points. The double coset formula implies that postcomposing this map with the restriction k(C_p^n/C_p^s) → k(C_p^n/e) is multiplication by p^s, which is a unit in k(C_p^n/e) by assumption. Therefore the restriction map has image k(C_p^n/e)^C_p^s. Since it is injective by Nakaoka's theorem, it is an isomorphism k(C_p^n/C_p^s) ≅ k(C_p^n/e)^C_p^s. This is precisely the statement that the canonical map from k to the fixed-point Tambara functor of k(C_p^n/e) is an isomorphism.
The category of field-like C_p^n-Tambara functors of characteristic other than p is adjointly equivalent to the category of field-like C_p^n-rings of characteristic other than p.
By Proposition <ref> and Theorem <ref>, the fixed-point Tambara functor construction is an inverse adjoint equivalence to the functor k ↦ k(G/e).
In general, we conjecture that the above corollary is true for any finite group G, so long as the characteristic does not divide the order of the group. It seems more likely that this is at least true for abelian groups G.
Let k be a field-like C_p^n-Tambara functor of characteristic different from p. Then any morphism of field-like C_p^n-Tambara functors ℓ→ k which is an isomorphism on the bottom level C_p^n/e is an isomorphism of Tambara functors.
This corollary may be of homotopical use. Namely, it heuristically says that the C_p^n/e level homotopy group functor is conservative on C_p^n-𝔼_∞-ring spectra whose homotopy groups are appropriately built out of field-like Tambara functors of characteristic other than p.
We will see later that these corollaries fail in characteristic p. Recall that Artin's lemma states that if a finite group G acts on a field 𝔽, then the inclusion of G-fixed points 𝔽^G → 𝔽 is a Galois extension. The Galois group is the homomorphic image of G in Aut(𝔽).
Suppose k is a pure C_p^n-Tambara functor such that k(C_p^n/e)^C_p^n is a perfect field of characteristic p. Then the canonical map from k to the fixed-point Tambara functor of k(C_p^n/e) is an isomorphism.
As in the argument of Proposition <ref>, it suffices to show that each restriction map k(C_p^n/C_p^s) → k(C_p^n/e) has image k(C_p^n/e)^C_p^s. Since any Galois extension of a perfect field is perfect, our assumption ensures that each fixed-point field k(C_p^n/e)^C_p^s is perfect.
Now consider the restriction of the norm k(C_p^n/e) → k(C_p^n/C_p^s) to the C_p^s-fixed points. The double coset formula implies that postcomposing this map with the restriction k(C_p^n/C_p^s) → k(C_p^n/e)^C_p^s is x ↦ x^p^s, ie the s-fold iterate of the Frobenius map. Since k(C_p^n/e)^C_p^s is perfect, the Frobenius map is an isomorphism, so we observe that the restriction map is surjective, as desired.
Combining Proposition <ref> with Proposition <ref>, we obtain Theorem <ref>. Next, we analyze what happens when k is a pure Tambara field with k(C_p^n/e)^C_p^n a possibly non-perfect characteristic p field.
Let k be a pure C_p^n-Tambara functor of characteristic p. Writing ϕ for the Frobenius endomorphism, we call the subfield ϕ^n(k(C_p^n/e)^C_p^n) of k(C_p^n/e) the lower bound field of k.
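For example (our illustration), if k(C_p^n/e) = 𝔽_p(t) with trivial C_p^n-action, then k(C_p^n/e)^C_p^n = 𝔽_p(t) and the lower bound field is ϕ^n(𝔽_p(t)) = 𝔽_p(t^p^n).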
Suppose k is a pure C_p^n-Tambara functor of characteristic p. Then each k(C_p^n/C_p^s), viewed as a subring of k(C_p^n/e), is an intermediate field of the extension ϕ^n ( k(C_p^n/e)^C_p^n ) ↪ k(C_p^n/e).
By the double coset formula, the lower bound field of k is contained in the image of the composition of the norm map k(C_p^n/e) → k(C_p^n/C_p^n) with the restriction k(C_p^n/C_p^n) → k(C_p^n/e). In particular, it is contained in the image of all restriction maps. Therefore each k(C_p^n/C_p^s) is a subring of k(C_p^n/e) (via the restriction map) containing the lower bound field.
To show each k(C_p^n/C_p^s) is a field, it suffices to show each element has an inverse. Note that k(C_p^n/e) is algebraic over the lower bound field, because it is a Galois extension of its C_p^n-fixed point subfield by Artin's lemma and any characteristic p field is algebraic over the image of an iterate of the Frobenius endomorphism.
Letting x ∈ k(C_p^n/C_p^s), we see that x is a root of some polynomial over the lower bound field. In particular, the subring of k(C_p^n/C_p^s) generated by x and the lower bound field is a finite-dimensional vector space over the lower bound field, hence is Artinian. Since it is a subring of a field, it is an integral domain, hence a field. Thus x has an inverse in k(C_p^n/C_p^s).
Let k be a pure C_p^n-Tambara functor of characteristic p. Then we may construct a C_p^n-1-Tambara functor ℓ_t which captures the "top piece" of k as follows. Observe that for s ≥ 1 each k(C_p^n/C_p^s) has a C_p^n-1 action with kernel C_p^s-1 (namely, regard the Weyl group as a quotient of C_p^n). First, set ℓ_t(C_p^n-1/C_p^s-1) = k(C_p^n/C_p^s) for 1 ≤ s ≤ n. Next, define the restriction maps for ℓ_t via the restriction maps for k. Since the restriction maps for k are appropriately equivariant, so are those for ℓ_t.
Finally, define the norm and transfer maps for ℓ_t via the norm and transfer maps for k with the appropriate domain and codomain. To check that ℓ_t is a C_p^n-1-Tambara functor, it suffices to check that the appropriate double coset and exponential formulae are satisfied. In fact, we may do this in a universal example. Since we have already defined norms and transfers on ℓ_t, via the map from k to the fixed-point Tambara functor of k(C_p^n/e) it suffices to check that our construction produces a C_p^n-1-Tambara functor when applied to a fixed-point Tambara field 𝔽. This is clear, however, as our construction then again produces a fixed-point C_p^n-1-Tambara functor.
On the other hand, we may extract a C_p-Tambara field ℓ_b from k which recovers the "bottom piece" of k by ℓ_b := Res_1^n k. Unwinding definitions, we have ℓ_b(C_p/e) = Res_1^n k(C_p^n/e) and ℓ_b(C_p/C_p) = Res_0^n-1 k(C_p^n/C_p), with restriction, norm, and transfer for k giving the restriction, norm, and transfer maps for ℓ_b.
Every pure C_p^n-Tambara field k of characteristic p is obtained from the following:
* a choice of pure C_p^n-1-Tambara field ℓ_t of characteristic p
* a choice of C_p^n-field 𝔽 = k(C_p^n/e)
* a choice of pure C_p-Tambara field ℓ_b of characteristic p.
These choices must satisfy the following compatibility criteria:
* ℓ_b(C_p/C_p) = Res_0^n-1ℓ_t(C_p^n-1/e)
* ℓ_b(C_p/e) = Res_1^n 𝔽
* The ring map
ℓ_t(C_p^n-1/e) ≅ ℓ_b(C_p/C_p) → ℓ_b(C_p/e) ≅ 𝔽
is C_p^n-1-equivariant.
Given ℓ_b, ℓ_t, and k(C_p^n/e) as above, we define a C_p^n-Tambara functor k as the following subfunctor of the fixed-point C_p^n-Tambara functor of 𝔽. Set k(C_p^n/e) = 𝔽 and k(C_p^n/C_p^s) = ℓ_t(C_p^n-1/C_p^s-1) for s ≥ 1. The restrictions, norms, and transfers which do not factor nontrivially through C_p^n/C_p are well-defined (in the sense that their codomain contains their image) because they are well-defined for ℓ_t and ℓ_b respectively. The remaining restrictions, norms, and transfers are well-defined because they are compositions of well-defined restrictions, norms, and transfers, respectively.
This recursively reduces the classification of pure C_p^n-Tambara fields of characteristic p to pure C_p-Tambara fields of characteristic p. Let k be such a C_p-Tambara functor. If C_p acts trivially on k(C_p/e), then the composition of the norm map with the restriction may be identified with the Frobenius endomorphism, and the transfer map is zero. Thus k(C_p/C_p) may be any subfield of k(C_p/e)^C_p containing the image of the Frobenius endomorphism.
We may form a C_p-Tambara functor of the above type as follows. First, consider the fixed-point Tambara functor associated to the trivial C_p action on 𝔽_p(t). We may form a sub-Tambara functor with the same bottom level C_p/e, but top level equal to the image of the Frobenius endomorphism 𝔽_p(t^p). The inclusion of this sub-functor provides an example of a morphism between Tambara fields which is an isomorphism on the bottom level, but is not an isomorphism.
We may also form a C_p-Tambara functor with nontrivial C_p-action on the bottom level. Let k(C_p/e) = 𝔽_p(t)[x]/(x^p-x-t) with C_p acting as the Galois group over 𝔽_p(t). The transfer of x is 0 (or -1 for p=2), and the norm of x is -t. So k(C_p/C_p) must not only contain the image of the Frobenius endomorphism on 𝔽_p(t), but t as well. Consequently k(C_p/C_p) must be 𝔽_p(t) (since it must be a subfield of the fixed points). On the other hand, we could have chosen k(C_p/e) = 𝔽_p(t)[x]/(x^p-x-1), in which case we could have set k(C_p/C_p) = 𝔽_p(t^p) (for p > 2). For p = 2, the transfer of tx is -t, so we must have k(C_p/C_p) = 𝔽_p(t).
Suppose k is a pure C_p-Tambara field of characteristic p with C_p acting nontrivially on k(C_p/e). Then k(C_p/C_p) may be any subfield of k(C_p/e)^C_p which contains the image of the Frobenius endomorphism, as well as the images of the norm and transfer from k(C_p/e).
This observation may be further refined by studying the possible norm and transfer maps. First, note that k(C_p/e) is a Galois extension of k(C_p/e)^C_p with Galois group C_p. Artin-Schreier theory <cit.> implies that k(C_p/e) is the splitting field of an Artin-Schreier polynomial, which has the form x^p-x-α, where α is fixed by C_p. The norm of any root of this polynomial is the constant term - α, and the transfer of any root is either 0 when p ≠ 2 or -1 for p = 2.
The arbitrary element of k(C_p/e) is a k(C_p/e)^C_p-linear combination of powers of some fixed root of an Artin-Schreier polynomial. Now observe that the norm of a sum of elements may be expressed as a polynomial combination of the norms of the summands and transfers of products of the summands <cit.>. Hence any subfield containing -α, the lower bound field (the image of k(C_p/e)^C_p under the Frobenius endomorphism), and the image of k(C_p/e) under the transfer contains the norm of every element in k(C_p/e).
Next, since the transfer is additive, it suffices to determine the transfer of an arbitrary k(C_p/e)^C_p-multiple of a root of an Artin-Schreier polynomial. Restricting down to k(C_p/e), we get the sum over the C_p-orbits; since any b ∈ k(C_p/e)^C_p is fixed, we may pull b out of the sum. If p is odd, we obtain zero, but if p = 2, we obtain -b. In this case the image of the transfer is k(C_p/e)^C_p.
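Concretely, with b ∈ k(C_p/e)^C_p fixed and x a root of the Artin-Schreier polynomial, restricting the transfer of bx back down gives
∑_{i=0}^{p-1} b(x+i) = b ( px + p(p-1)/2 ),
which is 0 for p odd and equals b = -b for p = 2.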
Every pure C_p-Tambara field of characteristic p is obtained by choosing a sub-Tambara field of the fixed-point Tambara functor of k(C_p/e) according to one of the following two situations:
* Let C_p act trivially on 𝔽, set k(C_p/e) = 𝔽, and set k(C_p/C_p) to be any subfield of 𝔽 containing the lower bound field.
* Let 𝔽 → 𝔽[x]/(x^p-x-α) be any Artin-Schreier field extension. Set k(C_p/e) = 𝔽[x]/(x^p-x-α) (with C_p acting as the Galois group). If p is odd, choose k(C_p/C_p) to be any subfield of 𝔽 containing both α and the lower bound field. If p is even, then k(C_p/C_p) must be 𝔽.
This concludes the classification of field-like C_p^n-Tambara functors.
|
http://arxiv.org/abs/2409.02543v1 | 20240904090121 | StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models | [
"Wen Li",
"Muyuan Fang",
"Cheng Zou",
"Biao Gong",
"Ruobing Zheng",
"Meng Wang",
"Jingdong Chen",
"Ming Yang"
] | cs.CV | [
"cs.CV"
] |
[1]Equal contribution
Ant Group, Hangzhou, China
{liwen8459,fangmuyuan}@gmail.com {wuyou.zc,gongbiao.gb,
zhengruobing.zrb,darren.wm,jingdongchen.cjd,m.yang}@antgroup.com
StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models
Wen Li, Muyuan Fang, Cheng Zou, Biao Gong, Ruobing Zheng, Meng Wang, Jingdong Chen, Ming Yang
============================================================================================
§ ABSTRACT
Despite the burst of innovative methods for controlling the diffusion process, effectively controlling image styles in text-to-image generation remains a challenging task. Many adapter-based methods impose image representation conditions on the denoising process to accomplish image control. However, these conditions are not aligned with the word embedding space, leading to interference between image and text control conditions and the potential loss of semantic information from the text prompt. Addressing this issue involves two key challenges. Firstly, how to inject the style representation without compromising the effectiveness of the text representation in control. Secondly, how to obtain an accurate style representation from a single reference image. To tackle these challenges, we introduce StyleTokenizer, a zero-shot style control image generation method that aligns style representation with text representation using a style tokenizer. This alignment effectively minimizes the impact on the effectiveness of text prompts. Furthermore, we collect a well-labeled style dataset named Style30k to train a style feature extractor capable of accurately representing style while excluding other content information. Experimental results demonstrate that our method fully grasps the style characteristics of the reference image, generating appealing images that are consistent with both the target image style and text prompt. The code and dataset are available at https://github.com/alipay/style-tokenizer.
§ INTRODUCTION
The field of image generation has experienced remarkable growth since the advent of diffusion-based methods, including notable examples such as DALLE-1/2/3 <cit.>, Stable Diffusion <cit.>, and Midjourney <cit.>. These advancements have paved the way for a diverse range of content control techniques <cit.>, enabling precise manipulation of layout, lines, depth, and other conditions. This not only enhances the stability of diffusion models but also broadens their applicability.
Despite such progress, achieving effortless and effective control over the fine-grained styles of synthesized images remains a formidable challenge. This limitation restricts the practical applicability and convenience of diffusion methods in various applications.
Previous GAN-based methods <cit.> have achieved some level of style control, but the generated effects are difficult to compare with those of diffusion models.
Diffusion-based methods such as Textual Inversion <cit.>, LoRA <cit.>, and Dreambooth <cit.> utilize a small amount of data of the same type to fine-tune pre-trained
text-to-image models to better reflect new aspects in training images. These methods can generate images with similar styles as the training images. However, they are also prone to overfitting with the specific content (e.g., a particular person or object) present in the training images. This makes it challenging to decouple style and content, resulting in difficulties in achieving precise style control.
For precise style control, one intuitive approach is to employ adapter-based techniques like IP-adapter <cit.>,
StyleAdapter <cit.>, InstantStyle <cit.>, etc. These strategies embed the style representation within the UNet architecture by introducing an extra cross-attention layer. Yet, as text and style representations lie in distinct spaces, managing them via individual cross-attention processes may lead to discrepancies in the control signals.
As illustrated in Fig.<ref>, the adapter-based approaches apply text and style conditions simultaneously during the denoising process, which may cause interference between the controls and a loss of semantics.
Thus, the crucial challenge lies in introducing style representation while preserving the integrity of text representation for control purposes.
Leveraging the approach of tokenizing visual features for alignment with linguistic space, such as LLaVA <cit.>, we can improve the handling of intricate details in both images and text. Specifically, tokenizing style elements like text prompts significantly enhances the coordination and control within text prompts. This approach also effectively decouples style and content representations: after tokenization, the style from reference images and content extracted from text prompts remain in their distinct semantic spaces without overlap. Consequently, in this way, we can simultaneously achieve style and content control during generation without any interference.
Another challenging aspect is how to obtain an accurate style representation from a single reference image: most existing methods <cit.>, which simply use a CLIP encoder trained with coarse-grained supervision, struggle with independent style control.
To address this, we have developed a style-focused dataset with over 300 style categories and 30,000 images, all professionally annotated.
In addition, based on the tokenization method we analyzed in the previous paragraph for precise control and decoupling, we have trained a style-specific embedding extractor, enhanced by contrastive learning, to distinguish and represent style nuances. This refinement boosts the encoder's adaptability to new styles and overall robustness.
Our contributions can be summarized as follows:
∙ We introduce StyleTokenizer, a novel method for style control in diffusion models. This approach allows for accurate style control of generated images using a single arbitrary reference image in a training-free manner, while minimizing the impact on the control effectiveness of text prompts. Experimental results demonstrate the outstanding performance of our proposed method compared to other state-of-the-art approaches in this field.
∙ We curate a Style30k dataset comprising over 300 widely distributed style categories, manually collected by professional designers. This dataset includes a total of 30,000 images and, to the best of our knowledge, is currently the largest and most diverse open-source style dataset available. Using this dataset, we train a robust style encoder capable of effectively representing style information based on a single reference image.
§ RELATED WORKS
§.§ Text-to-Image synthesis
Text-to-image synthesis has experienced a phenomenal technological breakthrough in recent years.
DALL-E 1, 2, 3 <cit.> has demonstrated impressive results in text-to-image synthesis by utilizing a text encoder to control auto-regressive transformers and diffusion models. This led to substantially refined and high-visual-quality synthetic images. Stable Diffusion <cit.> and Imagen <cit.> have also shown promising results in text-to-image synthesis by leveraging diffusion models.
Furthermore, StyleGAN-T <cit.> has explored the potential of GANs in text-to-image synthesis and demonstrated remarkable results. These approaches typically involve using text encoders like CLIP <cit.> and GPT <cit.> and subsequently controlling the generators.
There are many attribute control methods derived from text-to-image generation models. ControlNet <cit.> incorporated additional control features into Unet's feature space. Composer <cit.> added extra feature inputs to cross attention and time embedding. Text inversion <cit.> and blip-diffusion <cit.>, on the other hand, controlled attributes by introducing novel text embeddings. These methods have greatly advanced the field of text-to-image synthesis.
§.§ Style Control in Image Generation
Style is a quite subtle attribute of images that is hard to define or even describe by people.
Even though style control has been widely adopted in diverse applications, such as design and short video entertainment, it involves considerable efforts including scaffolding tuning and try-and-error from users. Gatys <cit.> proposed the use of VGG and Gram matrix to extract style features for loss supervision. CycleGAN <cit.> and StarGAN <cit.> established cycle mechanisms to achieve style transfer with a limited set of styles while keeping the content intact. BlendGAN <cit.> employed an additional style encoder to control StyleGAN <cit.> for zero-shot style transfer on face datasets. CAST <cit.> utilized a contrastive loss to extract image styles. ALADIN <cit.> used the AdaIN <cit.> module to extract styles and control the decoder for style image generation. Wu <cit.> attempted to extract style information from visual features by aligning the generated images from diffusion with the style and content components of the prompt.
Some methods leverage prior knowledge by assuming that styles mainly reside in certain feature categories to achieve style control. DiffStyle <cit.> combined diffusion models of both style and content during denoising. ProsPect <cit.> incorporated content and style reference images at different denoising stages in the diffusion process. P+ <cit.> controlled the features of Unet at different resolutions separately for content and style. SDXL <cit.> incorporated an additional prompt module that enables users to specify limited styles. Nonetheless, these prior arts still require some costly efforts of data collection or finetuning from the end user. This limitation has been resolved by our method, which involves extracting style and applying control from a single image.
§ METHODOLOGY
§.§ Overview
Our method is derived within the Stable Diffusion framework, which decouples content and style conditions in the image generation process, resulting in visually appealing and coherent outputs. In this section, we introduce the overall pipeline of our method.
Compared with the traditional Stable Diffusion framework, we introduce two novel modules, as illustrated in Fig. <ref>, a Style Encoder for style representation and a Style Tokenizer for style control. These two modules are trained in two stages. In the first stage, the Style Encoder is trained on a style dataset named Style30K to acquire style representation capability. In the second stage, style representations are extracted from reference images by Style Encoder, and Style Tokenizer learns to convert them into style tokens, which are aligned with text tokens in the word embedding space. Finally, both text tokens and style tokens are concatenated and input to the SD pipeline as a condition to generate the image.
§.§ Style30K Dataset
Describing the style of an image verbally is even challenging for artists. The meaning of image style is very rich and subtle, encompassing various perspectives, such as color distribution, lighting, line styles, artistic styles, brushwork, emotions, etc. These characteristics are visually perceptible but difficult to precisely and comprehensively describe in language. Therefore, extracting features directly from images is a more reasonable choice, rather than relying on text. Existing feature extraction methods often rely on generic semantics or classification features and are not specifically trained for style-related tasks. Hence, we construct a style dataset named Style30K to train a style encoder that focuses on capturing style-oriented features, shown in Fig. <ref>.
While it is challenging to describe image styles in language, people can intuitively judge whether two images share the same style or not. Therefore, we adopt a semi-manual approach to construct the dataset. The collection process of Style30K consists of three stages. In the first stage, we gather images with various styles, where each style is represented by three sample images as queries. Subsequently, we use different embedding extractors to extract the embeddings of these queries and perform retrieval within a large dataset. In the second stage, we manually filter and collect images from the retrieval results that share the same style as the three queries for each style category. Each collected image requires a consensus among three annotators for it to be included. In the third stage, we annotate each collected image with a content prompt using CogVLM <cit.>. In detail, the image and the instruction "Describe the image, only the content of the image, do not include the style of it." are input into the caption model, which yields captions that are solely related to the content of the image. This is done to ensure that the style and content control signals are independent of each other. During the model's training, the prompt provides information related only to content, while the style is provided by the Style Tokenizer.
§.§ Style Encoder
In this section, we describe the process of training the Style Encoder ℰ_s, which extracts the style cue from images I_s and encodes it into style embedding f_s. This embedding is then used to guide the generation process.
Obtaining accurate style representation from a single reference image is a challenging task. Previous methods extract image representation from the CLIP image encoder to enable content and style control. This approach has shown promising results in providing effective control over various visual aspects, including color, subject, layout, and more. However, it lacks the ability to independently control these aspects, particularly style control, as CLIP is trained using coarse-grained semantic information as supervision.
To address this limitation, we use the well-labeled style dataset Style30K to train a style representation extractor capable of accurately representing style while excluding other content information. Given that the style data includes an accurate category label, the style encoder is trained using supervised representation learning. It enforces the model to only focus on category-related information (style) and ignore category-irrelevant information (content). To further enhance the generalization of the style encoder, we employ contrastive loss for supervision. As shown in the left of Fig.<ref>, it allows the model to focus on the distance differences between diverse styles. Different images of the same style are clustered together in the embedding space, images of similar styles are closer, while images of different styles are scattered. This approach enhances the robustness of the style encoder in processing new styles and improving its ability to handle novel style variations.
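The exact form of the contrastive objective is not pinned down above; one natural reading, in which images of the same style category are treated as positives and all other images in the batch as negatives, is a supervised contrastive loss. A minimal PyTorch sketch under that assumption (all names are ours):

import torch
import torch.nn.functional as F

def style_contrastive_loss(features, labels, temperature=0.07):
    # features: (N, D) style embeddings from the style encoder; labels: (N,) style-category ids
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature
    self_mask = torch.eye(features.size(0), dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))                 # ignore self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)      # softmax over all other samples
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)             # keep only same-style pairs
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                          # anchors with at least one positive
    return -(pos_log_prob.sum(dim=1)[valid] / pos_counts[valid]).mean()

Minimizing this loss pulls embeddings of the same style category together and pushes different categories apart, matching the clustering behaviour described above.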
§.§ Style Control
Previous adapter-based methods have the capability of image prompting in the diffusion model. It significantly enhances the ability to generate content that cannot be described in a prompt. These methods incorporate style representation into the Unet module using an additional cross-attention layer. However, they apply text and style conditions simultaneously during the denoising process, which may cause interference between the controls and loss of semantics.
Representations from the word embedding space of SD have rich style control capabilities. On the one hand, Dreambooth <cit.> and Textual Inversion <cit.> have demonstrated that new word embedding outside the existing dictionary can express diverse contents. Yet, these methods require some reference images for tuning and are prone to overfitting with the specific content. On the other hand, carefully crafted descriptions by prompt experts can also yield desired image styles. However, directly using textual descriptions to control the style is still challenging. The text descriptions used during the training of SD lack a detailed description of the style for each image. Besides, image styles encompass a wide range of aspects that are difficult to fully express in natural language.
Therefore, we aim to find a comprehensive and accurate style description that can be applied to each image and is acceptable by diffusion pipelines. Considering that our Style Encoder is already capable of extracting a unique style embedding for any given image, a reasonable approach is to map these styles to representations in the space of word embedding. We utilize a 2-layer MLP named Style Tokenizer T_s to implement this mapping. The Style Tokenizer T_s takes the embedding f_s extracted by the Style Encoder ℰ_s and maps them into style embedding tokens e_s. In the training process, the parameters of the original SD model are frozen, and only the parameters of Style Tokenizer are updated, enabling the mapped embedding e_s to provide a comprehensive and precise representation of image styles. The style embedding e_s is concatenated with word embedding tokens e_t following Eq. <ref> and then fed into SD's text encoder. By doing so, style images can be used as a style prompt when generating images to better describe the style. Besides, the style from reference images and content extracted from text prompts remain in their distinct semantic spaces without overlap.
e_ts = [e_start,e_s,e_t,e_end],
where e_s = T_s(f_s).
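To make the mapping concrete, the Style Tokenizer can be sketched as a small module whose output is prepended to the prompt's word embeddings before SD's text encoder. This is only an illustration: the hidden width, the 512-dimensional style embedding (matching CLIP ViT-B/32), and all names are our own assumptions, while the 8×768 token shape follows the experimental details.

import torch
import torch.nn as nn

class StyleTokenizer(nn.Module):
    # 2-layer MLP mapping a style embedding f_s to n_tokens tokens in the word-embedding space
    def __init__(self, style_dim=512, token_dim=768, n_tokens=8, hidden_dim=1024):
        super().__init__()
        self.n_tokens, self.token_dim = n_tokens, token_dim
        self.mlp = nn.Sequential(
            nn.Linear(style_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, n_tokens * token_dim),
        )

    def forward(self, f_s):                                    # f_s: (B, style_dim)
        return self.mlp(f_s).view(-1, self.n_tokens, self.token_dim)   # (B, 8, 768)

def build_condition(e_start, e_s, e_t, e_end):
    # e_ts = [e_start, e_s, e_t, e_end]: the style tokens sit between the start token
    # embedding and the prompt's word embeddings before entering SD's text encoder
    return torch.cat([e_start, e_s, e_t, e_end], dim=1)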
During the inference process, we employ the classifier-free guidance <cit.>. In order to independently control the weight of both text and style, we adopt a similar approach as InstructPix2Pix <cit.>, as described in Eq. <ref>.
ẽ_θ(z_t, c_t, c_s) = e_θ(z_t, ∅, ∅)
+ s_t · (e_θ(z_t, c_t, ∅) - e_θ(z_t, ∅, ∅))
+ s_s · (e_θ(z_t, c_t, c_s) - e_θ(z_t, c_t, ∅)),
where c_t and c_s represent the text and style conditions. The scales s_t and s_s control the strength of the text and style conditions, respectively.
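Translated into code, the equation above amounts to three noise predictions per denoising step; the sketch below is a direct transcription, with e_theta standing for the noise-prediction UNet and the keyword names chosen by us:

def guided_noise(e_theta, z_t, c_t, c_s, s_t, s_s):
    eps_uncond = e_theta(z_t, text=None, style=None)           # e_theta(z_t, ∅, ∅)
    eps_text = e_theta(z_t, text=c_t, style=None)              # e_theta(z_t, c_t, ∅)
    eps_full = e_theta(z_t, text=c_t, style=c_s)               # e_theta(z_t, c_t, c_s)
    return (eps_uncond
            + s_t * (eps_text - eps_uncond)                    # text guidance strength
            + s_s * (eps_full - eps_text))                     # style guidance strength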
§ EXPERIMENTS
§.§ Experimental Details
We collect 10 million high-quality images on the Internet for model training. The Style Encoder adopts the visual encoder of CLIP ViT-B/32 as the pre-trained backbone. We first use these 10 million images for pre-training, and then use Style30K for supervised training. In terms of style control, we adopt SD v1.5 as the generation model. The style embedding is tokenized by the Style Tokenizer with a shape of 8×768. Then, these tokens are concatenated with text embedding tokens and fed into the text encoder of the SD pipeline.
During this stage, all the images are used for training the Style Tokenizer. When the denoising target image I is from the Style30K dataset, the style reference image I_s is randomly selected from the images within the same style category as I. Otherwise, when I does not have style annotation, the style reference is I itself.
§.§ Qualitative Evaluation
To facilitate a comparison with other methods, we conducted a test on both our method and the previous approaches, including StyTr^2<cit.>, InST<cit.>, CAST<cit.>, SD<cit.> controlled by style prompt, and IP-Adapter<cit.>. To evaluate the performance of these methods in both style control and prompt following, we prepare a benchmark consisting of 52 prompts and 28 style reference images, these prompts are from the settings used in the StyleAdapter <cit.>. The prompts encompass a rich variety of content, including human, animal, object, and scene. The reference images cover some common styles as well as some that are difficult to describe in words. Note that both of them are excluded from the training process. Our aim with the aforementioned setup is to comprehensively evaluate the strengths and weaknesses of the different methods. Some images generated by these methods can be viewed in Fig.<ref>, where each column represents the results produced by different methods using the same prompt and reference image. Below, we provide a detailed analysis of the experimental results.
As shown in Fig.<ref>, StyTr^2 and InST achieve relatively similar performance, successfully capturing the dominant color palette of the reference images. However, their grasp of the overall style such as texture is not very well. As seen in column H, they capture the red color information from the reference image but fail to comprehend the cut-paper style. Furthermore, their image quality is generally inferior to that of other methods. Utilizing style prompts for control facilitates a certain level of style control in simpler style categories, like oil and ink wash paintings, but the absence of a reference image as input led to significant discrepancies in the finer details. For more complex styles that are difficult to articulate, their ability to control style is lost. IP-Adapter produces images with a style very close to the original, but in most cases, it struggles to decouple the content from the reference image, leading to poor prompt-following ability. For instance, in columns B and E, although mountains and sunflowers are generated, the human from the reference images also appear in the output. IP-Adapter's strengths are mainly in image variation and image editing. In contrast, our method demonstrates a high degree of consistency with the reference image in terms of style, including line, texture, color, emotion, and more. It also shows a strong advantage in following text prompts. Moreover, the overall aesthetic quality of the images is superior to that of the previous methods.
§.§ Quantitative Evaluation
In this section, our method is compared with the state-of-the-art approaches using some quantitative metrics for a fair evaluation. We employ the following metrics on the images generated in the Sec.<ref> to evaluate the quality and effectiveness of the generated images:
Text-Image Similarity. We use the CLIP model to extract embedding from generated images and their corresponding text prompts. Then cosine similarity between embedding from the prompt and the generated image is calculated. Higher cosine similarity indicates better capability of the instruction following.
Aesthetic Score.
To assess the aesthetic quality of the generated images, we predict the aesthetic score for each generated image with LAION-Aesthetics Predictor <cit.>. This metric measures the visual appeal and artistic quality of the image. A higher aesthetic score indicates a visually pleasing image.
Style Similarity.
Since there is no generally accepted method for assessing style similarity, we imitate the text-image similarity metric calculated by CLIP. Style embeddings of both the style reference image and the generated image are extracted by the Style Encoder. After that, we compute their cosine similarity. A higher cosine similarity indicates better control of the desired style in the generated images.
User Study.
To assess the style similarity more comprehensively, we conduct user studies. For the generated images produced by each method, we have 20 users (10 professional designers and 10 users) vote anonymously for the image that they believe has the most similar style to the reference image.
The normalized votes (vote rate), serve as the style similarity score.
For each of these metrics, we calculate the average across all generated results to provide an overall evaluation of the performance of the existing style control models. Experimental results are summarized in Tab. <ref>, where we compare our method with the SOTA approaches using the aforementioned evaluation metrics.
Our method significantly outperforms other SOTA approaches in terms of style similarity. In the user study, our method also receives more votes than other methods. These results highlight the effectiveness of our approach in preserving the desired style in the generated images. Furthermore, our method is trained on large-scale high-aesthetic data and thus achieves a higher aesthetic score than the base SD model. As shown in Fig.<ref>, it brings better results in terms of aesthetics than other methods. As for the instruction following, the text-image similarity of our method has comparable performance with the base SD model. It shows that our method does not lead to a decrease in the ability to follow instructions during style control. In summary, the experimental results demonstrate that our method can achieve better style control capability and generate visually appealing images, while the instruction following is not affected by this.
§.§ Evaluation of Style Encoder
We conduct an evaluation of our Style Encoder and compare it with several publicly available feature encoders, namely CLIP <cit.>, VGG <cit.> and BlendGAN <cit.>. The evaluation is performed on a validation set of Style30K, consisting of 12 different style categories with a total of 900 images that are non-repetitive with the training style categories.
We use different methods to extract the style embedding of each image in the validation set, and then visualize the distribution of these embeddings in the representation space, as shown in Fig. <ref>. Style embeddings belonging to different categories are represented by points of different colors and distributed in their own clusters at different locations in space. Our Style Encoder demonstrates the ability to effectively cluster images belonging to the same style category, resulting in compact intra-class distances and large inter-class differences. It indicates that our method has a better ability to present style from images and is robust enough to handle novel style variations.
Additionally, we perform a quantitative evaluation using the Silhouette Coefficient <cit.> and Calinski-Harabasz <cit.> metrics in Tab. <ref>, which are used to evaluate the quality of clustering. Both the visual results and the clustering metrics demonstrate that our method effectively extracts features with better clustering of images of the same style compared with other extractors.
§.§ Ablation Studies
In this section, we conduct ablation studies to assess the effectiveness of our Style Encoder and Style Tokenizer in Fig. <ref> and Table <ref>. Fig. <ref>(b) corresponds to the setting in which the style embedding is not first aligned to the word embedding space by the Style Tokenizer, but is directly concatenated with the text embedding.
Fig. <ref>(c) corresponds to the setting in which the Style Encoder is not used for style representation, and the CLIP visual encoder is instead used directly to encode the image.
Experimental results show that if any one of them is missing, the generated images either have a weakened ability to follow instructions or have poor style consistency.
§.§ Other Application
Since our method can maintain the style in the reference image, if multiple images of different styles are used as references, the fusion between styles produces new styles. We use two styles for blending style in Fig.<ref>. By starting with the palette style as a control and gradually incorporating the sketch style, the images that are generated show a progressive transition from the palette to the sketch style.
§ CONCLUSION
In this work, we propose a novel zero-shot method to precisely control the style of generated images from diffusion models. In order to decouple style and content conditions, we first construct a fine-labeled style dataset called Style30K and propose a Style Encoder that can extract style representation from reference images. Then we propose a Style Tokenizer to align style and text tokens in a uniform space. Finally, the aligned tokens are used as a condition in the denoising process of the diffusion model. Our method offers a flexible and effective solution for incorporating style control in image generation, opening up new possibilities for generating high-quality stylized content.
|
http://arxiv.org/abs/2409.03171v1 | 20240905015829 | MARAGS: A Multi-Adapter System for Multi-Task Retrieval Augmented Generation Question Answering | [
"Mitchell DeHaven"
] | cs.CL | [
"cs.CL"
] |
[email protected]
Darkhive
San Antonio
Texas
USA
§ ABSTRACT
In this paper we present a multi-adapter retrieval augmented generation system (MARAGS) for Meta's Comprehensive RAG (CRAG) competition for KDD CUP 2024. CRAG is a question answering dataset that contains 3 different subtasks aimed at realistic RAG question answering tasks, with a diverse set of question topics, question types, time-dynamic answers, and questions featuring entities of varying popularity.
Our system follows a standard setup for web based RAG, which uses processed web pages to provide context for an LLM to produce generations, while also querying API endpoints for additional information. MARAGS also utilizes multiple different adapters to solve the various requirements for these tasks with a standard cross-encoder model for ranking candidate passages relevant for answering the question. Our system achieved 2nd place for Task 1 as well as 3rd place on Task 2.
[500]Computing methodologies Information extraction
[500]Computing methodologies Multi-task learning
[500]Computing methodologies Natural language generation
MARAGS: A Multi-Adapter System for Multi-Task Retrieval Augmented Generation Question Answering
Mitchell DeHaven
===============================================================================================
§ INTRODUCTION
Retrieval augmented generation (RAG) has been a popular approach for question answering systems for some time <cit.>, and it has recently become a popular approach for a wide range of tasks due to the zero-shot capabilities of large language models (LLMs) given an appropriate prompt and access to the relevant context for the task. Despite the existence of numerous question answering benchmarks, many do not accurately reflect the diverse usage of current RAG systems. Thus, tracking both the efficacy of certain RAG architectures and overall progress remains difficult. The CRAG <cit.> benchmark aims to resolve this with 3 different subtasks representing realistic RAG usage scenarios.
The final key element of the CRAG benchmark is its scoring metric, which explicitly punishes hallucinations. With the rising capabilities of LLMs, their outputs are increasingly taken at face value, despite the known issue of hallucinations. This has led to high-profile incidents causing concern with their use <cit.>. The CRAG score aims to punish hallucinated answers and to encourage returning missing answers, equivalent to the model returning "i don't know", by giving scores of 1, 0, and -1 to correct, missing, and hallucinated answers respectively.
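In code, the per-question scoring rule reduces to the following (the organizers' answer-matching logic is more involved; only the 1/0/-1 assignment is shown):

def crag_score(prediction: str, is_correct: bool) -> int:
    # missing answers ("i don't know") score 0, correct answers +1, hallucinations -1
    if prediction.strip().lower() == "i don't know":
        return 0
    return 1 if is_correct else -1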
To address these tasks, we train an individual adapter for each task, as well as an adapter for the API call generation required for accessing information in the mock API. This approach allows us to keep a single Llama 3 <cit.> model in memory while swapping out adapters based on the current need.
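With the HuggingFace peft library, keeping one Llama 3 model in memory and swapping task-specific LoRA adapters can be sketched as follows; the adapter paths and names below are illustrative:

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "adapters/task1_qa", adapter_name="task1_qa")
model.load_adapter("adapters/task2_qa", adapter_name="task2_qa")
model.load_adapter("adapters/api_calls", adapter_name="api_calls")

model.set_adapter("api_calls")   # first generate the knowledge-graph API call
# ... run generation ...
model.set_adapter("task2_qa")    # then answer the question with the retrieved context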
§ RELATED WORKS
The initial approach of Lewis et al. <cit.> showed the benefits of providing additional text context to seq2seq models for knowledge-intensive NLP tasks. Using BART, they were able to improve question answering performance using dual biencoders for retrieval and training the model jointly, without the need to know which documents were relevant.
Adapters have become increasingly used since they were introduced by Houlsby et al. <cit.>. LoRA <cit.> has become a popular adapter approach, particularly as LLMs have grown substantially larger in recent years. The use of adapters allows modifying a model's output without training the entire network, which substantially reduces the VRAM required when training. Hu et al. <cit.> found that replacing a full weight update with the product of two much lower-rank matrices had minimal impact on performance while further reducing the number of trainable parameters.
Finally, Stickland and Murray <cit.> produced a multi-adapter model based on BERT, an approach that our system follows. In particular, on the GLUE <cit.> benchmark, which is comprised of multiple datasets, they showed that by simply training a task-specific adapter per dataset they could improve BERT's average performance by 0.5 points while only introducing 10% more parameters.
§ CRAG DATASET
CRAG is a question answering dataset aimed at providing a realistic benchmark for RAG systems as they are used in practice, with a diverse set of questions spanning 8 distinct question types and 5 distinct domains. Additionally, two sources of diversity which pose difficulty for LLMs are how dynamic a question's answer is and the popularity of the question's topic. As shown by the baseline systems, real-time answers pose a challenge for RAG systems, which similarly struggle when the topic of the question is less common (referred to as "torso" and "tail" questions).
§.§ Task 1
For the first task the system must process 5 candidate HTML documents for generating answers, reflecting a standard web-based RAG application. A caveat is that the 5 candidates are sampled from the top-10 relevant documents retrieved from web search. Thus, there is no guarantee that the relevant information for answering the question is actually found within the top 5 documents. This creates an interesting challenge for hallucinations, as in some cases the answer should be easily generated by the model without the context from retrieved documents.
§.§ Task 2
Task 2 reflects a more sophisticated RAG scenario, where the system is provided with the same 5 HTML documents as before but now also has access to a knowledge graph, accessible via a REST API. The system must determine which API endpoint to call, with the correct arguments, to retrieve additional relevant context from the knowledge graph.
§.§ Task 3
Finally, Task 3 represents an extension of Task 2, where the system has access to both HTML and the knowledge graph API. However, in this task, the number of HTML documents that are to be processed is 50. This task is meant to measure both the computational efficiency of the approach as well as its ability to filter large amounts of potentially irrelevant information.
§ MARAGS PIPELINE
§.§ Webpage Processing
Our webpage processing pipeline utilizes BeautifulSoup4 <cit.> for generating candidate segments from the HTML documents provided to the system. A common difficulty with RAG systems is determining a process for segmenting documents into smaller chunks to narrow the candidates to relevant sections and to reduce the length of the text sent to the model, given that a single document could exceed the model's context window.
In our case, we utilize the structure of the HTML to provide the segmentation. We traverse the tree structure of the parsed HTML with breadth-first search. Any time the length of the text contained within a node (which includes all of the text of its descendants) is less than 2000 characters, we treat that node's text as a segment. If a leaf node is reached with more than 2000 characters, the text is split on whitespace into as many segments as needed such that each segment is under the threshold. The segment length was determined via inspection of HTML documents and their associated segmentations; thus, future work could treat this as a hyperparameter and tune it for performance.
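A condensed sketch of this traversal is given below; the 2000-character threshold comes from the description above, while the helper names and exact splitting details are illustrative:

from collections import deque
from bs4 import BeautifulSoup

MAX_CHARS = 2000

def segment_html(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    segments, queue = [], deque([soup])
    while queue:
        node = queue.popleft()
        text = node.get_text(" ", strip=True) if hasattr(node, "get_text") else str(node).strip()
        if len(text) <= MAX_CHARS:
            if text:
                segments.append(text)          # node small enough: emit as one segment
        elif getattr(node, "contents", None):
            queue.extend(node.contents)        # descend one level (breadth-first)
        else:
            words, current = text.split(), []  # long leaf: split on whitespace
            for w in words:
                if current and len(" ".join(current + [w])) > MAX_CHARS:
                    segments.append(" ".join(current))
                    current = []
                current.append(w)
            if current:
                segments.append(" ".join(current))
    return segments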
§.§ API Call Generation
For Task 2 and 3, the knowledge graph mock API is available to be used for gathering additional information. The difficulty, however, is not only determining which API endpoint is the most appropriate, but also the proper arguments and their formatting for getting valid results from the API.
Each API endpoint was transformed to a Python function with relevant documentation describing the purpose of the endpoint, the arguments, and what the endpoint returned. Each function also has an associated formatting function, which takes the returned JSON and converts it into segmented strings. The doc strings for each Python function are used to provide additional information to help guide the model on which one is the most appropriate to use.
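The shape of such a wrapper is sketched below with a purely illustrative endpoint; the real mock-API routes, argument names, and return payloads differ:

import requests

KG_API = "http://localhost:8000"   # address of the mock knowledge-graph API (illustrative)

def get_movie_info(movie_name: str) -> dict:
    """Look up a movie in the knowledge graph.

    Args:
        movie_name: title of the movie to query.
    Returns:
        A JSON dictionary describing the movie (e.g., cast, release year).
    """
    resp = requests.post(f"{KG_API}/movie/get_movie_info", json={"query": movie_name})
    return resp.json()

def format_movie_info(payload: dict) -> list[str]:
    # flatten the returned JSON into "key: value" strings so the results can be
    # ranked alongside the web-page segments
    return [f"{k}: {v}" for k, v in payload.items()]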
For training a model to generate API calls with associated arguments, we use LoRa <cit.> to train one of the several adapters we use with Llama 3 8B. For generating the target string for training, we first use Llama 3 to generate an initial prediction for the API call. Any initial prediction that successfully calls a function is retained as a target, regardless of whether or not the relevant information for the question is contained in the returned JSON. Initial predictions that fail to make a successful call are inspected and manually corrected if the correct function call is clear from the initial prediction and the question. Again, the manually modified targets are evaluated only on successfully calling an endpoint, though not validating that the relevant information is returned by the API. Any question where the target cannot be quickly modified is changed to a target of "None".
We acknowledge that this approach to annotation is not optimal, as it likely results in successful, but incorrectly selected, API endpoint calls. However, manually annotating each question to determine the correct API call and validating that the returned information is indeed relevant would have been too time consuming given the size of the dataset.
§.§ Candidate Ranking
For candidate ranking, we attempted 4 different candidate ranking approaches. We utilized TF-IDF, a biencoder, cross-encoder, and an ensemble of mentioned approaches (using mean rank as the ranking metric). Our TF-IDF implementation is based on Scikit-Learn <cit.>. The biencoder and cross-encoder are from the SentenceTransformer <cit.> library, specifically the "multi-qa-MiniLM-L6-cos-v1" [https://huggingface.co./sentence-transformers/multi-qa-MiniLM-L6-cos-v1] and "ms-marco-MiniLM-L-6-v2" [https://huggingface.co./cross-encoder/ms-marco-MiniLM-L-6-v2] respectively.
Evaluating candidate ranking in isolation is difficult, as relevant information is not labeled, so using the system accuracy and CRAG score is the most straightforward way to compare the systems. To test the various approaches, we use the base Llama 3 8B model with no adapters for each retrieval approach and use the accuracy metric to determine the best performing approach. We use accuracy instead of the CRAG score at this stage, as we think it is a better representation of how often the relevant information is retrieved. For a test set, we randomly select 500 samples from the Task 1 dataset.
The results of this experiment are shown in Table <ref>. From the results, the cross-encoder is the best performing system, thus we used it for our retriever. We suspect that with proper tuning, TF-IDF and ensembling would be much more performant overall, but as mentioned, running extensive experimentation is difficult as it requires LLM generation to get an overall accuracy score. Using an LLM to label passages as relevant or not is a possible approach to allow for tuning of just the retriever; however, we did not explore this.
Despite the cross-encoder being the most computationally expensive approach, we found it fast enough to process the candidates within the required 30 seconds per sample. For Task 3, we found it necessary to use Python's multiprocessing library to process multiple HTML documents simultaneously to meet the runtime requirement.
§.§ Retrieval Augmented Generation
Finally, with ranked candidates, we use Llama 3 8B to augment generation with the relevant context for the question. We ran experiments with 2 different prompt structures, the primary difference between them being the ordering of the question and context.
Our initial prompt structure placed the question first, followed by all of the retrieved candidates, prior to the Llama model response. We noticed that, due to how much context was provided, the model would occasionally forget the question being asked. For example, a question like "What is the elevation of Honolulu?" would result in an answer of "Honolulu population is 343,421", indicating the model remembered the subject of the question but incorrectly answered with the population rather than the elevation. Our subsequent prompt structure placed the question after the context, which resolved the issue.
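A sketch of the second (context-first) prompt layout is below; the instruction wording and the abstention phrasing are illustrative, not the exact prompt used.

```python
def build_prompt(question: str, passages: list) -> str:
    """Place the retrieved context first and the question last, the ordering
    that resolved the forgotten-question failure described above."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Use the references below to answer the question. "
        "If they do not contain the answer, reply 'i don't know'.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```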
For training Llama 3, we trained LoRa models for each task individually. Given the penalization of hallucinations in the scoring metric, we take steps to mitigate hallucinations introduced by finetuning, as it has been observed that finetuning LLMs can itself be a source of hallucinations <cit.>. This likely applies to RAG systems in cases where the expected answer is not answerable given the provided context, i.e., no relevant information is given and the question is not answerable without further context, yet the model is trained to output the target answer regardless. Thus, for our training setup, we first relabel the target for training samples in cases where our candidate retrieval system likely does not provide the correct information and Llama does not know the answer without that information. We use the provided dev set for training, with the 500-sample set used for the retrieval comparison treated as our holdout set.
Our initial approach for determining which training samples need relabeling has been explored previously <cit.>. A common and simple approach to filter or relabel incorrectly["Incorrectly" here simply means in the context of our retrieval system, not that the provided answer is not true.] labeled samples is to use a particular sample's training loss after training to the point of overfitting. High-loss examples after overfitting likely indicate incorrect labels and can thus be filtered out. Not all samples with high loss have incorrect labels; some are simply hard examples. Yet typically the benefit of disposing of incorrectly labeled samples outweighs the cost of discarding difficult ones.
Initial experiments, however, indicated that this method did not work well on finance-based questions. Further analysis would be required for a definitive answer, though we suspect this is because the loss on hallucinated answers with numeric outputs is likely smaller than on typical string outputs. For example, for the question "What was Apple's closing price today?" with a hypothetical correct answer of "$203.51", a prediction of "$204.52" would likely not be filtered by this method. Compare that with a question such as "Which movie won Best Picture in 1990?" with an answer of "Driving Miss Daisy" and a prediction of "Dances with Wolves": the loss will be comparatively much higher.
We instead identify these samples by first running the system with the base Llama 3 model, using a prompt that instructs it to always produce a prediction, for each of the 4 candidate retrieval approaches mentioned previously. We use GPT-4o to evaluate whether any of the generated answers are correct. If any are correct, the original label is retained for training; otherwise "i don't know" is used as the target label. In the case of false-premise questions, we always retain the original label as the target. We repeat this process for each task, given that each has access to a different data source, to generate a training dataset for each LoRa adapter.
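The relabeling rule can be summarized by the sketch below, where `judge_correct` stands in for the GPT-4o correctness check and `predictions` are the base-model answers under the four retrieval setups; both names are placeholders.

```python
def relabel_target(answer: str, predictions: list, judge_correct, is_false_premise: bool) -> str:
    """Keep the original target if the question is false-premise or if the base
    model answered it correctly under any retrieval setup; otherwise train the
    model to abstain."""
    if is_false_premise:
        return answer
    if any(judge_correct(pred, answer) for pred in predictions):
        return answer
    return "i don't know"
```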
We use the llama-recipes repository <cit.> for training the LoRa adapters, utilizing the default LoRa configuration. The only modifications were changing the LoRa rank from 8 to 256 and increasing the weight decay from 0.01 to 1.0.
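A sketch of the resulting adapter configuration using the `peft` library is shown below. Aside from the rank, the values mirror common llama-recipes defaults; the target modules are an assumption about that default configuration, and the weight decay is passed to the optimizer rather than to the adapter config.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,                                 # raised from the default of 8
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed default projection targets
    task_type="CAUSAL_LM",
)
# weight_decay=1.0 (up from 0.01) is set on the optimizer, not in LoraConfig.
```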
We demonstrate the effectiveness of relabeling in Table <ref>. We ran 3 different answer generation setups on the 500-sample Task 1 holdout set we created: an unmodified Llama 3 8B model, a LoRa model using the original targets, and a LoRa model with relabeled targets. Using the original targets provides the best accuracy but also worsens hallucinations relative to the base model. Using the relabeled targets hurts accuracy but substantially reduces hallucinations, yielding the best CRAG score among the three.
§ RESULTS
As part of the competition, a manual evaluation was conducted on user submissions. The automatic evaluation throughout relied on scoring via GPT-3.5 Turbo, given that correct user submissions may not be exact matches to the expected answer. However, issues such as prompt injection still pose problems for automatic evaluation via LLMs. The results of our system across the various aspects of the dataset are shown in Figure <ref>. As the results show, our system suffers from many of the problems the dataset is designed to expose in most RAG systems.
Similar to the baseline systems for CRAG, finance was the most challenging domain. The exact reason warrants further analysis, though contributing factors likely include LLMs' known issues with number processing and simple mathematics, and the fact that much online finance data is stored not as plain text but as visualizations such as graphs.
Dynamism proves to be the most challenging question categorization, with model performance steadily decreasing as questions become more dynamic. Real-time questions are the most challenging question category of any of the breakouts. Our prompt structure did not include any metadata provided by the HTML documents, such as publish date or access date, which likely would have improved performance on dynamic questions, although probably not significantly.
The performance difference between head, torso, and tail questions appeared less substantial than we originally expected, though performance clearly drops off as popularity falls. Interestingly, Task 3 underperforms the other tasks on head, torso, and tail questions. We suspect that including substantially more search results introduces overlapping entities and subjects, at which point conflicting information becomes difficult to resolve.
Finally, the most interesting results in the question type results are the false premise category. Our system was able to achieve scores similar to the SOTA systems featured in the CRAG paper, despite obviously being a much smaller system overall. Interestingly, the false premise questions were the only type where our training setup always kept the original target label, rather than mapping the target to "i don't know".
§ FUTURE WORK
During the competition we observed instances of catastrophic forgetting caused by our attempts to reduce hallucinations. For instance, the question "Who has had a longer musical career, Shakira or Machine Gun Kelly?" is answerable by Llama without context, simply based on the knowledge it has of the two artists. However, after LoRa training, questions like this were often answered with "i don't know" when the answer was not discoverable in the retrieved information. Methods to prevent this are something we are interested in pursuing in future work.
Additionally, we hope to explore larger Llama models (70B+) for this task in the future. We were unable to get the 70B model running in the competition compute environment, so we did not spend much time on larger models. However, it is very likely that moving to larger models would provide a substantial improvement over the 8B model.
§ CONCLUSION
In this work we presented MARAGS, a multi-adapter solution for the CRAG dataset. We demonstrated the effectiveness of training individual LoRa adapters for the 4 tasks in the pipeline, specifically API call generation and answer generation for Tasks 1, 2, and 3. CRAG presents a variety of tasks and questions that allow tracking the progress of methods used to build RAG systems. The penalization of hallucinations is a unique and important feature as AI systems become increasingly common throughout society, since hallucinations hurt user trust in these systems. We discussed our methods for reducing hallucinations, but they are not without cost, as in some cases the model fails to output previously known knowledge. Clearly, balancing these two factors is key to leveraging LLMs to their full potential while also improving user trust.
We are grateful to the KDD Cup organizers, Meta, and AIcrowd for all the work that goes into hosting a successful competition.
|
http://arxiv.org/abs/2409.02392v1 | 20240904024104 | Building Math Agents with Multi-Turn Iterative Preference Learning | [
"Wei Xiong",
"Chengshuai Shi",
"Jiaming Shen",
"Aviv Rosenberg",
"Zhen Qin",
"Daniele Calandriello",
"Misha Khalman",
"Rishabh Joshi",
"Bilal Piot",
"Mohammad Saleh",
"Chi Jin",
"Tong Zhang",
"Tianqi Liu"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Deep Brain Ultrasound Ablation Thermal Dose Modeling with in Vivo Experimental Validation
Zhanyue Zhao, Benjamin Szewczyk, Matthew Tarasek, Charles Bales, Yang Wang, Ming Liu, Yiwei Jiang, Chitresh Bhushan, Eric Fiveland, Zahabiya Campwala, Rachel Trowbridge, Phillip M. Johansen, Zachary Olmsted, Goutam Ghoshal, Tamas Heffter, Katie Gandomi, Farid Tavakkolmoghaddam, Christopher Nycz, Erin Jeannotte, Shweta Mane, Julia Nalwalk, E. Clif Burdette, Jiang Qian, Desmond Yeo, Julie Pilitsis, and Gregory S. Fischer
Z. Zhao, B. Szewczyk, C. Bales, Y. Wang, M. Liu, Y. Jiang, K. Gandome, F. Tavakkolmoghaddam, C. Nycz, and G. S. Fischer are with Worcester Polytechnic Institute, Worcester, MA e-mail: [email protected], [email protected].
B. Szewczyk, J. Qian, and J. Pilitsis are with the Department of Neurosurgery, Albany Medical Center, Albany, NY
M. Tarasek, C. Bhushan, E. Fiveland, and D. Yeo are with GE Global Research Center, Niskayuna, NY
Z. Campwala, R. Trowbridge, Z. Olmsted, S. Mane, J. Nalwalk, J. Qian, and J. Pilitsis are with the Department of Neuroscience and Experimental Therapeutics, Albany Medical Center, Albany, NY
P. M. Johansen and J. Pilitsis are with Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL
E. Jeannotte is with Animal Resources Facility, Albany Medical Center, Albany, NY
G. Ghoshal, T. Heffter, and E. C. Burdette are with Acoustic MedSystems, Inc., Savoy, IL
This research is supported by National Institute of Health (NIH) under the National Cancer Institute (NCI) under Grant R01CA166379 and R01EB030539.
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Large language models (LLMs) have demonstrated remarkable capabilities across a variety of language tasks, showcasing their broad-ranging abilities in natural language processing. Notable models include ChatGPT <cit.>, Claude <cit.>, and Gemini <cit.>. However, despite these advances, even the most advanced closed-source LLMs still struggle with complex reasoning tasks that require multiple rounds of decision making. In particular, for the representative task of mathematical problem solving, LLMs often fail at basic arithmetic and symbolic computations <cit.>. To address this issue, recent studies recommend the integration of external tools (e.g., calculators, computational Python libraries, and symbolic solvers) to augment the LLMs' mathematical problem-solving capabilities <cit.>. Specifically, by integrating natural language reasoning with the use of these external tools, these enhanced LLMs can receive external messages from tool interactions and reason based on both previously generated tokens and external messages, which significantly improves their performance on mathematical tasks <cit.>.
These successes of tool-integrated LLMs lead to a natural research question: how can we better train LLMs to combine tool usage with intrinsic reasoning to tackle complex reasoning tasks? For the mathematical problem solving task, existing works primarily focus on synthetic data generation (by a strong teacher model) and supervised fine-tuning (SFT), as seen in ToRA <cit.>, MetaMathQA <cit.>, MAmmoTH <cit.>, and Open-MathInstruct <cit.>. These methods and synthetic datasets have yielded significant improvements in test accuracy on standard benchmarks like MATH <cit.> and GSM8K <cit.>.
Built on strong SFT models, Reinforcement Learning from Human Feedback (RLHF) has proven to be a key technique to elicit LLMs' knowledge during the post-training stage and has become a standard practice in the LLM training pipeline <cit.>. Broadly speaking, the RLHF learning paradigm, which was originally designed for aligning large language models (LLMs) with human values and preferences <cit.>, is distinct from SFT as it learns from relative feedback <cit.>. It has notably enhanced the capabilities of models like ChatGPT, Claude, and Gemini, enabling them to generate responses that are more helpful, harmless, and honest <cit.>. Inspired by RLHF's success in general chat applications, in this paper, we explore RLHF for improving LLMs' mathematical problem-solving abilities when equipped with external tools. In particular, since deep RL methods (e.g., the proximal policy optimization, PPO algorithm <cit.>) are often sample inefficient and unstable <cit.>, our goal is to derive direct preference learning algorithms that directly learn from the preference dataset <cit.>.
Contribution. We begin by formulating the learning process as a Markov decision process (MDP), distinct from the contextual bandit approach typically used in RLHF for making general chatbots without external environment interactions <cit.>. Then, we derive the optimality condition of the optimization problem and develop multi-turn direct alignment algorithms (M-DPO and M-KTO) that incorporate external messages, where the primary modification is to mask out irrelevant tokens during training. Furthermore, we extend our approach to its online iterative variants, which recent works demonstrated to be promising <cit.>. Finally, we evaluate our approach through case studies using augmented training sets from MATH and GSM8K benchmarks, employing various base models such as Gemma <cit.>, CodeGemma <cit.>, and Mistral <cit.>. For instance, the performance of a supervised fine-tuned Gemma-1.1-it-7B model increased from 77.5% to 83.9% on GSM8K and from 46.1% to 51.2% on MATH. Similarly, a Gemma-2-it-9B model improved from 84.1% to 86.3% on GSM8K and from 51.0% to 54.5% on MATH. These empirical results indicate a significant improvement in performance over standard SFT models, demonstrating the potential of RLHF in complex reasoning task. We also provide a comprehensive recipe for the practical implementation of our online iterative multi-turn methods, and make our models, datasets, and code publicly available for further research and development.
§.§ Problem Formulation
We denote the prompt as x ∈𝒳 and assume that the interactions run for up to H rounds. At the first step, a prompt x is sampled from some distribution d_0 as the initial state s_1=x (we use the terminology “state” instead of “context” because we are concerned with an MDP instead of a contextual bandit here). Then, at each step h ∈ [H],
* Action: the agent observes the current state s_h, which is the history of the first h-1 interactions with the external environment, and takes an action a_h according to some policy π_h(·|s_h) ∈Δ(𝒜). Typically, the action is in the ReAct style, consisting of a reasoning step f_h and an execution step e_h (e.g., writing Python code) <cit.>.
* Observation: in response to the agent's action, the environment then returns an observation o_h based on the history s_h and current action a_h.
Then, we transit to a new state, which is the history up to the step h+1:
s_h+1 = (s_h, a_h, o_h) = (x, a_1, o_1, ⋯, a_h, o_h),
and a new step begins. This process repeats for H rounds in total and eventually, we collect a trajectory:
τ = (x, a_1, o_1, ⋯, o_H-1, a_H).
See Figure <ref> for an example. The framework presented here is a Markov decision process (MDP), which offers a distinct approach from the contextual bandit model discussed in <cit.>. Formally, we define the following MDP.
An MDP is specified by a tuple (𝒮, 𝒜, H, ^*, d_0), where 𝒮 is the state space, 𝒜 is the action space, H is the episode length[In practice, the episode length can vary across the trajectories. We may additionally define that the shorter trajectories that output the final answer are in an absorbing state. We consider a fixed episode length to simplify the subsequent mathematical analysis. ], ^*={^*_h}_h=1^H are the state transition kernels, and d_0 denotes the distribution of the prompt s_1=x. For each h ∈ [H], ^*_h(·|s_h,a_h) is the distribution of the next state given the state-action pair (s_h,a_h) at step h. In our setup, a trajectory τ = (x, a_1, o_1, ⋯, a_H) is generated by: s_1=x ∼ d_0 and for all h ∈ [H], a_h ∼π_h(·|s_h), o_h ∼^*_h(·|s_h,a_h) where s_h+1 = (s_h, a_h, o_h). When there is no ambiguity, the abbreviation s_h+1∼^*_h(·|s_h,a_h) is also adopted.
The MDP formulation of preference learning was recently studied in <cit.> but with a focus on the single-turn chat task and without explicitly considering the external messages. A unique feature of RLHF, as opposed to traditional RL studies, is the relative feedback obtained through comparisons between two trajectories that share the same initial state (prompt). We follow <cit.> to assume that the preference signal is generated by the so-called Bradley-Terry model.
We denote τ/x = y, where the prompt is excluded from the trajectory. We assume that there exists a utility function of the trajectory u^* such that given (x, y^1, y^2), one response y^1 is preferred over another response y^2, denoted as y^1 ≻ y^2, with probability
(y^1 ≻ y^2 | x, y^1,y^2 ) = σ(u^*(x,y^1)-u^*(x,y^2)),
where σ is the sigmoid function σ(z) = 1/(1+exp(-z)). Also, given (x, y^1, y^2) we denote the sampled preference signal as z with z=1 indicating y^1 ≻ y^2 while z=0 indicating y^2 ≻ y^1.
Under this definition, we only assume access to the trajectory-level preference, but not an action-level one. This should distinguish our approach from a straightforward extension of the single-turn RLHF <cit.>, which fixes a prompt that may include mid-trajectory steps such as (x, a_1, o_1, a_2, o_2) and look into the next single step a_3. However, we remark that the utility function itself, can be defined in a step-wise manner. To further illustrate the notion of the BT model in trajectory-level comparisons, we provide some examples of the utility function here.
[Result Checking in Math]
Since the math reasoning datasets GSM8K <cit.> and MATH <cit.> have the gold answer, we can check the final answer to determine the reward. In this case, u^*(x, y) = 𝕀(a_H = gold answer).
[Outcome-supervised Reward Models (ORMs)]
Final result checking is not perfectly reliable because we can encounter false positive solutions that have the correct answer but incorrect reasoning trajectory. Instead, as shown in <cit.>, we can uniformly sample n trajectories per prompt and train an ORM to predict whether each solution is correct or not. Then, we can take the ORM prediction at the final token as the utility function.
[Process-supervised Reward Model (PRM) and PRM without Human Annotation.]
<cit.> argues that if we can provide step-by-step supervision signal, the utility function is more effective. However, this requires more fine-grained human labels to give rating for each step of the trajectory. <cit.> studies how to automatically construct the process-labeled data for math problems with gold answers. Specifically, for s_h,a_h, we generate N trajectories with final answers [a_H^j]_j=1^N. We can define the proxy reward value:
r(s_h, a_h) := ∑_j=1^N 𝕀(a_H^j = gold answer) /N.
We may also use a hard version
r(s_h, a_h) := 𝕀(There exists a j_0: a_H^j_0 = gold answer).
Then, we can train the PRM by
ℒ_PRM(θ) = _τ∼[ ∑_h=1^H r(s_h,a_h) log r_θ + (1-r(s_h,a_h)) log (1-r_θ)].
In this case, we can use u^*(x, y) = min_h ∈ [H] r_θ(s_h, a_h) <cit.>, where r_θ is the constructed step-wise reward function.
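A small sketch of this automatic step-level labeling is given below; `sample_completion` and `final_answer` are assumed helpers for rolling out a trajectory from (s_h, a_h) and extracting its final boxed answer, and the rollout count n is illustrative.

```python
def step_proxy_reward(state, action, sample_completion, final_answer,
                      gold_answer, n=8, hard=False):
    """Monte-Carlo proxy reward for an intermediate step: roll out n completions
    from (s_h, a_h) and score by the fraction (soft version) or the existence
    (hard version) of correct final answers."""
    finals = [final_answer(sample_completion(state, action)) for _ in range(n)]
    hits = sum(ans == gold_answer for ans in finals)
    return float(hits > 0) if hard else hits / n
```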
Notations. To improve the readability of this work, we provide a notation table in Table <ref>.
§.§ Related Work
LLMs for Mathematical Problem Solving. A line of works proposes to prompt LLMs to solve the complex reasoning task in a step-by-step manner, known as the Chain-of-Thought (CoT) prompting <cit.>, which has been a standard practice in reasoning task. However, LLMs often struggle with basic arithmetic and symbolic manipulations when relying solely on internal knowledge and natural language reasoning, as measured by standard benchmarks <cit.>. To overcome these limitations, several studies have explored the use of external tools to enhance the LLMs' problem-solving abilities. This includes calculators <cit.>, symbolic solvers <cit.>, and code interpreters <cit.>. A particularly effective approach is the Program-based method (PoT), which performs CoT reasoning by writing code and using the output of the written code as the final answer <cit.>. This method significantly outperforms traditional CoT-based techniques in mathematical problem solving. However, PoT also faces challenges in planning and error handling, where natural language reasoning is more suitable <cit.>. In view of this, tool-integrated reasoning is proposed to combine the natural-language-based intrinsic reasoning with the external tools <cit.> and has achieved great progresses in recent studies <cit.>. While these efforts have primarily focused on synthetic data generation for tool-integrated reasoning, our work aims to further boost the performance of tool-integrated LLMs by RLHF.
RLHF and RLHF Algorithms. The predominant approach in RLHF is the deep RL method, Proximal Policy Optimization Algorithms (PPO) <cit.>, which leads to the great successes in Chat-GPT <cit.>, Gemini <cit.>, and Claude <cit.>. However, applying PPO requires extensive efforts and resources <cit.>, often beyond the scope of open-source capabilities. In view of this, alternative approaches have been developed. The rejection sampling fine-tuning was first proposed with the name RAFT (reward ranked fine-tuning) in RLHF <cit.> and was later extended to machine translation <cit.> and mathematical problem solving <cit.>. Its theoretical advantage was explored in <cit.>. Subsequently, another long line of works proposes direct preference learning algorithms, including SLiC <cit.>, DPO <cit.>, IPO <cit.>, KTO <cit.>, and GPO <cit.>. These algorithms bypass the reward modeling step and optimize carefully designed loss objectives directly on the preference dataset, hence the name direct preference learning. There are also some works focusing on more general preference structure <cit.> beyond the reward-based framework or post-processing of the model <cit.>.
The newly proposed direct preference learning algorithms have largely advanced the RLHF area, particularly the post-training of open-source models, with the Zephyr project as a notable example <cit.>. After this, a long line of work <cit.> demonstrates the effectiveness of on-policy sampling (the samples are generated by the policy to be trained) and online exploration in enhancing direct preference learning. In particular, the online iterative DPO <cit.> and its variants <cit.> have made state-of-the-art open-source models <cit.>, or even the industry models <cit.>. Despite these advancements, most algorithms are proposed and designed for single-turn interactions and chat. The scenarios beyond single-turn chat remain largely unexplored in the existing literature. One exception is the very recent work by <cit.>, which studies multi-turn chat task under general preferences. In contrast, in this paper, we aim to explore the use of RLHF in multi-turn tasks that incorporate interactions with external tools. Meanwhile, they derive a mirror-descent-based policy optimization algorithm, which is also different from ours.
RLHF for Math Problem Solving. Algorithms traditionally used in general chatbot applications have been adapted to enhance the reasoning capabilities of LLMs in mathematical contexts. For instance, RAFT (Reward-rAnked Fine-Tuning) <cit.> is extensively employed for synthetic data generation, whether through on-policy (self-improving) <cit.> or off-policy (knowledge distillation) methods <cit.>. The reward signal in these scenarios is typically derived from either final result checking or Outcome-supervised Reward Models (ORMs) <cit.>. A novel approach by <cit.> introduces Process-supervised Reward Models (PRMs), which provide feedback at each step of the Chain-of-Thought, demonstrating significant improvements over ORMs when combined with rejection sampling <cit.>.
In addition to the RAFT, the GRPO algorithm proposed in <cit.> studies multi-turn math problem solving but focuses on the CoT format without external inputs and the resulting model achieves the state-of-the-art performance in its class. The GRPO is a variant of Reinforce <cit.> thus falling into the scope of deep RL methods.
Further advancements include adapting direct preference learning algorithms to mathematical problem solving. For instance, <cit.> have applied the original DPO or KTO by taking the trajectory completion as a “meta” action. <cit.> further adapt the online iterative DPO originally designed for chat <cit.> and achieve better performance for CoT reasoning. Inspired by the success of PRMs, recent studies have explored generating proxy step-wise labels for the intermediate steps of the reasoning trajectories. For instance, <cit.> leverage Monte Carlo Tree Search (MCTS) and use the estimated Q value to generate the proxy labels for the intermediate steps. <cit.> proposes to use AI feedback like GPT-4 <cit.> to find the first error step in the trajectory. Meanwhile, <cit.> identifies a trajectory with the correct final answer and no errors as preferable, and prompts the SFT model with a high temperature, starting from some intermediate step to collect a rejected trajectory with errors <cit.>. Finally, a very recent study by <cit.> proposes to use MCTS with a backward iteration from the final leaf node to compute the proxy unregularized value of each node. Preference pairs are then extracted from the tree by fixing the prefix and comparing the next single reasoning step. Then, they run the original DPO on these intermediate actions with the proxy labels from MCTS. To summarize, these works present different ways of preference data collection and apply the original DPO algorithm (with some additional marginal loss and regularization adapted from the literature), thereby differing from our work in both algorithmic concepts and application scope. In contrast, we study preference learning in the context of trajectory-level comparison, where we derive the optimality condition and introduce a multi-turn DPO within an online iterative framework, specifically for tool-integrated mathematical problem solving. However, we remark that while we focus on the trajectory-level comparison, the preference signal itself can be generated in a step-by-step supervision (see Section <ref> for the detailed examples). When preference signals for partial trajectories with shared prefixes are available, our method can also adapt to learn these step-level signals (see the optimality condition in (<ref>)). In particular, the algorithmic design presented in this paper can be readily combined with the MCTS-based data collection strategy outlined in recent literature, which we leave for future work.
§ ALGORITHMS DEVELOPMENT
We develop the main algorithms of this paper in this section. We proceed to handle the general MDP formulation presented in Section <ref>, which subsumes the tool-integrated mathematical reasoning problem as a special case. Therefore, the algorithms may also be applied to more general scenarios with external messages.
§.§ Planning with a Fixed Model: Optimality Condition
Following <cit.>, we first establish the connection between any model = (, , H, , d_0, u) and its associated optimal policy. In particular, we are interested in the following KL-regularized planning problem with respect to a reference policy π_:
_π J(π; , π_)= _x ∼ d_0_a_h ∼π_h(·|s_h), o_h ∼_h(·|s_h,a_h)[ u(x, y) - η∑_h=1^H ( π_h(·|s_h), π_, h(·|s_h))].
In the single-turn case (i.e., H=1 and without transitions ), <cit.> show that the optimal solution with respect to a utility function u admits a closed-form solution, which is the Gibbs distribution (see Lemma <ref>):
π_(a_1|x) ∝π_(a_1|x)exp(u(x,a_1)/η).
Moving from the single-step to the multi-turn scenario, we first show that we are still dealing with Gibbs distributions, but in a dynamic programming manner. The results essentially follow from the study of entropy-regularized MDPs <cit.>.
To illustrate the idea, we first consider the simplest case of H=2, where the model is allowed to call the tool only once. Then, our goal is to maximize the following target:
_x ∼ d_0[ _a_1 ∼π_1(·|x)[_o_1 ∼_1(·|x, a_1)_a_2 ∼π_2(·|s_2) u(s_2, a_2) - η(π_2(·|s_2), π_, 2(·|s_2))_Inner Loop] - η(π_1(·|s_1), π_, 1(·|s_1)) ].
The idea is to take a backward iteration from h=H=2 to h=1. Specifically, when we fix s_2 and consider the inner loop, we can leverage Lemma <ref> to solve
π_,2(·|s_2) = _π_2_a_2 ∼π_2(·|s_2)(u(s_2,a_2) - η·(π_2(·|s_2), π_, 2(·|s_2) ) ) ∝π_, 2(·|s_2) ·exp(u(s_2,·)/η).
Then, we can define the value of the inner loop associated with π_,2 as
V_,2(s_2) : = _a_2∼π_, 2 (·|s_2)[ u(s_2,a_2) - η(π_,2(·|s_2), π_, 2(·|s_2)) ]
Q_, 1(s_1,a_1) := _o_1∼_1(·|s_1,a_1)[V_, 2(s_2)].
Then, for step h=H-1=1, we consider the following KL-regularized optimization problem:
π_, 1(·|s_1) = _π_1_a_1 ∼π_1(·|x)[Q_, 1(s_1,a_1) - η(π_1(·|s_1), π_, 1(·|s_1)) ] ∝π_,1(·|s_1) ·exp(Q_, 1(s_1, ·)/η).
By construction, it can be observed that {π_, h}_h=1^2 is optimal as it maximizes the KL-regularized target.
For general H-step MDP, we can repeat the above process for H times starting with V_, H+1 = 0 where we recursively define
Q_, h(s_h, a_h) = u(s_H, a_H), if h = H,
_o_h ∼_h(·|s_h, a_h) [V_, h+1(s_h+1)], if h ≤ H-1,
Here the optimal policy and the V-values are given by
π_, h(a_h|s_h) := 1/Z_h(s_h)π_, h(a_h|s_h) ·exp(Q_, h(s_h, a_h)/η) (Gibbs distribution of Q_, h)
V_, h(s_h) := _a_h ∼π_, h(·| s_h)[Q_, h(s_h, a_h) - η·(π_, h (·|s_h), π_, h(·|s_h))]
= ηlog_π_, h(a'_h|s_h)exp(Q_, h (s_h,a_h')/η),
where Z_h(s_h) = ∑_a_h ∈π_, h(a_h|s_h) ·exp(Q_, h(s_h, a_h)/η) is the normalization constant. The second equality in the definition of the V-value is from Lemma <ref>. Then, by definition, [π_, h]_h=1^H is the optimal policy. Essentially, we solve H Gibbs distributions in terms of the Q-values[The definitions of Q-values are different from that of <cit.> so that the optimal policy can be interpreted as the Gibbs distribution of Q-values.].
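For a small tabular instance, this backward recursion can be sketched as follows; the array layout and the choice of eta are illustrative, and a shifted exponential is used for numerical stability.

```python
import numpy as np

def kl_regularized_backward_induction(u, P, pi_ref, eta):
    """Compute the step-wise Gibbs policies for a tabular H-step problem.

    u      : (S, A) terminal utility, used only at step H
    P      : list of H-1 arrays of shape (S, A, S), the transition kernels
    pi_ref : list of H arrays of shape (S, A), the reference policy
    """
    H = len(pi_ref)
    S, A = pi_ref[0].shape
    V_next = np.zeros(S)
    policies = [None] * H
    for h in reversed(range(H)):
        Q = u if h == H - 1 else P[h] @ V_next          # Q_h(s, a)
        logits = np.log(pi_ref[h]) + Q / eta
        logits -= logits.max(axis=1, keepdims=True)
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)             # Gibbs policy pi*_h
        Qmax = Q.max(axis=1)
        V_next = Qmax + eta * np.log(                   # V_h(s) = eta log E exp(Q/eta)
            (pi_ref[h] * np.exp((Q - Qmax[:, None]) / eta)).sum(axis=1)
        )
        policies[h] = pi
    return policies
```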
§.§ Planning with a Fixed Model: Practical Algorithm
While (<ref>) can be approximately solved with standard deep RL methods, here we are interested in the implementation in a direct preference learning manner like SLiC <cit.>, DPO <cit.> or IPO <cit.>. The existing attempts <cit.> take the completion y as a “meta action” and plug it into the single-step DPO loss. In other words, they treat the external messages as the regular texts generated by the model itself. Another natural idea is to plug the probability of the trajectory into the single-step DPO loss. To be specific, for a pair (x,τ^w,τ^l), where τ^w refers to the preferred (i.e., winning) trajectory, we have
- logσ(η[log_π(τ^w|x)/_π_(τ^w|x) - log_π(τ^l|x)/_π_(τ^l|x)] )
= - logσ(η[log∏_h=1^H π_h(a^w_h|s^w_h) _h(o^w_h|s^w_h,a^w_h)/π_, h(a^w_h|s_h^w) _h(o^w_h|s^w_h,a^w_h) - log∏_h=1^H π_h(a^l_h|s^l_h) _h(o^l_h|s^l_h,a^l_h)/π_, h(a^l_h|s_h^l) _h(o^l_h|s^l_h,a^l_h)] )
= - logσ(η∑_h=1^H [logπ_h(a^w_h|s_h^w)/π_, h(a^w_h|s_h^w) - logπ_h(a^l_h|s_h^l)/π_, h(a^l_h|s_h^l)] ).
Unfortunately, the resulting algorithm does not always lead to the optimal policy as we explain next. In particular, we can solve the Q-values as
Q_, h(s_h, a_h) = logπ_, h(a_h|s_h)/π_, h(a_h|s_h) + ηlog_π_, h(a'_h|s_h)exp(Q_, h (s_h,a_h')/η)
= logπ_, h(a_h|s_h)/π_, h(a_h|s_h) + V_, h(s_h),
where two equalities uses the definition of the optimal policy π_, h and V-value V_, h in (<ref>), respectively. Furthermore, by the definition of Q-values Q_,h in (<ref>), we have
_o_h ∼_h(·|s_h, a_h) V_, h+1(s_h+1) = logπ_, h(a_h|s_h)/π_, h(a_h|s_h) + V_, h(s_h), if h ≤ H-1
u(s_H, a_H) = logπ_, H(a_H|s_H)/π_, H(a_H|s_H) + V_, H(s_H).
Summing over h ∈ [H], we have
u(s_H, a_H) = η∑_h=1^H logπ_, h(a_h|s_h)/π_, h(a_h|s_h) + ∑_h=1^H [V_, h(s_h) -_o_h ∼_h(·|s_h, a_h) V_, h+1(s_h+1) ]
= η∑_h=1^Hlogπ_, h(a_h|s_h)/π_, h(a_h|s_h)_term (A) + V_, 1(s_1)_term (B) + ∑_h=1^H-1[V_, h+1(s_h+1) -_o_h ∼_h(·|s_h, a_h) V_, h+1(s_h+1) ]_term (C).
Here, term (A) is the counterpart of ηlogπ(a_1|s_1)/π_(a_1|s_1) in the single-step DPO derivation and term (B) will be cancelled if we consider the reward difference of two trajectories with the same prompt s_1 = x. Unfortunately, in practice, term (C) is typically not feasible to directly compute. Especially, some simple math with the Chebyshev's Inequality leads to that with probability at least 0.9,
|C| ≤ 4 [∑_h=1^H-1σ_h^2]^1/2,
where σ_h^2 is the conditional variance of V_, h+1(s_h+1) -_o_h ∼_h(·|s_h, a_h) V_, h+1(s_h+1). Therefore, the bias term (C) is related to the randomness of the external environment.
For most cases of tool-integrated LLMs for mathematical reasoning, i.e., the focus of this work, luckily the code execution result is determined by the history (the codes written by the LLMs). In other words, given the history s_h, the external observation is deterministic, which leads to term (C)=0. Thus, with a dataset consisting of (x, τ^w, τ^l), the following multi-turn DPO (M-DPO) loss can be adopted:
ℒ_(θ) = -∑_(x, τ^w, τ^l) ∈logσ(η∑_h=1^H [logπ_θ, h (a^w_h|s_h^w)/π_, h(a^w_h|s_h^w) - logπ_θ, h(a^l_h|s_h^l)/π_, h(a^l_h|s_h^l)] ),
We emphasize again that although the loss presented in (<ref>) is identical to the one in (<ref>), a rigorous derivation procedure (rather than a direct plug-in) is provided. To the best of our knowledge, (<ref>) is new in the context of multi-turn reasoning task with external messages. In particular, it is noted that such a M-DPO loss is only valid upon deterministic transitions, i.e., term (C) = 0.
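In code, the loss reduces to the usual DPO objective computed on masked per-token log-probabilities. A minimal PyTorch sketch is given below, where the masks are 1 on assistant-generated tokens and 0 on the prompt and the external observations; tensor names, shapes, and the default eta are illustrative.

```python
import torch.nn.functional as F

def m_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, mask_w, mask_l, eta=0.1):
    """Multi-turn DPO loss on a batch of preference pairs.

    logp_* / ref_logp_* : per-token log-probs under the policy / frozen reference,
                          shape (batch, seq_len)
    mask_*              : 1 on model-generated tokens, 0 elsewhere
    """
    margin_w = ((logp_w - ref_logp_w) * mask_w).sum(-1)  # sum_h log pi/pi_ref, winner
    margin_l = ((logp_l - ref_logp_l) * mask_l).sum(-1)  # same for the loser
    return -F.logsigmoid(eta * (margin_w - margin_l)).mean()
```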
Moreover, since (<ref>) implies that, with term (C)=0, the implicit reward is given by η∑_h=1^Hlogπ^*_h(a_h|s_h)/π_, h(a_h|s_h), a multi-turn version of KTO <cit.>, denoted as M-KTO, can also be naturally derived:
ℒ_(θ) = _x, y ∼[λ_y - v(x,y) ],
where
u_θ(x, y) = η∑_h=1^Hlogπ_θ, h(a_h|s_h)/π_, h(a_h|s_h),
z_0 = _x' ∼, τ' ∼π_θ(·|x') ∑_h=1^H (π_θ(·|s_h), π_(·|s_h) ),
and
v(x,y) = λ_+ σ(η (u_θ(x, y) - z_0) ) if y ∼ y_desirable | x, and
v(x,y) = λ_- σ(η (z_0 - u_θ(x, y)) ) if y ∼ y_undesirable | x.
Here λ_+ and λ_- are two hyper-parameters. We notice that <cit.> developed an online iterative version of KTO for the CoT format reasoning task. Here we extend it to build the tool-integrated reasoning agent.
The above discussions, in particular, M-DPO and M-KTO losses provided in (<ref>) and (<ref>), are focused on deterministic observations due to the deterministic nature of tool-integrated LLMs for mathematical reasoning. In contrast, some other applications may encounter stochastic observations, e.g., multi-turn chats with the external message provided by a human or another LLM <cit.>. In these scenarios, (<ref>) is biased and cannot lead to the optimal policy since term (C)≠ 0. Instead, one should first construct a value network based on the Bellman equations provided in (<ref>) and (<ref>), similar to the approach in <cit.>. Subsequently, term (C) can be estimated using Monte-Carlo methods and serve as an adaptive margin in the preference training. Particularly, the distinctions between direct preference learning algorithms and classical deep RL methods become less clear. The exploration of this more complex algorithm and its application to general multi-turn learning scenarios is left for future research.
We note that the MDP formulation above and related discussions have been previously derived by <cit.> in the context of either token-wise MDP or more general MDP with deterministic transition but their focuses are all on the single-turn chat tasks. Although the mathematical formulations appear similar, our primary focus lies on tool-integrated reasoning tasks that incorporate additional external messages {o_h}_h=1^H-1.
§.§ Learning with Online Iterative Training
In the literature on direct preference learning, a long line of work shows that online single-turn RLHF significantly outperforms its offline counterpart, both for direct preference learning <cit.> and for DRL-based approaches or rejection sampling fine-tuning <cit.>. Motivated by these successes, we propose to further incorporate online interactive learning into the multi-turn RLHF studied in this work. In the following, we illustrate the proposed ideas from two main aspects: two learning objectives and one unified algorithmic framework.
Learning objective. We consider two different learning objectives. The first one is the KL-regularized target:
max_π_x ∼ d_0_a_h ∼π(·|s_h), o_h ∼^*_h(·|s_h,a_h)[ u^*(x, y) - η∑_h=1^H ( π(·|s_h), π_0(·|s_h))],
i.e., max_π J(π; ^*, π_0) where ^* = (, , H, ^*, d_0, u^*) is the groundtruth environment and π_0 is the initial policy (e.g., from SFT) that RLHF starts from.
This target is widely adopted in practice <cit.> and requires us to search for the optimal policy only at a fixed KL ball centered at the SFT policy π_0 <cit.>.
In contrast, the second one is the non-regularized target, i.e., directly optimizing the reward:
max_π_x ∼ d_0_a_h ∼π(·|s_h), o_h ∼^*_h(·|s_h,a_h)[ u^*(x, y) ] .
This target is the standard one in canonical RL studies <cit.>. One motivation for this target is that in the reasoning task, the reward function is more interpretable (e.g. final result checking) compared to the chat task.
Additionally, we note that a stronger KL regularization in the target (<ref>) is known to be beneficial for mitigating the over-fitting issue and forgetting on out-of-domain tasks <cit.>. On the other hand, (<ref>) allows the model to move farther away, thus achieving better in-domain performance. Thus, from one perspective, the choice between the above two targets can be viewed as a trade-off between out-of-domain and in-domain performance. This intuition is also verified by later experiments, where optimizing the second target in (<ref>) leads to better performance on in-domain test sets. In the rest of this section, we discuss both learning objectives to fully develop the multi-turn preference learning framework. We also conduct an ablation study on these objectives in the experimental section.
Algorithmic framework. We present a general online iterative algorithmic framework in Algorithm <ref>. This framework is termed as Online Iterative Multi-turn Gibbs Sampling
from Human Feedback (M-GSHF) to highlight the online iterative training process and the optimal condition derived in (<ref>) that the optimal policy is a layer-wise Gibbs distribution, which generalizes the bandit formulation in <cit.>. Specifically, starting from π_0, at each iteration, we first collect a pair of trajectories by the current policy pair, where the preference signal is also revealed according to Definition <ref>. Then, we update our policy pair given the data collected so far and the next iteration begins. We now discuss some features of the framework as follows.
Policy choice for exploration-exploitation trade-off. We update our behavior policies in a non-symmetric way. The first agent, which aims to exploit the historical information gathered so far, plans with respect to the empirically best model on the historical dataset to get π_t^1, where the planning algorithms have been discussed in Section <ref>, e.g., optimizing the M-DPO or M-KTO loss in (<ref>) or (<ref>). However, it is widely recognized in RL studies <cit.> that simply exploiting the historical data by following the empirically best model is not sufficient to obtain a good final policy; it is also necessary to explore the environment so that new information can be collected to facilitate subsequent learning, i.e., the exploration-exploitation trade-off. While the main agent targets exploitation, we design the second agent, in contrast, to strategically incorporate into its policy choice the uncertainty of the future relative to π_t^1 given the historical information collected so far. We call the policy of the second agent π_t^2 an exploration policy because it serves to explore the underlying environment and facilitate the first agent's learning. In practice, this principle of exploration is generally interpreted as maximizing the difference between the two behavior policies or increasing the diversity of the collected data. We summarize some popular heuristic exploration policies adopted in online iterative RLHF practice:
* Mixture sampling: in the Claude project <cit.>, the authors choose to use the checkpoints from different training steps to collect data;
* Inference parameters tuning: in the LLaMA project <cit.>, the authors carefully tune the sampling temperature to balance data diversity and data quality;
* West-of-n sampling: <cit.> samples n responses per prompt and extract the best one and the worst one (based on some ranking criteria) to construct a preference pair.
We will explore the mixture sampling in the experimental section and also provide a theoretical justification in the next subsection.
Reference model choice for controlling regularization level. Although two different learning targets are discussed in (<ref>) and (<ref>) separately, we note that one general algorithmic framework can be adopted, with the reference model choice acting as a hyper-parameter to control the regularization level and account for the two targets:
* KL-regularized target in (<ref>): if we fix the reference model as the initial policy, i.e., π_t, = π_0, ∀ t∈ [T], we always search the optimal policy within the KL ball centered at π_0, and thus optimize the KL-regularized target.
* Non-regularized target in (<ref>): in contrast, inspired by the mirror descent <cit.>, if we update the reference policy every iteration to be the policy learned in the last iteration, i.e., π_t, = π^1_t-1, ∀ t∈ [T], the cumulative update can make the model to move away from the original π_0 (while a constraint is made on the per-iteration update magnitude) and we thus optimize the non-regularized target.
A graphical illustration is provided in Figure <ref> to facilitate the understanding.
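The resulting training loop can be summarized by the following sketch; the function names `sample`, `annotate_pairs`, and `train_m_dpo` are placeholders for the generation, annotation, and optimization steps described in the experiments section, and the per-prompt sample counts follow the mixture-sampling setup used there.

```python
def online_iterative_m_gshf(pi_0, prompt_splits, kl_regularized=False):
    """Sketch of the online iterative M-GSHF loop with mixture sampling."""
    pi_main, pi_prev = pi_0, pi_0
    data = []
    for prompts in prompt_splits:            # one disjoint prompt split per iteration
        # Mixture sampling: most trajectories from the current policy, the rest
        # from the previous iteration's policy, to increase diversity.
        trajectories = sample(pi_main, prompts, n=20) + sample(pi_prev, prompts, n=10)
        data += annotate_pairs(trajectories)              # (x, tau_w, tau_l) tuples
        pi_ref = pi_0 if kl_regularized else pi_main      # reference-model choice
        pi_prev, pi_main = pi_main, train_m_dpo(pi_main, pi_ref, data)
    return pi_main
```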
§.§ Theoretical Results
In this section, we show that the multi-turn RLHF problem can be solved in a statistically efficient manner under standard assumptions from the learning theory literature. In particular, for generality, we target the most challenging scenario with stochastic and unknown transitions, while, as mentioned above, multi-turn mathematical reasoning with external tools falls into a relatively easier regime with deterministic transitions. Here we mostly study the KL-regularized target due to the lack of theoretical research on it. The other target of optimizing the rewards has been theoretically studied in <cit.>, and the techniques for analyzing mirror-descent-style algorithms and the corresponding guarantees have also been developed in <cit.>, which can be adapted to preference feedback. Also, to ease the presentation, we consider the scenario with batch size m=1, while the results can be easily generalized to larger batches.
First, to measure the online learning process, we define the optimal policy as
π^* := argmax_π J(π):= J(π; ^*, π_0),
and introduce the standard notion of regret as
Reg(T):=∑_t∈ [T]J(π^*) - J(π^1_t),
which represents the cumulative performance loss over T steps of the learned policies [π_t^1]_t=1^T compared against the optimal policy π^*. In addition, we assume a bounded utility u^*(x, y) ∈ [0, B] for all (x, y) to maintain a reasonable utility regime. Also, it is assumed that we have access to the following policy improvement oracle, which is analogous to the one considered in <cit.>.
For any model = (, , H, , d_0, u) and a reference function π_, we can compute the optimal policy associated with the model [π_, h]_h=1^H iteratively as in (<ref>).
The overall algorithm, i.e., the theoretical version of online iterative M-GSHF, is also summarized in Algorithm <ref>. At each round t, with = ∪_i =1^t-1_i as the aggregated dataset, it starts with performing a maximum likelihood estimation (MLE) of the reward function u^* over a set , whose elements are bounded in [0, B], as
û_t = _û∈ L_t(û) := ∑_(x, τ^1, τ^2, z) ∈∪_i =1^t-1_i[zlog(σ(û(τ^1) - û(τ^2))) + (1-z)log(σ(û(τ^2) - û(τ^1))) ],
and also an MLE of the transition kernel ^* over a set as
_t = _∈ L_t() := ∑_(π, τ) ∈∪_i =1^t-1_ilog^π(τ),
where ^π(τ) denotes the probability of trajectory τ under policy π and transition kernel . With the obtained model _t = (û_t, _t), the Oracle defined in Definition <ref> is called with the reference policy π_ set as the initial policy π_0, whose output is adopted as the main policy π^1_t.
Then, we specify how to choose a theoretically sound exploration policy π_t^2. The previous work of <cit.> on single-turn RLHF has demonstrated the intuition that the exploration policy should be in charge of collecting information about the uncertain parts of the environment, and it is thus often selected to maximize an uncertainty measurement. In the multi-turn RLHF setup considered in this work, the following proposition serves as the cornerstone for finding a suitable uncertainty measurement to decide the exploration policy. In particular, we can observe that the optimal policy is parameterized by the optimal Q-function. If a different set of Q-functions is adopted for policy parameterization, we can bound its performance as follows.
If considering a set of Q-functions [Q̂_h]_h=1^H and a reference policy π_ with the induced policy π̂ as
π̂_h(a_h|s_h) ∝π_,h(a_h|s_h) ·exp(Q̂_h(s_h, a_h)/η),
and the corresponding set of V-functions [V̂_h]_h=1^H as
V̂_h(s_h) = _a_h∼π̂_h(·|s_h)[Q̂_h(s_h, a_h)] - η(π̂_h(·|s_h), π_,h(·|s_h)), V̂_H+1(s_H+1) = 0,
for any comparator policy π, it holds that
J(π) - J(π̂) = _d_0,π, ^*[u^*(s_H, a_H)] - _d_0, π̂, ^*[u^*(s_H, a_H)]
+ ∑_h∈ [H]_d_0, π, ^*[V̂_h+1(s_h+1) - Q̂_h(s_h, a_h)] - ∑_h∈ [H]_d_0, π̂, ^*[V̂_h+1(s_h+1) - Q̂_h(s_h, a_h)]
- η·∑_h∈ [H]_d_0, π, ^*[(π_h(·|s_h), π̂_h(·|s_h))],
where the expectation _d_0, π, ^* is with respect to the prompt and response (i.e., the trajectory) generated following d_0, ^* and π.
Based on Proposition <ref>, the exploration policy π_t^2 is selected as
π_t^2 = _πmax_ũ∈_t, ∈_t _d_0, π, [ũ(s_H, a_H)] - _d_0, π^1_t, [ũ(s_H, a_H)] - (_d_0, π, [û_t(s_H, a_H)] - _d_0, π^1_t, [û_t(s_H, a_H)])_uncertainty measurement of reward estimation
+ ∑_h∈ [H]_d_0,π, [ V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h, a_h)]_uncertainty measurement of transition estimation,
where _t and _t are two confidence sets defined as
_t = {u∈: L_t(u) ≥ L_t(û_t) - c_1 log(||T/δ)},
_t = {∈: L_t() ≥ L_t(_t) - c_1 log(||T/δ)}
with c_1 denoting an absolute constant here. Note that for the theoretical convenience, we have assumed and are finite here, which can be extended to the infinite case using standard discretization techniques.
It can be observed that π_t^2 is selected to maximize a combination of uncertainties from the estimation of both rewards and transitions. If the transitions are known (i.e., there is no need to estimate them), the uncertainty from the transition estimation diminishes, which leads to an uncertainty measurement similar to the one adopted in <cit.>.
The following theorem establishes a rigorous guarantee for the regret incurred.
Assuming u^*∈ and ^*∈, with probability at least 1-δ, we have that
(T) ≲ κ^-1B√(d_Tlog(||T/δ)) + B^2Hξ(d_, T, c_2log(||HT/δ))
- η·∑_h∈ [H]_d_0, π^*, ^*[(π^*_h(·|s_h), π^1_t, h(·|s_h))],
where κ:= 1/(2+ exp(-B)+ exp(B)), c_2 is an absolute constant, d_ is the Eluder coefficient defined in Definition <ref> while d_ and ξ(·) are from the generalized Eluder-type condition defined in Definition <ref>.
We note that the Eluder coefficient and the generalized Eluder-type condition are standard and well-adopted conditions in theoretical studies of RL <cit.> and also of RLHF <cit.>. Moreover, for a broad class of RL problems (see <cit.> for more details), the Eluder coefficient d_ is small and the condition is satisfied with ξ(d_, T, c_2log(||HT/δ)) ≲√(d_Tlog(||HT/δ)), which implies that the regret of the theoretical version of Algorithm <ref> is sublinear in T, further evidencing its statistical efficiency.
§ EXPERIMENTS
§.§ Experiment Setup
Task and datasets. We use the test sets of MATH <cit.> and GSM8K <cit.> to measure the model's ability to solve mathematical problems. The MATH dataset includes 5K problems across diverse mathematical fields such as algebra, geometry, probability, number theory, and calculus. The GSM8K test set consists of 1319 grade-school math word problems, which are generally simpler than those in the MATH dataset. Examples from each dataset are as follows:
* GSM8K: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?
* MATH: Find the center of the circle with equation x^2 - 6x + y^2 + 2y = 9.
To effectively solve these problems, the model needs to perform multi-turn reasoning and arithmetic operations before arriving at the final answer. To construct the training prompt set, we follow <cit.> and use an augmented prompt set based on the 7.5K training problems of MATH and the 7.47K training problems of GSM8K. In particular, we use the prompts from MetaMathQA <cit.> and MMIQC <cit.>. The new questions include rephrased questions, backward questions (starting with the final answer and reasoning backward to determine an unknown variable in the original question), and questions bootstrapped by in-context learning and iterative question composing <cit.>. We delete duplicate questions and also ensure that none of the questions from the test sets of MATH and GSM8K were used. Eventually, we have 60K training prompts, which we randomly split into three disjoint sets for iterative training. We also reserve a set of 1K prompts for model selection during training.
Base models. We train a range of base models, including Gemma-1.1-it-7B <cit.>, CodeGemma-1.1-it-7B <cit.>, Mistral-7B-v0.3 <cit.>, and Gemma2-it-9B. We use the pre-trained version of Mistral instead of the instruction version because the chat template of its huggingface checkpoint is not consistent with that of their own code base, so we start from the pre-trained model and fine-tune it ourselves.
Data format and generation. We format the data as a multi-turn chat, where the user initially asks the LLM a question and provides the messages returned by the Python interpreter in the subsequent user turns. In each model turn, the model reasons based on the history gathered so far and can either output a final answer enclosed in \boxed, or call the Python interpreter by writing code wrapped in ```python and ```. After receiving the response of the model, we return the execution result of the code if the model calls the tool, and stop if the model outputs the final answer or reaches the maximal number of rounds H (6 in our setting). See Figure <ref> for an illustration. We generated N=30 samples per prompt for each iteration using a temperature of 1.0, without top-K or top-p sampling. We employ a mixture sampling strategy, where the up-to-date model generates only 20 trajectories and the remaining 10 trajectories are collected using the model from the previous iteration. For the initial iteration, we employed models fine-tuned for 3 epochs and 1 epoch, respectively, to conduct mixture sampling. Intuitively, mixture sampling helps to improve the diversity of the collected samples and has been employed in previous RLHF practice <cit.>. For all data generation, we adopt the following constraints: (1) in each turn, the model can generate up to 512 tokens; (2) the maximal number of steps is H=6; (3) the maximal number of tokens for each trajectory is 2048.
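A simplified version of this collection loop is sketched below; `generate_turn` and `run_python` stand in for the chat-templated generation call and the sandboxed code executor, and the regular expression simply extracts the fenced code block from the model turn.

```python
import re

CODE_FENCE = "`" * 3  # the ```python ... ``` delimiters described above

def collect_trajectory(model, question, max_turns=6, max_new_tokens=512):
    """Alternate model turns and Python-interpreter turns until a \\boxed{...}
    answer appears or the turn limit is reached."""
    pattern = re.compile(CODE_FENCE + r"python\n(.*?)" + CODE_FENCE, re.DOTALL)
    history = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        reply = model.generate_turn(history, max_new_tokens=max_new_tokens)
        history.append({"role": "assistant", "content": reply})
        if "\\boxed" in reply:                   # final answer produced
            break
        match = pattern.search(reply)
        if match is None:                        # neither answer nor runnable code
            break
        history.append({"role": "user", "content": run_python(match.group(1))})
    return history
```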
Supervised fine-tuning (SFT). We first fine-tune the model for the tool-integrated reasoning task <cit.>, using a subset of the Open-MathInstruct dataset, which was generated by the permissively licensed model through in-context learning. The problems are from the training sets of the MATH and GSM8K datasets. We restrict the number of samples for each question to 50 and remove nearly duplicate responses, eventually obtaining 510K samples in the SFT dataset. We train the models for at most 4 epochs with a learning rate of 5e-6 for the Gemma instruct models <cit.> and a learning rate of 1e-5 for the Mistral-v0.3 model <cit.>. The learning rates are determined by searching over {2e-6, 5e-6, 1e-5}. We use the pretrained model of Mistral because the chat template of the Mistral instruct models was not consistent across different code bases (huggingface and the official one) at the time of our experiments. We use a cosine learning rate scheduler and set the warm-up steps to 100. The samples are packed into blocks of length 4096 to accelerate training, and a global batch size of 64 is used. We also mask all the user messages (i.e., the prompt and the messages returned by the Python interpreter) during training. It takes roughly 10-15 hours when training with 8xA100 80G GPUs. The checkpoint at the end of the third epoch is used for Gemma and the checkpoint at the end of the second epoch is used for Mistral as the starting point for RLHF. This is because these checkpoints outperform the previous-epoch ones by a considerable margin and are very close to the next ones. An ablation study on the SFT epochs is also included.
Data Annotation. For each prompt, we first divide the responses into the winning set G^w and the losing set G^l by checking the final answer. In practice, we observe that the model can memorize the final answer and output it even though the reasoning path itself is incorrect. To mitigate this issue, we include a heuristic filtering process. First, we delete all trajectories in the winning set where the returned message in the second-to-last round indicates that the code has bugs but the model simply ignores it and still predicts the ground-truth answer. Then, we delete responses in both the winning set G^w and the losing set G^l if they are longer than 2048 tokens. Finally, we randomly sample one trajectory from G^w and one from G^l to construct a pair, or add them to the training set of the KTO algorithm. For each iteration, we typically obtain 15K-20K samples because some prompts may not have any correct answer. We note that it is possible to leverage AI feedback like Gemini <cit.> or GPT-4 <cit.> to further verify the correctness of the trajectory step by step, or to construct a PRM <cit.> to rank the trajectories, which we leave for future work.
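The pairing step can be sketched as follows; `final_answer`, `last_tool_message_errored`, and `num_tokens` are assumed helper functions for extracting the boxed answer, checking the last interpreter message for error output, and counting tokens.

```python
import random

def build_preference_pair(prompt, trajectories, gold_answer, max_tokens=2048):
    """Split sampled trajectories by final-answer correctness, apply the
    heuristic filters described above, and sample one (winner, loser) pair."""
    winners, losers = [], []
    for traj in trajectories:
        if num_tokens(traj) > max_tokens:
            continue
        correct = final_answer(traj) == gold_answer
        if correct and last_tool_message_errored(traj):
            continue  # correct answer despite a buggy execution: likely memorized
        (winners if correct else losers).append(traj)
    if winners and losers:
        return prompt, random.choice(winners), random.choice(losers)
    return None  # the prompt is dropped when either set is empty
```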
Implementation of M-DPO and M-KTO. To implement M-DPO, we simply set the labels of all the user-turn tokens to -100 and mask their log-probabilities in the subsequent loss computation. We train the model for at most 1 epoch and tune the learning rate in {2e-7, 4e-7, 7e-7, 1e-6} using the first iteration of iterative training. Eventually, a learning rate of 4e-7 is used for the Gemma-1.1 models and 2e-7 is used for the Gemma-2 and Mistral models. The global batch size is 32 with 40 warm-up steps. We evaluate the model every 50 training steps on the split prompt set, and the best model is typically obtained between 150 and 600 steps, which is expected because the prompts for SFT and the prompts for RLHF overlap. This has also been observed in previous work on RLHF for building general chatbots <cit.>. Further exploration of prompt scaling is also left for future work. The hyper-parameters of M-KTO are mostly the same as those of M-DPO. We also set λ_+ = λ_- = 1 following the original KTO paper <cit.>. The RLHF experiments of this paper are run on 8xA100 80G GPUs, with an additional machine with 8xA100 40G GPUs used for data collection and model evaluation. The main experiment of this paper can be reproduced in 24-48 hours with this setup. We defer some other implementation details to Appendix <ref> due to space constraints.
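The user-turn masking and the resulting pairwise loss can be illustrated with the sketch below. It assumes a HuggingFace-style causal LM interface and pre-computed label tensors in which prompt and interpreter tokens are already set to -100; batching, padding, and the KTO variant are omitted.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100

def masked_logprob(model, input_ids, labels):
    """Sum of log-probabilities over assistant tokens only.

    `labels` equals `input_ids` except that every user-turn token (prompt
    and interpreter messages) is set to -100, so those positions drop out.
    """
    logits = model(input_ids).logits[:, :-1]
    labels = labels[:, 1:]
    mask = labels != IGNORE_INDEX
    logp = torch.gather(F.log_softmax(logits, dim=-1), 2,
                        labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (logp * mask).sum(-1)

def mdpo_loss(policy, ref, batch, eta=0.1):
    """Pairwise M-DPO loss on one (chosen, rejected) pair; a sketch only."""
    pi_w = masked_logprob(policy, batch["chosen_ids"], batch["chosen_labels"])
    pi_l = masked_logprob(policy, batch["rejected_ids"], batch["rejected_labels"])
    with torch.no_grad():
        ref_w = masked_logprob(ref, batch["chosen_ids"], batch["chosen_labels"])
        ref_l = masked_logprob(ref, batch["rejected_ids"], batch["rejected_labels"])
    return -F.logsigmoid(eta * ((pi_w - ref_w) - (pi_l - ref_l))).mean()
```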
§.§ Main Results
We evaluate the models in the zero-shot setting and report the main results in Table <ref>.
Baselines. The existing literature mainly focuses on synthetic data generation and teaches the models to use the external tool by supervised fine-tuning on the collected data. We use the results from <cit.> as baselines because we use the same SFT dataset, so the results are generally comparable. For the CoT baselines, we use the Wizardmath models from <cit.>. We also include reward-ranked fine-tuning (RAFT) as a baseline <cit.>, which is also known as rejection sampling fine-tuning in the literature <cit.>. RAFT first collects N trajectories per prompt, filters out the low-quality data (by a reward function), and fine-tunes on the selected trajectories. Another baseline is single-turn online iterative DPO and KTO <cit.>, which ignore the problem structure (i.e., the external messages) and treat the trajectory as a whole. In implementation, this means that we do not mask the user turns, so the tokens of the external messages also contribute to the loss.
From the first two sections in Table <ref>, we first observe that the tool-integrated LLMs significantly outperform their CoT counterparts with only SFT, demonstrating the benefits of leveraging external tools. In the subsequent discussions, we focus on the comparison within the scope of tool-integrated LLMs.
Iterative M-DPO and M-KTO considerably improve the SFT models. We observe that for all four base models, after the iterative training with M-DPO or M-KTO, the resulting model outperforms its starting SFT checkpoint by a considerable margin on both GSM8K and MATH. In particular, with M-DPO, the aligned Gemma-1.1-it-7B model attains accuracies of 83.9% and 51.2% on GSM8K and MATH, respectively, and is comparable to the open-source Open-MathInstruct-finetuned CodeLLaMA-2-70B (slightly worse on GSM8K but also slightly better on MATH). Moreover, the aligned Gemma-2-it-9B model achieves accuracies of 86.3% and 54.5% on GSM8K and MATH, surpassing all of the open-source models trained with Open-MathInstruct in the 7B to 70B range. Overall, our framework can robustly further boost the tool-integrated models' ability on top of supervised fine-tuning.
Iterative M-DPO and M-KTO surpass existing RLHF baselines. We also observe that iterative M-DPO and M-KTO surpass the other existing RLHF baselines. First, they consistently and significantly outperform the RAFT algorithm across all four base models, which is known to be a robust and competitive baseline in the literature <cit.>. This is because the RAFT algorithm only utilizes the positive signal by imitating the correct trajectories, while the DPO-based and KTO-based algorithms further leverage the negative signal from the incorrect trajectories. We note that the SFT stage in our pipeline can also be viewed as an application of RAFT, an idea that further dates back to expert iteration <cit.>. Consequently, our results should be interpreted as showing that, on top of the first SFT stage, algorithms that exploit the negative signal are more sample efficient. Moreover, while online iterative single-turn DPO (KTO) <cit.> also boosts performance, it is generally worse than the multi-turn version. This suggests that learning to predict the off-policy external messages returned by the code interpreter usually has a negative impact on the improvement of reasoning ability. Essentially, this corresponds to the fact that when deriving the optimality condition of the KL-regularized optimization problem, we are not allowed to optimize the external messages. Meanwhile, we present a representative example we encountered in Figure <ref>, where the LLM generates poorly constructed code resulting in anomalous and lengthy external messages. Forcing LLMs to learn to predict these messages can significantly hurt the model's reasoning abilities.
Iterative training and reference update lead to better performance. We use Gemma-1.1-it-7B with M-DPO as a representative example and observe that the model benefits from online iterative training: the test accuracy on GSM8K improves from 77.5% (SFT) to 81.5% (iter 1) to 82.5% (iter 2) to 83.9% (iter 3), and the test accuracy on MATH improves from 46.1% (SFT) to 49.1% (iter 1) to 49.7% (iter 2) to 51.2% (iter 3). This is consistent with our theoretical insight that iterative training allows the models to explore the underlying space and learn the optimal policy progressively. Moreover, we observe that if we fix the reference model as the SFT policy, the final model performance is much worse compared to the case where we update the reference model to the current model at every iteration. We suspect that this is because the latter version of the algorithm essentially optimizes the non-regularized reward, and the reward in the mathematical reasoning task is more accurate than that in the general chat task, leading to superior in-domain performance. We defer a more detailed ablation study on the impact of KL regularization to the next subsection.
Preference learning improves pass@n only when n is relatively small. We plot the pass@n accuracy in terms of the number of candidate trajectories n in Figure <ref>. To evaluate the pass@n, for each question, we independently sample n trajectories, and the question is considered to be solved if there exists at least one trajectory with the correct final answer. We observe that the preference learning only improves the pass@n when n is relatively small. In particular, when n>16, all the models perform similarly on both GSM8K and MATH. In other words, the iterative M-DPO does not inject new knowledge but elicits the models' knowledge acquired in pre-training and SFT stages by boosting the quality of Top n responses. The observation is consistent with that of <cit.>, which studies the DRL-based GRPO method for the CoT mathematical reasoning task. Therefore, the success of preference learning is on top of a well-trained SFT model. We expect that the final model performance can be further improved with more high-quality SFT data.
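For reference, the pass@n metric used here can be computed directly from the sampled trajectories as in the sketch below; this is the plain estimator described in the text (a question counts as solved if at least one of its n samples is correct), not the unbiased combinatorial estimator used in some other works.

```python
def pass_at_n(correct_flags_per_question, n):
    """correct_flags_per_question: one boolean list per question, marking
    whether each of the n independently sampled trajectories reached the
    correct final answer."""
    solved = sum(any(flags[:n]) for flags in correct_flags_per_question)
    return solved / len(correct_flags_per_question)
```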
§.§ Ablation Study and Discussion
We conduct ablation studies in this subsection for a more comprehensive understanding of the proposed algorithm.
A moderate level of KL regularization balances the per-iteration improvement and exploration efficiency. The effectiveness of (iterative) DPO is significantly influenced by the choice of reference model and the KL coefficient. Previous research by <cit.> on offline DPO for general chatbot applications suggests that a lower KL coefficient, specifically 0.01, yields superior performance by allowing the resulting model to move farther away from the SFT model π_0. Meanwhile, for online iterative training, <cit.> adopt a fixed reference model π_0 and achieve continuous improvements as the training iterates. In our ablation study, we consider two different choices: (1) using the fixed reference model π_0; (2) updating the reference model to the last iteration's model at each round. Moreover, we search the KL coefficient η∈{0.01, 0.1, 0.5}. The results are summarized in Table <ref>. First, we notice that if we update the reference model at each iteration, the final model outperforms the one with a fixed reference model π_0 by a large margin. Essentially, this dynamic approach optimizes the non-regularized reward, while the one with a fixed reference model π_0 aims to maximize the KL-regularized reward. This can be viewed as a trade-off between generation diversity and reward optimization. We suspect this performance difference arises because, for reasoning tasks, the correct reasoning paths are highly concentrated on a small subset of the generation space, and diversity is less important in this case.
We also find that the strongest model is obtained with a moderate KL coefficient of 0.1, outperforming both 0.01 and 0.5. To understand this phenomenon, we plot the test accuracy on GSM8K in Figure <ref> over the course of iterative training. As we can see, for the first iteration, the results align with <cit.>'s findings, where a smaller KL coefficient leads to a larger model improvement. However, the resulting intermediate model is further used to collect trajectories for subsequent iterative training. The models trained with very low KL coefficients tend to lose diversity rapidly, potentially reducing their capacity to collect diverse trajectories for subsequent training and leading to diminishing gains in the second and third iterations. In contrast, a higher KL coefficient of 0.5 imposes strong regularization between the resulting model and the reference model, and the per-iteration improvement is smaller than that obtained with 0.1. To summarize, for online iterative training, we need to strike a balance between the per-iteration improvement and exploration efficiency to optimize the overall performance. We will see that this intuition also extends to the choice of sampling strategy and other experimental tricks.
The impact of sampling strategy: data diversity and coverage are crucial. Throughout our iterative training process of the Gemma-1.1-it-7B, we observed a steady increase in the percentage of correct trajectories, from 47% in the first iteration to 76% in the last iteration. Moreover, since we update the reference model at each iteration, the diversity of the generated trajectories also decreases rapidly. However, the diversity of the collected data is critical for DPO/KTO training due to their contrastive nature. Prior studies on online iterative DPO for general chatbots <cit.> recommend employing model variants with different sampling temperatures or training steps to enhance trajectory diversity. Motivated by this, we explored two data collection strategies: (1) on-policy sampling, where all trajectories are sampled using the current policy, and (2) mixture sampling, where 20 trajectories are collected using the current model and 10 from the last iteration's model. We report the results in Table <ref>: with mixture sampling, the final model considerably outperforms the one trained with only on-policy sampling. To understand this phenomenon, we plot the MATH test accuracy against the iteration number in Figure <ref>. We observe that on-policy sampling fails to improve MATH test accuracy in the third iteration, while we achieve a considerable gain with mixture sampling. This again demonstrates the importance of the diversity of the collected responses in iterative training and also aligns with previous findings that advanced exploration strategies, which prevent diversity collapse, provide more meaningful signals for iterative preference learning <cit.>. It would be interesting to explore more advanced exploration strategies, such as Monte Carlo tree search (MCTS), in future work.
In our experiments, we collected N trajectories per prompt to ensure the presence of both correct and incorrect reasoning paths for constructing the comparison pair. A larger N generally leads to a better coverage of the prompt set because, for some difficult problems, we need to sample more responses to find a correct reasoning path. For instance, in iteration 1, with N=30, 92.5% of the prompts are covered, compared to 83.0% for N=12 and 60% for N=6. See Figure <ref> for an illustration of the relationship between pass@1 and N. However, increasing N also incurs higher computational costs. To understand the impact of the parameter N, we conduct an ablation study with N ∈{6, 12, 30} and summarize the results in Table <ref>. We observe a substantial performance boost when increasing N from 6 to 12, reflecting a better coverage of the complex problems that require more attempts to find a correct path. In contrast, from N=12 to N=30, we obtain only a very minor improvement in the test accuracy, suggesting that the incremental benefits of increasing N in best-of-N sampling diminish rapidly.
The best model is obtained with starting checkpoint fine-tuned with more than 1 epochs. <cit.> finds that if the SFT model is trained for more than one epoch, the subsequent DPO training will lead to performance regression with longer training in terms of instruction-following ability and benchmark for a general chatbot. In other words, there exists a trade-off between the SFT training epochs and the DPO training steps. Moreover, the best model is obtained by SFT for one epoch in their practice. We also conduct an ablation study on the impact of the SFT epoch and summarize the results in Table <ref>. Consistently across all tested scenarios, the subsequent iterative M-DPO training leads to considerable model improvement compared to the SFT model. Meanwhile, we also observe a similar trade-off between SFT and RLHF training because with more SFT epochs, the gains from the RLHF stage decrease. However, in our case, the strongest model is obtained with three epochs of SFT, followed by fine-tuning through iterative M-DPO, which is different from the offline DPO training <cit.> or the iterative DPO for general chatbot <cit.> with only one epoch of SFT.
NLL loss helps when the SFT model is substantially underfitting. The recent work <cit.> introduced iterative RPO, specifically aimed at enhancing Chain-of-Thought (CoT) capabilities for solving mathematical problems. A key feature of this approach is the inclusion of an additional negative log-likelihood (NLL) loss for the preferred response. The main intuition for adding the NLL loss is that the original DPO algorithm <cit.> tends to reduce the likelihood of the preferred responses, and this is believed to hurt reasoning ability <cit.>. Motivated by their results, we explored the applicability of this idea to our setup. We conduct an ablation study by adding the NLL loss to the iterative M-DPO training and observe a performance regression, as reported in Table <ref>. We observe that the best model is obtained in the second iteration if we add the additional NLL loss, even though we use mixture sampling to increase the diversity of the collected data. Using a time-weighted exponential moving average to smooth the training record, we observe that the log-probabilities of the chosen and rejected responses are (-126, -222) at the 200th step of the third-iteration training when we add the NLL loss, compared to (-166, -350) in the case without the NLL loss. This is consistent with the result of <cit.>, where, with the additional NLL loss, both the log-probability of the chosen responses and that of the rejected responses increase. This evidence indicates that the NLL loss further contributes to the collapse of the model distribution and eventually hurts the overall performance of online iterative learning. Finally, we notice that the additional NLL loss can be viewed as an implementation of the pessimistic principle <cit.>. This also explains its inferior in-domain performance, though it may be helpful for stabilizing the training, which requires more in-depth study.
However, one distinct feature between our setup and <cit.> is whether the initialized SFT model is first fine-tuned with in-domain data. To further understand the phenomenon, we fine-tune the Gemma-1.1-it-7B for only 100 steps (so that the model knows to leverage Python code to solve the problem) as the starting checkpoint of preference learning and conduct an ablation study with the NLL loss using this model. We observe that when the SFT model is substantially underfitting, the addition of the NLL loss actually enhances performance. This scenario mirrors the findings of <cit.>, who utilized a general LLaMA2-70B-chat model <cit.> without first fine-tuning on the in-domain data. Our observations align with prior research in the context of developing general chatbots <cit.>, which suggests that RLHF is less effective without preliminary SFT.
On-policy sampling and small learning rate mitigate the probability drops in preferred responses. In the literature, the Direct Preference Optimization (DPO) algorithm is often reported to diminish reasoning capabilities by reducing the likelihood of preferred responses <cit.>. In our preliminary experiments, we also observe similar phenomena with a large learning rate (1e-6), where the model's reasoning ability collapses after only a few training steps, preventing convergence to good reasoning performance. In contrast, we find that using on-policy sampling within our online iterative training framework, coupled with a smaller learning rate (2e-7 or 4e-7), the DPO algorithm enhances the model's reasoning abilities. To interpret our observation, we can first write down the gradient of the DPO as follows:
∇_θℒ_DPO(π_θ, π_ref) = -η·σ( r_θ(x, y^l) - r_θ(x, y^w) ) [1/π_θ(y^w|x)∇_θπ_θ(y^w|x) - 1/π_θ(y^l|x)∇_θπ_θ(y^l|x) ],
where r_θ(x,y) = η·log( π_θ(y|x) / π_ref(y|x) ) is the implicit reward, and we use the single-turn version for simplicity. In practice, the probability of the rejected responses typically decreases, and their gradient quickly dominates when π_θ(y^l|x) ≪ π_θ(y^w|x), so the optimization becomes unlearning of the rejected responses. In this case, the probability of the chosen responses cannot increase. This phenomenon was also discussed in the blog <cit.>. When we adopt on-policy sampling, both rejected and chosen responses have relatively large probabilities at the initial stage, ensuring that both gradients remain valid and effective. Moreover, a small learning rate prevents the model from deviating too significantly, maintaining the effectiveness of both gradients. We also notice that for the KTO algorithm, the preferred responses and the rejected responses do not appear in pairs. We suspect that the probability of the preferred response increases because the gradients of the rejected responses do not dominate in every mini-batch of data. A more comprehensive understanding of the training dynamics of direct preference learning algorithms remains largely open, and we leave a more detailed study of this phenomenon to future work.
§ CONCLUSION, LIMITATION, AND FUTURE RESEARCH DIRECTION
We demonstrate that preference learning, as an alternative learning paradigm to supervised fine-tuning, can further boost the performance of the tool-integrated reasoning LLMs on top of iterative best-of-n fine-tuning. We introduce an online iterative multi-turn direct preference optimization algorithm and validate its effectiveness through extensive experimentation across various base models. Our results indicate substantial improvements in the pass@1 metric over the SFT policy, as evidenced by performance gains on standard benchmarks such as GSM8K <cit.> and MATH <cit.>. Additionally, we also conduct various ablation studies to show that achieving optimal performance requires a careful balance between per-iteration improvement and exploration, facilitated by moderate levels of KL regularization and strategic exploration choices.
There are also several potential directions to further improve the model performance that we have not explored in this paper. Currently, our experiments only use the final-result check as the preference signal, so we cannot effectively compare trajectories that both end with correct or both end with incorrect answers. Although our algorithm is designed for trajectory-level preference learning, the reward signal in the Bradley-Terry model could be adapted to a step-wise level. In particular, we may leverage AI feedback to verify trajectories step by step or train a process-supervised reward model <cit.> to provide learning signals. Additionally, with more fine-grained reward signals, it is also possible to adopt more advanced heuristic exploration policies such as west-of-n sampling, which has proven effective in the practice of building general chatbots <cit.>, and Monte Carlo tree search (MCTS) <cit.>. Furthermore, it is also possible to leverage well-established tricks such as adaptive margin and length regularization for DPO training <cit.>. These techniques have proven effective for achieving better in-domain performance on the chat task. We expect that these more fine-grained preference signals and algorithmic designs can substantially improve the models' performance.
Finally, while the direct preference learning algorithms show promising gains for the mathematical reasoning tasks with a code interpreter, they are not directly applicable to general agent learning with more complex and stochastic external environments or against dynamic opponents. In particular, this setting requires constructing a value network to incorporate an adaptive margin in the optimization target and taking the randomness of the external environment into consideration. We leave the study of this more involved algorithm to future work. Moving beyond the framework presented in this paper, it is also possible to explore more general preference structures beyond the BT model <cit.>. We hope that the insights from this paper will inspire further research in this direction, extending the utility of preference learning beyond general structured chat tasks.
§ NOTATION TABLE
§ IMPLEMENTATION DETAILS
Tools in Math Problem Solving.
Following <cit.>, the LLM agent is allowed to call the Python interpreter when it decodes Python code starting with ```python and ending with ```. For each step h, to generate the observation o_h, we leverage the python package , run all the code snippets in the history one by one, and treat each code snippet as a Jupyter cell. We only return the standard output or the error message of the last snippet. When there is a bug in the code, we only return the error message, which is typically less than 20 tokens, as in <cit.>. We notice that some works (e.g., <cit.>) also return the first and last 50 tokens of the traceback information.
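The execution semantics can be sketched as follows. This is a simplified stand-in for the actual interpreter package (whose name is omitted above): it replays the accumulated code cells in a shared namespace and returns only the last cell's standard output or a short error message.

```python
import contextlib
import io

def run_history(code_cells):
    """Execute the history's code cells in order, Jupyter-style, and
    return only the output (or error) of the last one."""
    namespace = {}
    out = ""
    for cell in code_cells:
        buffer = io.StringIO()
        try:
            with contextlib.redirect_stdout(buffer):
                exec(cell, namespace)        # shared state across cells
            out = buffer.getvalue()
        except Exception as exc:
            # Keep the returned error message short (roughly < 20 tokens).
            out = f"{type(exc).__name__}: {exc}"
    return out
```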
Data Generation. All the models are evaluated in the zero-shot setting. For the entire data generation process, we adopt the following constraints: (1) for each turn, the model can generate up to 512 tokens; (2) the maximal number of steps is H=6; (3) the maximal number of generated tokens for each trajectory is 2048. When collecting new data for online iterative M-DPO, we set the temperature to 1.0 and decode without top-K or top-p sampling. For evaluation, greedy decoding is employed so that the results are generally comparable with previous works <cit.>. For evaluating the models with the pass@n rate, we follow <cit.> and adopt a temperature of 0.7.
Python Experiment Environment. We find that the evaluation can be influenced by the Python environment, the precision (especially for the Gemma-1.1 models), and even the virtual machine we use. This does not affect the overall trend and conclusions because the magnitude of oscillation is relatively small compared to the overall improvement. For completeness, however, we specify some of the key package versions here. We use transformers 4.42.4, torch 2.3.0, sympy 1.2, antlr4-python3-runtime 4.11.0, and IPython 8.26.0 for all models. We evaluate the models using torch.float and use vllm 0.5.0.post1 for most of the experiments, except for Gemma-2, where vllm 0.5.1 is required. The inconsistency in vllm versions is because the Gemma-2 model had not been released when we performed the main experiments of this project. We fix the Python environment and machine for our evaluation throughout the experiments. For SFT, we use the open-source axolotl project with version 0.4.1, and for online iterative preference learning and RAFT, we use the code base from RLHF Workflow <cit.>.
RAFT implementation. The data generation step is similar to that of the online iterative M-DPO training, except that we only keep the trajectories with a correct final answer. For each prompt, we sample at most k trajectories, where we search k ∈{1, 3, 8} and eventually use k=1 because we do not see improvement from leveraging more data. We run the algorithm for three iterations in total. The training parameters are similar to those of the SFT stage, but we use a smaller batch size of 32 so that there are enough optimization steps. For the Gemma models, we use a learning rate of 5e-6. For each training stage, we train the models for two epochs in total according to our parameter search. For the Mistral model, we find that a smaller learning rate of 1e-6 and training for 1 epoch give much better performance.
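The RAFT data-selection step can be sketched as follows; the trajectory fields and the sampling helper are assumed placeholders, and the subsequent fine-tuning on the selected data follows the training recipe described above.

```python
def raft_select(prompts, sample_trajectories, k=1):
    """Keep at most k correct trajectories per prompt (rejection sampling)."""
    selected = []
    for prompt in prompts:
        correct = [t for t in sample_trajectories(prompt) if t["is_correct"]]
        selected.extend(correct[:k])
    return selected
```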
Prompt template. We do not tune the prompt though we do observe that the prompt engineering can further improve the performance. For all the experiments, we simply adopt the chat template of the models as in Figure <ref>.
§ OMITTED THEORETICAL PROOFS
§.§ Proof of Proposition <ref>
For one policy π, starting with V^π_ℳ, H+1 = 0, we recursively define its V-value and Q-value functions on a model ℳ = (𝒮, 𝒜, H, ℙ, d_0, u) and the reference policy π_ref as
Q^π_ℳ, h(s_h, a_h) := u(s_H, a_H), if h = H,
𝔼_o_h ∼ℙ_h(·|s_h, a_h) [V^π_ℳ, h+1(s_h+1)], if h ≤ H-1,
V^π_ℳ, h(s_h) := 𝔼_a_h ∼π_h(·| s_h)[Q^π_ℳ, h(s_h, a_h) - η· D_KL(π_h (·|s_h), π_ref, h(·|s_h))].
It is noted that with the optimal policy π_ℳ, we have Q_ℳ, h = Q^π_ℳ_ℳ, h and V_ℳ, h = V^π_ℳ_ℳ, h. In the following discussions, we exclusively focus on the model ℳ^* = (𝒮, 𝒜, H, ℙ^*, d_0, u^*) with the abbreviations Q^π_h = Q^π_ℳ^*, h and V^π_h = V^π_ℳ^*, h.
For any comparator policy π, it holds that
J(π) - J(π̂) = _d_0[V_1^π(s_1) - V̂_1(s_1)] - _d_0[V_1^π̂(s_1) - V̂_1(s_1)],
For any h∈ [H], we can obtain that
_d_0, π_1:h-1, ^*_1:h-1[V_h^π(s_h) - V̂_h(s_h)] - _d_0, π̂_1:h-1, ^*_1:h-1[V_h^π̂(s_h) - V̂_h(s_h)]
(a)=_d_0, π_1:h-1, ^*_1:h-1[_π_h[Q_h^π(s_h, a_h)] - η(π_h(·|s_h),π_,h(·|s_h))]
- _d_0, π_1:h-1, ^*_1:h-1[ _π̂_h[Q̂_h(s_h, a_h)] - η(π̂_h(·|s_h),π_,h(·|s_h))]
- _d_0, π̂_1:h-1, ^*_1:h-1[_π̂_h[Q_h^π̂(s_h, a_h)] - η(π̂_h(·|s_h), π_,h(·|s_h))]
+ _d_0, π̂_1:h-1, ^*_1:h-1[_π̂_h[Q̂_h(s_h, a_h)]- η(π̂_h(·|s_h), π_,h(·|s_h))]
= _d_0, π_1:h, ^*_1:h-1[Q_h^π(s_h, a_h) - Q̂_h(s_h, a_h)] - _d_0, π̂_1:h, ^*_1:h-1[Q_h^π̂(s_h, a_h) - Q̂_h(s_h, a_h)]
+ _d_0, π_1:h-1, ^*_1:h-1[_π_h[Q̂_h(s_h, a_h)] - _π̂_h[Q̂_h(s_h, a_h)]]_term (I)
- η·_d_0, π_1:h-1, ^*_1:h-1[(π_h(·|s_h), π_,h(·|s_h))] + η·_d_0, π_1:h-1, ^*_1:h-1[(π̂_h(·|s_h), π_,h(·|s_h))]
(b)=_d_0, π_1:h, ^*_1:h-1[Q_h^π(s_h, a_h) - Q̂_h(s_h, a_h)] - _d_0, π̂_1:h, ^*_1:h-1[Q_h^π̂(s_h, a_h) - Q̂_h(s_h, a_h)]
- η·_d_0, π_1:h-1, ^*_1:h-1[(π_h(·|s_h), π̂_h(·|s_h))].
In the above derivation, equation (a) is from the definitions of Q^π and V^π, and the relationship between Q̂ and V̂. The equation (b) is because
(term I) := _π_h[Q̂_h(s_h, a_h)] - _π̂_h[Q̂_h(s_h, a_h)]
= η·_π_h[logπ̂_h(a_h|s_h)/π_,h(a_h|s_h)] - η·_π̂_h[logπ̂_h(a_h|s_h)/π_,h(a_h|s_h)]
= η·(π_h(·|s_h),π_,h(·|s_h)) - η·(π_h(·|s_h), π̂_h(·|s_h)) - η·(π̂_h(·|s_h), π_,h(·|s_h)).
where the second equation is from the relationship that
Q̂_h(s_h, a_h) = η·logπ̂_h(a_h|s_h)/π_,h(a_h|s_h) - η·logẐ_h(s_h).
Furthermore, if h = H, we can obtain that
_d_0, π_1:H-1, ^*_1:H-1[V_H^π(s_H) - V̂_H(s_H)] - _d_0, π̂_1:H-1, ^*_1:H-1[V_H^π̂(s_H) - V̂_H(s_H)]
= _d_0, π_1:H, ^*_1:H-1[u^*(s_H, a_H) - Q̂_H(s_H, a_H)] - _d_0, π̂_1:H, ^*_1:H-1[u^*(s_H, a_H) - Q̂_H(s_H, a_H)]
- η·_d_0, π_1:H-1, ^*_1:H-1[(π_H(·|s_H), π̂_H(·|s_H))]
= _d_0, π_1:H, ^*_1:H-1[u^*(s_H, a_H)] - _d_0, π̂_1:H, ^*_1:H-1[u^*(s_H, a_H)]
+ _d_0, π_1:H, ^*_1:H[V̂_H+1(s_H+1) - Q̂_H(s_H, a_H)] - _d_0, π̂_1:H, ^*_1:H[V̂_H+1(s_H+1) - Q̂_H(s_H, a_H)]
- η·_d_0, π_1:H-1, ^*_1:H-1[(π_H(·|s_H)||π̂_H(·|s_H))],
where the second equality leverages that V̂_H+1(s_H+1) = 0;
otherwise, for all h ≤ H-1, it holds that
_d_0, π_1:h-1, ^*_1:h-1[V_h^π(s_h) - V̂_h(s_h)] - _d_0, π̂_1:h-1, ^*_1:h-1[V_h^π̂(s_h) - V̂_h(s_h)]
= _d_0, π_1:h, ^*_1:h-1[Q^π_h(s_h,a_h) - Q̂_h(s_h, a_h)] - _d_0, π̂_1:h, ^*_1:h-1[Q^π̂_h(s_h,a_h) - Q̂_h(s_h, a_h)]
- η·_d_0, π_1:h-1, ^*_1:h-1[(π_h(·|s_h)||π̂_h(·|s_h))]
= _d_0, π_1:h, ^*_1:h[V̂_h+1(s_h+1) - Q̂_h(s_h, a_h)] - _d_0, π̂_1:h, ^*_1:h[V̂_h+1(s_h+1) - Q̂_h(s_h, a_h)]
- η·_d_0, π_1:h-1, ^*_1:h-1[(π_h(·|s_h)||π̂_h(·|s_h))]
+ _d_0, π_1:h, ^*_1:h[V^π_h+1(s_h+1) - V̂_h+1(s_h+1)] - _d_0, π_1:h, ^*_1:h[V^π̂_h+1(s_h+1) - V̂_h+1(s_h+1)].
The proposition can be obtained by iteratively using the above relationship for h∈ [H].
§.§ Proof of Theorem <ref>
First, with the assumption u^*∈ and ^*∈, the following lemma demonstrates that _t and _t are valid confidence sets.
There exists an absolute constant c_1 such that for any δ∈ (0, 1], with probability at least 1-δ, for all t∈ [T], û∈, and ∈, it holds that
L_t(û) - L_t(u^*) ≤ c_1 log(||T/δ), L_t() - L_t(^*) ≤ c_1 log(||T/δ),
which implies that u^*∈_t and ^*∈_t.
Then, we provide an additional lemma demonstrating the in-sample error of the MLE and optimistic estimators.
There exists an absolute constant c_2 such that for any δ∈ (0, 1], with probability at least 1-δ, for all t∈ [T], we have
∑_i < t|σ(û_t(s^2_i,H, a^2_i,H) - û_t(s^1_i,H, a^1_i,H)) - σ(u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H))|^2 ≤ c_2log(||T/δ);
∑_i < t|σ(ũ_t(s^2_i,H, a^2_i,H) - ũ_t(s^1_i,H, a^1_i,H)) - σ(u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H))|^2 ≤ c_2log(||T/δ),
and for all t∈ [T], h∈ [H], we have
∑_j∈{1,2}∑_h∈ [H]∑_i < t({d_0, π^j_i, [^*_1:h-1, _t,h, ^*_h+1: H]}, {d_0, π^j_i, ^*_1:H})^2 ≤ c_2 log(||HT/δ);
∑_j∈{1,2}∑_h∈ [H]∑_i < t({d_0, π^j_i, [^*_1:h-1, _t,h, ^*_h+1: H]}, {d_0, π^j_i, ^*_1:H})^2 ≤ c_2 log(||HT/δ),
where ({d_0, π, }, {d_0, π', '}) denotes the TV distance between the probability distributions over the trajectories induced by d_0, π, and d_0, π', '.
First, for ũ_t, we can obtain that with probability at least 1-δ, there exists an absolute constant c such that for all t∈ [T],
∑_i < t|σ(ũ_t(s^2_i,H, a^2_i,H) - ũ_t(s^1_i,H, a^1_i,H)) - σ(u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H))|^2
≤ c(∑_i< tlogz_i ·σ(u^*(s^1_i,H, a^1_i,H) - u^*(s^2_i,H, a^2_i,H))+ (1-z_i) ·σ(u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H))/z_i ·σ(ũ_t(s^1_i,H, a^1_i,H) - ũ_t(s^2_i,H, a^2_i,H)) + (1- z_i) ·σ(ũ_t(s^2_i,H, a^2_i,H) - ũ_t(s^1_i,H, a^1_i,H)) + log(||T/δ))
= c(L_t(u^*) - L_t(ũ_t) + log(||T/δ))
≤ c(L_t(u^*) - L_t(û_t) + c_1 log(||T/δ) + log(||T/δ))
≤ c_2 log(||T/δ).
where the first inequality is from Proposition B.2 from <cit.> and the second inequality uses Lemma <ref>. The result for û_t can be similarly established.
Then, following similar steps, for _t, we can obtain that with probability at least 1-δ, there exists an absolute constant c such that for all t∈ [T],
∑_j∈{1,2}∑_h∈ [H]∑_i < t({d_0, π^j_i, [^*_1:h-1, _t,h, ^*_h+1: H]}, {d_0, π^j_i, ^*_1:H})^2
≤∑_j∈{1,2}∑_h∈ [H] c·(∑_i<tlog^*_h(s^j_i,h+1|s^j_i,h, a^j_i,h)/_t,h(s^j_i,h+1|s^j_i,h, a^j_i,h) + log(|_h|HT/δ))
= c·(∑_j∈{1,2}∑_i<tlog^*,π^j_i(τ^j_i)/^π^j_i_t(τ^j_i) + 2log(||HT/δ))
= c·(L_t(^*) - L_t(_t) + 2log(||HT/δ))
≤ c·(L_t(^*) - L_t(_t) + c_1 log(||T/δ) + 2log(||HT/δ))
≤ c_2 log(||HT/δ).
The result for _t can also be similarly established.
In the following proofs, we omit the KL term in the decomposition to ease the presentation. Then, with probability at least 1-δ, for all t∈ [T], we can obtain that
J(π^*) - J(π^1_t)
= _d_0, π^*, ^*[u^*(s_H, a_H)] - _d_0, π^1_t, ^*[u^*(s_H, a_H)] - (_d_0, π^*, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)])
+ ∑_h∈ [H]_d_0, π^*, ^*[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]- ∑_h∈ [H]_d_0, π^1_t, ^*[ V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]
≤_d_0, π_t^2, _t[ũ_t(s_H, a_H)] - _d_0, π^1_t, _t[ũ_t(s_H, a_H)] - (_d_0, π_t^2, _t[û_t(s_H, a_H)] - _d_0, π^1_t, _t[û_t(s_H, a_H)])_term (I)_t
+ ∑_h∈ [H]_d_0, π_t^2, _t[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]+ ∑_h∈ [H]_d_0, π^1_t, ^*[ [_t, hV̂_t, h+1](s_h,a_h) - V̂_t, h+1(s_h+1)]_term (II)_t,
where the inequality is from the definition of π^2_t and the fact that (u^*, ^*)∈_t×_t from Lemma <ref>.
We define the following terms:
term (A)_t := _d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π^1_t, ^*[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[u^*(s_H, a_H)] - _d_0, π^1_t, ^*[u^*(s_H, a_H)]),
term (B)_t := _d_0, π_t^2, ^*[u^*(s_H, a_H)] - _d_0, π^1_t, ^*[u^*(s_H, a_H)] - (_d_0, π_t^2, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)]),
term (C)_t := ∑_j∈{1,2}∑_h∈ [H]_d_0, π^j_t, ^*[(_t,h(·|s_h, a_h), ^*_h(·|s_h, a_h))],
term (D)_t := ∑_j∈{1,2}∑_h∈ [H]_d_0, π^j_t, ^*[ (_t,h(·|s_h, a_h), ^*_h(·|s_h, a_h))].
For term (I)_t, we have that
term (I)_t := _d_0, π_t^2, _t[ũ_t(s_H, a_H)] - _d_0, π^1_t, _t[ũ_t(s_H, a_H)] - (_d_0, π_t^2, _t[û_t(s_H, a_H)] - _d_0, π^1_t, _t[û_t(s_H, a_H)])
= _d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π_t^1, ^*[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)])
+ _d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)] - (_d_0, π_t^2, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)])
+ _d_0, π_t^2, _t[ũ_t(s_H, a_H)] - _d_0, π_t^1, _t[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π_t^1, ^*[ũ_t(s_H, a_H)])
+ _d_0, π_t^2, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)] - (_d_0, π_t^2, _t[û_t(s_H, a_H)] - _d_0, π^1_t, _t[û_t(s_H, a_H)])
≤_d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π_t^1, ^*[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)])
+ _d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)] - (_d_0, π_t^2, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)])
+ 4B·({d_0, π^1_t, _t}, {d_0, π^1_t, ^*})+ 4B·({d_0, π^2_t, _t}, {d_0, π^2_t, })
≤_d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π_t^1, ^*[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)])_term (A)_t
+ _d_0, π_t^2, ^*[u^*_t(s_H, a_H)] - _d_0, π_t^1, ^*[u^*_t(s_H, a_H)] - (_d_0, π_t^2, ^*[û_t(s_H, a_H)] - _d_0, π^1_t, ^*[û_t(s_H, a_H)])_term (B)_t
+ 4B·∑_j∈{1,2}∑_h∈ [H]_d_0_π^j_t, ^*[(_t,h(·|s_h, a_h), ^*_h(·|s_h, a_h))]_term (C)_t.
For term (II)_t, we have that
term (II)_t = ∑_h∈ [H]_d_0, π_t^2, _t[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]
+ ∑_h∈ [H]_d_0, π^1_t, ^*[ [_t, hV̂_t, h+1](s_h,a_h) - V̂_t, h+1(s_h+1)]
= ∑_h∈ [H]_d_0, π_t^2, ^*[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]
+ ∑_h∈ [H]_d_0, π_t^2, _t[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]
- ∑_h∈ [H]_d_0, π_t^2, ^*[V̂_t, h+1(s_h+1) - [_t, hV̂_t, h+1](s_h,a_h)]
+ ∑_h∈ [H]_d_0, π^1_t, ^*[ [_t, hV̂_t, h+1](s_h,a_h) - V̂_t, h+1(s_h+1)]
≤ 2B·∑_j∈{1,2}∑_h∈ [H]_d_0, π_t^j, ^*[(_t,h(·|s_h, a_h)), ^*_h(·|s_h, a_h)]
+ 2BH ·({d_0, π_t^2, _t}, {d_0, π_t^2, ^*})
≤ 2B·∑_j∈{1,2}∑_h∈ [H]_d_0, π_t^j, ^*[(_t,h(·|s_h, a_h)), ^*_h(·|s_h, a_h)]_term (D)_t
+ 2BH ·∑_j∈{1,2}∑_h∈ [H]_d_0, π_t^j, ^*[(_t,h(·|s_h, a_h)), ^*_h(·|s_h, a_h)]_term (C)_t.
In the above derivations, we have repeatedly used similar relationships as follows:
({d_0, π^2_t, _t}, {d_0, π^2_t, ^*}) ≤∑_h∈ [H]_d_0, π^2_t, ^*[(_t,h(·|s_h, a_h), ^*_h(·|s_h, a_h))],
which can be derived as
({d_0, π^2_t, _t}, {d_0, π^2_t, ^*}) ≤∑_h∈ [H]({d_0, π^2_t, ^*_1:h-1, _t, h:H}, {d_0, π^2_t, ^*_1:h, _t, h+1:H})
= ∑_h∈ [H]_d_0, π^2_t, ^*[(_t, h(·|s_h,a_h), ^*_h(·|s_h,a_h)})].
Then, we can obtain that
∑_t∈ [T]J(π^*) - J(π̂^1_t) ≤∑_t∈ [T]term (A)_t + ∑_t∈ [T]term (B)_t + (4B + 2BH) ∑_t∈ [T]term (C)_t + 2B ∑_t∈ [T]term (D)_t.
Then, we control the sum of each individual term in the following. First, for term (A)_t, with probability at least 1-δ, we have that
∑_t∈ [T]term (A)_t
= ∑_t∈ [T]_d_0, π_t^2, ^*[ũ_t(s_H, a_H)] - _d_0, π^1_t, ^*[ũ_t(s_H, a_H)] - (_d_0, π_t^2, ^*[u^*(s_H, a_H)] - _d_0, π^1_t, ^*[u^*(s_H, a_H)])
≤∑_t∈ [T]ũ_t(s^2_t,H, a^2_t,H) - ũ_t(s^1_t,H, a^1_t,H) - (u^*(s^2_t,H, a^2_t,H) - u^*(s^1_t,H, a^1_t,H)) + O(B√(Tlog(1/δ)))
≤√(d_∑_t=2^T(1+ ∑_i=1^t-1(ũ_t(s^2_i,H, a^2_i,H) - ũ_t(s^1_i,H, a^1_i,H) - (u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H)))^2)) + O(B√(Tlog(1/δ)))
≤√(d_∑_t=2^T(1+ κ^-2∑_i=1^t-1(σ(ũ_t(s^2_i,H, a^2_i,H) - ũ_t(s^1_i,H, a^1_i,H)) - σ(u^*(s^2_i,H, a^2_i,H) - u^*(s^1_i,H, a^1_i,H)))^2)) + O(B√(Tlog(1/δ)))
≲κ^-1 B √(d_ T log(||T/δ)),
where the first inequality is from the Hoeffding inequality, the second inequality uses the Eluder coefficient d_ := (1, - , T) from Definition <ref>, the third inequality leverages the mean value theorem with κ:= 1/(2+ exp(-B)+ exp(B)) representing the minimum derivative of σ(·) in the regime of [0, B], and the last inequality incorporates Lemma <ref>. A similar result can be obtained for term (B)_t.
For term (C)_t, we have that
∑_t∈ [T]term (C)_t = ∑_j∈{1,2}∑_t∈ [T]∑_h∈ [H]_d_0, π^j_t, ^*[(_t,h(·|s_h, a_h), ^*_h(·|s_h, a_h))]
= ∑_j∈{1,2}∑_t∈ [T]∑_h∈ [H]({d_0, π^j_t, [^*_1:h-1, _t,h, ^*_h+1: H]}, {d_0, π^j_t, ^*_1:H})
≤ 2H·ξ(d_, T, c_2log(||HT/δ)),
where the last step is from the generalized Eluder-type condition in Definition <ref> and Lemma <ref>. A similar result can be obtained for term (D)_t.
Finally, we obtain that
Reg(T) ≲ κ^-1B√(d_Tlog(||T/δ)) + B^2Hξ(d_, T, c_2log(||HT/δ)
- η·∑_h∈ [H]_d_0, π^*, ^*[(π^*_h(·|s_h), π^1_t, h(·|s_h))],
which concludes the proof.
§ TECHNICAL LEMMAS
Given a loss functional with respect to p(·|x), written as
𝔼_w ∼ p(·)[-U(w)] + η· D_KL(p(·), p_0(·)) = η· D_KL( p(·), p_0(·)exp(U(·)/η)/C_r ) - η·log𝔼_w ∼ p_0(·)[exp(U(w)/η)], with C_r := 𝔼_w ∼ p_0(·)[exp(U(w)/η)],
the minimizer of the loss functional is
p^*(w) = (1/C_r)· p_0(w)exp(U(w)/η), also known as the Gibbs distribution.
Given a function class , its Eluder coefficient (λ, , T) is defined as the smallest number d so that for any sequence {x_t: t∈ [T]} and {f_t: t∈ [T]}∈,
∑_t = 2^T |f_t(x_t) - f^*(x_t)| ≤√(d∑_t = 2^T(λ + ∑_i = 1^t-1 (f_t(x_i) - f^*(x_i))^2)).
There exists a real number d_∈^+ and a function ξ such that for any (T, Δ) ∈×^+, transitions {'_t: t∈ [T]} and policies {π_t: t∈ [T]}, we have
∀ t∈ [T], ∑_i < t({d_0, '_i, π_i}, {d_0, , π_i})^2 ≤Δ ⇒∑_t ∈ [T]({d_0, '_t, π_t}, {d_0, , π_t}) ≤ξ(d_, T, Δ).
|
http://arxiv.org/abs/2409.03541v1 | 20240905140623 | Submodularity of Mutual Information for Multivariate Gaussian Sources with Additive Noise | [
"George Crowley",
"Inaki Esnaola"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
§ ABSTRACT
Sensor placement approaches in networks often involve using information-theoretic measures such as entropy and mutual information. We prove that mutual information abides by submodularity and is non-decreasing when considering the mutual information between the states of the network and a subset of k nodes subjected to additive white Gaussian noise. We prove this under the assumption that the states follow a non-degenerate multivariate Gaussian distribution.
§ INTRODUCTION
A graph is characterized by the set of nodes = {1,2, …, n} with n ∈ℕ, where each node corresponds to a system element, and the set of edges as = {(i,j) ∈× : node i is connected to node j }, where each edge represents a connection between nodes in the network. Jointly, the set of edges and the set of nodes define an undirected graph = (, ). We assume that the state of the network can be described by the vector of random variables X^n := (X_1,X_2,…,X_n)^. The observations obtained for a sensor placed at a node i ∈ are denoted as Y_i and are subject to i.i.d. additive white Gaussian noise (AWGN), denoted as formally as Z_i ∼ N(0,σ^2), with σ∈ℝ_+. Hence, the measurements obtained by the placed sensor i is given by
Y_i = X_i + Z_i, i ∈.
Assuming that k < n with k ∈ℕ sensors are placed in the network amongst n nodes, then the observation vector Y^k is defined as
Y^k := (Y_i_1, …, Y_i_k)^,
where the subscript i_j denotes the j-th selected sensor.
The set of linear observation matrices
is given by
ℋ_k := ⋃_𝒜⊆
|𝒜| = kℋ_k(𝒜),
with
ℋ_k(𝒜) := {∈{0,1}^k × n: = [ _i_1^,_i_2^,,_i_k^ ]^where i_j ∈𝒜⊆ for j = 1,…,k },
where _i ∈{0,1}^n is the i-th column basis vector, i.e. 1 in the i-th position and 0 otherwise.
Combining Definition <ref> with (<ref>) yields the following observation model:
Y^k := H X^n + Z^k, for all H ∈ℋ_k.
We consider the problem of finding the sensor placement 𝒜 (a subset of the node set) that extremizes the optimization problem
H^*_k := argmax_H ∈ℋ_k I(X^n; H X^n + Z^k),
where I(·,·) denotes the information-theoretic measure mutual information <cit.>. Further assuming that the probability distribution of the state variables satisfies X^n ∼ N_n(μ, Σ), where μ∈ℝ^n and Σ∈ S_++^n, then
f(H) := I(X^n; H X^n + Z^k) = (1/2) log( det( H Σ H^⊤ + σ^2 I_k ) / σ^{2k} ), H ∈ℋ_k,
where det(·) denotes the determinant of a square matrix, and I_k denotes the (k × k) identity matrix.
Under the assumption X^n ∼ N_n(, ), where ∈ℝ^n and ∈ S_++^n, the function f() satisfies the following properties:
* f() is 0 when ∈ℋ_0.
* f() is submodular.
* f() is non-decreasing.
Under the conditions of Theorem <ref>, when the greedy heuristic is applied to the optimization problem posed in (<ref>), the heuristic always produces a solution whose value is at least 1 - ((k-1)/k)^k times the optimal value, and this factor has the limiting value (e-1)/e <cit.>.
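To illustrate how the objective and the greedy heuristic fit together, the following sketch evaluates f and performs greedy selection. It assumes the state covariance Σ is available and uses the fact that, because each row of H is a canonical basis vector, H Σ H^⊤ is the principal submatrix of Σ indexed by the chosen nodes; the function names are illustrative and not part of the paper.

```python
import numpy as np

def mi_objective(Sigma, subset, sigma2):
    """f(A) = 1/2 * log( det(Sigma[A, A] + sigma2*I) / sigma2**|A| )."""
    if len(subset) == 0:
        return 0.0
    S = Sigma[np.ix_(subset, subset)]
    sign, logdet = np.linalg.slogdet(S + sigma2 * np.eye(len(subset)))
    return 0.5 * (logdet - len(subset) * np.log(sigma2))

def greedy_placement(Sigma, k, sigma2):
    """Greedy sensor selection; by the corollary it attains at least a
    1 - ((k-1)/k)**k >= 1 - 1/e fraction of the optimal value."""
    chosen = []
    remaining = list(range(Sigma.shape[0]))
    for _ in range(k):
        gains = [mi_objective(Sigma, chosen + [j], sigma2) for j in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        remaining.remove(best)
    return chosen
```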
§ SUBMODULARITY
We begin by introducing the definitions of non-decreasing and submodular set functions.
Given a finite set Ω, a real-valued function z on the set of subsets of Ω is called submodular if
z(𝒜) + z(ℬ) ≥ z(𝒜∪ℬ) + z(𝒜∩ℬ), ∀𝒜,ℬ⊆Ω.
We shall often make use of the incremental value of adding element j to the set 𝒮, let ρ_j(𝒮) = z(𝒮∪{ j}) - z(𝒮).
Each of the following statements is equivalent and defines a submodular set function.
(i) z(𝒜) + z(ℬ) ≥ z(𝒜∪ℬ) + z(𝒜∩ℬ), ∀𝒜,ℬ⊆Ω.
(ii) ρ_j(𝒮) ≥ρ_j(𝒯), ∀𝒮⊆𝒯⊆Ω, ∀ j ∈Ω∖𝒯.
Condition (ii) can be re-written as
z(𝒮∪{ j}) - z(𝒮) ≥ z(𝒯∪{ j}) - z(𝒯), ∀𝒮⊆𝒯⊆Ω, ∀ j ∈Ω∖𝒯.
Each of the following statements is equivalent and defines a non-decreasing submodular set function.
(i') Submodularity: z(𝒜) + z(ℬ) ≥ z(𝒜∪ℬ) + z(𝒜∩ℬ), ∀𝒜,ℬ⊆Ω.
Non-decreasing: z(𝒜) ≤ z(ℬ), ∀𝒜⊆ℬ⊆Ω.
§ PROOF OF SUBMODULARITY
To keep the notation consistent, we translate the notation used in <cit.> to ours. Set Ω = and 𝒮 := {i_𝒮_1,i_𝒮_2,…,i_𝒮_s}
such that the cardinalty of 𝒮 = s, with 𝒮⊆Ω.
Then, we can write our cost function z(𝒮) as
z(𝒮) = f(_𝒮) := 12log( 1σ^2s( _𝒮_𝒮^ + σ^2 _s) ),
where the observation matrix _𝒮 = [ _i_𝒮_1^,_i_𝒮_2^,,_i_𝒮_s^ ]^. We will now prove conditions (1) - (3) from Theorem <ref>.
Let ∈ℋ_0, then I(X^n;Z^k) = 0 since Z^k are i.i.d. Gaussian random variables.
Before proving condition (2), we first note some key results used throughout the proof.
Denote the block matrix as
:=
[ ; ].
If is invertible <cit.>, then (<ref>) holds. If is invertible, then (<ref>) holds, where
() = [ ; ] = () ( - ^-1)
= ()( - ^-1).
Define as in Lemma <ref>. If the inverse of exists, <cit.>, and = ^, then
^-1 =
[ ; ^ ]^-1 = [ ^-1 ; ] + [ -^-1; _γ ]( - ^^-1)^-1[ -^^-1, _γ ].
Let ≻ 0, and let be p × n of rank q (q ≤ p) <cit.>. Then:
^≽ 0.
Define the matrix as in Lemma <ref>. Further, assume that is symmetric ( = ^) <cit.>. Then the following statement holds:
(a) ≻ 0 if and only if () ≻ 0 and - ^-1^≻ 0.
Suppose ≽ 0 and ≽ 0 be n × n Hermitian matrices <cit.>. Then the following inequality holds:
(c) ( + ) ≥() + () with equality if and only if + is singular or = or =.
Define the matrix as in Lemma <ref>. Suppose that is non-singular and is also non-singular <cit.>. Define _· = - ^-1, then
^-1 = [ _·^-1 - _·^-1^-1; ; - ^-1_·^-1 ^-1 + ^-1_·^-1^-1 ].
For the proof, we first note that j ∉𝒯, to match notation with (<ref>), and 𝒮⊆𝒯. We further make note of the following observation matrices:
_{j} = [ _j^ ]^,
_𝒮∪{j} = [ _i_𝒮_1^,_i_𝒮_2^,,_i_𝒮_s^, _j^ ]^.
Assue there exists a set Γ such that 𝒮∪Γ = 𝒯. Note that if 𝒮 = 𝒯, then the function is equal and hence submodular. Otherwise,
_Γ = [ _i_Γ_1^,…, _i_Γ_γ^ ]^,
_𝒯 = _𝒮∪Γ = [ _i_𝒮_1^,_i_𝒮_2^,,_i_𝒮_s^,_i_Γ_1^,…, _i_Γ_γ^ ]^
= [ _𝒮; _Γ ],
_𝒯∪{j} = _𝒮∪Γ∪{j} = [ _i_𝒮_1^,_i_𝒮_2^,,_i_𝒮_s^, _i_Γ_1^,…, _i_Γ_γ^,_j^ ]^
= [ _𝒮; _Γ; _{j} ].
The cardinality of each subset is denoted by: || = n, |Γ| = γ, |𝒯| = s + γ = t, and |{j}| = 1.
From Proposition <ref>, we need to show (with 𝒮⊆𝒯, j ∉𝒯)
12log( 1σ^2 (s + 1)( _𝒮∪{j }_𝒮∪{ j }^ + σ^2 _s+1) ) - 12log( 1σ^2s( _𝒮_𝒮^ + σ^2 _s) )
≥ 12log( 1σ^2 (t + 1)( _𝒯∪{ j }_𝒯∪{ j }^ + σ^2 _t+1) ) - 12log( 1σ^2t( _𝒯_𝒯^ + σ^2 _t) ),
which can be simplified to
log( 1σ^2( _𝒮∪{ j }_𝒮∪{ j }^ + σ^2 _s+1)( _𝒮_𝒮^ + σ^2 _s)) ≥log(1σ^2( _𝒯∪{ j }_𝒯∪{ j }^ + σ^2 _t+1)( _𝒯_𝒯^ + σ^2 _t)).
Since all determinant values are positive (confirmed by the assumption that is positive definite) and log is a monotonic increasing function, (<ref>) becomes
1σ^2( _𝒮∪{ j }_𝒮∪{ j }^ + σ^2 _s+1)( _𝒮_𝒮^ + σ^2 _s) ≥1σ^2( _𝒯∪{ j }_𝒯∪{ j }^ + σ^2 _t+1)( _𝒯_𝒯^ + σ^2 _t)
( _𝒮∪{ j }_𝒮∪{ j }^ + σ^2 _s+1)( _𝒮_𝒮^ + σ^2 _s) ≥( _𝒯∪{ j }_T ∪{ j}^ + σ^2 _t+1)( _𝒯_𝒯^ + σ^2 _t).
Before proceeding, we notice that
_𝒮∪{ j }_𝒮∪{ j }^ + σ^2 _s+1 = [ _𝒮_𝒮^ + σ^2 _s _𝒮_{j}^; ; _{j}_𝒮^ _{j}_{j} + σ^2 ],
and
_𝒮∪Γ∪{ j }_𝒮∪Γ∪{ j }^ + σ^2 _s + γ +1 = [ _𝒯_𝒯^ + σ^2 _t (_𝒯 X^n, _{j} X^n); ; ((_𝒯 X^n, _{j} X^n)^ _{j}_{j}^ + σ^2 ].
The covariances can be calculated as
(_𝒯 X^n, _{j} X^n ) = _𝒯(X^n,X^n)_{j}^
= _𝒯_{j}^,
and its transposition is
(_𝒯_{j}^)^ = _{j}_𝒯^.
Then, using Lemma <ref>, with = _𝒮_𝒮^ + σ^2 _s, = _{j}_{j}^ + σ^2,
= _𝒮_{j}^, and = _{j}_𝒮^), it follows that the left-hand side of (<ref>) can be written as
= ( _𝒮∪{ j }_𝒮∪{ j }^ + σ^2 _s+1)( _𝒮_𝒮^ + σ^2 _s)
= ( _𝒮_𝒮^ + σ^2 _s) (_{j}_{j} + σ^2 - _{j}_𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮_{j}^) ( _𝒮_𝒮^ + σ^2 _s)
= (_{j}_{j}^ + σ^2 - _{j}_𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮_{j}^).
Using Lemma <ref>, taking = _𝒯_𝒯^ + σ^2 _t, = _{j}_{j}^ + σ^2, = (_𝒯 X^n, _{j} X^n), and = ^, it follows that the right-hand side of (<ref>) can be written as
= ( _𝒯∪{ j }_𝒯∪{ j }^ + σ^2 _t+1)( _𝒯_𝒯^ + σ^2 _t)
= ( _𝒯_𝒯^ + σ^2 _t) ( _{j}_{j}^ + σ^2 - _{j}_𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯_{j}^)( _𝒯_𝒯^ + σ^2 _t)
= ( _{j}_{j}^ + σ^2 - _{j}_𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯_{j}^).
Since is (n × n), _{j} is (1 × n), _𝒮 is (s × n), _𝒯 is (t × n), and hence _{j}_𝒮^ is (1 × s), it follows that the resulting matrices inside the determinants of both (<ref>) and (<ref>) are scalars. Since the determinant of a scalar is just the scalar itself, this observation shows us that we can rewrite (<ref>) as
- _{j}_𝒮^(_{j}_𝒮^ + σ^2 _s)^-1_𝒮_{j}^≥ - _{j}_𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯_{j}^
_{j}_𝒮^(_{j}_𝒮^ + σ^2 _s)^-1_𝒮_{j}^≤_{j}_𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯_{j}^
_{j}_𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯_{j}^ - _{j}_𝒮^(_{j}_𝒮^ + σ^2 _s)^-1_𝒮_{j}^≥ 0
_{j}( _𝒯^(_𝒯_𝒯^ + σ^2 _t)^-1_𝒯 - _𝒮^(_{j}_𝒮^ + σ^2 _s)^-1_𝒮) _{j}^≥ 0.
Using (<ref>) and (<ref>) yields
_{j}( [ _𝒮^, _Γ^ ](_𝒯_𝒯^ + σ^2 _t)^-1[ _𝒮; _Γ ] - _𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮) _{j}^≥ 0.
Observe that we can further manipulate the inequality in (<ref>) to obtain
_{j}[ [ _𝒮^, _Γ^ ](_𝒯_𝒯^ + σ^2 _t)^-1[ _𝒮; _Γ ] - [ _𝒮^, _Γ^ ][ (_𝒮_𝒮^ + σ^2 _s)^-1 ; 0*_γ ][ _𝒮; _Γ ]] _{j}^≥ 0.
It then follows after using (<ref>) that
_{j}_𝒯^[ (_𝒯_𝒯^ + σ^2 _t)^-1 - [ (_𝒮_𝒮^ + σ^2 _s)^-1 ; ^ 0*_γ ]] _𝒯_{j}^≥ 0.
The inequality holds if the matrix inside is positive semi-definite, i.e.
( ( _𝒯_𝒯^ + σ^2 _t)^-1 - [ ( _𝒮_𝒮^ + σ^2 _s)^-1 ; ^ 0*_γ ]) ≽ 0.
The block form of _𝒯_𝒯^ + σ^2 _t can be expressed as
_𝒮∪Γ_𝒮∪Γ^ + σ^2 _s + γ = [ _𝒮_𝒮^ + σ^2 _s (_𝒮 X^n, _Γ X^n) ; ; ( (_𝒮 X^n, _Γ X^n) )^ _Γ_Γ^ + σ^2 _γ ] = [ ; ; ^ ].
Using Lemma <ref>, with = _𝒮_𝒮^ + σ^2 _s, and as indicated from (<ref>), it follows that
( _S ∪Γ_𝒮∪Γ^ + σ^2 _s + γ)^-1 = [ ^-1 ; ] + [ -^-1; _γ ]( - ^^-1)^-1[ -^^-1, _γ ].
Inserting equation (<ref>) into (<ref>) yields the condition
[ -^-1; _γ ]( - ^^-1)^-1[ -^^-1, _γ ]≽ 0.
Observe that = _𝒮_𝒮^ + σ^2 _s is symmetric and positive definite, then it follows that ^-1 is also symmetric and positive definite (i.e. ≻ 0, and (^-1)^ = ^-1). Then it follows that
[ -^-1; _γ ]^ = [ (-^-1)^, _γ ] = [ -^^-1, _γ ].
By setting
:= [ -^-1; _γ ],
and using Lemma 3, it follows that the inequality in (<ref>) can be written as
( - ^^-1)^-1^≽ 0 ( - ^^-1)^-1≻ 0 - ^^-1≻ 0.
Moreover, by setting := _𝒮∪Γ_𝒮∪Γ^ + σ^2 _s + γ as in (<ref>), which is positive definite, by Lemma <ref>, it follows that is positive definite if and only if ≻ 0 and - ^^-1≻ 0. But - ^^-1≻ 0 is the inequality in (<ref>), and so the result follows.
Using the same notation as before, the non-decreasing property states
z(𝒮) ≤ z(𝒯), ∀𝒮⊆𝒯⊆.
In our formulation, the non-decreasing property yields as
12log( 1σ^2s( _𝒮_𝒮^ + σ^2 _s) ) ≤12log( 1σ^2t( _𝒯_𝒯^ + σ^2 _t) ).
First, let us assume that 𝒮 = 𝒯, then the equality holds trivially. Hence, we assume that 𝒯 =𝒮∪Γ, then using the monotonicity of the logarithm, it follows that
1σ^2s( _𝒮_𝒮^ + σ^2 _s) ≤1σ^2t( _𝒯_𝒯^ + σ^2 _t).
We set the block matrix as
= _𝒯_𝒯^ + σ^2 _t = [ _𝒮_𝒮^ + σ^2 _s (_𝒮 X^n, _Γ X^n); ; ( (_𝒮 X^n, _Γ X^n) )^ _Γ_Γ^ + σ^2 _γ ] = [ ; ],
then, by Lemma <ref>, it follows that
() = () ( - ^-1)
= (_𝒮_𝒮^ + σ^2 _s) ( - ^-1).
Using (<ref>) in (<ref>) yields
1σ^2s( _𝒮_𝒮^ + σ^2 _s) ≤1σ^2t(_𝒮_𝒮^ + σ^2 _s) ( - ^-1).
Since _𝒮_𝒮^ + σ^2 _s≻ 0 ( _𝒮_𝒮^ + σ^2 _s) > 0, we can divide this term out of (<ref>) such that
1σ^2s≤1σ^2t( - ^-1),
and hence, using t = s + γ and fully expanding all the terms, (<ref>) can be written as
( _Γ_Γ^ + σ^2 _γ - ( (_𝒮 X^n, _Γ X^n) )^( _𝒮_𝒮^ + σ^2 _s)^-1(_𝒮 X^n, _Γ X^n) ) ≥σ^2γ.
Set = σ^2 _γ and = _Γ_Γ^ - ( (_𝒮 X^n, _Γ X^n) )^( _𝒮_𝒮^ + σ^2 _s)^-1(_𝒮 X^n, _Γ X^n). We omit temporarily showing that ≽ 0, but will invoke Lemma <ref> on (<ref>) which yields the inequality
(+) ≥() + () ≥σ^2γ.
Since = σ^2 _γ, we have () = σ^2γ. Then
(+) ≥σ^2γ + () ≥σ^2γ() ≥ 0 ≽ 0.
We will now proceed by showing that is semi-positive definite. We can write the joint random vector of _Γ X^n and _𝒮 X^n + Z^s as
[ _Γ X^n; _𝒮 X^n + Z^s ] ∼ N( [ _Γ[X^n]; _𝒮[X^n] ], [ (_Γ X^n,_Γ X^n) (_Γ X^n,_𝒮 X^n + Z^s); (_𝒮 X^n + Z^s,_Γ X^n) (_𝒮 X^n + Z^s,_𝒮 X^n + Z^s) ])
∼ N( [ _Γ[X^n]; _𝒮[X^n] ], [ _Γ_Γ^ _Γ_𝒮^; _𝒮_Γ^ _𝒮_𝒮^ + σ^2 _s ]).
Observe that the covariance matrix in (<ref>) is positive definite, since
[ _Γ_Γ^ _Γ_𝒮^; _𝒮_Γ^ _𝒮_𝒮^ + σ^2 _s ] = [ _Γ_Γ^ _Γ_𝒮^; _𝒮_Γ^ _𝒮_𝒮^ ] + [ _γ×γ ; ^ σ^2 _s ],
and the first matrix is a principle submatrix of , which is positive definite by assumption. Hence, the inverse of the covariance matrix in (<ref>) exists, which is also positive definite. By Lemma <ref>, it then follows that
[ _Γ_Γ^ _Γ_𝒮^; _𝒮_Γ^ _𝒮_𝒮^ + σ^2 _s ]^-1 = [ (_Γ_Γ^ - _Γ_𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮_Γ^)^-1 …; ; … … ].
Since the covariance matrix is positive definite, Lemma <ref> implies that
(_Γ_Γ^ - _Γ_𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮_Γ^)^-1 ≻ 0
_Γ_Γ^ - _Γ_𝒮^(_𝒮_𝒮^ + σ^2 _s)^-1_𝒮_Γ^ ≻ 0.
But the matrix in (<ref>) is , since ( (_𝒮 X^n, _Γ X^n) )^ = (_𝒮_Γ^)^ = _Γ_𝒮^, and hence the result follows.
|
http://arxiv.org/abs/2409.03079v1 | 20240904210817 | On the backward stability of s-step GMRES | [
"Erin Carson",
"Yuxin Ma"
] | math.NA | [
"math.NA",
"cs.NA",
"65F10, 65F50, 65G50"
] |
§ PROPERTIES OF BCGSI+
Given X ∈ℝ^m × n with m ≥ n, in Algorithm <ref>, we present the (k+1)-th step of the BCGSI+ algorithm to compute X = QT with an orthonormal matrix Q ∈ℝ^m × n and an upper triangular matrix T ∈ℝ^n × n.
Note that MGS or any unconditionally stable QR algorithm, e.g., Householder QR or Tall-Skinny QR (TSQR), described in <cit.>, can be utilized in Line <ref>, while any backward stable QR algorithm, i.e.,
Ŵ^(2) + Δ W^(2) = Q̂_ks+1:(k+1)sT̂^(2) with Δ W^(2)≤() Ŵ^(2),
can be employed in Line <ref>; see <cit.> for details.
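For orientation, one step of a block classical Gram-Schmidt procedure with reorthogonalization in this spirit can be sketched as follows. This is only a schematic stand-in for Algorithm <ref>: the helper name is ours, Householder QR (numpy's qr) replaces the intra-block QR factorizations discussed above, and the assembly of the global factor T is omitted.

```python
import numpy as np

def bcgsi_plus_step(Q_prev, X_new):
    """Orthogonalize the block X_new (m x s) against Q_prev (m x ks) twice,
    returning the new orthonormal block and coefficient blocks such that
    X_new ~= Q_prev @ R_prev + Q_new @ R_diag."""
    # First block orthogonalization pass plus intra-block QR.
    S1 = Q_prev.T @ X_new
    Q1, T1 = np.linalg.qr(X_new - Q_prev @ S1)
    # Second (re)orthogonalization pass plus intra-block QR.
    S2 = Q_prev.T @ Q1
    Q_new, T2 = np.linalg.qr(Q1 - Q_prev @ S2)
    # Accumulate the triangular coefficient blocks.
    R_prev = S1 + S2 @ T1   # block of T against the previous columns
    R_diag = T2 @ T1        # new diagonal block of T
    return Q_new, R_prev, R_diag
```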
The properties of BCGSI+ have already been studied in <cit.>.
Based on the results in <cit.>, it is easy to obtain the following lemmas.
Let Q_1:js and T_1:js be computed by Algorithm <ref>.
If assuming () κ(X_1:js) < 1, then
X_1:js + Δ X_1:js = Q_1:jsT_1:js, Δ X_i≤() X_i
for any i ≤ js, and
Q_1:jsQ_1:js - I≤().
Similarly to the proof of <cit.>, it is easy to verify (<ref>), since each line of Algorithm <ref> is column-wise backward stable.
The conclusion (<ref>) is directly followed by <cit.>.
From Lemma <ref>, = (), and (<ref>) is satisfied when the Q-factor is not well-conditioned.
Then it remains to estimate defined by (<ref>).
Assume that for is+j, Q_1:is+j and T_1:is+j are computed by Algorithm <ref>.
If
Q_1:is+j-1Q_1:is+j-1 - I≤(),
Q_1:is+jQ_1:is+j - I > (),
then
max{Q_is+jT_is+j,is+j, T_is+j,is+j}≤() X_is+j.
Without loss of generality, we only need to prove the case j = 1 since Algorithm <ref> is columnwise backward stable.
From the assumption (<ref>),
Q_1:is+1Q_1:is+1 - I≤() + Q_is+1Q_is+1 - I + 2Q_1:isQ_is+1.
Note that <cit.> do not depend on <cit.> when using an unconditionally stable QR algorithm in Line <ref> of Algorithm <ref>.
From <cit.> it holds that
Q_is+1Q_is+1 - I≤().
Then it remains to estimate Q_1:isQ_is+1, which can be bounded as
Q_1:isQ_is+1 ≤Q_1:isW̃^(2)_1 (T^(2)_1,1)^-1 + Q_1:isΔW̃^(2)_1 (T^(2)_1,1)^-1
≤(I - Q_1:isQ_1:is) Q_1:isU_1/T^(2)_1,1
+ Q_1:isΔW̃^(2)_1/T^(2)_1,1,
where W^(2) = W̃^(2) + ΔW̃^(2) with W̃^(2) = (I - Q_1:isQ_1:is) U.
Together with <cit.> and the assumption (<ref>), we obtain
Q_1:isQ_is+1≤()/T^(2)_1,1.
Then together with <cit.>, it follows that
W̃^(2)_1 + Δ W^(2)_1 = Q_is+1T^(2)_1,1, Δ W^(2)_1≤(),
and further,
Q_1:isQ_is+1 ≤() Q_is+1/(I - Q_1:isQ_1:is) U_1 - Δ W^(2)_1
≤()/(I - Q_1:isQ_1:is) U_1 - ().
This means that Q_1:is+1Q_1:is+1 - I≤() if 2 (I - Q_1:isQ_1:is) U_1 > () holds.
Furthermore, the contrapositive is that 2 (I - Q_1:isQ_1:is) U_1≤() if Q_1:is+1Q_1:is+1 - I > () guaranteed by the assumption (<ref>).
Then by <cit.>, and
T_is+1,is+1 = T^(2)_1,1T^(1)_1,1 + Δ T_is+1,is+1 with Δ T_is+1,is+1≤() T^(2)_1,1T^(1)_1,1,
we have
Q_is+1T_is+1,is+1≤Q_is+1T^(2)_1,1T^(1)_1,1 + () Q_is+1T^(2)_1,1T^(1)_1,1≤() X_is+1,
which also implies T_is+1,is+1≤() X_is+1 by noticing Q_is+1≥ 1 - ().
§ EXAMPLE FOR CLASSICAL S-STEP GMRES WITH BCGSPIPI+
We construct the linear system Ax = b, where A is a 20-by-20 random matrix with κ(A) = 10^10 generated using the MATLAB command and .
The vector b is selected as the right singular vector corresponding to the fourth largest singular value, and the initial guess x_0 is the zero vector.
For this specific linear system, the relative backward error ‖Ax - b‖/(‖A‖‖x‖ + ‖b‖) of the solution computed by using standard GMRES with CGSPIPI+, namely BCGSPIPI+ with s = 1, is approximately 10^-8.
In contrast, using standard GMRES with CGSI+ results in an error of around 10^-16, as illustrated in Figure <ref>.
This difference occurs because CGSPIPI+ cannot generate a nearly orthonormal basis V̂_1:20 when the condition number of r̂ Ŵ_1:19 exceeds approximately 10^8, implying () κ^2(r̂ Ŵ_1:19) > 1.
Under this situation, the return value of V̂_20 is a vector.
Thus, it is not possible to obtain a more accurate solution than x^(19), whose backward error is approximately 10^-8.
A similar result occurs for s-step GMRES with s = 2.
The relative backward error using BCGSPIPI+ is approximately 10^-5, while for s-step GMRES with BCGSI+, the error is around 10^-11, as illustrated in Figure <ref>.
§ PROOF OF LEMMA <REF>
First, we aim to prove (<ref>) by induction.
For the base case, from (<ref>) with i=1, there exists Y_1:s such that B_1:s = V_1:s Y_1:s.
Assume that B_(i-1)s+1:is = V_(i-1)s+1:is Y_(i-1)s+1:is holds for all i ≤ j - 1.
Then our aim is to prove that it holds for j.
Since V_1:(j-1)s is orthonormal and B_(j-1)s+1:js is the Q-factor of (I - V_1:(j-1)s V_1:(j-1)s)^2 K_(j-1)s+1:js, we obtain V_1:(j-1)s B_(j-1)s+1:js = 0.
Together with the above assumptions on i ≤ j - 1 and (<ref>), there exists Y_(j-1)s+1:js such that
B_1:js = V_1:jsY_1:(j-1)s V_1:(j-1)s B_(j-1)s+1:js
0 Y_(j-1)s+1:js
= V_1:jsY_1:(j-1)s 0
0 Y_(j-1)s+1:js,
which gives (<ref>) by induction on j.
Then we will bound κ (B̂_1:ks).
By the definition of ω_B_i, we only need to consider the off-diagonal blocks B̂_(i-1)s+1:isB̂_(j-1)s+1:js.
From (<ref>) and dropping the quadratic terms, it holds that
B̂_(i-1)s+1:isB̂_(j-1)s+1:js
≤Δ B_(i-1)s+1:isV̂_(j-1)s+1:jsỸ_(j-1)s+1:js
+ Ỹ_(i-1)s+1:isV̂_(i-1)s+1:isV̂_(j-1)s+1:jsỸ_(j-1)s+1:js
+ Δ B_(j-1)s+1:jsV̂_(i-1)s+1:isỸ_(i-1)s+1:is
≤Δ B_(i-1)s+1:isỸ_(j-1)s+1:js
+ Ỹ_(i-1)s+1:isV̂_(i-1)s+1:isV̂_(j-1)s+1:jsỸ_(j-1)s+1:js
+ Δ B_(j-1)s+1:jsỸ_(i-1)s+1:is,
which implies that
∑_i, j = 1; i ≠ j^k B̂_(i-1)s+1:isB̂_(j-1)s+1:js^2
≤ 3 ∑_i, j = 1; i ≠ j^k (Δ B_(i-1)s+1:is^2 Ỹ_(j-1)s+1:js^2
+ Ỹ_(i-1)s+1:is^2 V̂_(i-1)s+1:isV̂_(j-1)s+1:js^2 Ỹ_(j-1)s+1:js^2
+ Δ B_(j-1)s+1:js^2 Ỹ_(i-1)s+1:is^2)
≤ 6k ∑_i = 1^k (Δ B_(i-1)s+1:is^2 Ỹ_(j-1)s+1:js^2)
+ 3 max_i(Ỹ_(j-1)s+1:js^4) ω_k^2.
Together with B̂_(i-1)s+1:is≤√(s) (1 + ω_B_i) and
Ỹ_(i-1)s+1:is≤B̂_(i-1)s+1:is + Δ B_(i-1)s+1:is/1 - V̂_(i-1)s+1:isV̂_(i-1)s+1:is - I,
we obtain
B̂_1:ksB̂_1:ks - I^2
= ∑_i=1^k B̂_(i-1)s+1:isB̂_(i-1)s+1:is - I^2
+ ∑_i, j = 1; i ≠ j^k B̂_(i-1)s+1:isB̂_(j-1)s+1:js^2
≤∑_i=1^k ω_B_i^2
+ 6k ∑_i = 1^k (Δ B_(i-1)s+1:is^2 Ỹ_(j-1)s+1:js^2) + 3 max_i(Ỹ_(j-1)s+1:js^4) ω_k
≤∑_i=1^k ω_B_i^2
+ 24 ks ∑_i = 1^k (Δ B_(i-1)s+1:is^2)
+ 48 s^2 ω_k^2
by dropping the quadratic terms.
This implies that, from the assumption (<ref>),
B̂_1:ksB̂_1:ks - I≤∑_i=1^k ω_B_i
+ 5 √(ks)∑_i = 1^k Δ B_(i-1)s+1:is + 7 s ω_k≤1/2.
Thus, (B̂_1:ks) can be bounded by
(B̂_1:ks) ≥ 1 - B̂_1:ksB̂_1:ks - I≥1/2
and further
κ (B̂_1:ks) ≤∑_i B̂_(i-1)s+1:is/(B̂_1:ks)≤ 2 √(∑_i = 1^k s (1 + ω_B_i))≤ 2 √(n) + √(s).
|
http://arxiv.org/abs/2409.03139v1 | 20240905002229 | Approximation and application of minimizing movements for surface PDE | [
"Elliott Ginder",
"Karel Svadlenka",
"Takuma Muramatsu"
] | math.NA | [
"math.NA",
"cs.NA"
] |
GraphEx: A Graph-based Extraction Method for Advertiser Keyphrase Recommendation
Kamesh Madduri
September 9, 2024
================================================================================
empty
§ INTRODUCTION
We extend the applicability of minimizing movements (MM) to approximating solutions of surface partial differential equations (SPDE) and apply the technique to approximate mean curvature flow (MCF)<cit.> and hyperbolic MCF (HMCF)<cit.> on surfaces.
The MBO algorithm <cit.> and the HMBO algorithm <cit.>, both based on the level set method, are well-known approximation methods for such curvature flows.
We recall that the MBO algorithm is an approximation method for mean curvature flow and is based on solving the heat equation.
On the other hand, the HMBO algorithm is an approximation method for the hyperbolic mean curvature flow and involves solving the wave equation.
It has been shown that the MBO algorithm and HMBO algorithm can approximate curvature flow under area preservation constraints, as well as in the multiphase setting <cit.>.
Minimizing movements <cit.> are used to realize the area conservation condition, and the signed distance vector field <cit.> is used for calculations involving multiphase regions.
On the other hand, for curvature flow on curved surfaces, approximation methods for the mean curvature flow were presented in <cit.>.
The authors show that the Closest point method (CPM) <cit.> can be used to approximate mean curvature flow on surfaces.
This is done by extending the values of functions defined on a surface to the ambient space of the surface. In turn, the CPM enables the approximation of surface gradients and other differential quantities by making use of the surrounding Euclidean space. However, no previous studies have treated the approximation of solutions to curvature flow under area preservation involving interfaces in the multiphase and curved surface setting.
This study develops approximation methods for mean curvature flow and hyperbolic mean curvature flow under multiphase area preservation conditions for interfaces moving on curved surfaces.
Our approach is to extend MM to the case of surface PDE and to use their framework to apply the MBO and HMBO algorithms. Similar to <cit.>, our generalizations make use of the CPM, which we combine with the surface-constrained signed distance vector field <cit.>.
The outline of this paper is as follows. In section <ref>, we describe the research background and our objectives. Our objectives are based on conventional approximation methods such as the MBO algorithm and HMBO algorithm. To achieve our goals, approximation methods for constrained partial differential equations on surfaces are required. We therefore demonstrate that partial differential equations on surfaces can be computed using the Closest Point Method and introduce surface-type minimizing movements to handle partial differential equations with constraints. In section <ref>, we discuss computational techniques related to partial differential equations on surfaces. In particular, we create an approximation method by combining the Closest Point Method with minimizing movements and perform numerical error analyses for heat and wave equations defined on surfaces. In section <ref>, we discuss our approximation method for mean curvature flow on surfaces and hyperbolic mean curvature flow on surfaces. This includes an explanation of the signed distance vector field, which is required to handle multiphase domains, and the area preservation condition that is achieved through the use of minimizing movements. In section <ref>, we discuss mean curvature flow and hyperbolic mean curvature flow on surfaces and describe the method for enforcing area preservation conditions in multiphase environments. We summarize the contents of this paper and discuss future challenges in section <ref>.
§ BACKGROUND
In this section, we will briefly explain our goals and the mathematical frameworks used in our research. In section <ref>, we will touch upon our research objectives. Then, in section <ref>, we introduce the Closest Point Method (CPM), and in section <ref> we will introduce the minimizing movements (MM). We remark again that, by combining these methods, we obtain an approximation method for partial differential equations with constraints on curved surfaces.
§.§ Objectives
As stated in Section <ref>, the goal of this research is to create an approximation method for interfacial motion on curved surfaces. Here, we will explain two representative examples of interfacial motions on curved surfaces: mean curvature flow <cit.> and hyperbolic mean curvature flow <cit.>. The approximation method for mean curvature flow that we employ is known as the MBO algorithm, which alternates between solving the heat equation and constructing level set functions <cit.>. The numerical solution method for hyperbolic mean curvature flow is known as the HMBO algorithm, which alternates between solving the wave equation and constructing level set functions <cit.>. Examples of more complex interfacial motions involving area preservation and extensions to multiphase regions are illustrated. Here, multiphase regions refer to regions where the domain is divided into three or more regions by the interface. We use the fact that the area preserving condition can be realized by imposing a constraint on the partial differential equations used in the MBO and HMBO algorithms <cit.>. We again remark that the case of multiphase regions can be treated by using the signed distance vector field <cit.>.
Based on the above considerations, the purpose of this research is as follows:
* establish an approximation method for partial differential equations with constraints on curved surfaces
* extend the MBO and HMBO algorithms to curved surfaces and numerically compute mean curvature flow and hyperbolic mean curvature flow on surfaces
* generalize our framework to treat the above problems in the multiphase setting.
§.§ Closest Point Method
When numerically solving partial differential equations on surfaces, an approximation of surface gradients (SG) on the surface is necessary.
For a smooth surface S embedded in n-dimensional Euclidean space, the SG of a function u on the surface S is given by
∇_S u=∇ u-n(n·∇ u),
where n is the unit normal vector of the surface S, and ∇ is the usual gradient in the Euclidean space.
In the CPM, an approximation of the SG on the surface is obtained by smoothly extending the values of the function defined on the surface in the direction of the surface normal vector <cit.>.
This is enabled by constructing a function that provides the closest point on the surface S to any external point of the ambient space.
For any point x in n-dimensional space, the function C_S that gives the closest point on the surface S is defined as follows:
C_S(x)=_y∈ S|| x-y||
For example, if S is the unit circle (n=2), then C_S is given by:
C_S(x,y)=(x/√(x^2+y^2),y/√(x^2+y^2)), (x,y)∈ℝ^2
If S is the unit sphere (n=3), then C_S is given by:
C_S(x,y,z)=(x/√(x^2+y^2+z^2),y/√(x^2+y^2+z^2),z/√(x^2+y^2+z^2)), (x,y,z)∈ℝ^3
Note that, in the case of the unit circle or sphere, the closest point to the origin is not uniquely determined.
In this case, since the distance between the origin and the surface is constant, CPM assigns an arbitrary point on the surface as the closest point to the origin.
Figure <ref> shows the relationship between a point x in three-dimensional space and its closest point p on the surface S.
The closest point p is given by the closest point function C_S as p=C_S(x).
Note that, if a surface S is represented by a parameterization, it is easy to find the closest point. In such a case, we just need to solve the optimization problem (<ref>). On the other hand, if the surface S is not parameterized but is represented by a point cloud or a triangulated surface, then a bit more ingenuity is required. For example, when the surface S is discretized by a point cloud, finding the closest point on the surface S may involve constructing implicit surfaces defined by distance functions, or may require one to accept a certain loss of uniqueness.
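To make the closest point map concrete, the following short Python sketch (our own illustration; helper names such as closest_point_sphere are hypothetical and not part of any implementation referenced in the text) evaluates C_S for the unit sphere and uses it to extend a surface function constantly along normals.

```python
import numpy as np

def closest_point_sphere(x, center=np.zeros(3), radius=1.0):
    """Closest point on a sphere to the point(s) x, i.e. C_S(x)."""
    v = np.atleast_2d(x) - center
    r = np.linalg.norm(v, axis=1, keepdims=True)
    # For a point at the center the closest point is not unique;
    # pick an arbitrary direction, as described in the text.
    v = np.where(r > 1e-14, v, np.array([1.0, 0.0, 0.0]))
    r = np.where(r > 1e-14, r, 1.0)
    return center + radius * v / r

def cp_extension(u_surface, x):
    """Extend a function defined on the sphere to ambient points x by
    evaluating it at the closest point (constant along normals)."""
    return u_surface(closest_point_sphere(x))

# Example: extend u(p) = p_z (i.e. cos(theta)) off the unit sphere.
u = lambda p: p[:, 2]
pts = np.array([[0.0, 0.0, 2.0], [1.5, 0.0, 0.0]])
print(cp_extension(u, pts))   # -> [1.0, 0.0]
```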
The following theorems form the basis for approximations using the CPM <cit.>.
Let S ⊂ℝ^3 be a smooth surface.
Let u: ℝ^3 →ℝ be an arbitrary smooth function that has a constant value in the normal direction of the surface S near the surface.
Then, on the surface S,
∇ u= ∇_S u
holds <cit.>.
Here, ∇_S designates the surface gradient on the surface S.
Let v be an arbitrary smooth vector field in ℝ^3 that is tangent to the surface S or to a surface that is at a fixed distance from the surface S.
Then, on the surface S,
∇· v=∇_S· v
holds.
If u is a function defined on a surface S, then the function u(C_S(x)) given by the closest point function C_S is a function that has a constant value in the direction of the normal vector of the surface.
Therefore, by Theorem <ref>, we have:
∇ u(C_S(x)) = ∇_S u(x), x∈ S
Furthermore, since ∇ u(C_S(x)) is always tangent to a surface that is at a short and fixed distance from the surface S, Theorem <ref> implies that:
∇·∇ u(C_S(x)) = ∇_S·∇_S u(x), x∈ S
The operator ∇_S·∇_S on the right-hand side of the above equation is intrinsic to the surface S and is denoted by Δ_S (the Laplace-Beltrami operator).
From the above theorem, it can be seen that by using the extension given by C_S, approximation methods used in Euclidean space can be applied to approximate differential operators such as ∇_S and Δ_S.
In <cit.>, examples of solving partial differential equations on surfaces using CPM and numerical methods are presented.
As mentioned earlier, in numerical calculations of curvature flow using MBO and HMBO algorithms, an approximation method for surface partial differential equations with constraints is required to achieve the area-preservation condition.
One effective approximation method for constrained partial differential equations is the method of minimizing movements.
In the next section <ref>, we will discuss the extension of minimizing movements to the case of surface PDE.
§.§ Minimizing movements
Here we will explain the method of minimizing movements (MM), also known as Discrete Morse Flow <cit.>.
Minimizing movements are a method for approximating the gradient flow of an energy functional
ℰ(u)=∫_Ω L(∇ u(x),u(x),x) dx
by iteratively minimizing functionals of the form
ℱ_n(u)=∫_Ω|u-u_n-1|^2/2hdx+ℰ(u)
within a suitable function space and where h>0 is a suitable time step.
Here, Ω is a region with given boundary conditions, and u_n is an approximation of u at time t=nh.
The Euler-Lagrange equation of each functional ℱ_n represents an approximation of the gradient flow of the energy functional.
By changing ℱ_n(u), various approximations of solutions to partial differential equation can be obtained as the Euler-Lagrange equation of ℱ_n.
For example, for a given α > 0, if we set,
ℱ_n(u)=∫_Ω|u-u_n-1|^2/2h+α|∇ u|^2/2dx
we obtain an approximation of a heat equation, and if we set
ℱ_n( u)
= ∫_Ω| u-2 u_n-1+ u_n-2|^2/2h^2
+α|∇ u|^2/2dx
we obtain an approximation of the wave equation.
Minimizing movements are based on energy minimizations, so it is possible to naturally handle constrained partial differential equations by adding penalty terms to the functional.
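As an illustration of the heat-equation functional above, the following sketch performs a single minimizing-movement step on a periodic one-dimensional grid with SciPy's L-BFGS routine; the grid size, time step and initial data are arbitrary choices made for this example only.

```python
import numpy as np
from scipy.optimize import minimize

# One minimizing-movement step for the heat-equation functional F_n
# on a periodic 1D grid (illustrative sketch; parameters are arbitrary).
N, alpha, h = 200, 1.0, 1e-3
dx = 2 * np.pi / N
x = np.arange(N) * dx
u_prev = np.sign(np.sin(x))          # previous time level u_{n-1}

def F(u):
    grad = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # centred difference
    return np.sum((u - u_prev)**2 / (2 * h) + alpha * grad**2 / 2) * dx

res = minimize(F, u_prev, method="L-BFGS-B")
u_new = res.x                        # approximates one step of u_t = alpha * u_xx
# A constraint (e.g. conservation of the mean) could be handled by adding a
# penalty term such as rho * (np.sum(u - u_prev) * dx)**2 to F.
```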
In the next section, we will create an approximation method for constrained partial differential equations on surfaces by combining minimizing movements with the CPM.
§ NUMERICAL CALCULATION OF PDES ON SURFACES
Here we will provide an overview of an approximation method that combines CPM and MM, and explain its algorithm.
We also perform a numerical convergence analysis for the surface heat and wave equations using our method.
§.§ Approximation of PDEs on surfaces by CPM.
We will explain the method of discretization in time when approximating solutions to the heat and wave equations on curved surfaces using the CPM.
To this end, let S be a closed smooth surface without boundary in ℝ^3.
We consider the following surface heat equation (<ref>):
u^S_t(t,x)=αΔ_S u^S(t,x), x∈ S, t>0
u^S(0,x)=f(x), x∈ S
and surface wave equation (<ref>):
u^S_tt(t,x)=αΔ_S u^S(t,x), x∈ S, t>0
u^S_t(0,x)=V_0(x), x∈ S
u^S(0,x)=f(x), x∈ S
Here, α>0 is a constant, f(x) is the initial condition, V_0 is the initial velocity, and Δ_S is the Laplace-Beltrami operator on the surface S.
We remark that boundary conditions are not included in equations (<ref>) and (<ref>) because the surface S is without boundary.
For a given time step size h>0, we approximate the time derivative in (<ref>) using a forward difference, and in (<ref>) we use a centered difference approximation with respect to time.
By defining u^S_n to be an approximation of u^S at time nh where n=0,1,⋯, we obtain:
u^S_n+1(x)=u^S_n(x)+hαΔ_S u_n^S(x),
u_0^S(x)=f(x), x∈ S
which is a approximation scheme for equation (<ref>).
Similarly, the result for equation (<ref>) is given by:
u^S_n+1(x)=2u^S_n(x)-u^S_n-1(x)+h^2αΔ_S u_n^S(x),
u^S_-1(x)=u^S_0(x)-hV_0(x),
u_0^S(x)=f(x), x∈ S
Since Δ_S is included in the right-hand side of equations (<ref>) and (<ref>), they are difficult to compute in the general setting.
However, in the CPM, the following equations (<ref>) and (<ref>) are used to compute the solutions in the space Ω surrounding the surface S.
Here, u_n is a function value defined on Ω at time nh.
u_n+1(x)=u_n(C_S(x))+hαΔ u_n(C_S(x)),
u_0(x)=f(C_S(x)),
x∈Ω
u_n+1(x)=2u_n(C_S(x))-u_n-1(C_S(x))+h^2αΔ u_n(C_S(x)),
u_-1(x)=u_0(C_S(x))-hV_0(C_S(x)),
u_0(x)=f(C_S(x)),
x∈Ω
Here, C_S is defined by equation (<ref>), and Δ = ∇·∇.
Since equations (<ref>) and (<ref>) do not contain Δ_S, it is possible to apply standard numerical approximation techniques in the surrounding Euclidean space to calculate surface gradient quantities.
Also, since the surface is given by a point cloud, interpolation can be used to define the numerical solution restricted to the surface S or at any other point in the domain Ω.
Although explicit methods were used to discretize the time derivatives in equations (<ref>) and (<ref>), implicit methods can also be used <cit.>.
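For concreteness, a minimal Python sketch of the explicit scheme (<ref>) for the unit sphere might look as follows; banding is omitted and the whole Cartesian grid is updated, which is wasteful but keeps the example short, and all parameter choices are ours.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# One explicit CPM time step, u_{n+1} = u_n(C_S) + h*alpha*Lap u_n(C_S),
# for the unit sphere on a surrounding Cartesian grid (no banding).
n, alpha = 41, 1.0
g = np.linspace(-1.5, 1.5, n)
dx = g[1] - g[0]
h = dx**2 / 6.0
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
P = np.stack([X, Y, Z], axis=-1)
r = np.maximum(np.linalg.norm(P, axis=-1, keepdims=True), 1e-12)
CP = P / r                                         # closest points C_S(x)

u = CP[..., 2]                                     # f(theta) = cos(theta), CP-extended

def cpm_heat_step(u):
    interp = RegularGridInterpolator((g, g, g), u)
    u_ext = interp(CP.reshape(-1, 3)).reshape(u.shape)   # u_n(C_S(x))
    lap = (-6.0 * u_ext
           + np.roll(u_ext, 1, 0) + np.roll(u_ext, -1, 0)
           + np.roll(u_ext, 1, 1) + np.roll(u_ext, -1, 1)
           + np.roll(u_ext, 1, 2) + np.roll(u_ext, -1, 2)) / dx**2
    return u_ext + h * alpha * lap

for _ in range(10):
    u = cpm_heat_step(u)
# On S the result approximates exp(-2*t)*cos(theta) with t = 10*h.
```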
The combination of CPM and MM for the calculation of equations (<ref>) and (<ref>) is described in detail in Section <ref>.
§.§ Combination of CPM and MM
As mentioned in Section <ref>, when performing calculations for curvature flow with an area preservation constraint via the MBO or HMBO algorithms, an approximation method for the constrained partial differential equation is necessary.
Here we explain the approximation method for the constrained partial differential equations on surfaces by combining CPM and MM.
We will explain our method for applying CPM to minimizing movement for the surface heat equation (<ref>), and the surface wave equation (<ref>).
As described in Section <ref>, applying CPM yields the approximations for the surface heat equation (<ref>) and the surface wave equation (<ref>), given by equations (<ref>) and (<ref>), respectively.
As a numerical method, utilizing the method of minimizing movements requires one to approximate functional values. In particular, for n=0,1,⋯, using a time step size h>0 and a constant α>0, the following functional values are required and can be approximated, for example, by means of the finite element method:
ℱ_n+1(u)
= ∫_Ω| u(x)- u_n(C_S(x))|^2/2h
+α|∇ u(x)|^2/2dx
ℱ_n+1( u)
= ∫_Ω| u(x)-2 u_n(C_S(x))+ u_n-1(C_S(x))|^2/2h^2
+α|∇ u(x)|^2/2dx
Here, Ω is a sufficiently large region that covers the surface S, and u_n minimizes functional ℱ_n.
In the following, we will show that the Euler-Lagrange equations for equations (<ref>) and (<ref>) lead to the implicitly discretized equations using the CPM method for partial differential equations on surfaces.
Let ϕ be an arbitrary function from C_0^∞(Ω) and ϵ be a real number.
We compute the first variation of equation (<ref>) as follows:
.d/dϵℱ_n+1(u+ϵϕ)|_ϵ=0=0.
The first variation is:
d/dϵℱ_n+1(u+ϵϕ)
=d/dϵ∫_Ω| (u+ϵϕ)- u_n(C_S)|^2/2h
+α|∇ (u+ϵϕ)|^2/2dx
=∫_Ω(u+ϵϕ)- u_n(C_S)/hϕ
+α∇ (u+ϵϕ)·∇ϕ dx.
Substituting ϵ=0 into equation (<ref>), we obtain:
.d/dϵℱ_n+1(u)|_ϵ=0 =∫_Ωu- u_n(C_S)/hϕ
+α∇ u·∇ϕ dx
=∫_Ω( u- u_n(C_S)/h
-αΔu)ϕ dx+α∫_∂Ω∂ u/∂νϕ dS,
where ∂Ω is the boundary of Ω, and ∂ u/∂ν is the outer normal derivative of u on ∂Ω.
Since ϕ is an arbitrary C_0^∞(Ω) function, the boundary integral in (<ref>) is zero, and we have:
.d/dϵℱ_n+1(u)|_ϵ=0 =∫_Ω( u- u_n(C_S)/h
-αΔu)ϕ dx
A weak form of the Euler-Lagrange equation (<ref>) is therefore:
∫_Ω( u- u_n(C_S)/h
-αΔu)ϕ dx=0
Since ϕ is arbitrary, the fundamental lemma of the calculus of variations applies to obtain:
u- u_n(C_S)/h
-αΔu=0
which, written as an approximation scheme states:
u=u_n(C_S)+hαΔu
Equation (<ref>) is an implicit form of the time-discrete surface heat equation (<ref>) obtained by using the CPM.
The functional (<ref>) can be treated in the same fashion. We obtain:
d/dϵℱ_n+1(u+ϵϕ)
=d/dϵ∫_Ω| (u+ϵϕ)- 2u_n(C_S)+u_n-1(C_S)|^2/2h^2
+α|∇ (u+ϵϕ)|^2/2dx
=∫_Ω(u+ϵϕ)- 2u_n(C_S)+u_n-1(C_S)/h^2ϕ
+α∇ (u+ϵϕ)·∇ϕ dx.
Setting ϵ=0 in (<ref>), we have:
.d/dϵℱ_n+1(u)|_ϵ=0 =∫_Ωu- 2u_n(C_S)+u_n-1(C_S)/h^2ϕ
+α∇ u·∇ϕ dx
=∫_Ω( u- 2u_n(C_S)+u_n-1(C_S)/h^2
-αΔu)ϕ dx+α∫_∂Ω∂ u/∂νϕ dS.
As before, ∂ u/∂ν is the derivative of u in the direction of the outer normal vector ν on ∂Ω.
Since ϕ is an arbitrary function in C_0^∞(Ω), we obtain the following equation:
.d/dϵℱ_n+1(u)|_ϵ=0 =∫_Ω( u- 2u_n(C_S)+u_n-1(C_S)/h^2
-αΔu)ϕ dx.
It follows that
u- 2u_n(C_S)+u_n-1(C_S)/h^2
-αΔu=0,
which can be written as (<ref>):
u=2u_n(C_S)-u_n-1(C_S)+h^2αΔu.
Equation (<ref>) is an implicit approximation of the time-discretized surface wave equation (<ref>) using the CPM (compare to Equation (<ref>)).
Having shown that the minimizing schemes above produce approximate solutions to the surface PDE (<ref>) and (<ref>), we now turn to related numerical considerations.
Next, we introduce the computational algorithms for implementing the CPM and MM.
§.§ Computational methods for the heat and wave equations on surfaces
Here we will explain the computational notions used in our numerical methods.
For simplicity, we first explain the setting of the surface heat equation (<ref>) on a smooth closed surface S without boundary in three-dimensional Euclidean space. For the sake of clarity, we then also explain the details in the setting of the surface wave equation (<ref>).
Let α>0 denote the diffusion coefficient and Δ_S denote the Laplace-Beltrami operator on S. Given a time step of h>0, the algorithm for the surface-type minimizing movement that we developed is as follows.
Surface-type minimizing movements for the surface heat equation (<ref>)
* Create the computational domain Ω^D by preparing a sufficiently large Cartesian grid covering the surface S.
Let x_min, x_max, y_min, y_max, z_min, and z_max be the coordinates of the grid boundaries, as shown in Figure <ref>.
Let the grid spacing in the three spatial directions be given by Δ x, Δ y, and Δ z, respectively.
Then Ω^D is defined as follows:
Ω^D = { x_i,j,k = (x_i, y_j, z_k) | 0 ≤ i ≤ N_x, 0 ≤ j ≤ N_y, 0 ≤ k ≤ N_z }
where i, j, and k are natural numbers, and N_x, N_y, and N_z denote the number of grid points along the axes of the coordinate system.
The grid points in the computational domain are expressed as follows:
x_i=x_min+iΔ x, y_j=y_min+jΔ y,
z_k=z_min+kΔ z,
N_x=x_max-x_min/Δ x, N_y=y_max-y_min/Δ y, N_z=z_max-z_min/Δ z
For simplicity, we assume a uniform grid Δ x=Δ y=Δ z.
* Using the closest point function C_S, compute and record the closest point on the surface S for each point in Ω^D.
* To reduce computational cost, calculations are performed only in a vicinity near the surface S.
This process is called banding.
In particular, we extract a set of points from Ω^D whose Euclidean distance to the surface S is less than or equal to a constant value λ >0 and denote the region by Ω^D_λ.
This is expressed as follows, where ||·|| represents the Euclidean norm.
Ω_λ^D={x∈Ω^D |||x-C_S(x)||≤λ}
Remark:
Ω_λ^D is a point cloud; it consists of discrete points. In the continuous case, a sufficiently large region Ω⊂ℝ^3 covering the surface S is taken, and the region Ω_λ around the surface is defined as follows:
Ω_λ={x∈Ω |||x-C_S(x)||≤λ}
Remark:
The value of λ needs to be chosen appropriately, depending on the interpolation method used in Step <ref> below.
If a polynomial interpolation is used, λ depends on the degree of the interpolation.
Here, we explain a method for determining λ when performing a linear interpolation in a two-dimensional space (higher dimensions can be treated analogously).
We assume that the grid points in the computational domain Ω^D have equal spacing in both the horizontal and vertical directions (Figure <ref>(a)).
To obtain the interpolated value at the point denoted by “⋆" in Figure <ref>(b), four points denoted by “∙" are required.
In this case, the maximum distance between the interpolation point and the grid points is √(2(Δ x/2)^2).
The maximum distance occurs in Figure <ref>(c), and its value is √(2(Δ x)^2).
Therefore, λ must be larger than √(2(Δ x)^2).
Thus, one choice is to set λ=√((Δ x)^2+2(Δ x)^2) when a linear interpolation is used in a two-dimensional space.
This discussion can be generalized to the case of a d-dimensional pth degree polynomial interpolation, in which case we obtain <cit.>:
λ=√((d-1)(p+1/2)^2+(1+p+1/2)^2)Δ x
Since we are considering surfaces in three-dimensional space, we select λ using the interpolation degree p as follows:
λ=√(2(p+1/2)^2+(1+p+1/2)^2)Δ x
* When approximating the gradient of a function in Ω_λ^D, information about the boundary points is necessary.
We define the characteristic function ϕ_i,j,k as follows:
ϕ_i,j,k=
0, x_i,j,k∈Ω_λ^D
1, otherwise,
from which we define the boundary points ∂Ω_λ^D of Ω_λ^D as follows:
∂Ω_λ^D={x_i,j,k∈Ω^D |ϕ_i,j,k|∇_Dϕ_i,j,k|≠0}
where ∇_Dϕ_i,j,k=(ϕ_i+1,j,k-ϕ_i-1,j,k,
ϕ_i,j+1,k-ϕ_i,j-1,k,
ϕ_i,j,k+1-ϕ_i,j,k-1)/(2Δ x).
We then join ∂Ω_λ^D with Ω_λ^D and define it as Ω̂_λ^D, that is,
Ω̂_λ^D=Ω_λ^D∪∂Ω_λ^D.
Figure <ref> shows the relationship between S, Ω_λ^D, and ∂Ω_λ^D.
Figure <ref> is a schematic diagram of the section of Figure <ref>.
*
Using the closest point function C_S, extend the initial condition given on the surface S to Ω^D as follows, where the initial condition at point x_i,j,k is denoted by u^0_i,j,k.
u^0_i,j,k=
u^S_0(C_S(x_i,j,k)), x_i,j,k∈Ω̂_λ^D
0, x_i,j,k∈Ω^D ∖Ω̂_λ^D
Remark:
In order to simplify the calculations, the initial values of the grid points outside of Ω̂_λ^D are set to 0.
* Obtain an approximate solution of the heat equation on Ω_λ^D using MM.
To approximate u on Ω_λ^D in equation (<ref>), let u_i,j,k=u(x_i,j,k).
Approximate the functional values of (<ref>) by means of an expression such as:
ℱ_n(u)≈Δ x^3∑_x_i,j,k∈Ω_λ^D{|u_i,j,k-u_i,j,k^n-1|^2/2h+α(∇_D,xu_i,j,k)^2+(∇_D,yu_i,j,k)^2+(∇_D,zu_i,j,k)^2/2}
Denote the minimizer of this functional by u_n, where u=(u_i,j,k).
Here, Δ x^3 is the volume of the element, and ∇_D,xu_i,j,k, ∇_D,yu_i,j,k, ∇_D,zu_i,j,k are calculated by difference approximations as follows:
∇_D,xu_i,j,k=u_i+1,j,k-u_i-1,j,k/2Δ x
∇_D,yu_i,j,k=u_i,j+1,k-u_i,j-1,k/2Δ x
∇_D,zu_i,j,k=u_i,j,k+1-u_i,j,k-1/2Δ x
Note that, as mentioned earlier, we assume Δ x=Δ y=Δ z.
Remark:
Various methods can be used to obtain the minimizer of (<ref>). Among them, from the viewpoint of computational cost, the L-BFGS method is often used <cit.>.
* Create an interpolating function I_n(x) defined on Ω_λ for the minimizer obtained in step <ref>.
Using I_n(x), define u_n^S(x)=I_n(x) for x∈ S.
It should again be noted that I_n(x) is defined on Ω_λ. There are multiple methodologies for its construction.
One example is to use trilinear interpolation, which is a linear interpolation in 3D <cit.>.
The computations in this study have used polynomial interpolations.
* Using the closest point function C_S, extend u_n^S onto Ω̂_λ^D as follows:
u^n_i,j,k=u_n^S(C_S(x_i,j,k)), x_i,j,k∈Ω̂_λ^D
* Repeat steps <ref> to <ref> for n=1,2,⋯ until the desired final time is reached.
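The minimization and re-extension steps above (approximating the discrete functional, minimizing it with L-BFGS, interpolating, and re-extending via C_S) can be illustrated with the following minimal Python sketch. This is our own illustrative reimplementation, not the code used in this study; banding is omitted, so the functional is summed over the entire grid, and the analytic gradient of the discrete functional is supplied to L-BFGS to keep the example fast.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import RegularGridInterpolator

# Minimal sketch of surface-type minimizing movements for the heat equation
# on the unit sphere.  Grid size and number of steps are arbitrary.
alpha, n = 1.0, 31
g = np.linspace(-1.5, 1.5, n)
dx = g[1] - g[0]
h = dx**2 / 6.0
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
P = np.stack([X, Y, Z], axis=-1)
r = np.maximum(np.linalg.norm(P, axis=-1, keepdims=True), 1e-12)
CP = P / r                                        # closest points C_S(x_{i,j,k})
shape = (n, n, n)

def dcent(u, axis):                               # centred difference, periodic wrap
    return (np.roll(u, -1, axis) - np.roll(u, 1, axis)) / (2 * dx)

def mm_step(u_prev):
    """Minimize the discrete functional F_n with L-BFGS."""
    def F_and_grad(v):
        u = v.reshape(shape)
        gx, gy, gz = dcent(u, 0), dcent(u, 1), dcent(u, 2)
        F = dx**3 * np.sum((u - u_prev)**2 / (2 * h)
                           + alpha * (gx**2 + gy**2 + gz**2) / 2)
        gF = dx**3 * ((u - u_prev) / h
                      - alpha * (dcent(gx, 0) + dcent(gy, 1) + dcent(gz, 2)))
        return F, gF.ravel()
    res = minimize(F_and_grad, u_prev.ravel(), jac=True, method="L-BFGS-B")
    return res.x.reshape(shape)

def cp_extend(u):
    """Interpolate the minimizer and re-extend its values via C_S."""
    interp = RegularGridInterpolator((g, g, g), u)
    return interp(CP.reshape(-1, 3)).reshape(shape)

u = CP[..., 2]                                    # f = cos(theta), CP-extended
for step in range(5):
    u = cp_extend(mm_step(u))
# On S, u now approximates exp(-2 * 5*h) * cos(theta).
```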
Next, we will explain the computational algorithm for the surface wave equation (<ref>).
Surface-type minimizing movements for the surface wave equation (<ref>)
* Perform the computations in Steps <ref> to <ref> of the previous algorithm.
* Assign u_i,j,k^-1 using the initial velocity V_0 of equation (<ref>).
This can be done, for example, by means of the backward difference approximation u_i,j,k^-1=u_i,j,k^0-hV_0(x_i,j,k).
Note that u_i,j,k^-1 represents the value at the grid point x_i,j,k at time -h.
* Compute an approximate solution to the wave equation on Ω_λ^D using MM.
Similar to the case of the heat equation, the functional values in (<ref>) can be approximated as follows, for n=1,2,⋯:
ℱ_n(u)≈Δ x^3∑_x_i,j,k∈Ω_λ^D{|u_i,j,k-2u_i,j,k^n-1+u_i,j,k^n-2|^2/2h^2+α(∇_D,xu_i,j,k)^2+(∇_D,yu_i,j,k)^2+(∇_D,zu_i,j,k)^2/2}
The minimizer of this functional is denoted by u_n, where u=(u_i,j,k).
Here, Δ x^3 is the volume of the element, and ∇_D,xu_i,j,k, ∇_D,yu_i,j,k, ∇_D,zu_i,j,k are calculated in the same way as in Step <ref> of the previous algorithm.
* Define u_n^S using the minimizer obtained in Step <ref> by employing the same procedure as in Step <ref> of the previous algorithm.
* Using the closest point function C_S, extend u_n^S onto Ω̂_λ^D.
* Repeat steps <ref> to <ref> for n=1,2,⋯ until the desired final time is reached.
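Compared with the heat-equation sketch above, only the first term of the discrete functional changes; a self-contained sketch of the corresponding objective and its gradient is given below, where the helper dcent and the parameters mirror those of the heat-equation sketch and the function names are our own.

```python
import numpy as np

# Sketch of the change needed for the surface wave equation: the first term
# of the minimizing-movement functional now involves two previous levels
# (u_prev = u_n, u_prev2 = u_{n-1}).
def dcent(u, axis, dx):
    return (np.roll(u, -1, axis) - np.roll(u, 1, axis)) / (2 * dx)

def wave_functional(v, u_prev, u_prev2, dx, h, alpha):
    u = v.reshape(u_prev.shape)
    gx, gy, gz = dcent(u, 0, dx), dcent(u, 1, dx), dcent(u, 2, dx)
    F = dx**3 * np.sum((u - 2*u_prev + u_prev2)**2 / (2 * h**2)
                       + alpha * (gx**2 + gy**2 + gz**2) / 2)
    gF = dx**3 * ((u - 2*u_prev + u_prev2) / h**2
                  - alpha * (dcent(gx, 0, dx) + dcent(gy, 1, dx) + dcent(gz, 2, dx)))
    return F, gF.ravel()
# The extra initial level is obtained from the initial velocity,
# e.g. u_prev2 = u_prev - h * V0, and the minimizer is again found with L-BFGS.
```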
In the next section, we perform a numerical error analysis using the above algorithms for the heat and wave equations on the surface of the unit sphere.
We will begin by treating the case of the surface heat equation.
§.§ Numerical error analysis of MM for the heat equation on a surface
Here, we will perform a numerical error analysis of the algorithm for solving the surface heat equation, described in the previous section.
Using MM, we numerically solve the surface heat equation (<ref>) on the unit sphere S, and examine the error between the numerical solution and the exact solution.
We define the unit sphere S in the 3D space as follows:
S = { (sinθ cosϕ, sinθ sinϕ, cosθ) | 0 ≤ θ ≤ π, 0 ≤ ϕ ≤ 2π }
We perform two numerical experiments by changing the initial condition f in equation (<ref>) on the unit sphere S.
First, we explain the initial conditions used and their corresponding exact solutions.
The results of the numerical error analysis are presented in Section <ref>.
§.§ MM and the surface heat equation: initial condition 1
Setting the diffusion coefficient to α=1, we take the initial condition f as
f(θ)=cosθ
The exact solution of equation (<ref>) is then given by
u(θ ,ϕ,t)=e^-2tcosθ, t≥ 0
§.§ MM and the surface heat equation: initial condition 2
Setting the diffusion coefficient to α=1/42, we take the initial condition f as
f(θ,ϕ)=Y^0_6(θ,ϕ)+√(14/11)Y^5_6(θ,ϕ)
where Y^m_l(θ,ϕ) are the eigenfunctions of the Laplacian on the unit sphere, known as spherical harmonics.
The exact solution of equation (<ref>) is then given by
u(θ ,ϕ,t)=e^-t{Y^0_6(θ,ϕ)+√(14/11)Y^5_6(θ,ϕ)}, t≥ 0
as shown in <cit.>.
The results of the numerical error analysis using initial conditions 1 and 2 (described above) for the surface heat equation are described in the next section.
§.§ MM numerical error analysis results (heat equation on the unit sphere)
We investigate the relationship between Δ x and the numerical error of the MM approximation to the solution of the surface heat equation. Computations follow the computational algorithm for the heat equation on surfaces (<ref>), presented in Section <ref>, where the spatial discretization Δ x is varied.
The L-BFGS method is used to minimize the discretized functional (<ref>). We implement the method using Optim.jl <cit.>, and calculate the functional gradient using automatic differentiation (ReverseDiff.jl <cit.> is used for this purpose).
The time step h is set to h = Δ x^2/6, and polynomial interpolation of order p=2 is used (see Section <ref>).
For both initial conditions 1 and 2, we calculate the maximum absolute error L_∞ on S at the closest point to each point in Ω_λ^D at time t_e=0.25. We note that, since the exact solution of the surface heat equation converges to 0, it becomes difficult to evaluate the error between the numerical solution and the exact solution. For this reason, we have chosen such a value of t_e (i.e., so that the L_∞-error of the absolute value of the exact solution at time t_e is sufficiently large for both initial conditions 1 and 2).
The L_∞-error is defined as
L_∞-error = sup_x∈Ω_λ^D |u(C_S(x),t_e) - û(C_S(x),t_e)|
where û is the numerical solution and u denotes the exact solution.
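As a small illustration (assuming the numerical solution is stored at the banded grid points together with their closest points), the error above can be evaluated for initial condition 1 as in the following sketch; the function name is hypothetical.

```python
import numpy as np

# L_infinity error for initial condition 1: compare the numerical solution
# u_num at the banded grid points with the exact solution exp(-2*t)*cos(theta)
# evaluated at the corresponding closest points CP_band on the unit sphere.
def linf_error(u_num, CP_band, t_e):
    exact = np.exp(-2.0 * t_e) * CP_band[:, 2]   # cos(theta) = z on the unit sphere
    return np.max(np.abs(u_num - exact))
```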
The results obtained for each Δ x are shown in Table <ref> and Table <ref>.
The results are plotted in Figure <ref>(heaterr) and Figure <ref>(heaterr_log) using both regular and log-log scales, respectively.
The legend in the figure denotes initial condition 1 by cond1, and initial condition 2 by cond2.
The time evolution is shown in Figure <ref> and Figure <ref>.
The results confirm that the L_∞-error decreases as Δ x decreases, except for the case where Δ x=0.0125.
Except for this case, halving Δ x reduces the numerical error by roughly a factor of four; that is, the error is approximately proportional to the square of Δ x.
The reason for the larger numerical error in initial condition 2 may be due to insufficient resolution relative to the initial condition.
These results confirm that the numerical solution obtained by MM converges to the exact solution of the surface heat equation as the spatial discretization converges to zero.
We also note that the numerical error increases when Δ x becomes too small (Figure <ref>(heaterr_log)).
Next, we will perform a numerical error analysis for the wave equation on a curved surface.
§.§ Numerical error Analysis for the surface wave equation
Using the MM approximation scheme previously described, we numerically solve the wave equation on a curved surface (<ref>) and examine the error between the numerical and the exact solution.
We perform two computational experiments by changing the initial condition f of equation (<ref>) on the unit sphere S expressed by equation (<ref>).
First, we explain the initial conditions used and their exact solutions.
The results of the numerical error analysis are presented in Section <ref>.
§.§ Surface wave equation: initial condition 1
With the constant α=1, we set the initial condition f as
f(θ)=-cosθ
and the initial velocity V_0 as
V_0=0
In this case, the exact solution of equation (<ref>) is given by
u(θ,ϕ,t)=-cos(√(2)t)cosθ, t≥ 0
§.§ Surface wave equation: initial condition 2
Let α = 1 and the initial condition f be
f(θ ,ϕ)=Y^0_6(θ,ϕ)+√(14/11)Y^5_6(θ,ϕ)
with initial velocity V_0 = 0.
Then, the exact solution of equation (<ref>) is given by
u(θ,ϕ,t)=cos(√(42)t){Y^0_6(θ,ϕ)+√(14/11)Y^5_6(θ,ϕ)}, t≥ 0
Next, we will explain the results of the numerical error analysis using initial conditions 1 and 2 for the surface wave equation.
§.§ Numerical error analysis results (wave equation on the unit sphere)
We performed several numerical simulations using the algorithm presented in Section <ref> for the wave equation (<ref>) and investigated the relationship between the numerical error and the spatial discretization Δ x.
We used the same optimization methods and interpolation degree p as those used for the surface heat equation.
The time step h was set to Δ x/10.
For initial condition 1, the absolute error L_∞ is calculated at the closest point on S for each point in Ω_λ^D at the time t=2π/√(2).
For initial condition 2, the absolute error L_∞ is calculated at the closest point on S for each point in Ω_λ^D at the time t=2π/√(42).
The maximum value of the absolute error is then obtained for each case.
We evaluated the L_∞ error at a time t_e such that the exact solutions for initial conditions 1 and 2 have each completed one oscillation over 0 ≤ t ≤ t_e.
The L_∞ error is defined as follows:
L_∞-error = sup_x∈Ω_λ^D |u(C_S(x),t_e) - û(C_S(x),t_e)|
where û denotes the numerical solution, and u denotes the exact solution.
The results obtained for Δ x and the L_∞-error are presented in Table <ref> and Table <ref>.
Figure <ref>(waveerr) shows a graphical representation of the contents of Table <ref> and Table <ref>, while Figure <ref>(waveerr_log) shows a representation on log-log scale.
In the legend of the figures, cond1 corresponds to initial condition 1, and cond2 corresponds to initial condition 2.
The evolution of the solution over time is shown in Figure <ref> and Figure <ref>.
The results show that the L_∞-error decreases with Δ x.
Except for the case Δ x=0.2, the numerical error is approximately halved when Δ x is halved.
That is, the numerical error decreases in proportion to Δ x.
Similar to the numerical error analysis for the heat equation, it was found that the numerical error for the initial condition 2 is greater than that for initial condition 1.
These results indicate that when the spatial grid spacing Δ x is sufficiently small, the numerical solution obtained by the developed numerical method for the wave equation converges to the exact solution.
In this section, we explained the approximation methods for partial differential equations on curved surfaces using the CPM and MM methods, and presented the results of their numerical error analysis.
In the following sections, we will illustrate applications of the approximation methods to the simulation of interfacial motions on curved surfaces.
§ NUMERICAL SIMULATION OF INTERFACIAL MOTIONS ON SURFACES
In this section, we discuss approximation methods for realizing mean curvature flow and hyperbolic mean curvature flow on surfaces.
After explaining the interfacial motions on curved surfaces, we present the numerical results of our schemes.
We also deal with the mean curvature flow and the hyperbolic mean curvature flow on curved surfaces in the multiphase setting and under area preservation conditions.
Multiphase regions are realized by means of a signed distance vector field, which allows us to incorporate area preservation constraints into the functional minimizations of the MM approach.
§.§ Surface-constrained interfacial motions
Here we will discuss the approximation methods for the mean curvature flow (MCF) <cit.> and the hyperbolic mean curvature flow (HMCF) <cit.> on curved surfaces.
The surface MCF and surface HMCF are described by the following nonlinear partial differential equations, respectively.
γ^S_t(t,s) = -κ^S(t,s) ν^S(t,s),
γ^S(0,s) = γ^S_0(s)    (Surface MCF)
γ^S_tt(t,s) = -κ^S(t,s) ν^S(t,s),
γ^S(0,s) = γ^S_0(s),
γ_t^S(0,s) = v_0 ν^S(0,s)    (Surface HMCF)
Here, S is a smooth surface, γ^S:[0,T)× [a,b]→ S is a smooth simple curve on the surface S, satisfying γ^S(t,a)=γ^S(t,b), κ^S is the curvature of the curve, γ_0 is the initial shape of the curve, v_0 is the initial velocity of the curve, and ν^S represents the outward unit normal vector of the curve on the surface S.
Here, γ^S_t=∂γ^S/∂ t, γ^S_tt=∂γ^S/∂ tt.
Surface MCF and surface HMCF are equations that generalize the motion of surfaces following mean curvature flow or hyperbolic mean curvature flow in the Euclidean space to the setting of interfaces moving on curved surfaces.
Interfaces moving by surface MCF tend to decrease their length and smoothing their shape over time, while interfaces moving by surface HMCF tend to oscillate.
In the case that surface MCF and surface HMCF are subject to the area-preservation conditions, the interfaces should move while preserving the areas of the regions enclosed by the interfaces.
We handle such motions by extending the MBO algorithm and the HMBO algorithm to the surface PDE setting using the CPM, MM, and a surface version of the signed distance vector field, to develop approximate solutions for surface MCF (surface MBO) and surface HMCF (surface HMBO).
We remark that our methods can also handle interfacial motions with area-preservation conditions in the multiphase setting.
This is enabled by means of a signed distance vector field that is used to encode the shape of interfaces atop the surface.
§.§ The signed distance vector field on surfaces
In this section, we discuss the signed distance vector field <cit.> and its extension to the surface setting.
The signed distance vector field (SDVF) is used to encode the shape of multiphase regions by means of vector directions.
When performing numerical calculations under area preserving conditions, the signed distance can be used in the two-phase setting.
However, for interface motions involving three or more phases and area preservation conditions, it is not possible to distinguish each phase using a single signed distance function.
On the other hand, the SDVF can be used to distinguish phase locations and shapes even in the case of three or more phases.
The SDVF is constructed by assigning a special vector to each phase of the multiphase region. Each vector is weighted by its signed distance from each interface <cit.>. The SDVF is described below.
Let K be the number of phases, ϵ > 0 be an interpolation parameter, P^i be the region of phase i, p_i be the vector from the barycenter of a (K-1)-dimensional simplex to each vertex (refer to Figure <ref> for K=3, <cit.>), d_S^i(x) be the signed distance function to phase i, and χ_E be the characteristic function of the set E.
The signed distance vector field on the surface S (Surface SDVF) z_S^ϵ is given by the following equation:
z_S^ϵ (x)=∑_i=1^K(p_iχ_{d_S^i≥ϵ/2}+1/ϵ( ϵ/2+d_S^i)p_iχ_{-ϵ/2<d_S^i <ϵ/2}), x∈ S
where,
χ_E(x)=
1 x∈ E,
0 otherwise,
d_S^i(x)=inf_y∈∂ P^i||x-y||_S x∈ P^i,
-inf_y∈∂ P^i||x-y||_S otherwise.
Here, ||x-y||_S represents the geodesic distance on the surface between the two points x and y, and can be expressed as the value that minimizes the following length functional:
||x-y||_S=min_Γ⊂ SLength(Γ)
where Γ is a curve on the surface connecting x and y, and Length(Γ) is the length of Γ along the surface.
Remark:
From here on we will omit the S in z_S^ϵ and d_S^i, and denote them simply by z^ϵ and d^i, respectively.
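As an illustration, the following sketch builds the reference vectors p_i for K = 3 phases and evaluates the SDVF weights of equation (<ref>); the helper names are hypothetical, ϵ = 0.03 matches the value used later in our computations, and on the unit sphere the geodesic distance entering d^i is simply the great-circle distance.

```python
import numpy as np

# Illustrative sketch of the surface SDVF for K = 3 phases.  The reference
# vectors p_i are the vertices of an equilateral triangle (a 2-simplex)
# centred at the origin; the weight is the clamped linear interpolation of
# the signed geodesic distance d^i over the band of width epsilon.
K, eps = 3, 0.03
p = np.array([[np.cos(2*np.pi*i/K + np.pi/2),
               np.sin(2*np.pi*i/K + np.pi/2)] for i in range(K)])

def sdvf(d):
    """d: array of shape (npoints, K) of signed distances d^i(x)."""
    w = np.clip((d + eps/2) / eps, 0.0, 1.0)   # 0 outside, 1 inside, linear in between
    return w @ p                               # shape (npoints, 2)

# On the unit sphere the geodesic distance needed for d^i is the
# great-circle distance ||x - y||_S = arccos(<x, y>).
def great_circle(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
```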
In the next Section <ref>, we introduce an approximation method for surface MCF, based on the MBO algorithm using the surface SDVF. We refer to this approximation method as the surface MBO algorithm.
In Section <ref>, we introduce an approximation method for the surface HMCF, which is based on the HMBO algorithm and the surface SDVF. Similarly, we refer to our approximation method as the surface HMBO.
§.§ Surface MBO
In this section, we discuss an approximation method for surface MCF.
We begin by describing the surface MBO in the two-phase setting.
Then we explain the surface MBO with and without the area preservation condition in the multiphase setting.
The computational results of our surface MBO in the multiphase setting are presented in Section <ref>.
We use the CPM as an approximation for the surface partial differential equation on the surface.
To reduce the computational cost of the CPM, we limit the computations to a small tubular region around S, as described in Section <ref>.
This region is defined by accumulating points from the computation grid that are located within a constant distance λ from S as Ω_λ (see equation (<ref>)).
We determine the value of λ in the same way as in equation (<ref>).
We use a natural number N and the final time T>0 to define the time step τ=T/N.
§.§.§ Surface MBO for two-phase regions
Below, we explain a method for approximating the MCF motion of an interface in a 2-phase region. Here,
γ^S_n represents the shape of the curve at time nτ, where τ=T/N is the time step size, T>0 is the final time, and γ^S_0 is the initial curve.
Let d_n(x) be the signed distance function from the point x on the surface to the interface γ^S_n.
That is, d_n(x) is defined as follows:
d_n(x)=inf_y∈∂ P^n||x-y||_S, x∈ P^n,
-inf_y∈∂ P^n||x-y||_S, otherwise.
where P^n is the region occupied by phase n.
The surface MBO for a 2-phase domain is as follows:
1. Create d_0 using Equation (<ref>) from the initial curve γ^S_0.
2. Extend d_0 to Ω_λ:
d_0^λ(x)=d_0(C_S(x)), x∈Ω_λ
3. Repeat the following for n=0,1,⋯,N-1:
* For t∈[0,τ), solve the heat equation in Ω_λ:
u_t=αΔ u(x,t),
u(x,0)=d^λ_n(x),
x∈Ω_λ, τ>t>0
where α>0 is a constant representing the diffusion coefficient.
* Define the new curve γ^S_n+1 as the zero level set of the solution u(x,τ) of Equation (<ref>) on the surface S:
γ^S_n+1={x∈Ω_λ|S∩{u(x,τ)=0}}
* Create d_n+1 from γ^S_n+1 using Equation (<ref>).
* Extend d_n+1 to Ω_λ and define d^λ_n+1 as follows:
d_n+1^λ(x)=d_n+1(C_S(x)), x∈Ω_λ
The results of numerical error analysis using the proposed algorithm are shown in Section <ref>.
In the above algorithm, the sign of the signed distance function is used to extract the interface.
In the next section, we will implement the surface MBO for multiphase regions using the signed distance vector field instead of the signed distance function.
§.§.§ Surface MBO for multiphase regions
Here, we explain a method for approximating surface MCF on interfaces consisting of K≥ 2 multiphase regions on the surface S.
Let P^i_n represent the region of phase i at time nτ, and let P_n=⋃_i=1^KP^i_n.
We provide initial regions P_0^i for each phase i as initial conditions.
Additionally, we denote the vector given to phase i as p_i, as explained in Section <ref>.
The surface SDVF, written z_n^ϵ(x), is obtained from (<ref>) using P_n.
The surface MBO for multiphase regions is described as follows.
1. Using equation (<ref>), create the surface SDVF z_0^ϵ from the initial domain P_0.
2. Extend z_0^ϵ to Ω_λ and denote it as z_0^ϵ,λ as follows:
z_0^ϵ,λ(x)=z_0^ϵ(C_S(x)), x∈Ω_λ
3. Repeat the following for n=0,1,⋯,N-1:
* Solve the following vector-valued heat equation for t∈[0,τ):
u_t=αΔu(x,t),
u(x,0)=z^ϵ,λ_n(x),
x∈Ω_λ, τ>t>0
Here, α>0 is a constant representing the diffusion coefficient.
* Extract the solution u(x,τ) of equation (<ref>) on the surface S and denote it as u^S as follows:
u^S(x)=u(x,τ), x∈ S∩Ω_λ
* Obtain P_n+1 on the surface S using u^S as follows:
P_n+1=⋃_i=1^K{P^i_n+1} P^i_n+1 ={x∈ S;u^S(x)·p_i≥u^S(x)·p_j, for all j∈{1,⋯,K}}
* Update the surface SDVF z_n+1^ϵ from P_n+1 using Equation (<ref>).
* Extend z_n+1^ϵ to Ω_λ:
z_n+1^ϵ,λ(x)=z_n+1^ϵ(C_S(x)), x∈Ω_λ
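The phase-extraction step above (assigning each surface point to the phase whose reference vector has the largest inner product with the diffused field) can be sketched as follows; the function name is hypothetical and the reference vectors p are those of the SDVF sketch above.

```python
import numpy as np

# Recover the phase regions from the diffused vector field u^S by assigning
# each surface point to the phase i maximizing <u^S(x), p_i>.
def assign_phases(u_surface, p):
    """u_surface: (npoints, K-1) vector field on S;  p: (K, K-1) reference vectors."""
    return np.argmax(u_surface @ p.T, axis=1)   # phase index in {0, ..., K-1}

# Usage: labels = assign_phases(u_surface, p); the region P^i is {x : labels == i}.
```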
Next, we will implement the surface MBO with a prescribed area-constraint for multiphase regions by combining MM with the above algorithm.
§.§.§ Multiphase surface MBO with area constraints
Here, we will explain our method for approximating multiphase surface MCF under K area constraints on the surface S.
The initial vector field is constructed from the regions P_0^i for each phase i, where the vector for each phase i, denoted by p_i, is prescribed as in Section <ref>.
The signed distance vector field z_n^ϵ(x) is then obtained via equation (<ref>) from P_n.
When approximating constrained interfacial motions, MM are often used to treat the case of area preservation <cit.>.
In particular, MM can be used in combination with a penalty term for each area constraint.
We take a sufficiently large positive constant M and set h=τ/M.
Given w_m-1, we define the functional (<ref>) used in the MM as follows:
ℱ_m(w)=∫_Ω_λ(|w-w_m-1|^2/2h+α|∇w|^2/2)dx
+ρ∑^K_i=1|A^i-V^i_w|^2,
where we use the extension (<ref>) of the surface SDVF to Ω_λ.
In (<ref>), α>0 and ρ>0 are constants, and
A^i=V^i_z_0^ϵ,λ, V^i_w=∫_Ω_λH^ϵ(ϕ^i_w(x))dx,
ϕ^i_w(x)=inf_y∈∂ Q^i_w||x-y||_S, x∈ Q^i_w,
-inf_y∈∂ Q^i_w||x-y||_S, otherwise,
H^ϵ(u) =
 1,  u > ϵ
 1/2 + u/(2ϵ) + (1/(2π)) sin(π u/ϵ),  -ϵ ≤ u ≤ ϵ
 0,  u < -ϵ
Q^i_w={x∈Ω_λ|w(x)·p_i≥w(x)·p_j, for all j∈{1,⋯,K}},
where z^ϵ,λ_0 is the extension of z^ϵ_0 obtained from the initial region P_0 on the surface S to Ω_λ.
The parameter ρ controls the strength of the penalty.
Note that when ρ=0, equation (<ref>) takes the same form as the vectorized version of equation (<ref>), and applying the following algorithm results in a numerical solution of surface MCF without area preservation.
The surface MBO for realizing multiphase area-preserving surface MCF is as follows:
1. Using equation (<ref>), create the surface SDVF z_0^ϵ from the initial domain P_0.
2. Extend z_0^ϵ to Ω_λ and define:
z_0^ϵ,λ(x)=z_0^ϵ(C_S(x)), x∈Ω_λ
3. Using (<ref>), obtain A^i from z_0^ϵ,λ.
4. Repeat the following for n=1,2,⋯,N-1:
* Set w_0 = z^ϵ,λ_n-1(x).
* For m=1,⋯,M, find the minimizer w of the functional ℱ_m(w) (refer to equation (<ref>)). Denote the minimizer by w_M.
* Extract w_M on the surface S and denote it by w^S:
w^S(x)=w_M(x), x∈ S∩Ω_λ
* Obtain P_n+1 on the surface S using w^S as follows:
P_n+1 =⋃_i=1^K{P^i_n+1},
P^i_n+1 ={x∈ S|w^S(x)·p_i≥w^S(x)·p_j, for all j∈{1,⋯,K}}
* Create the surface SDVF z_n+1^ϵ from P_n+1 using equation (<ref>).
* Extend z_n+1^ϵ to Ω_λ and denote it as z_n+1^ϵ,λ as follows:
z_n+1^ϵ,λ(x)=z_n+1^ϵ(C_S(x)), x∈Ω_λ.
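The area-measurement and penalty part of the functional (<ref>) can be sketched as follows. Computing the signed distances ϕ^i_w of the current regions Q^i_w (a redistancing step) is not shown, and the helper names are our own.

```python
import numpy as np

# Sketch of the area-penalty term: the area V^i_w of phase i is measured with
# the smoothed Heaviside H^eps applied to the signed distance phi^i_w, and the
# penalty rho * sum_i |A_i - V_i|^2 is added to the MM functional.
def smoothed_heaviside(u, eps):
    out = np.where(u > eps, 1.0, 0.0)
    band = np.abs(u) <= eps
    out[band] = 0.5 + u[band]/(2*eps) + np.sin(np.pi*u[band]/eps)/(2*np.pi)
    return out

def area_penalty(phi, A_target, eps, rho, dV):
    """phi: (npoints, K) signed distances to each phase for the current field w;
       A_target: target areas A^i;  dV: volume of one grid cell of the band."""
    V = dV * np.sum(smoothed_heaviside(phi, eps), axis=0)
    return rho * np.sum((A_target - V)**2)
```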
The numerical examples using the approximation methods presented in this section are shown in Section <ref>.
In the next section, we will explain an approximation methods for hyperbolic mean curvature flow on surfaces.
§.§ Surface HMBO
In this section, we discuss an approximation method for surface HMCF on a surface S.
First, we explain the surface HMBO for two-phase regions. Then, we describe its multiphase counterpart without area preservation. Afterwards, the surface HMBO for multiphase regions with area constraints is presented.
The numerical results using the surface HMBO for multiphase regions is presented in Section <ref>.
As before, let Ω_λ be a tubular region of S where the distance from S is within a constant value λ as in Eq. (<ref>).
The value of λ is determined using the same method as described at (<ref>).
The time step used in the implemented algorithm is denoted by τ=T/N, where N is a natural number representing the number of steps and T>0 is the final time.
§.§.§ Surface HMBO for two-phase regions
Below, we explain the method for approximating the motion of interfaces in a 2-phase region evolving by surface HMCF.
Given an initial curve γ^S_0 and an initial velocity v_0, let γ^S_n represent the shape of the curve at time nτ.
Let d_n(x) be a signed distance function on the surface to the interface γ^S_n.
That is, if we let P_n be the region enclosed by γ_n^S, d_n(x) can be expressed by (<ref>).
The surface HMBO for a 2-phase region is then described as follows:
1. Define γ^S_1=γ^S_0+τ v_0 using the initial curve γ^S_0 and the initial velocity v_0.
2. Create d_0 and d_1 from γ^S_0 and γ^S_1 using equation (<ref>).
3. Extend d_0 and d_1 to Ω_λ and denote them by d^λ_0 and d^λ_1 respectively, as follows:
d_l^λ(x)=d_l(C_S(x)), x∈Ω_λ, l=0,1
4. Repeat the following for n=1,2,⋯,N-1:
* Solve the following wave equation:
u_tt=αΔ u,
u(x,0)=2d^λ_n(x)-d^λ_n-1(x),
u_t(x,0)=0
x∈Ω_λ, τ>t>0
where α>0 is a constant.
* Define the new curve γ^S_n+1 as the zero level set of the solution u(x,τ) of equation (<ref>) on the surface S:
γ^S_n+1={x∈Ω_λ|S∩{u(x,τ)=0}}
* Create d_n+1 from γ^S_n+1 using equation (<ref>).
* Extend d_n+1 to Ω_λ and define d^λ_n+1 as follows:
d_n+1^λ(x)=d_n+1(C_S(x)), x∈Ω_λ.
The results of the numerical error analysis using the above algorithm are presented in Section <ref>.
In the above algorithm, the signed distance function is used to detect the location of the interface.
As before, by using the signed distance vector field instead of the signed distance function, we can implement the surface HMBO in the multiphase setting. This will be explained in the next section.
§.§.§ Surface HMBO for multiphase regions
In the following, we describe a method for approximating surface HMCF of interfaces separating K multiphase regions on a surface S.
To this end, let P^i_n represent the region of phase i at time nτ, and P_n=⋃_i=1^KP^i_n.
In addition to the initial regions P_0^i, we also provide the initial velocities v_0^i for each phase i.
We denote the vector given to phase i as p_i, as explained in Section <ref>.
Again, z_n^ϵ(x) denotes surface SDVF defined by equation (<ref>) and which is constructed from P_n.
The surface HMBO algorithm for multiphase regions is as follows:
1. Determine P_1 from the initial domain P_0 and the initial velocities v_0^i on ∂ P_0^i. For details, refer to Section <ref>.
2. Using equation (<ref>), create the surface SDVFs z_0^ϵ, and z_1^ϵ, from P_0,P_1.
3. Extend z_0^ϵ, and z_1^ϵ to Ω_λ, and denote their extensions by z_0^ϵ,λ and z_1^ϵ,λ respectively, as follows:
z_l^ϵ,λ(x)=z_l^ϵ(C_S(x)), x∈Ω_λ, l=0,1
4. Repeat the following for n=1,2,⋯,N-1:
* Solve the vectorial wave equation:
u_tt=αΔu,
u(x,0)=2z^ϵ,λ_n(x)-z^ϵ,λ_n-1(x),
u_t(x,0)=0,
x∈Ω_λ, τ>t>0
where, α>0 is a constant.
* Extract the solution u(x,τ) of equation (<ref>) on the surface S and denote it by u^S:
u^S(x)=u(x,τ), x∈ S∩Ω_λ.
* Obtain P_n+1 on the surface S using u^S as follows:
P_n+1 =⋃_i=1^K{P^i_n+1},
P^i_n+1 ={x∈ S|u^S(x)·p_i≥u^S(x)·p_j, for all j∈{1,⋯,K}}
* Create the surface SDVF z_n+1^ϵ from P_n+1 using equation (<ref>).
* Extend z_n+1^ϵ to Ω_λ and denote it by z_n+1^ϵ,λ as follows:
z_n+1^ϵ,λ(x)=z_n+1^ϵ(C_S(x)), x∈Ω_λ
In the next section, we show how MM can be combined with the above surface HMBO algorithm to realize multiphase surface HMCF of interfaces with optional area-preserving conditions.
§.§.§ Surface HMBO for multiphase area-preserving motions
In the following, we describe our method for approximating multiphase surface HMCF, where the area of each domain is preserved.
We assume that the initial conditions for each phase i are given by the initial domain P_0^i and the initial velocity v_0^i.
As in section <ref>, we denote the vector given to phase i by p_i.
Similar to section <ref>, MM are used to incorporate the area-preserving conditions. We take a sufficiently large integer M and set h=τ/M.
Let the functions w_m-1 and w_m-2 be defined using the surface SDVF extended to Ω_λ and the MM functional be as follows:
ℱ_m(w)=∫_Ω_λ(|w-2w_m-1+w_m-2|^2/2h^2+α|∇w|^2/2)dx
+ρ∑^K_i=1|A^i-V^i_w|^2
Here, α>0 and ρ>0 are constants, and A^i and V_w^i are defined by (<ref>).
The definitions of w_1 and w_0 are as described before. Namely, we use z^ϵ,λ_0 (the extension of z^ϵ_0 obtained from the initial domain P_0) together with the initial velocities v^i_0 on the surface S.
Here, ρ>0 is a sufficiently large penalty parameter.
Note that when ρ=0, the functional (<ref>) is the same as the vectorized extension of equation (<ref>), and that applying the algorithm below would result in a numerical approximation of surface HMCF without area preservation.
The Surface HMBO that realizes area preservation for multiphase domains is as follows.
1.
Determine P_1 from the initial domain P_0 and initial velocities v_0^i on ∂ P_0^i. For details, refer to Section <ref>.
2.
Using equation (<ref>), create the surface SDVF z_0^ϵ and z_1^ϵ from P_0 and P_1.
3. Define the extensions of z_0^ϵ and z_1^ϵ to Ω_λ as follows:
z_l^ϵ,λ(x)=z_l^ϵ(C_S(x)), x∈Ω_λ, l=0,1
4.
Using equation (<ref>), compute each A^i from z_0^ϵ,λ.
5. Repeat the following for n=1,2,⋯,N-1:
* Set w_0=w_1=2z^ϵ,λ_n(x)-z^ϵ,λ_n-1(x).
* For each m=2,⋯,M, minimize ℱ_m(w) (given by (<ref>)) and denote each minimizer by w_m.
* Let w^S denote the restriction of w_M to the surface S as follows:
w^S(x)=w_M(x), x∈ S∩Ω_λ
* Obtain P_n+1 on the surface S using w^S as follows:
P_n+1 =⋃_i=1^K{P^i_n+1}
P^i_n+1 ={x∈ S|w^S(x)·p_i≥w^S(x)·p_j, for all j∈{1,⋯,K}}
* Create Surface SDVF z_n+1^ϵ from P_n+1 using Equation (<ref>).
* Let z_n+1^ϵ,λ be the extension of z_n+1^ϵ to Ω_λ:
z_n+1^ϵ,λ(x)=z_n+1^ϵ(C_S(x)), x∈Ω_λ.
Numerical examples using the method described in this section are presented in section <ref>.
§ NUMERICAL RESULTS AND CONSIDERATIONS
In this section, we use the approximation methods presented in sections <ref> and <ref> to numerically solve the mean curvature flow and hyperbolic mean curvature flow on surfaces under various conditions.
The discrete approximation of Ω_λ uses a uniformly spaced orthogonal grid Ω_λ^D with a spacing of Δ x in all three directions.
Note that Ω_λ^D is obtained using the same method as in section <ref>.
Details regarding the approximation of the MM functional values are explained in section <ref>.
In all cases, the interpolation parameter ϵ used to construct the surface SDVF represented by Equation (<ref>) is set to ϵ=0.03.
Remark:
The interpolation parameter ϵ needs to be appropriately selected depending on the discretization of the surface S.
In the case that the surface is discretized by a point cloud, it was found from the numerical investigations in this study that a value of 3-5 times the average distance to neighboring points within the point cloud is appropriate for the interpolation parameter ϵ.
§.§ Regarding the initial conditions
Boundaries between regions determine the shape of the interface and hence the initial conditions used in the numerical calculations.
The following two types of initial conditions for the numerical calculations on the unit sphere were used. In both cases, points with different colors indicate different phases.
* Two-phases on the unit sphere
* Four-phases on the unit sphere
(Left: Figure viewed from an oblique angle. Right: Figure viewed from directly above.)
§.§ Computational details regarding surface mean curvature flow
In the following, we introduce the parameters used in our computations of surface mean curvature flow.
We performed computations of the surface mean curvature flow involving interfaces in two and four phase environments on the unit sphere, with and without area preservation.
Discussions of the corresponding computations are presented in section <ref>.
Result of two-phase mean curvature flow on the unit sphere (without area preservation)
We use the two-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=1.0, Δ x=0.05, h= Δ x^2/6, τ=15h, ρ=0
The numerical result is shown in Figure <ref>.
Result of two-phase mean curvature flow on the unit sphere (with area preservation)
We used the two-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=0.05, Δ x=0.05, h= Δ x^2/6, τ=100h, ρ=10^3
The numerical result is shown in Figure <ref>.
Result of four-phase mean curvature flow on the unit sphere (without area preservation)
We used the four-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=1.0, Δ x=0.05, h= Δ x^2/6, τ=15h, ρ=0
The numerical result is shown in Figure <ref>.
Result 1 of four-phase mean curvature flow on the unit sphere (with area preservation)
We used the four-phase initial condition shown in Figure <ref> and the algorithm described in Section <ref> for the numerical calculation.
Parameters were set as follows:
α=1.0, Δ x=0.015, h= Δ x^2, τ=15h, ρ=10^5
The numerical result is shown in Figure <ref>.
Result 2 of four-phase mean curvature flow on the unit sphere (with area preservation)
We used the four-phase initial condition shown in Figure <ref> and the algorithm described in Section <ref> for the numerical calculation.
Parameters were set as follows:
α=0.1, Δ x=0.01, h= Δ x^2, τ=100h, ρ=4×10^4
The numerical result is shown in Figure <ref>.
§.§ Computational details regarding surface hyperbolic mean curvature flow
Here, we introduce the parameters used in the numerical calculation of hyperbolic mean curvature flow on surfaces.
Numerical calculations were carried out in the two and four-phase setting on the unit sphere, both with and without area preservation.
The discussion of the results is presented in section <ref>.
Result of two-phase hyperbolic MCF on the unit sphere (without area preservation)
We used the two-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=0.1, Δ x=0.05, h= Δ x, τ=5h, ρ=0
The numerical result is shown in Figure <ref>.
Result of two-phase hyperbolic MCF on the unit sphere (with area preservation)
We used the two-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=0.1, Δ x=0.05, h= Δ x, τ=5h, ρ=10^3
The numerical result is shown in Figure <ref>.
Result of four-phase hyperbolic MCF on the unit sphere (without area preservation)
We used the four-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=1, Δ x=0.01, h=7.84Δ x/1000, τ=200h, ρ=0
The numerical result is shown in Figure <ref>.
Result of four-phase hyperbolic MCF on the unit sphere (with area preservation)
We used the four-phase initial condition shown in Figure <ref> and the algorithm described in section <ref> for the numerical calculation.
Parameters were set as follows:
α=1, Δ x=0.01, h= 1.4Δ x/100, τ=100h, ρ=10^6
The numerical result is shown in Figure <ref>.
§.§ Numerical error analysis of the area-preserving condition in the two-phase setting
We investigated how well the area enclosed by the interface in the two-phase setting on the unit sphere (shown in Figure <ref>) is preserved under the area-constrained mean curvature flow and hyperbolic mean curvature flow. Here, we numerically solved the mean curvature flow and the hyperbolic mean curvature flow on the unit sphere with area preservation
using the algorithm “Surface MBO for multiphase regions with the area preservation condition" introduced in section <ref>,
and the algorithm “Surface HMBO for multiphase regions with the area preservation condition" introduced in section <ref>, respectively.
Note that setting K=2 in the above algorithms yields an approximation method for two-phase regions.
Numerical errors were investigated as follows.
The approximation of A^1 obtained
in Step 3 of the algorithm “Surface MBO for multiphase regions with the area preservation condition" and in Step 4 of “Surface HMBO for multiphase regions with the area preservation condition" is denoted by V_0.
The surface SDVF z_2^ϵ,λ obtained by executing the above algorithms for one step is used to calculate V^1_w using Eq.
(<ref>). Here, z_2^ϵ,λ is calculated in Steps 4.f and 5.f of the above algorithms, respectively.
The solution obtained by executing the algorithm for one step corresponds to the time τ.
The approximate value of V^1_w is denoted by V_τ and the value of the error ERR is defined as follows:
ERR=|V_0-V_τ|.
We investigated the response of ERR to changes in ρ (the penalty parameter for the area preservation) for the minimizing movements Eq. (<ref>) and Eq. (<ref>).
The parameters were as follows:
Two-phase MCF on the unit sphere (with area preservation)
α=0.05, Δ x=0.05, h= Δ x^2/6, τ=100h, 10^-1≤ρ≤10^3.
Two-phase hyperbolic MCF on the unit sphere (with area preservation)
α=0.1, Δ x=0.05, h= Δ x, τ=5h, 10^-1≤ρ≤10^3.
The parameters used in the calculation of “two-phase MCF on the unit sphere (with area preservation)” were the same as those used in section <ref>, except for the values of ρ.
The result obtained by evolving the system with ρ=10^3 is shown in Figure <ref>.
The parameters used in the calculation of “two-phase hyperbolic MCF on the unit sphere (with area preservation)” were the same as those used in section <ref>, except for the value of ρ.
The result obtained by evolving the system with ρ=10^3 is shown in Figure <ref>.
Table <ref> shows the specific values of ρ used for the numerical error analysis.
The results are presented in Table <ref> and Figure <ref>, where the x-axis uses a logarithmic scale.
A discussion of the results is presented in section <ref>.
§.§ Discussion
In this section, we will explain the results of the numerical calculations conducted in sections <ref> to <ref>. Our results are summarized in the following order: surface MCF, surface hyperbolic MCF, and numerical error analysis of the area preservation conditions in the two-phase setting. Following this, we discuss our observations regarding the motion of interfaces in the simulations, properties of the functionals used in our approximation methods, and issues related to area preservation.
Numerical results of surface MCF involving interfaces in the two-phase setting (Figure <ref>) without area preservation show that curves on the unit sphere disappear over time. Similar to the flat setting, their length decreases, and the interface becomes nearly circular before contracting to a single point.
When the area preservation condition (Figure <ref>) is prescribed, the curvature of the interfaces tends to decrease over time, and the interface converges to a circular shape.
The area remains approximately constant, and the curve approaches a stationary state at the terminal time.
In the four-phase setting (Figure <ref>) without area preservation, the interface smooths itself while the length of the network decreases with time. Each interface moves to maintain the junctions where they intersect before contracting to a single point.
When the area preservation condition is prescribed (see Figure <ref> and Figure <ref>), the curvature of the interfaces tends to decrease over time, while the junctions are maintained. However, even in the steady state, we note some slight irregularities near the junction (see Figure <ref>). Figure <ref> confirms that the curvature of the curve is still relatively large. The areas of each phase changed slightly compared to the initial state. Both observations can be attributed to various factors, including the ambient mesh spacing, interpolation methods, and optimization stopping criteria.
Next, we explain the numerical results for the hyperbolic mean curvature flow on surfaces. In the two-phase setting without area-preservation (Figure <ref>), the length of the curve decreases while oscillating and approaching a circular shape, before contracting to a point. Under the area-preservation condition (Figure <ref>), the shape of the interface converges to a circle over time and the area remains approximately constant. After becoming approximately circular, the curve continues to move atop the surface of the sphere.
In the four-phase setting, the numerical results of the hyperbolic mean curvature flow (Figure <ref>) showed that, without the area-preservation condition, the interfaces oscillated with time while the total length of the network decreased.
The interfaces evolved while maintaining junctions, and eventually the network contracted to a single point.
When the area-preserving condition was applied, the interfaces oscillated while preserving their areas and maintaining junctions over time.
The area of each phase remained almost constant throughout the evolution, and eventually reached a nearly stationary state.
In the two-phase setting, surface MCF and hyperbolic MCF with area preservation both showed a tendency for the error ERR (equation (<ref>)) to decrease as ρ increases.
In Figure <ref>, ERR decreased as ρ increased over the range 10^-1≤ρ≤10, and for ρ>10, ERR increased and then became almost constant, remaining less than 0.001.
Overall, we observe that if we approximate the area-preserving MCF and hyperbolic mean curvature flow using the proposed numerical method, the value of ERR will decrease as ρ increases.
However, it is expected that increasing ρ beyond a certain value will not reduce ERR below a certain threshold.
In the numerical results (Figure <ref> and Figure <ref>) for the interfaces moving according to surface MCF with area preservation in the multiphase setting, a slight irregularity was observed at the stationary state.
This observation indicates that there are points on the interface with relatively large curvature, which is contrary to the expected result.
The reason for this stagnation, similar to the original MBO, can be attributed to the fact that even though there are points with large curvature along the interface, the curvature may still be too small to resolve for a given threshold length.
Relatedly, prescribing an initial interface with sufficiently large curvature tends to eliminate points with large curvature at the steady state.
Alternatively, using a smaller spatial discretization tends to alleviate such constraints on the interface's motion. This, of course, leads to an increase in the computational time required by the method.
In fact, all the methods developed in this study require a relatively fine spatial grid, and refining it further causes a significant increase in the required computation time.
For example, changing the spatial grid width from 0.05 to 0.01 for the method “Surface MBO for multiphase regions with the area preservation” described in section <ref> increases the required computational time by a factor of 60.
Consequently, improving the methods used in this study (especially their computation time) is an important future task.
Regarding the surface hyperbolic MCF, the oscillation of the interface tends to decrease with time (Figure <ref> and Figure <ref>). From the point of view of conservation of energy, this observation is unexpected. One possible reason is the use of the MM in our algorithm.
In MM, it is known that the energy of the obtained numerical solution decreases compared to the exact solution as time increases <cit.>.
Since our algorithm reconstructs the interface based on the numerical solution obtained from the MM, it is understandable that the kinetic energy of the numerical solution of the interface decreases with time.
In this research, we have used minimizing movements to impose area constraints on surface-constrained multiphase interfacial motions. Since minimizing movements require one to minimize a functional, we have also considered the influence of the functional used in this process and on the corresponding numerical results. Again, the functional used in our method for dealing with the area-constrained curvature flows in the multiphase setting (Equations (<ref>) and (<ref>)) is expressed as follows:
ℱ_m(w)=∫_Ω_λ(F(w,w_m-1,w_m-2)+α|∇w|^2/2)dx
+ρ∑^K_i=1|A^i-V^i_w|^2
where V^i_w is the V^i_w included in Equations (<ref>) and (<ref>), and F is expressed as follows.
F(w,w_m-1,w_m-2)=
|w-w_m-1|^2/2h, Surface MBO
|w-2w_m-1+w_m-2|^2/2h^2, Surface HMBO
For simplicity, let us define
J(w) =α|∇w|^2/2
P(w) =ρ∑^K_i=1|A^i-V^i_w|^2
Then, the functional in equation (<ref>) can be expressed as follows:
ℱ_m(w)=∫_Ω_λ(F(w,w_m-1,w_m-2)+J(w))dx
+P(w)
The functional P determines the penalty due to the area constraints. If we set P(w)=0, then equation (<ref>) corresponds to the functional used in the absence of area preservation.
The emphasis placed on each area constraint is controlled through the value of ρ.
However, if ρ is taken too large so that F and J are significantly smaller than P, then the minimizing scheme will tend to focus only on the penalty term. That is, during the process of minimizing the functional, the significance of F and J are diminished when compared to that of P.
As a result, the approximate solution may deviate from the expected result. One may observe a jagged interface, even after several minimization steps and at the stationary state.
On the other hand, if P is too small, the area constraint of each phase will not be satisfied at an acceptable level.
Therefore, in order to approximate the motion of each interface following the mean curvature flow or the hyperbolic mean curvature flow while satisfying the area constraints, it may be necessary to adjust the ratio of the magnitudes of F, J, and P.
Such an approach would avoid large differences in the magnitudes of F, J, and P.
However, it is not clear what ratio the magnitudes of F, J, and P should satisfy at present.
We would like to return to this and related topics in a future study.
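As a rough illustration of how this balance might be monitored, the following Python sketch evaluates the magnitudes of F, J, and P on a discretized grid; the array names and the discrete quadrature are illustrative assumptions, not the implementation used in this study.

import numpy as np

def term_magnitudes(w, w_prev, grad_w, A, V_w, h, alpha, rho, dx):
    # F: movement term of the MBO-type functional, |w - w_{m-1}|^2 / (2h), summed over the grid
    F = np.sum((w - w_prev) ** 2) / (2.0 * h) * dx ** 3
    # J: smoothing term, alpha * |grad w|^2 / 2, summed over the grid
    J = 0.5 * alpha * np.sum(grad_w ** 2) * dx ** 3
    # P: penalty enforcing the prescribed volumes A^i against the current V^i_w
    P = rho * np.sum((np.asarray(A) - np.asarray(V_w)) ** 2)
    return F, J, P

# Printing F, J and P every few minimization steps makes it possible to choose
# rho so that the penalty neither dominates nor vanishes relative to F + J.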
In imparting the area constraints atop the surface S, we numerically solved the constrained partial differential equations in the tubular region Ω_λ.
The functionals used in the surface-type MM (Eq. <ref>, Eq. <ref>) were designed to conserve volume in Ω_λ, where the width of Ω_λ is a constant λ (Eq. <ref>).
Consider the case of an interface in the two phase setting.
Let Q be the region enclosed by the interface on the surface S, let A be the area of Q, let R be the region obtained by extending Q in the normal direction of the surface S to Ω_λ, and let V be the volume of R.
Figure <ref> shows a schematic diagram of the relationships between S, Ω_λ, R, and Q.
In this case, assuming that λ is sufficiently small, we note that we can approximate V as
V≈2Aλ
In section <ref>, we investigated the numerical error of the area preservation for two-phase regions.
We observed that the mean curvature flow and the hyperbolic mean curvature flow conserve area with higher precision as ρ is increased.
Since V is approximated by Eq. (<ref>) and λ is a constant, it is expected that for two-phase regions on a surface, increasing ρ will better conserve the area A surrounded by the interface on the surface.
Remark:
The value of the width λ of Ω_λ used in Section <ref> is given by
λ =√(17)Δ x
≈ 0.2
This is obtained by substituting p=3 and Δ x=0.05 into equation (<ref>).
The computational results in sections <ref> and <ref> showed that area preservation can be approximately satisfied on surfaces even in the multiphase case.
However, we have not yet performed a numerical error analysis to describe the relationship between the parameters of the computational algorithm, and the area preservation condition for cases other than the two-phase setting. We would like to treat this in a separate study.
§ SUMMARY
This study developed approximation methods for surface-constrained mean curvature flow and hyperbolic mean curvature flow of interfaces.
This was achieved by first creating approximation methods for surface partial differential equations by combining the closest point method with minimizing movements. We then extended the methods to implement the conventional MBO and HMBO algorithms on surfaces.
In addition, we constructed the surface-signed distance vector field to distinguish multiphase geometries on surfaces.
Numerical error analyses of our methods were performed for the surface heat and wave equations, and convergence with respect to the spatial discretization was investigated. It was found that the numerical solution of the partial differential equation on the surface obtained by our approximation methods converges to the exact solution.
By using the surface version of the signed distance vector field, we extended the MBO and HMBO algorithms to the surface-constrained setting. These were used to perform numerical calculations of mean curvature flow and hyperbolic mean curvature flow for two and four phase interfacial motions.
The numerical error of the prescribed area in the two-phase setting for mean curvature flow and hyperbolic mean curvature flow on surfaces was evaluated. Our results confirm that increasing the value of the penalty parameter ρ leads to higher precision in the area preservation.
Improvements to our approximation methods could be made by adjusting the energy functionals used in the MM method. Namely, it is known that, by using appropriate functionals, energy conservation can be realized <cit.>. Therefore, creating approximation methods that conserve energy and performing their numerical error analysis for equations such as the surface-wave equation is an important future task. This is expected to clarify questions about the energy dissipation of the interface in the HMBO algorithm. In addition, we would like to design generalized surface-type threshold dynamics which impart damping terms on target interfacial motion.
§ APPENDIX
§.§ Surface HMBO and Initial Velocity for Multiphase Regions
In the surface HMBO for multiphase regions (see Section <ref> and Section <ref>) one needs to determine P_1 from the initial shape P_0 and initial velocities v_0^i of each phase. Here we describe the method.
Let Γ_ij be the interface between phase i and phase j. We define the following:
Γ=⋃_i,jΓ_ij, ℱ_ij=Γ-Γ_ij, 𝒥_ij=Γ_ij∩Σ_Γ_ij
Σ_Γ_ij={C|{Γ_ij∩ C}≠∅ , C ∈ℱ_ij}
Γ represents the union of all the interfaces, ℱ_ij represents the interfaces other than Γ_ij, and 𝒥_ij represents the endpoints of Γ_ij. Also, Σ_Γ_ij represents the set of interfaces connected to Γ_ij.
In the multiphase setting, the surface HMCF is represented by the following nonlinear partial differential equation. For each interface Γ_ij, we have:
d^2 Γ_ij/dt^2=-κ_ijn_ij, t>0
Γ_ij(t=0)=Γ_ij^0
dΓ_ij/dt(t=0)=V_ij^0n_ij^0
Γ_ij(P)=σ(P), t≥ 0, σ∈Σ_Γ_ij, P ∈𝒥_ij.
Here, κ_ij denotes the mean curvature of Γ_ij, n_ij represents the outward unit normal vector of Γ_ij, Γ_ij^0 represents the interface between phases i and j at the initial time, V_ij^0 represents the initial velocity of Γ_ij^0, and n_ij^0 represents the outward unit normal vector of Γ_ij^0.
The fourth equation in Equation (<ref>) represents the continuity condition that the interface Γ_ij should satisfy.
Without the continuity condition, each interface may move independently over time, which could lead to the loss of junctions.
In the Surface HMBO for multiphase regions, after determining the initial shape P_0, the interface set Γ_ij^0 is defined for each interface. Then, for each interface, the set {Γ_ij^1} is defined as follows:
Γ_ij^1=Γ_ij^0+hV_ij^0n_ij^0
Following this, the regions P_1 for each phase are determined from the set {Γ_ij^1}.
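When the interfaces are represented by point sets, this update simply displaces each interface point along its outward normal. A minimal sketch, where the point and normal arrays are illustrative:

import numpy as np

def advance_interface(points, normals, V0, h):
    # Gamma_ij^1 = Gamma_ij^0 + h * V_ij^0 * n_ij^0, applied pointwise
    return points + h * V0 * normals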
§.§ Numerical error analysis of the surface MBO for two-phase regions
In this section, we present the results of a numerical error analysis for the mean curvature flow on surfaces using the algorithm introduced in section <ref>.
The analysis focuses on the motion of a circle atop the unit sphere.
As shown in Figure <ref>, let r denote the radius of the circle on the unit sphere.
A circular interface moving by the mean curvature flow on the surface of the unit sphere satisfies the following differential equation <cit.>:
dr/dt=(r^2-1)/r, t>0
r(0)=r_0
where r denotes the radius of the circle at time t and r_0>0 denotes the radius of the circle at the initial time.
The exact solution of Eq. (<ref>) can be obtained as follows:
r(t)=√(1-(1-r_0^2)exp(2t))
We investigated the numerical error's dependence on the grid spacing Δ x (the discretization of the ambient space surrounding the unit sphere). Calculations were done using the algorithm introduced in section <ref>, and we employed minimizing movements to solve Eq. (<ref>). The computational implementation of the MM employs the technique introduced in section <ref>. We set r_0=2/3, the time step to h=Δ x^2/5, and the threshold length to τ=0.03.
Let r̅(t) denote the average radius of the data points that approximate the interface at time t.
Remark:
The average radius of the data points approximating the interface is defined as follows. Assume that at time t, the interface is computed and represented by M points. For i=1,2,⋯,M, the coordinates of the i-th point are denoted as (x_i,y_i,z_i). Then, the average radius r̅(t) is calculated as follows:
r̅(t)=∑_i=1^M √(x_i^2+y_i^2)/M
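A minimal sketch of this error measurement, assuming the interface points at time t are stored as an M×3 array and r_0=2/3:

import numpy as np

def mean_radius(points):
    # average distance of the interface points from the polar axis of the unit sphere
    return np.mean(np.sqrt(points[:, 0] ** 2 + points[:, 1] ** 2))

def exact_radius(t, r0=2.0 / 3.0):
    # closed form r(t) = sqrt(1 - (1 - r0^2) exp(2t)) of dr/dt = (r^2 - 1)/r
    return np.sqrt(1.0 - (1.0 - r0 ** 2) * np.exp(2.0 * t))

# error at time t: abs(mean_radius(points_t) - exact_radius(t))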
The results obtained for Δ x = 0.05 and 0.025 are shown in Figure <ref>. The figure shows the exact solution r(t) and the average radius r̅(t) obtained by our method. The points where the curves are interrupted in Figure <ref> correspond to the times that the interface could no longer be detected.
The exact solution at t=0.24 is approximately r(0.24) ≈ 0.31965745, while the average radius r̅(t) obtained from the numerical solution is 0.28291438 for Δ x=0.05 and 0.2955379 for Δ x=0.025.
Although the numerical errors are relatively small at the beginning of the computations, due to the coarsening of the numerical grid relative to small interfaces, the numerical errors tend to increase as time increases. We also note that the average radius of the numerical solution tends to be smaller than that of the exact solution. Regarding convergence, we indeed observe the tendency of the numerical errors to decrease as Δ x decreases.
When the time step size h is proportional to Δ x^2 and τ is fixed, reducing Δ x is expected to improve the accuracy of the approximation.
§.§ Numerical error analysis of surface HMBO for two-phase regions
In this section, we show the results of a numerical error analysis for the hyperbolic mean curvature flow on a surface using the algorithm introduced in section <ref>.
Our analysis focuses on the motion of a circle on the surface of the unit sphere.
As shown in Figure <ref>, let r(t) denote the radius of a circle constrained to the unit sphere.
Since the hyperbolic mean curvature flow represents the motion in which the normal acceleration of the interface is proportional to the mean curvature, the test problem corresponding to Equation (<ref>) is as follows:
d^2r/dt^2=(r^2-1)/r, t>0
.dr/dt|_t=0=V_0
r(0)=r_0
Here, V_0 is the initial speed of the interface, and r_0 is the radius of the circle at the initial time.
We regard the numerical solution obtained by accurately solving Eq. (<ref>) as the reference (exact) solution.
We compare this numerical solution with that obtained by our own computational algorithm.
Our computations use DifferentialEquations.jl <cit.> to numerically solve Eq. (<ref>).
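For readers who prefer Python, an equivalent reference solution can be obtained with SciPy's solve_ivp; the sketch below is only an illustration, since our computations were carried out with DifferentialEquations.jl in Julia.

import numpy as np
from scipy.integrate import solve_ivp

def hmcf_circle(t, y):
    # y = [r, dr/dt]; the second-order ODE d^2r/dt^2 = (r^2 - 1)/r as a first-order system
    r, v = y
    return [v, (r ** 2 - 1.0) / r]

sol = solve_ivp(hmcf_circle, (0.0, 0.6), [2.0 / 3.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
r_reference = sol.sol(0.6)[0]  # reference value of r(0.6)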
Similar to before, we investigated the error's dependence on the spatial discretization Δ x used in the surrounding space.
We use the algorithm introduced in section <ref> for our numerical calculations.
Minimizing movements were used to solve Eq. (<ref>) in the algorithm of section <ref>.
Our calculations set r_0=2/3 and V_0=0, and the time step was h=Δ x/10. The threshold time τ was set to τ=0.01.
As before, we define r̅(t) as the average radius of the data points that approximate the interface at time t using Eq. (<ref>).
We present the numerical results for Δ x=0.1, 0.05, 0.025 in Figure <ref>. The figure shows the numerical solution r(t) obtained by numerically solving equation (<ref>) and the average radius r̅(t) obtained using our own method. The points where the curves are interrupted in the lower right of Figure <ref> correspond to the time that the interface has disappeared.
The value of r(t) at t=0.6 is approximately r(0.6) ≈ 0.50635371, while the average radius r̅(t) obtained from the numerical solution is 0.293178268 for Δ x=0.1, 0.380734809 for Δ x=0.05, and 0.474013006 for Δ x=0.025.
Except for Δ x=0.1, the average radius continues to decrease over time, and the interface can no longer be detected. However, for Δ x=0.1, the average radius starts to increase at some point. This is because the interface that initially shrinks inward eventually becomes a point and starts to expand outward. This expansion after the interface has contracted is an interesting feature of the hyperbolic mean curvature flow. A detailed analysis of this phenomenon is a future research topic.
In all cases, the numerical error is relatively small at the beginning of the calculations, but tends to increase over time. Of course, decreasing Δ x tends to decrease the numerical error. Setting the time step h proportional to Δ x and fixing τ should lead to improved accuracy as Δ x decreases.
§.§ Implementation methods
Here we explain a few important details about the implementation of the numerical algorithms introduced in section <ref> and section <ref>. The numerical algorithms described above involve the calculation of functional values and integrals, which require discretization on a computer.
Similar to section <ref> and section <ref>, we consider a smooth surface S in a three-dimensional Euclidean space and discretize a sufficiently large domain Ω_λ that covers S. As before, we refer to the discretized space as Ω_λ^D. As a result of the discretization, Ω_λ^D is divided into N_x points in the x-direction, N_y points in the y-direction, and N_z points in the z-direction, as in section <ref>. The interval between divisions is assumed to be equal in all three directions and is denoted by Δ x.
Approximation of functional values
In our method, it is necessary to compute the value of the functionals included in the first term of Eq. (<ref>) and Eq. (<ref>). We will explain the approximation using Eq. (<ref>) as an example. The first term of Eq. (<ref>) is expressed by
ℱ_m(w)=∫_Ω_λ(|w-w_m-1|^2/2h+α|∇w|^2/2)dx.
In equation (<ref>), the function w: Ω_λ→ℝ^K-1 (K: the number of phases) is vector-valued, and so the functional value can be computed as follows:
ℱ_m(w)=∑_i=1^K-1∫_Ω_λ(|w^i-w_m-1^i|^2/2h+α|∇ w^i|^2/2)dx
Here, w^i is the ith component of w. The integrals included in equation (<ref>) are approximated using the same technique as in equation (<ref>). For equation (<ref>), we used the same method as in the calculation of equation (<ref>).
Approximation of V^i_w
The value of V^i_w appearing in Eq. (<ref>) and Eq. (<ref>) represents the volume of the extension of phase i within Ω_λ. It can be approximated using H^ϵ and ϕ^i_w defined in Eq. (<ref>) as follows:
V^i_w =∫_Ω_λH^ϵ(ϕ^i_w)dx,
≈Δ x^3 ∑_x_i,j,k∈Ω_λ^DH^ϵ(ϕ^i_w(x_i,j,k))
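A sketch of this Riemann-sum approximation, assuming phi_i holds the values of ϕ^i_w at the grid points of Ω_λ^D; the tanh-based smoothed Heaviside below is only a placeholder for the H^ϵ defined in the main text.

import numpy as np

def smoothed_heaviside(phi, eps):
    # placeholder smoothed Heaviside; the precise form of H^eps follows the main text
    return 0.5 * (1.0 + np.tanh(phi / eps))

def approximate_volume(phi_i, eps, dx):
    # V^i_w ≈ dx^3 * sum over grid points of H^eps(phi^i_w)
    return dx ** 3 * np.sum(smoothed_heaviside(phi_i, eps))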
Calculation of w^S
The methods developed here include a step where one must extract the values of w_M at the points of S:
w^S(x)=w_M(x), x∈ S∩Ω_λ
In the numerical calculations, Ω_λ is discretized. Therefore, an interpolation on Ω_λ^D (the discretized grid points of Ω_λ) is required in order to obtain the values on the surface S.
In this study, we have used third-order polynomial interpolations.
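Such an interpolation could be realized, for example, with a regular-grid cubic interpolant; the following sketch uses SciPy's RegularGridInterpolator (method="cubic" requires a recent SciPy) and is an illustration rather than our exact implementation.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def grid_to_surface(x, y, z, w_grid, surface_points):
    # x, y, z: 1-D grid coordinates of Omega_lambda^D; w_grid: values of w_M on that grid
    interp = RegularGridInterpolator((x, y, z), w_grid, method="cubic")
    # evaluate w^S at the points of S contained in Omega_lambda
    return interp(surface_points)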
Creation of geodesic distance functions
When constructing the surface SDVF, signed distance functions on the surface are required. These calculations are not straightforward, and we use a method based on the Fast Marching Method <cit.> to construct the signed distance vector field on the surface.
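For orientation, the scikit-fmm package exposes a fast-marching distance computation on Cartesian grids; the sketch below is an illustrative starting point and not the exact construction used for the surface SDVF in this study.

import skfmm  # scikit-fmm: Fast Marching Method on Cartesian grids

def signed_distance(phi0, dx):
    # phi0: grid function, negative inside the region and positive outside;
    # skfmm.distance returns the signed distance to its zero level set
    return skfmm.distance(phi0, dx=dx)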
|
http://arxiv.org/abs/2409.02792v1 | 20240904150644 | UnLearning from Experience to Avoid Spurious Correlations | [
"Jeff Mitchell",
"Jesús Martínez del Rincón",
"Niall McLaughlin"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
UnLearning from Experience to Avoid Spurious Correlations
Jeff Mitchell
Queen's University Belfast
School of EEECS
United Kingdom
[email protected]
Jesús Martínez del Rincón
Queen's University Belfast
School of EEECS
United Kingdom
[email protected]
Niall McLaughlin
Queen's University Belfast
School of EEECS
United Kingdom
[email protected]
September 9, 2024
============================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
While deep neural networks can achieve state-of-the-art performance in many tasks, these models are more fragile than they appear. They are prone to learning spurious correlations in their training data, leading to surprising failure cases. In this paper, we propose a new approach that addresses the issue of spurious correlations: UnLearning from Experience (ULE). Our method is based on using two classification models trained in parallel: student and teacher models. Both models receive the same batches of training data. The student model is trained with no constraints and pursues the spurious correlations in the data. The teacher model is trained to solve the same classification problem while avoiding the mistakes of the student model. As training is done in parallel, the better the student model learns the spurious correlations, the more robust the teacher model becomes. The teacher model uses the gradient of the student's output with respect to its input to unlearn mistakes made by the student. We show that our method is effective on the Waterbirds, CelebA, Spawrious and UrbanCars datasets.
§ INTRODUCTION
Training Deep Learning (DL) models is a well-studied problem that usually involves minimizing the average loss on the training set. The underlying assumption is that the data in the training and testing sets are drawn from identical distributions. However, in many realistic situations, the training set does not reflect the full diversity of realistic test data. Therefore, the trained system does not generalise well to Out-Of-Distribution (OOD) or group-shifted data. This can happen because the trained system relies on various spurious correlations present in the training set but not present in the testing data, leading to performance drops in realistic settings.
Spurious correlations happen when, for a given dataset, there is a coincidental correlation between a non-predictive feature of the input and the label. If those spurious correlations are present during training, machine learning models may learn to use the non-predictive feature to solve the task. Then, when tested on the same task but without the spurious feature present, the system's performance will drop. For example, in the Waterbirds dataset <cit.>, an image's background is correlated with the label. A model could learn to associate the presence of water backgrounds with the label water-bird and land backgrounds with the label land-bird rather than looking at the bird to solve the task. In the simplest case, a dataset may have one spurious correlation. In more complex scenarios, the term group shift is used to describe cases with multiple sub-groups within the dataset, each of which may be subject to multiple different spurious correlations.
Many existing methods for spurious correlation robustness explicitly use group labels during training <cit.>.
In practice, full details of the number and kinds of spurious correlations and/or explicit group label information may not be available.
We propose, UnLearning from Experience (ULE), which allows the creation of a model robust to spurious correlations and does not require any group label information at either training or testing time. In our approach, two models are trained in parallel. A student model s(x) is directly trained on the dataset. A teacher model t(x) then unlearns spurious correlations by observing the gradient of the student's output ∂ s(x) / ∂ x with respect to its input x. Thus, the teacher model learns to avoid the mistakes made by the student, while also solving the task of interest. <Ref> shows an overview of our proposed method. We demonstrate the validity of this approach for classification tasks.
The main contributions of this paper are:
* We propose a new twist on student-teacher methods by reversing the traditional student and teacher roles to improve spurious correlation robustness.
* We train the two models in parallel, optimising a loss where the teacher looks at the student's gradient to avoid repeating the student's mistakes.
* We do not require knowledge of the presence of spurious correlation or group-shifts. Group labels are not required at training or testing time.
* We use XAI to show how our model avoids learning spurious correlations.
* Our method achieves SOTA results on Waterbirds and CelebA. And comparable results on Spawrious.
§ RELATED WORK
The standard neural network training approach is Empirical Risk Minimization (ERM) <cit.>, which minimizes the average loss on the training data.
ERM does not confer robustness to spurious correlations or group shifts. Recently, various approaches have been proposed to increase robustness to these effects <cit.>.
Group Labels used in Training Common approaches to improving group-shift robustness use group labels during training and validation.
Sagawa et al. <cit.> have shown that using Distributionally Robust Optimization (Group-DRO) to minimize the worst-group loss, coupled with strong L_2 regularization, results in models with high average test accuracy and high Worst-Group Accuracy (WGA). WGA has become established as the reference for validating spurious correlation robustness. However, DRO requires full supervision with explicit group labels, which is undesirable. Izmailov et al. <cit.> studies the quality of features learned by ERM. They find that many robustness methods work by learning a better final layer. A similar approach is taken by Kirichenko et al. <cit.> where the last layer features are reweighted using a small dataset without the spurious correlation present. Qiu et al. <cit.> again re-trained the final layer using a weighted loss that emphasizes samples where ERM predicts poorly.
Explicit Group Labels in Validation
More flexible approaches to improving group robustness do not require group labels during training. Instead, a small sample of group labels, usually explicitly provided, are present in the validation set. Nam et al. <cit.> train debiased models from biased models via their Learning from Failure (LfF) framework. They intentionally train a biased model and amplify its prejudice. They then train a debiased model that learns from the mistakes of the biased model by optimizing a generalized cross-entropy loss to amplify the bias. Similarly, Zhang et al. <cit.> improve robustness to OOD and noisy datasets using a noise-robust generalized cross-entropy loss.
Group labels and pseudo-labels can be inferred from the data as demonstrated by Environment Inference for Invariant Learning (EIIL) <cit.>, a general framework for domain-invariant learning that directly discovers partitions from the training data that are maximally informative for downstream invariant learning. Spread Spurious Attribute (SSA) <cit.> is a semi-supervised method that leverages samples with/without spurious attribute annotations to train a model that predicts the spurious attribute. It then uses pseudo-labelled examples to train a new robust model.
Xie et al. <cit.> propose NoisyStudent (NS), a semi-supervised self-learning method which first trains a supervised teacher model to generate pseudo-labels for an equal or larger student model trained on the data with noise injected. It makes use of data augmentation, dropout and stochastic depth. This method is iterated multiple times by making the student model the new teacher and repeating. In Just Train Twice (JTT) <cit.>, a model is trained for a small number of epochs; then a second model is trained by up-weighting samples where the first model has made a mistake.
Lee et al. <cit.> train an ensemble of models with a shared backbone. The models are forced to be diverse during training, and a final robust model is selected by observing a small number of new samples. Pagliardini et al. <cit.> take a similar approach of training an ensemble of models to agree on the training data but give different predictions on OOD data.
Finally, we note that related student-teacher approaches have been proposed for machine-unlearning in a more general context <cit.>.
No Explicit Group Labels Group labels may be scarce in real-world settings.
Invariant Risk Minimization (IRM) <cit.> assumes that training data is collected from separate distinct environments. IRM promotes learning a representation with correlations that are stable across these environments. CORrelation ALignment (CORAL) <cit.> for unsupervised domain adaptation aims to minimize the domain shift by aligning the second-order statistics of the source and target distributions without requiring any target labels. Deep-CORAL <cit.> extends this technique to deep neural networks. CausIRL <cit.> takes a causal perspective on invariant representation learning by deriving a regularizer to enforce invariance through distribution matching. Adversarial feature learning can also help with tackling the problem of domain generalization <cit.>, by using Adversarial Autoencoders (MMD-AAE) trained with a Maximum Mean Discrepancy (MMD) regularizer to align the distributions across different domains.
Finally, approaches have been proposed for producing robust models without group labels or the need for explicit domain generalization. Mehta et al. <cit.> extract embeddings from a large pre-trained Vision Transformer, then train a linear classification layer using these embeddings. This approach does not require group labels, although it primarily relies on the pre-existing robustness of the embeddings from the pre-trained model <cit.>. Tiwari et al. <cit.> use an auxiliary network to identify and erase predictive features from lower network layers. Zhao et al. <cit.> suppress shortcuts during training using an autoencoder, thus improving generalization. Zhang et al. <cit.> trains an ERM model to identify samples of the same class but with different spurious features, then uses contrastive learning to improve representation alignment. Recently Yang et al. <cit.> proposed SePArate early and REsample (SPARE), which identifies spuriously correlated samples early in training and uses importance sampling to minimize their effect. It does not require a group-labelled validation set.
In common with many of the above approaches, ULE only uses group labels in the validation set to select network hyperparameters, which is also done by all rival methods (JTT, SSA, LfF, EIIL, Group DRO). Additionally, SSA, JTT and Groups DRO require group labels during training, for their methods to work. ULE and other methods (EIIL, LfF) only need group labels in validation for tuning hyperparameters. The methods JTT <cit.>, LfF <cit.> and NS <cit.> are the most similar to our proposed method as they use a second model to gain further insight into the dataset. However, these methods rely on generating pseudo-labels during validation. In contrast, our method does not use explicit group labels. In our method, one model observes the gradients of the other model to counteract spurious correlations. The key advantage of ULE over our closest performing rivals, SSA and JTT, is that once the hyperparameters of ULE are set, ULE does not require explicit group labels to compensate for spurious correlations. This makes ULE more general and elegant than SSA and JTT, which require more domain knowledge, i.e., labelled examples, to work.
§ UNLEARNING FROM EXPERIENCE
In this section, we introduce our proposed approach, UnLearning from Experience (ULE). As illustrated in <Ref>, we train two models in parallel: a student model and a teacher model. The student model is trained to solve a classification task as normal, while the teacher model is trained in parallel, using a custom loss function, to solve the same classification task while avoiding spurious correlations learned by the student model. Both models are trained simultaneously, with identical batches, and their parameters are updated in parallel.
The core idea is that the student model will be prone to use shortcuts or spurious correlations in the dataset to solve the classification task. The teacher model is then trained to solve the same classification task with an additional term in its loss function to encourage it to avoid learning the same behaviour as the student model, hence avoiding the shortcuts or spurious correlations learned by the student model.
We purposefully reverse the names in our student-teacher paradigm. We want to emphasise the fact that the teacher is learning not to copy the student, i.e., it is unlearning from the experience of the student.
It has been shown to be theoretically impossible to mitigate shortcuts without prior assumptions about the data <cit.>. In practice, such mitigation is only possible when an assumption, such as simplicity bias, is imposed <cit.>. Thus, our underlying assumption is that learning short-cuts or spurious correlations is easier than learning the primary task. Therefore, it is more difficult to unlearn the correct semantic features.
Assume a classification function s(x) that maps an input image x to a normalised probability distribution, ŷ_s, over the classes. We will refer to s(x) as the student network, trained using a conventional classification loss, such as cross-entropy ℒ_CE(ŷ_s,y), where y is the target probability distribution over the classes. We can define, g_s, a saliency map for s(x) as:
g_s = ∂ s(x)/∂ x
In other words, the saliency map is defined as the gradient of s(x) with respect to the input x. It indicates parts of the input that, if changed, would affect the classification decision.
We expect that any spurious correlations present will play an important part in the network's decision-making process. Hence, they will be highlighted in g_s.
During training, averaged over all batch updates, the saliency map is expected to be dominated by meaningful features. Noise and features with very small magnitudes, which do not strongly influence the classification decision, will average out.
Therefore, g_s can guide the training of another network to avoid following the same spurious correlations.
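A minimal PyTorch sketch of computing g_s for a batch is given below; summing the class scores before differentiation is one common choice, and the variable names are illustrative.

import torch

def saliency(model, x):
    # gradient of the summed class scores with respect to the input batch x
    x = x.clone().requires_grad_(True)
    scores = model(x)
    grad, = torch.autograd.grad(scores.sum(), x)
    return grad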
In parallel with the student classification network, we train a teacher classification network, t(x), using a loss function that includes both a standard classification loss and an additional term that causes it to avoid the spurious correlations highlighted in g_s.
To encourage the teacher network to avoid spurious correlations, we use the loss term ℒ_MSE to encourage the saliency map of the teacher network to be the opposite of the saliency map from the student network. ℒ_MSE is defined as:
ℒ_MSE(-g_s,g_t) = ‖𝒩(-g_s) - 𝒩(g_t) ‖_2^2
where
g_t = ∂ t(x) / ∂ x is the saliency map of the teacher network. The function 𝒩(z) normalises the saliency maps to have a maximum value of 1 by dividing the input by its maximum absolute value i.e.,
𝒩(z) = z / max(| z |).
Note the term -g_s in <Ref>. By multiplying g_s by -1, the saliency map from the student network is inverted. Recall that our overall goal is to use g_s to guide the training of the teacher network to avoid spurious correlations. In other words, features assigned high importance by the student network may coincide with spurious correlations; hence, the teacher network should try to assign low importance to these areas and vice versa. Thus, the ℒ_MSE term encourages the saliency maps of both networks to be opposites. The final loss value is calculated via the mean squared difference between the two vectors. Pseudocode for UnLearning from Experience (ULE) is included in the supplementary material.
The overall loss function for the teacher network is defined as follows:
ℒ_total = λℒ_CE(ŷ_t,y) + (1-λ) ℒ_MSE(-g_s,g_t)
where ℒ_CE is a classification loss, such as cross-entropy. The loss terms, ℒ_CE and ℒ_MSE, are normalised to the same order of magnitude, then the hyperparameter λ∈ [0,1] balances the two loss terms.
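A sketch of the teacher objective in PyTorch, assuming g_s and g_t have already been computed as above; the rescaling of the two loss terms to the same order of magnitude is omitted here for brevity.

import torch.nn.functional as F

def normalise(g):
    # N(z) = z / max(|z|)
    return g / g.abs().max()

def teacher_loss(logits_t, y, g_s, g_t, lam):
    ce = F.cross_entropy(logits_t, y)
    # the inverted, normalised student saliency serves as a fixed target
    mse = F.mse_loss(normalise(g_t), normalise(-g_s).detach())
    return lam * ce + (1.0 - lam) * mse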
§.§ Practical Implementation
In the section above, we assume that saliency is based on ∂ t(x) / ∂ x, which is taken with respect to the input. Instead, we can freeze early layers of the network, treating them as a feature extractor, and compute ∂ t(x) / ∂ A_j, where A_j is the matrix of activations at an intermediate layer j.
It has been shown that modifying only the final layer of a pre-trained network can increase robustness to spurious correlations <cit.>. Moreover, we argue that features originally intertwined at the input may be separable at the final layer(s) of a pre-trained network, helping our approach to cope with more complex spurious correlations <cit.>.
If the student and teacher networks have different architectures, there may be no layers with equal dimensionality. Therefore, a different way to perform supervision is needed.
Let A_j ∈ℝ^b× d_j be the activation matrix for a given hidden layer of the teacher network, where b is the batch dimension, and d_j is the flattened layer dimension. We can form the matrix E_t=A_jA_j^𝖳, where E_t ∈ℝ^b× b, i.e., the size of E_t is independent of the hidden layer dimensionality. The same process can be applied to any hidden layer, A_k of the student network, to form matrix E_s ∈ℝ^b× b. In our training scheme, both networks are always trained in parallel with the same batch, so matrices E_t and E_s will always have the same size. The matrices E_t and E_s can be flattened and compared using ℒ_MSE(-ℱ(E_s),ℱ(E_t)) (see <Ref>), where ℱ is the flattening operation, thus encouraging the student and teacher hidden layers to diverge.
In practice (See Section <ref>), we select A_j and A_k as the final fully connected layers of the teacher and student networks. Earlier layers are frozen. We note that last layer re-training is common in the spurious correlation literature <cit.>. Our method then simplifies to training the final linear layer using ℒ_total (see <Ref>).
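A sketch of the Gram-matrix construction for mismatched layer dimensionalities; normalisation of the matrices is omitted, and the student activations are treated as a fixed target.

import torch
import torch.nn.functional as F

def gram_divergence_loss(A_student, A_teacher):
    # A_*: (b, d) flattened hidden-layer activations; E = A A^T has shape (b, b)
    E_s = (A_student @ A_student.t()).detach()
    E_t = A_teacher @ A_teacher.t()
    # encourage the teacher's pairwise activation structure to diverge from the student's
    return F.mse_loss(E_t.flatten(), (-E_s).flatten())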
§ EXPERIMENTS
In this section, we experimentally evaluate our proposed method qualitatively and quantitatively. We test our approach using several pre-trained models, including ResNet-18 <cit.> pre-trained on the ImageNet1K_V1, ResNet-50 <cit.> pre-trained on ImageNet1K_V2, and ViT-H-14 <cit.> comprising the original frozen SWAG <cit.> trunk weights with a final linear classifier trained on ImageNet1K.
As mentioned in <Ref>, we only fine-tune the final linear layer of each model. Unless otherwise noted, we use the same model for the student and teacher in all experiments. In all experiments, we train the models for 300 epochs.
We tune our hyperparameters separately for each dataset and model by grid search. We vary the value of λ in steps of 0.1 over the range [0, 1], the learning rate in the range [1e-1, 1e-5] in powers of 10, and select between standard and strong L_2 regularization. We select the values of λ, learning rate, and regularization that achieve the highest worst-group accuracy (WGA) on the validation set. We evaluate the model on the validation set every 10 epochs to monitor the model. We investigate the effect of strong L_2 regularization, which can be used to reduce the effects of spurious correlations by preventing the model from overfitting <cit.>.
§.§ Datasets
The following datasets, containing group-shifted and OOD data, were used to evaluate our proposed method: Waterbirds <cit.>, CelebA <cit.> and Spawrious <cit.>.
Waterbirds Dataset <cit.>
Consists of images of birds (land and water) cropped from the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset <cit.> imposed onto backgrounds from the Places dataset <cit.>. The resulting dataset contains ≈ 11,800 custom images. The objective is to classify the images into two classes: y={Waterbirds, Landbirds} given spurious correlation, a={Water, Land}, between the background and the bird class.
The test set consists of images from all four combinations of labels y and spurious correlations a. To prevent bias in evaluating the model, we ensure that the number of test images in each group is balanced.
We calculate two main metrics to evaluate the robustness of models on the group-shifted data. The first metric is the average test accuracy, which is the average accuracy over all groups.
Acc = 1/N_g∑_i=1^N_gAcc_i
where N_g=|y|*|a| is the number of groups and Acc_i
is the model's accuracy on group i, calculated as the number of correct predictions divided by the total number of predictions for that group. However, the average test accuracy does not consider the distribution of the groups. For example, suppose the model performs well on the majority groups but poorly on the minority groups. In that case, the average test accuracy may be high, but the model will not be robust to group-shifted data. To measure robustness against spurious correlation, we look at WGA, defined as the model's accuracy on the worst-performing group.
WGA = min{Acc_1, Acc_2, …, Acc_N_g}
WGA is the main metric in the literature <cit.> rather than Acc, as it specifically demonstrates robustness to spurious correlations.
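A sketch of computing the average accuracy and WGA from per-sample predictions, labels, and group indices:

import numpy as np

def accuracy_metrics(preds, labels, groups):
    # groups: integer index of the (class, spurious attribute) combination of each test sample
    accs = [np.mean(preds[groups == g] == labels[groups == g])
            for g in np.unique(groups)]
    return np.mean(accs), np.min(accs)  # (average accuracy, worst-group accuracy)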
CelebA Dataset
Following <cit.>, we train models to classify images into two classes: y={Blond hair, Non-blond hair} with a spurious correlation a={Female, Male} between gender presentation and hair colour. We evaluate using WGA. The dataset has ≈ 202,600 images and balanced testing groups.
UrbanCars Dataset
UrbanCars <cit.> include multiple types of spurious correlations. The task is to classify images into two classes: y={urban, country}, given two types of spurious correlations, the background and a co-occurring object, which are correlated with the true class, and which both also take the classes a={urban, country}.
Spawrious Dataset <cit.> contains six datasets with easy, medium and hard variants of one-to-one and many-to-many spurious correlations. Many-to-many spurious correlations are more complex than the one-to-one spurious correlations in Waterbirds or CelebA.
Each dataset shows various dog breeds on different backgrounds.
We classify images into four classes: y={Bulldog, Corgi, Dachshund, Labrador} with a spurious correlation a={Desert, Jungle, Dirt, Mountain, Snow, Beach} between the background and dog breed. Testing sets are balanced. Following the procedure in <cit.>, we evaluate using average accuracy and additionally with WGA for consistency with Waterbirds and CelebA.
§.§ Proof of Concept
We first demonstrate the effectiveness of our proposed ULE method using two modified versions of the MNIST dataset with artificial spurious correlations <cit.>.
MNIST-SC - MNIST modified by one-hot encoding the class label into the upper left corner of every image.
Coloured-MNIST (ten-class) - MNIST modified so that the digit colour is correlated with the class label. Hence, the input feature and spurious correlation are intertwined. We use all ten MNIST classes with ten unique colours, and the digit colour is perfectly correlated with the correct class label.
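A sketch of how the class label can be written into the image corner to construct MNIST-SC; the pixel positions and intensity are illustrative choices.

import numpy as np

def add_spurious_corner_label(images, labels, num_classes=10):
    # images: (N, 28, 28) arrays in [0, 1]; write a one-hot code for the label
    # into the top-left corner so the class is readable without looking at the digit
    out = images.copy()
    for i, y in enumerate(labels):
        out[i, 0, :num_classes] = 0.0
        out[i, 0, y] = 1.0
    return out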
This experiment uses a convolutional neural network (CNN) comprised of two convolutional layers, with 32 and 64 filters, each followed by ReLUs, then 2× 2 max-pooling, a flattening layer, dropout layer, followed by two linear layers with 9216 and 128 neurons with dropout and ReLUs. The output is always a length-10 vector encoding the class label.
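The CNN described above corresponds to the following PyTorch sketch; the 3×3 kernel sizes and dropout rates are assumptions, since they are not stated explicitly.

import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(0.25),
    nn.Linear(9216, 128), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(128, 10),  # length-10 output vector encoding the class
)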
We train from scratch our CNN on MNIST-SC, Coloured-MNIST, and standard MNIST, with and without our proposed ULE method. Then test on MNIST, which does not contain the spurious correlation. Our results are shown in <Ref>.
When we train a neural network on MNIST-SC in the standard way, i.e., using ERM, we observe that the model focuses on the spurious correlation. It achieves perfect accuracy on both the training set and MNIST-SC testing set images with the spurious correlations present. However, the model's performance drops to 85% when evaluated on MNIST testing set images, which do not contain spurious correlations.
In contrast, when we train the same network using our proposed ULE method, it achieves an accuracy of 95% whether or not the spurious correlation is present. This demonstrates that ULE has helped increase model robustness to spurious correlations.
When we train on ten-class Coloured-MNIST and test on ten-class MNIST, the performance of ULE is significantly better than ERM. This suggests that when the features are intertwined at the input, ULE can, to an extent, guide the teacher in ignoring spurious correlations.
Finally, we train and test on the standard MNIST dataset using ULE. In this case, no spurious correlations were present during training or testing. ULE's testing-set performance is only marginally below ERM, suggesting that ULE doesn't significantly harm performance, even in cases where the student has learned the correct features.
To investigate further, we visualise g_t(x), the gradient of the trained teacher network's output with respect to its input, for a random sample of images from the MNIST-SC testing set. <Ref> shows that the ERM-trained model places significant attention on the upper left corner of the image, where the spurious correlation class labels were embedded. This shows the model learns to use the spurious correlation rather than focusing on the digits. In contrast, <Ref>, shows our ULE model does not place emphasis on the top left corner but focuses on the digits instead.
The results from <Ref> and <Ref> show that our proposed ULE method can help train models that are robust to spurious correlations. With ULE the model does not rely on spurious correlations to achieve high accuracy even if they are clearly present in the data.
§.§ Evaluating Spurious Correlation Robustness
In this section, we evaluate our ULE approach on several datasets with realistic spurious correlations and perform a comparison with state-of-the-art approaches.
For each experiment, we tune the hyperparameters of our models as discussed above in <Ref>. The learning rate, L_2 regularization and λ hyperparameters were tuned independently for all datasets.
The overall best-performing model on the validation set is saved and evaluated on the test set using Acc and WGA
(See <Ref>). Training and evaluation of the models are repeated five times to compute the mean and standard deviation of the results. This ensures results are not affected by random initialization data shuffling. In this set of experiments, we use the same architecture for the student and teacher models.
As discussed in <Ref>, we only fine-tune the final fully connected layer of the models.
The gradients of the student model, g_s,
are extracted from the output of the final block of convolutional layers for the ResNet and the output of the MLP Head for the ViT.
For all like-for-like comparison tables, we colour the 1st, 2nd and 3rd best results, and highlight results from other model architectures, which may not be direct comparable, ViT-H-14 in Gray.
§.§.§ Waterbirds
<Ref> shows the results of our ULE method applied to three network architectures: ResNet-18, ResNet-50 and ViT-H-14. We compare our approach with state-of-art robustness methods on the Waterbirds dataset. In a direct comparison, ULE-trained ResNet-50 equals the best result from the literature. Although not directly comparable, using the ViT-H-14 model, our method achieves a higher worst-group accuracy than all other current approaches, almost 2% higher than the next best approach, even when compared against models that use group labels.
§.§.§ XAI Analysis of Network trained on Waterbirds
To compare the different behaviours of models trained with our proposed ULE vs an ERM baseline, we use the GradCAM <cit.> eXplainable AI method. We visualize the saliency maps of ULE and ERM, ResNet-50 models on a random selection of images from the Waterbirds test set. The results in <Ref> show the ERM-trained model focuses on the background, i.e., the spurious correlations. In contrast, the ULE-trained model focuses on the subject, thus avoiding spurious correlations in the background. This is true across various images in <Ref>, demonstrating that our ULE method increases model robustness to spurious correlations, even in challenging realistic conditions. We also show some failure cases where the ERM model makes the wrong prediction. In these cases, it focuses on the background.
§.§.§ Hyperparameter Sensitivity
The loss function, Eq. <ref>, is critical to functionality ULE. Therefore, we measure the sensitivity of ULE to changes in the value of the λ hyperparameter, which controls the balance between the classification and gradient loss terms of the teacher network. For each value of λ, we trained a ResNet-50 model with ULE on Waterbirds 3 times and averaged the WGA. The results in Fig. <ref> indicate that ULE performs well over a wide range of λ values.
Using the same procedure, we also varied the second component of the loss function in Eq. <ref> to use L1 loss rather than MSE loss. This resulted in a WGA of 87.3 and an average accuracy of 89.0. These values are marginally below those in Table <ref>, indicating that the choice of MSE or L1 loss is not critical to the functionality of ULE.
§.§.§ Waterbirds – Trained from Scratch
In common with the majority of other spurious correlation robustness methods <cit.>, ULE typically acts on the final layer representations from a pre-trained model. However, ULE can also be used when training a model from scratch (Also see Section <ref>). We train a ResNet-50 model from scratch on Waterbirds using ULE, where the saliency, ∂ t(x) / ∂ x, is taken with respect to the input image, and all network layers are allowed to train.
In Table <ref>, we contrast ULE with the comparable from-scratch results reported in <cit.>. In common with other methods, ULE's from-scratch performance is worse than when using a pre-trained model (See Section <ref>). However, ULE outperforms ERM, and is comparable with the best other methods in terms of average accuracy and WGA.
§.§.§ CelebA
<Ref> compares ULE with state-of-art approaches to spurious correlation robustness on the CelebA dataset. Our ULE method is in the top-3 best methods in terms of worst-group accuracy. Only SSA <cit.> and GroupDRO <cit.> achieve higher worst-group accuracy. It is important to note that our method does not use group labels in training or validation, while those other techniques do, meaning it is easier to apply our approach in new situations.
§.§.§ Spawrious
Spawrious is more complex than Waterbirds or CelebA. It includes one-to-one and many-to-many spurious correlations. Following the procedure in Lynch et al. <cit.>,
in <Ref> we evaluate using average accuracy to allow for direct comparison against the results reported in <cit.>. Additionally, in Table <ref>, we report WGA for consistency and to allow future comparison with our method. We are unaware of any WGA results in the literature for this dataset, so we make no claims about ULE's performance compared to other methods.
Spawrious: One-to-One
In <Ref>, we compare ULE against the literature on Spawrious: One-to-One.
ULE with the ResNet-50 model achieves the highest average accuracy on the easy setting and is competitive on other difficulties.
Spawrious: Many-to-Many
Many-to-many spurious correlations happen when the spurious correlations hold over disjoint groups of spurious attributes and classes. For instance, each class from the group {Bulldog, Dachshund} is observed with each background from the group {Desert, Jungle} in equal proportion in the training set <cit.>. These more complex scenarios require the model to learn to ignore spurious correlations for more than one class and group combination. Robustness to many-to-many spurious correlations is important because they can occur in real settings.
<Ref> shows a comparison between ULE and state-of-the-art methods on the Spawrious: Many-to-Many benchmark.
ULE with ResNet-50 achieves the highest average accuracy across all difficulty settings.
§.§.§ UrbanCars
UrbanCars <cit.> is a challenging dataset that includes multiple types of spurious correlations. Both the image background and a co-occurring object are correlated with the true class. We follow the protocol of Li et al. <cit.>, which allows for direct comparison with results from the literature.
In <Ref>, we see that ULE performs in the top three of recently published methods in terms of WGA. We also see that ULE has the highest average accuracy of all methods.
§.§ Mixed Models
We now investigate ULE's performance when different model architectures are used for teacher and student. We compare performance when models are paired with themselves versus paired with other models.
In common with <Ref>, we perform hyperparameter tuning to select the best hyperparameters for each model combination. All experiments were performed on the Waterbirds dataset.
<Ref> shows worst-group test accuracies for different teacher and student model architecture combinations. The results show a correlation between the total complexity of the teacher and student models and worst-group accuracy. As the total complexity increases, the worst-group accuracy increases consistently, with the most complex combination, ViT-H-14 paired with itself, achieving the highest worst-group accuracy. When using mixed models, we also see that having a simpler student model and a more complex teacher model leads to better performance. Indeed, since the student aims to highlight mistakes to the teacher, its complexity or architecture seems to be of little importance. Our intuition is that the teacher model requires more capacity to be capable of solving the task in a different way than the student model, thus learning to ignore the spurious correlations. The complexity of the teacher seems to be the main factor in determining performance, with the ViT-H-14 teacher clearly outperforming any other choice.
§ CONCLUSION
We propose UnLearning from Experience (ULE), a new twist on student-teacher approaches that reverses the usual roles of student and teacher. The teacher observes the gradients of the student model, and ensures its gradients are the opposite, hence “unlearning” the student's mistakes to increase its robustness to spurious correlations.
We demonstrate the effectiveness of ULE on the Waterbirds, CelebA, and Spawrious datasets. ULE achieves state-of-the-art results on Waterbirds and CelebA, and is competitive on Spawrious.
ULE does not require prior knowledge of the spurious correlations and is not affected when spurious correlations are not present. It does not require group labels, unlike some other approaches (<Ref>). ULE is simple to implement and can be applied to many model architectures. These factors enhance its real-world applicability.
|
http://arxiv.org/abs/2409.02363v1 | 20240904011855 | Optimal Neural Network Approximation for High-Dimensional Continuous Functions | [
"Ayan Maiti",
"Michelle Michelle",
"Haizhao Yang"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
|
http://arxiv.org/abs/2409.02838v1 | 20240904160623 | iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation | [
"Hayeon Jo",
"Hyesong Choi",
"Minhee Cho",
"Dongbo Min"
] | cs.CV | [
"cs.CV"
] |
§ ABSTRACT
Transfer learning based on full fine-tuning (FFT) of the pre-trained encoder and task-specific decoder becomes increasingly costly as deep models grow exponentially in size. Parameter efficient fine-tuning (PEFT) approaches using adapters consisting of small learnable layers have emerged as an alternative to FFT, achieving comparable performance while maintaining high training efficiency.
However, the inflexibility of the adapter with respect to input instances limits its capability of learning task-specific information in diverse downstream tasks. In this paper, we propose a novel PEFT approach, input-Conditioned transFormer, termed iConFormer, that leverages a dynamic adapter conditioned on the input instances. To secure flexible learning ability on input instances in various downstream tasks, we introduce an input-Conditioned Network (iCoN) in the dynamic adapter that enables instance-level feature transformation. To be specific, iCoN generates channel-wise convolutional kernels for each feature and transforms it using an adaptive convolution process to effectively capture task-specific and fine-grained details tailored to downstream tasks. Experimental results demonstrate that by tuning just 1.6% to 2.8% of the Transformer backbone parameters, iConFormer achieves performance comparable to FFT in monocular depth estimation and semantic segmentation, while outperforming it in image classification and instance segmentation. Also, the proposed method consistently outperforms recent PEFT methods for all the tasks mentioned above.
§ INTRODUCTION
As deep neural networks (DNNs) grow increasingly complex, transfer learning—fine-tuning pre-trained models with task-specific data for downstream tasks—has become a widely adopted solution across diverse applications, including image classification, semantic segmentation, and object detection, to name a few. For instance, the model consisting of the pre-trained encoder <cit.> and task-specific decoder is fine-tuned, achieving remarkable performance gain when compared to the model trained from scratch.
However, training large, complex models separately for each task and dataset is highly inefficient.
Recently, parameter efficient fine-tuning (PEFT) approaches <cit.> that maximize the efficiency in terms of training parameters have emerged as an alternative to the above-mentioned full fine tuning (FFT) methodologies, achieving competitive performance even with limited computing resources while simplifying the training processes and deployment.
This remarkable progress in vision tasks
is primarily driven by approaches including prompt tuning <cit.> and adapter <cit.>, which have been successfully applied in natural language processing (NLP) tasks. Visual prompt tuning <cit.> is the first study to explore the potential of prompt tuning in visual recognition tasks, laying the foundation for the prompt tuning in the field of computer vision. In addition, the adapter-based PEFT methods <cit.> achieve significant training efficiency by applying the adapter used in NLP to the Vision Transformer (ViT) and its variants <cit.>.
While most PEFT-based approaches yield comparable performance to baseline methods using the FFT in the image classification task, they do not yet provide sufficiently satisfactory performance to compete with the FFT in other complex downstream tasks.
Additionally, the scalability of prompt-based methods <cit.> is significantly limited, leading to considerable performance degradation as the number of learnable parameters increases, as reported in <cit.>.
Adapter-based approaches <cit.> incorporate lightweight modules to reduce the number of trainable parameters while maintaining stable performance across a range of trainable parameter scales. However, the adapters always apply the same transformation to input features, ignoring individual characteristics of input instances. This may not be an issue when fully tuning DNNs, but it could be a limiting factor in improving performance in adapter-based PEFT methods. Namely, the inflexibility with respect to input instances degrades the transfer capability of adapter-based models, whose learnable parts are small, to downstream tasks, limiting their ability to capture unique and task-specific information.
Additionally, the Vision Transformer (ViT) backbone <cit.> used in adapter-based models tends to focus on global information rather than fine-grained local details within an image. While this limitation can be partially addressed by employing the Swin Transformer <cit.>, which utilizes local attention mechanisms, the constraint on the number of learnable parameters in PEFT still restricts the Swin Transformer's capability to effectively capture local features. Consequently, this negatively affects the performance in dense prediction tasks that require fine-grained information.
To address the aforementioned issues, we propose a novel PEFT approach, input-Conditioned transFormer, termed iConFormer, that leverages a dynamic adapter whose parameters are adjusted at the level of individual input instances.
Unlike recent adapter-based approaches <cit.>, iConFormer introduces an input-Conditioned Network (iCoN) that dynamically generates the parameters for each input feature in the adapter. This approach enables the model to effectively capture task-specific and local details while keeping the number of learnable parameters small. The effectiveness of our method is evidenced by the quantitative analysis in Figure <ref>, which shows that iConFormer captures task-specific information more effectively than conventional methods. We also analyze the capability to capture fine-grained details by visualizing attention maps in Figure <ref>.
iConFormer achieves performance competitive with full fine-tuning (FFT) on both classification and dense prediction tasks including monocular depth estimation, semantic segmentation, and instance segmentation, with only an additional 1.6% to 2.8% of the backbone parameters. Our method even surpasses FFT for the image classification on CIFAR100 <cit.> and the challenging instance segmentation task on COCO <cit.>. Additionally, iConFormer also outperforms conventional PEFT methods for the monocular depth estimation task on NYU-v2 <cit.>, demonstrating the effective utilization of pre-trained backbone parameters with additional learnable parameters dynamically fine-tuned for specific tasks.
In summary, our contributions are threefold:
* We propose iConFormer to enhance representation learning by dynamically adjusting only a small subset of parameters conditioned on input instances in the PEFT framework.
* We demonstrate that iConFormer effectively captures fine-grained details with input-Conditioned Network (iCoN), leading to substantial improvements in dense prediction tasks.
* Through comprehensive experiments on classification, monocular depth estimation, instance segmentation, and semantic segmentation, we show that iConFormer achieves remarkable performance by tuning only 1.6% to 2.8% of the Transformer backbone parameters.
§ RELATED WORK
§.§.§ Transformer in Vision.
Transformers, initially designed for Natural Language Processing (NLP) tasks such as machine translation <cit.> and text generation <cit.>, have achieved significant success in these areas. This success has led to a shift towards computer vision, starting with the Vision Transformer (ViT) <cit.>. Subsequently, various Transformer-based models <cit.> have achieved notable advancements in tasks including image classification <cit.>, semantic segmentation <cit.>, object detection <cit.>, image restoration <cit.>, and depth estimation <cit.>. Furthermore, transformers have significantly advanced vision recognition through large-scale pretraining <cit.>. However, their larger size compared to previously prevalent CNN backbones presents challenges for fine-tuning on specific tasks. In this context, our work explores methods to adapt pre-trained transformers into target tasks in a more effective and efficient way.
§.§.§ Parameter Efficient Fine Tuning.
Parameter Efficient Fine-Tuning (PEFT) methods enable the adaptation of large self-supervised pre-trained models <cit.> to specific tasks without the need to train the entire model. In Natural Language Processing (NLP), notable approaches include adapter methods <cit.>, which integrate small modules into the model while keeping the pre-trained parameters frozen, with only the added modules being fine-tuned. Additionally, other methods involve tuning specific components, such as bias or normalization layers <cit.>, utilizing learnable prompt tokens <cit.>, or applying low-rank matrix approximations <cit.> to update parameters more efficiently.
In computer vision, PEFT techniques inspired by NLP have shown significant progress. For instance, VPT <cit.> is the first method to apply prompt tuning approaches to visual classification tasks. AdaptFormer <cit.> employs a parallel adapter framework to enhance the effectiveness of parameter efficient fine-tuning for visual classification. KAdaptation <cit.> optimizes the adapter using Kronecker products. In addition, LoRand <cit.> employs multi-branch low-rank adapters to achieve impressive performance on dense prediction tasks. Our work focuses on adapter methods, proposing new approaches that aim to achieve performance comparable to or exceeding that of full fine-tuning across various downstream tasks.
§ PRELIMINARY
§.§ Vision Transformer and its Variants
Vision Transformer (ViT) <cit.>, modified from the Transformer <cit.> proposed in NLP, integrates image patches and positional encodings to capture spatial information. It consists of a patch embedding layer and multiple sequential encoder blocks, as depicted in Fig. <ref>(a).
Given a batch of images x ∈ℝ^B× H× W× 3, the patch embedding layer transforms x into sequential patches x_p∈ℝ^B× M × (P^2C), where H× W is an image resolution, and B is a batch size. P× P is the resolution of an image patch, C is the output channel, and M = HW / P^2 is the number of image patches. The patches are linearly embedded into D dimensions to generate the final input x_in∈ℝ^B × M × D.
In the Transformer encoder block, x_in is first normalized using LayerNorm <cit.>, and then processed by a multi-head self-attention layer (MHSA). The output is combined with x_in via a residual connection:
x'_in = Attention(LN(x_in)) + x_in .
Next, x'_in is normalized and passed through the MLP layer, followed by residual connection:
x̃ = MLP(LN(x'_in)) ,
x_out = x̃ + x'_in .
This process is repeated N times in the encoder block.
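For illustration, a minimal PyTorch sketch of the pre-LN encoder block described by the two equations above is given below; hyperparameters such as the MLP expansion ratio are illustrative defaults rather than values tied to a specific backbone.

import torch.nn as nn

class EncoderBlock(nn.Module):
    # x'_in = Attention(LN(x_in)) + x_in ;  x_out = MLP(LN(x'_in)) + x'_in
    def __init__(self, dim, num_heads, mlp_ratio=4.0):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):                                  # x: (B, M, D)
        h = self.ln1(x)
        x = self.attn(h, h, h, need_weights=False)[0] + x  # attention + residual
        return self.mlp(self.ln2(x)) + x                   # MLP + residual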
In ViT, the self-attention mechanism captures global features by evaluating relationships between all image patches, enabling a comprehensive understanding of complex dependencies. Advancing this approach, Swin Transformer <cit.> introduces hierarchical attention with shifted windows, which enhances both computational efficiency and feature representation. Other variants <cit.> leverage multi-scale feature extraction for specific vision tasks, demonstrating the adaptability of Transformer models to diverse computer vision challenges.
§.§ PEFT Methods
Parameter efficient fine-tuning (PEFT) methods, such as prompt tuning <cit.>, low-rank adaptation <cit.>, and adapters <cit.>, are designed to reduce the number of trainable parameters needed for fine-tuning large models.
Here, we briefly review adapter-based PEFT methods related to our approach, which will be detailed in the following section.
Adapters introduce small, trainable modules between the layers of the pre-trained model. For instance, if the original model's transformation is defined as h = W · x, adapters modify this operation to h = W · x + A · g(B · x), where B represents the down-projection of the input features, A indicates the up-projection back to the original space, and g(·) denotes a non-linearity. These approaches allow for effective task adaptation while minimizing the number of parameters that need to be updated.
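A minimal sketch of such a bottleneck adapter is given below. The residual formulation (adding the adapted features back to x after the frozen branch) and the bottleneck width are common design choices rather than a definition shared by all of the methods cited above.

import torch.nn as nn

class BottleneckAdapter(nn.Module):
    # Implements the trainable branch A · g(B · x); the frozen transformation W · x
    # is the surrounding backbone layer, so only the down/up projections are tuned.
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # B: down-projection
        self.act = nn.GELU()                     # g: non-linearity
        self.up = nn.Linear(bottleneck, dim)     # A: up-projection

    def forward(self, x):                        # x: (B, M, D) token features
        return x + self.up(self.act(self.down(x)))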
§ PROPOSED METHOD
Conventional adapter-based methods <cit.> rely on static parameter tunings, which may constrain their ability to capture diverse input features with high precision. To address these limitations, we propose iConFormer, a novel framework incorporating the input-Conditioned Network (iCoN), as illustrated in Figure <ref> (b). Unlike conventional adapters, iConFormer dynamically generates convolutional kernels tailored to the unique characteristics of each input.
This dynamic generation facilitates more accurate and flexible feature extraction,
introducing locality inductive biases into the pre-trained Transformer encoder.
This approach enables the model to effectively capture both global and local features within a parameter-efficient framework. Consequently, iConFormer not only improves the model’s performance but also ensures a more precise adaptation to varying input conditions.
§.§ Input-Conditioned Network (iCoN)
The input-Conditioned Network (iCoN) is a key component of the iConFormer framework. Inspired by the concept of dynamic filter networks <cit.>, iCoN employs a channel-wise mechanism to generate convolutional kernels that are tailored to the unique characteristics of each input. This approach enhances parameter efficiency while maintaining the flexibility needed to capture diverse features effectively. Formally, the iCoN module generates channel-wise convolutional kernels using input features from the MLP output, denoted as x̃∈ℝ^B × M × D of (<ref>). First, the feature x̃ is down-projected to d channels and reshaped from M into the spatial dimensions Ĥ×Ŵ, which correspond to H/P × W/P. This reshaping effectively rearranges the patches into a spatial grid to produce the feature x̂∈ℝ^B ×Ĥ×Ŵ× d. Subsequently, the iCoN module generates dynamic convolutional kernels from the reshaped feature x̂. This process is mathematically represented as:
x̂ = Reshape(Down(x̃)) ,
W_dynamic = iCoN(x̂) ,
where W_dynamic∈ℝ^B × D × K × K is the dynamically generated kernel and K is the kernel size. For our implementation, we set K to 3 and d to 64 (d ≪ D). The dynamically generated convolution kernel W_dynamic is then applied to feature x̂, enabling a more precise capture of each input’s distinct features.
Afterward, the feature is reshaped back to ℝ^B × M × d, followed by applying a non-linear activation function and up-projection to restore the channel dimension to the original size D. This process produces the final output of adapter as follows:
x_A = Up( σ (Reshape(x̂⊗ W_dynamic))) ,
where ⊗ denotes the channel-wise convolution operation and σ represents the GeLU activation function.
To further enhance model robustness, the features obtained from the adapter and attention layers are combined with residual connections:
x_out = γ· x_A + x'_in ,
where γ is a weight that adjusts the impact of the adapter features. This weight is a learnable scalar, optimized during training.
By leveraging dynamically generated convolutional kernels, iConFormer refines the capture of input-specific features compared to conventional adapters, while maintaining parameter efficiency. This efficiency is achieved by employing a consistent set of parameters within the iCoN to generate convolutional kernels, even though the kernels are dynamically adapted for each specific input. Furthermore, the efficiency is enhanced by generating these dynamic kernels on a channel-wise basis, thereby minimizing the number of tunable parameters while preserving effective adaptation.
Consequently, iConFormer strikes a balance between effective feature extraction and computational efficiency relative to conventional PEFT methods, addressing their limitations and providing a robust solution for parameter-efficient adaptation.
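The following PyTorch-style sketch summarizes the adapter computation described above (down-projection and reshaping, kernel generation by iCoN, channel-wise convolution, activation and up-projection, and the scaled residual connection). Two points are assumptions on our part rather than statements from the text: the kernel-generating head inside iCoN is realized here as global average pooling followed by a linear layer, and, since the convolution acts on the d-channel feature x̂, the sketch produces d channel-wise kernels per sample.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ICoNAdapter(nn.Module):
    def __init__(self, dim, bottleneck=64, k=3):
        super().__init__()
        self.k = k
        self.down = nn.Linear(dim, bottleneck)
        self.kernel_head = nn.Linear(bottleneck, bottleneck * k * k)  # iCoN (assumed form)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        self.gamma = nn.Parameter(torch.zeros(1))                     # learnable scaling weight γ

    def forward(self, x_tilde, x_prev, hw):
        # x_tilde: (B, M, D) MLP output; x_prev: (B, M, D) attention-branch feature x'_in; hw = (H/P, W/P)
        B, M, D = x_tilde.shape
        H, W = hw
        d, k = self.down.out_features, self.k

        x_hat = self.down(x_tilde)                                    # down-project to d channels
        x_sp = x_hat.transpose(1, 2).reshape(B, d, H, W)              # patches -> spatial grid

        # Input-conditioned, channel-wise kernels (W_dynamic), one set per sample.
        w = self.kernel_head(x_hat.mean(dim=1)).reshape(B * d, 1, k, k)

        # Per-sample depthwise convolution via the grouped-conv trick, then back to token layout.
        y = F.conv2d(x_sp.reshape(1, B * d, H, W), w, padding=k // 2, groups=B * d)
        y = y.reshape(B, d, H, W).flatten(2).transpose(1, 2)          # (B, M, d)

        x_a = self.up(self.act(y))                                    # activation and up-projection
        return self.gamma * x_a + x_prev                              # scaled residual connection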
§.§ Visual Analysis of Local Representation
To evaluate the performance of capturing both local and global information, Fig. <ref> visualizes the attention maps, which offer insight into where the model focuses. The attention maps are generated using attention rollout <cit.> with AdaptFormer <cit.> and iConFormer, employing Swin Transformer <cit.> as the backbone. Attention rollout computes token attentions by recursively multiplying attention matrices across layers, revealing how attention is distributed across different regions of the image. AdaptFormer adopts a standard pipeline of conducting down-projection, non-linear activation, and up-projection using static weight parameters that are not dynamically adjusted conditioned on input features. While the attention maps of AdaptFormer exhibit ambiguous and scattered attention distributions, with limitations in capturing fine-grained local features such as object edges, the attention maps of iConFormer are significantly more focused and well-aligned with objects.
iConFormer utilizes Input-Conditioned Network (iCoN) to dynamically generate convolutional filters tailored to the input features. These convolutional kernels enable iConFormer to capture detailed spatial features while preserving overall contextual awareness. By focusing on these salient details, iConFormer demonstrates significant improvements in processing complex input data, leading to enhanced accuracy and robustness in dense prediction tasks.
§ EXPERIMENTS
§.§ Experimental Settings
§.§.§ Datasets and Downstream Tasks.
To evaluate the effectiveness of iConFormer, we conduct comprehensive experiments on both image classification and dense prediction tasks, including monocular depth estimation, semantic segmentation, and instance segmentation. Implementation details can be found in the supplementary material. The datasets used in the experiments are as follows:
* Image Classification: CIFAR-100 dataset <cit.> consists of 50,000 training images and 10,000 validation images, each with a resolution of 32×32, categorized into 100 classes. The SVHN dataset <cit.> includes over 600,000 labeled images for digit classification, comprising 73,257 training samples, 26,032 test samples, and 531,131 additional training images. The Food-101 dataset <cit.> contains 101,000 images across 101 food categories, with each category having 750 training samples and 250 test samples.
* Monocular Depth Estimation: NYU-v2 <cit.> with diverse indoor scenes and KITTI <cit.> with high-resolution outdoor driving scenes are benchmark datasets for depth estimation.
For experiments, we used the standard splits and evaluated using Root Mean Square Error (RMSE). NYU-v2 images were cropped to 352 × 352 pixels, while KITTI images were cropped to 480 × 480 pixels.
* Semantic Segmentation: ADE20K <cit.> is a widely used semantic segmentation dataset with 20,000 training and 2,000 validation images. For our experiments, we utilized UperNet <cit.> as the framework and evaluated performance using the mean Intersection over Union (mIoU) metric.
* Instance Segmentation: MS COCO <cit.> is a prominent dataset for instance segmentation, with 118,000 training and 5,000 validation images. We used Cascade Mask R-CNN <cit.> as a task-specific decoder and measured performance with Average Precision for bounding boxes (AP_Box) and masks (AP_Mask).
§.§.§ Pretrained Backbones.
For a fair comparison with FFT baseline and current PEFT methods, we used different pre-trained backbones depending on the tasks. In the semantic segmentation and instance segmentation tasks, Swin Transformer backbones <cit.>, pre-trained on ImageNet-22k dataset <cit.>, were used <cit.>. Specifically, we used the Swin-Large backbone for semantic segmentation and the Swin-Base backbone for instance segmentation. For the monocular depth estimation, we utilized the standard Swin-V2-Base backbone <cit.> pre-trained using the MIM <cit.>. For the classification task, we adopted the ViT backbone <cit.> pre-trained using MAE <cit.>.
§.§.§ Baseline methods.
For the image classification task, we used the same set of comparison models as in <cit.>, and additionally included the Adapter method from <cit.>. In the monocular depth estimation, we included comparisons with partial tuning such as BiTFiT <cit.>, LN-Tuning <cit.>. We also evaluated against Partial-l <cit.>, which fine-tunes only the final block of the backbone and parameters outside the backbone. For adapter-based methods, we compared with recent approaches, including Adapter <cit.>, AdaptFormer <cit.>, LoRA <cit.>, and LoRand <cit.>. In the semantic segmentation and instance segmentation tasks, we configured the Adapter model under the same settings as described in <cit.> to ensure a fair comparison. Across all tasks, we also included full fine-tuning (FFT) for performance evaluation.
§.§ Main Results
§.§.§ Image classification.
We evaluate various fine-tuning approaches using backbones pre-trained via self-supervised learning paradigms <cit.>, as detailed in Table <ref>. The results demonstrate that iConFormer consistently outperforms linear probing, Visual Prompt Tuning (VPT) methods, and recently proposed adapter-based techniques. Specifically, iConFormer achieves performance improvements of 4.5%, 3.36%, and 4.99% over VPT on the image benchmarks CIFAR-100, SVHN, and Food-101, respectively. Furthermore, when compared to recent adapter-based methods such as Adapter and AdaptFormer, iConFormer shows up to 1.04%, 0.49%, and 1.08% higher accuracy, respectively. Notably, iConFormer also surpasses the full fine-tuning approach by more than 1% Top-1 accuracy on the CIFAR-100 dataset. In summary, iConFormer is not only highly efficient compared to other adapter-based methods but also achieves superior performance with up to 2% of the parameters used in full fine-tuning.
§.§.§ Monocular Depth Estimation.
Table <ref> presents the performance results for the NYU-v2 dataset. As shown in the table, iConFormer outperforms other parameter efficient fine-tuning methods in all metrics, with the RMSE value being within 0.04 of the full fine-tuning performance. Moreover, iConFormer shows an RMSE improvement of up to 0.2 compared to partial tuning methods, and an enhancement of up to 0.3 RMSE compared to adapter-based methods such as Adapter, AdaptFormer, and LoRA. These results suggest that iConFormer’s capability to generate and apply input-conditioned kernels significantly contributes to its precision in depth estimation tasks. The performance evaluation on the KITTI dataset is
presented in the supplemental material.
§.§.§ Semantic Segmentation.
We present the results of the semantic segmentation task on the ADE20K dataset in Table <ref>. By fine-tuning fewer than 3.3 million backbone parameters, iConFormer achieves 50.82% mIoU on ADE20K, which is approximately 0.7% lower than Full fine-tuning. Moreover, iConFormer requires fewer parameters for tuning compared to previously proposed adapter-based methods while still achieving superior performance. These results suggest that iConFormer effectively utilizes a limited subset of parameters to capture task-specific information and learn detailed features.
§.§.§ Instance Segmentation.
Table <ref> presents the instance segmentation results on the COCO dataset. iConFormer demonstrates significant performance gains by training only 2.78% of the total backbone parameters, surpassing both existing adapter-based methods and full fine-tuning. Specifically, it achieves a 0.8% improvement in AP_box and a 0.7% improvement in AP_mask compared to full fine-tuning. These results effectively reveal the advantages of the proposed method and demonstrate its superiority over full fine-tuning in terms of both storage efficiency and performance. Additionally, these findings suggest that iConFormer optimizes resource utilization through its dynamic kernel approach.
§.§ Ablation Studies
We conducted ablation studies to explore various aspects of iConFormer and identify the key factors that contribute to its performance. All ablation experiments were performed using the dense tasks.
§.§.§ input-Conditioned Kernel Size.
In Table <ref>, we conducted an ablation study on the KITTI dataset by varying the size of the input-conditioned kernel. We experimented with kernel sizes of 3×3, 5×5, and 7×7. The results demonstrate that iConFormer performs well with a 3×3 kernel across most evaluation metrics. Notably, the RMSE slightly improves as the kernel size decreases, with 3×3 kernel achieving the lowest RMSE. This observation indicates that 3×3 kernel captures essential local features effectively while maintaining computational efficiency. Given that the performance across kernel sizes is quite similar, 3×3 kernel was adopted for its efficiency, providing a balanced trade-off between accuracy and computational cost for the input-conditioned kernels of iCoN.
§.§.§ iConFormer Configuration.
We investigated the iConFormer configuration by comparing its sequential and parallel instances, as illustrated in Figure <ref>, where the distinction is based on the placement of the Adapter within the Transformer block. As demonstrated in Table <ref>, the sequential form of iConFormer significantly outperforms the parallel form across all dense tasks. The reason might be:
(1) the sequential design processes each layer's output in a progressive manner, facilitating deeper feature representations and gradual refinement of complex patterns; (2) the parallel design processes outputs simultaneously, which results in limited inter-layer interaction, weakening the information flow and hindering the model’s capacity to capture intricate features. Therefore, we adopted the sequential design as the default configuration for iConFormer, given its demonstrated superior performance.
§ CONCLUSION
In this work, we have presented iConFormer that leverages a parameter-efficient input-conditioned adapter to effectively capture task-specific features and local information with a limited number of learnable parameters in fine-tuning the models for various downstream tasks. iConFormer demonstrates performance comparable to full fine-tuning across image classification, monocular depth estimation, semantic segmentation, and instance segmentation tasks, by tuning only 1.6% to 2.8% of the backbone parameters. iConFormer effectively addresses the limitations of conventional adapter methods and provides superior performances in all tasks. Although our current focus is on vision recognition tasks, we plan to extend iConFormer to other domains such as natural language processing and multi-modal tasks in future work. We anticipate that this extension will inspire further research into efficient adaptation methods and contribute to developing robust solutions across a variety of applications.
|
http://arxiv.org/abs/2409.03727v1 | 20240905173212 | Design of CANSAT for Air Quality Monitoring for an altitude of 900 meters | [
"Soma Kunal Raj",
"Yalamanchili Surya Teja",
"Sridevi Jalakam"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Design of CANSAT for Air Quality Monitoring at an Altitude of 900m
1st Soma Kunal Raj
Electronics and Communication Engineering
Vidya Jyothi Institute of Technology
Hyderabad,India
[email protected]
2nd Yalamanchili Surya Teja
Computer Science Engineering
Vidya Jyothi Institute of Technology
Hyderabad,India
[email protected]
3rdSridevi Jalakam
Assistant Professor,ECE
Vidya Jyothi Institute of Technology
Hyderabad,India
[email protected]
September 9, 2024
§ ABSTRACT
A CANSAT is a can-shaped satellite that is a replica of larger satellites. Air quality monitoring plays a crucial role in the present scenario because of growing pollution. The proposed CANSAT, named NAMBI-VJ, has a cylindrical structure with a height of 310 mm and a diameter of 125 mm. The primary objective is to design a satellite that weighs 1 kg and monitors the air quality near the Earth's surface. The secondary objective is to design a mechanical gyroscope to stabilize the CANSAT. A spill-hole parachute is used for descent control, with a descent rate of 3 m/s. Experiments were conducted by dropping the satellite from an altitude of 900 m using a drone. Vital parameters such as CO2 concentration in ppm, latitude, longitude, etc. were received from the satellite at a ground station through Zigbee communication.
CANSAT, Xbee, Ground station, descent control
§ INTRODUCTION
A CANSAT is a miniaturized version of a real satellite that performs all the operations of a real satellite. By designing a CANSAT, students can examine the difficulties encountered in real satellites. The satellite uses integrated components for collecting data from the atmosphere. The most crucial part of creating the CANSAT is choosing components that fit within the weight and size restrictions. It also helps in analyzing the reasons for the success or failure of the mission, the results of which can be used to build a new CANSAT with a different mission. Over the years, many attempts have been made at the realization of CANSATs.
A CANSAT includes all the parts found in satellites. CANSATs are used to monitor the layer of air around the Earth. They are placed at low altitudes, the highest being about 1 kilometre, and use various communication modules to send and receive information. Space satellites use frequency bands such as the C-band, L-band, X-band, and more, whereas CANSATs use VHF/UHF frequencies such as 433 MHz or 863 MHz for communication. RF links with a good communication range are also used; LoRa has a range of about 2 km but is not very accurate. In Zigbee communication, the frequency used is between 915.6 and 927.6 MHz, which falls under the UHF category and is used in private mode.
The CANSAT is useful for learning about space satellites, which can contribute to the development of the space industry. These CANSATs are created by student groups to help people learn about satellites in space. The CANSAT is a small cylindrical model designed for easy deployment from drones and model rockets, and it is affordable and simple to build and operate compared to other types of satellites.
CANSATs are a simple way to get started in the space industry. These tools will improve how satellites are launched and used from rockets, making it easier to understand and deploy them in space.
§ LITERATURE SURVEY
This paper <cit.> outlines the structure and payload of a CANSAT. The main mission of the designed payload is to provide a safe landing, carrying all the components without any damage. The structure was made from PLA and carbon fibre, which makes it light and strong. The descent control for the container is a parachute which opens after the deployment phase, and the auto-gyro ensures that the CANSAT lands safely on the ground once the necessary altitude is reached. Sensors are used to measure the environmental parameters; the data from the sensors is collected and transmitted to the ground station via an XBEE-PRO-S1 module, and the telemetry data is displayed on a GUI. <cit.> This project involves designing a miniature satellite that demonstrates essential satellite functions such as telemetry, command reception and data collection. The CANSAT, which consists of two payloads, is released at a height of 725 m. The container uses a parachute for descent, while the payloads use a maple-seed mechanism: they function as auto-rotating payloads and are shaped to resemble the aerodynamics of a maple seed. XBEE radios are used to send the sensor data to the ground station, where it is formatted and shown on a computer via a GUI program. <cit.> The main objective of this satellite, named Vecihi, was to safely deploy and land the payload and to protect the egg inside it. A quad-copter model was used for active descent control, and carbon fibre was chosen for durability and shock resistance. An Ardupilot Mega APM2 is used as the controller to optimize the system, and the communication component consists of an XBEE Pro S2B module. <cit.> This study provides a comparative analysis of the Arduino and the Vega THEJAS32 processor when used with various sensors, such as pressure, gas, PIR and temperature sensors, which have applications related to space technology. It finds that Vega has compatibility problems at higher baud rates, while Arduino operates better at these rates. The report also highlights serial communication protocols such as I2C, UART and SPI. <cit.> The structure of this CANSAT is made using 3D-printing technology. All electronic components are covered by two ABS elements, and the solid-fuel rocket SkySnake 82 with a Wessex rocket motor is used to carry and lift the payload; a parachute is employed for deployment. <cit.> The main goal of AathreyaSat is to meet the guidelines set by the WCRC. The primary mission of this CANSAT is to be inserted into a sounding rocket, deployed at an altitude of 500 m, and tasked with measuring air pollution. The parachute is made of ripstop nylon, and a triple circular parachute is employed for safe landing. At the ground station, a LoRa Ra-02 receiver module is used to collect the data. <cit.> The aim of this CANSAT was to collect atmospheric data and images. Its primary mission is to sense atmospheric parameters such as temperature, pressure and altitude, and to send real-time data to a ground station; the secondary mission involves capturing and storing images. The CANSAT features a parachute-based recovery system that allows recovery at any height and time. The data was communicated successfully and displayed on the GUI of the ground station system.
The camera captured images during the entire flight. <cit.> The design and validation of a novel descent control system with a Mars glider concept was presented for the 2016 International CanSat Competition. The study designs a descent control system for a CANSAT that simulates a sensor payload travelling through a Martian atmosphere and sampling the atmospheric composition during flight. The descent control system involves two parts, a parachute and a glider, with the glider unfolding at 400 m altitude to collect and send telemetry data. <cit.> The VJIT CANSAT is a trash-can-sized satellite designed for air quality monitoring at an altitude of 300 m. The structure is made of poly lactic acid material produced with a 3D printer. A spill-hole parachute is used for the deployment mechanism because of its high drag coefficient and durability. The telemetry data from the CANSAT was collected at the ground station with the help of a LoRa communication module. <cit.> The goal of this CANSAT project is to create a nanosatellite for gathering telemetry data on air pollutants in Veracruz-Boca del Rio, specifically carbon monoxide, methane, benzene and butane. The system includes an Arduino Nano, various sensors and an RF transmitter, and features a parachute for controlled descent. The ground station is equipped with an RF receiving antenna and a computer interface for real-time data processing and storage. <cit.> This paper outlines the design and development of a CANSAT for environmental monitoring and scientific exploration, focusing on sensors, telemetry and descent systems. The telemetry data is transmitted in real time to the ground station using the MQTT protocol. A unique aerodynamic rotor mechanism with a 3-axis mechanical gyroscopic system is placed in the CANSAT to control the stabilisation of the satellite, and a two-stage parachute system is used for safe descent. <cit.> The main objective here is to design a descent control system. A parachute is used for the initial descent and a glider for the controlled descent, aiming at a shock-free and safe landing; an autogyro mechanism is also used for orientation of the system. The electrical architecture is divided into a primary load, where the egg is protected, and a secondary load, which includes the microcontroller, sensors, battery and actuator. <cit.> The main purpose of this project is to design and implement an affordable CanSat for Bangladeshi students. The deployment of the CANSAT is done using a parachute. The cylindrical body was designed using PVC pipes with suitable caps fitted at both ends, which tolerate the shocks generated by the collision between the CANSAT and the ground. A GUI interface is used to collect the data, and the CANSAT was built for under 50 dollars. <cit.> This project develops a low-cost, open-source communication architecture for scientific CANSATs using Commercial-Off-The-Shelf (COTS) components. It employs Amplitude Shift Keying (ASK) and Pseudo Morse Code (PMC) for data transmission, making it accessible to amateur radio operators. The CANSATs are equipped with an Arduino Nano and additional sensors to gather environmental data. <cit.> This paper presents a new modular design philosophy for scientific CANSATs, calling for fast integration of different scientific instruments through COTS products and open-source technologies.
This means a single power and communication bus, mechanical attachments, and distributed computing and data management. The modular CANSAT framework was tested in a set of experiments that showed fast assembly and reconfiguration, thus improving the possibilities for research into near-space technology.
§ ARCHITECTURE OF THE CANSAT
The CANSAT consists of the following subsystems:
* Payload
* Structural Subsystem
* On-Board Communication
* On-Board Computer
* Descent Control Subsystem
* Power Subsystem
* Ground Station
§.§ Payload Subsystem
The main objective of the payload system is to provide a gyroscope stabilisation mechanism that helps the CANSAT achieve a soft landing.
The secondary objective is to measure the concentration of gases such as CO2 in parts per million (ppm) over the descent from approximately 900 meters using an MQ135 sensor. The location of data acquisition is obtained from a Neo-6M GPS module.
§.§ Structural Subsystem
The structural subsystem aims to provide a rigid structure that sustains the flight at an altitude of 900 m above ground level. The structure of the NAMBI-VJ CANSAT is made of poly lactic acid (PLA) material produced with the in-house 3D printer facility of the college. PLA is lightweight compared to other materials and was hence chosen for the body of the satellite. The NAMBI-VJ CANSAT has a length of 310 mm, a diameter of 125 mm and a wall thickness of 3 mm. The CAD model of the satellite is shown in Figure 2. The volume of the can is 3,804.27 cm3, and the satellite weighs 727.6 g, as indicated in Table 1.
§.§ Descent Control Subsystem(DCS)
The aim of the DCS is to maintain a constant velocity and land the CANSAT smoothly on the ground with the help of a parachute after deployment from a drone. Designing a good parachute is therefore essential. The elements of a DCS are the parachute material selection, the type of parachute, the way the parachute is attached and folded, and a recovery system that prevents any damage.
§.§.§ Decent Rate Estimation
When the parachute is deployed, forces such as drag and the gravitational force act on the parachute. We use the drag equation given in Equation 1 for the design of the parachute; this formula is used for understanding the descent of the CANSAT.
D = 1/2 C_d ·ρ· V^2 · A
where ρ = density of air (1.225 kg/m3), V = velocity of the CANSAT, C_d = drag coefficient, A = area of the parachute and D = drag force. When the drag is equal to the weight of the CANSAT, the net force becomes zero; hence the weight W equals the drag given by Equation 1. From Equation 2 we find the velocity at which the CANSAT descends.
V = √(2W/(C_d ρ A))
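As a quick sanity check of Equations 1 and 2, the short Python snippet below solves the drag equation for the canopy area needed to reach a target descent rate and then recovers the corresponding velocity. The drag coefficient used here is an assumed typical value for a spill-hole canopy, not a figure quoted in this paper.

import math

RHO = 1.225        # air density, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2
MASS = 0.7276      # CANSAT mass, kg (727.6 g)
CD = 1.5           # assumed drag coefficient of the spill-hole parachute
V_TARGET = 3.0     # desired descent rate, m/s

weight = MASS * G                                    # at terminal velocity, D = W
area = 2 * weight / (CD * RHO * V_TARGET ** 2)       # area from Equation 1 with D = W
diameter = 2 * math.sqrt(area / math.pi)
v_check = math.sqrt(2 * weight / (CD * RHO * area))  # Equation 2, recovers V_TARGET

print(f"canopy area = {area:.2f} m^2, diameter = {diameter:.2f} m, V = {v_check:.2f} m/s")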
Nylon was chosen as the parachute material because of its balance of low density, sufficient tensile strength and low stiffness (low Young's modulus), good elasticity, and resistance to friction, indicating it can absorb shocks. A spill-hole parachute was chosen because of its high drag coefficient and high durability when deployed, and because it increases stability by decreasing drift. The diameter of the spill hole is kept at 20 percent of the diameter of the parachute, which helps the stability of the CANSAT; the quality of the nylon also affects the stability. An Arduino Nano is connected to the servo motors: when the BME180 indicates an altitude of 500 m, the parachutes are deployed from the CANSAT. The gyroscope works with the help of the MPU6050, which is used to know the axes of the CANSAT, and BO motors, which help stabilize the CANSAT by detumbling it.
§.§ Electrical Power System (EPS)
The objective of the EPS is to power the CANSAT during the flight. The function of each block of the EPS is given in Table II. A lithium-ion battery pack with a capacity of 1090 mAh is connected through a 5.5 mm x 2.1 mm adapter port to the UNO power jack. The nominal output voltage of the battery pack is 12 V. A mechanically operated switch turns on the CANSAT. The battery pack powers the gyroscope, the sensors and the parachute deployment system.
The MQ135, MPU6050, BME180, servo motors, Neo-6M and XBee S2C Pro are connected to the 3.3 V rail. The Arduino Uno and Nano are connected to the 5 V rail. The BO motors need a 12 V supply, so they are connected directly to the battery. Figure 3 shows the block diagram of the EPS. The CANSAT is turned on before being placed in the drone. Once all the subsystems are on, data is continuously sent to the ground station, providing real-time telemetry and the position of the CANSAT.
Table 2 explains all the functions.
§.§ On Board Computer Subsystem
The on-board computer processes the data collected from the various sensors on the satellite and sends it to the XBee module for transmission. An Arduino Uno acts as the on-board computer in the NAMBI-VJ CANSAT. It uses an ATMEGA328P and has 14 digital input/output pins, 6 analog inputs and a 16 MHz ceramic resonator. The data from the primary sensors is received by the OBC, passed to the on-board communication module (XBee) and sent to the ground station during flight. The on-board computer powers on and then works through six modes of operation.
The XBee modules are used in the transmitter and receiver, which send the data from the CANSAT to the ground station.
§.§.§ Modes of Operation
* Mode 1: The CANSAT is lifted to an altitude of 900 m by a drone. The CANSAT is already turned on when it is loaded into the drone. Deployment from the drone and the start of the descent towards the ground constitute Mode 1.
* Mode 2: The primary parachute, connected to an eye hook, is deployed, giving a descent rate of 10 m/s to 12 m/s.
* Mode 3: The secondary parachute is deployed using the servo motors, which flap out the parachute at an altitude of 500 m, known from the BME180. The spill-hole parachute with a 15 cm radius is launched out of the can, and the descent rate drops to 1-3 m/s.
* Mode 4: Once all the parachutes are launched, the CANSAT experiences considerable shock and instability. To overcome this, the gyroscope runs from the height of 500 m, which increases the stability of the CANSAT, reduces the drift of the satellite and supports a soft landing.
* Mode 5: The sensed data are digitized (in the case of analog sensors) and transmitted to the ground station, which receives the data every 2 seconds.
* Mode 6: The data is scraped from the serial monitor using Python code and pushed into Firebase, a cloud database. Whenever the CANSAT sends data, a buzzer sounds to confirm transmission; once it hits the ground, as detected by the altitude sensor, it beeps continuously, which helps during recovery of the CANSAT. Figure 4 shows the graphs. As data packets are received from the CANSAT, the ground station shows longitude and latitude, ppm values, temperature, power consumption, pressure, altitude, (x, y, z)-axis rotation and acceleration of the CANSAT from launch.
§.§ On Board Communication
The XBee module is used for data communication between the satellite and the ground station. XBee is a module produced by Digi International, mainly used as a radio transceiver. It implements a mesh communication protocol that sits on top of the IEEE 802.15.4 ZigBee standard. The XBee S2C Pro operates in the 2.4 GHz band and has a range of 1.2 km with an omnidirectional antenna. Table 3 shows the configuration of the XBee S2C Pro with the Arduino Uno, which acts as the transmitter. The XBee operates at 3.3 V and is configured using the XCTU software.
§.§ Ground Station Subsystem
The ground station receives and transmits data from the CANSAT during flight. The block diagram of the ground station is shown in Figure 6. The data from the CANSAT is transmitted via the XBee S2C Pro, and the ground station has an XBee module configured in receive mode. Received data is sent to the processing unit of the ground station, built around an Arduino Nano. The data is first displayed on the serial monitor of the Arduino IDE; it is then scraped using Python code and pushed into the database, which is updated as new data arrives from the serial monitor. The front end uses Next.js and the back end uses Node.js, and the data is visible in a Vercel app. Figure 4 shows the GUI of the ground station.
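A hedged sketch of the scraping step described above is given below: it reads one comma-separated telemetry line from the receiver's serial port and pushes it to the Firebase Realtime Database. The serial port name, baud rate, packet field order, credential file and database URL are placeholders, not values taken from the actual ground-station code.

import serial                                   # pyserial
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccountKey.json")           # placeholder credential file
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})
ref = db.reference("telemetry")

FIELDS = ["lat", "lon", "ppm", "temperature", "pressure", "altitude"]   # assumed packet order

with serial.Serial("COM3", 9600, timeout=5) as port:                # placeholder port and baud rate
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        values = line.split(",")
        if len(values) != len(FIELDS):
            continue                              # skip malformed packets
        ref.push(dict(zip(FIELDS, values)))       # one record roughly every 2 seconds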
§ EXPERIMENTAL RESULTS
Before the launch test, we performed a couple of hardware simulations. We tested the transmitter and receiver circuits by varying the distance between them from about 0 m to 900 m. The MQ135 sensor, BME180 sensor, Neo-6M GPS module, MPU6050, XBee S2C Pro module and the servo motor were connected to the Arduino Uno to form the transmitter circuit. The Arduino Uno was connected to an XBee module to form the receiver circuit. The transmitted and received data were observed on the GUI. After the breadboard tests, we performed a few drop tests from one of the buildings of our college. These tests were done to determine which type of parachute gives a descent rate suitable for a soft landing of the CANSAT. The measured descent rate of the parachute is 3 m/s.
§.§ Experiment at Ahmedabad, Gujarat
On April 16th, 2024, the flight experiment was conducted using a drone whose payload compartment is 5 cm larger than the CANSAT to allow easy insertion. Once the CANSAT was secured in the drone, we lifted the satellite to a height of 900 m. During the descent, the parachutes of the CANSAT were deployed, the electronics of the CANSAT were turned on and the servo motors opened the parachutes. Following this, the telemetry data from the CANSAT was collected at the ground station with the help of the XBee communication module. Simultaneously, the results of the experiment were observed on the GUI dashboard. The descent of the CANSAT took approximately 25 minutes. Snapshots of the experiment at Ahmedabad, Gujarat were recorded, and Table 4 shows the results of the experiments.
§ CONCLUSION
Thus, a novel CANSAT weighing 727 g was designed, tested and operated at an altitude of 900 m above ground level. The intended actions, such as powering on the satellite after deployment from the drone, antenna deployment, and acquisition of air quality and temperature data together with the latitude and longitude of the descent path, were tested, deployed and verified.
8985514
Ramadhan, Rizki Pratama and Ramadhan, Aditya Rifky and Putri, Shindy Atila and Latukolan, Merlyn Inova Christie and Edwar and Kusmadi,
"Prototype of CanSat with Auto-gyro Payload for Small Satellite Education," in 2019 IEEE 13th International Conference on Telecommunication Systems, Services, and Applications (TSSA), pp. 243-248, 2019, doi: 10.1109/TSSA48701.2019.8985514.
10074552
Shukla, Prakhar and Mishra, Raj and Sardar, Uzair Ahmad and Mohapatra, B.,
"Satellite Design for CANSAT with Autorotatig payloads," in 2022 4th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), pp. 2385-2392, 2022, doi: 10.1109/ICAC3N56670.2022.10074552.
6581344
Bulut, Sultan Nur and Gül, Mahircan and Beker, Can and İpek, İbrahim İlge and Koçulu, Ömer Eren Can and Topaloğlu, Çınar and Dinçer, Nurullah and Kırlı, Ahmet and Ertuğrul, Hasan Fatih and Tüfekci, Celal Sami,
"Model satellite design for CanSat Competition," in 2013 6th International Conference on Recent Advances in Space Technologies (RAST), pp. 913-917, 2013, doi: 10.1109/RAST.2013.6581344.
10212104
Kaur, Roopmeet and Dash, Biswajeet and Shiney O, Jeba and Singh, Sukhpreet,
"Revolutionizing CanSat Technology with Vega Processors: A Comparative Study," in 2023 2nd International Conference on Edge Computing and Applications (ICECAA), pp. 1276-1282, 2023, doi: 10.1109/ICECAA58104.2023.10212104.
8399591
Ostaszewski, Michał and Dzierzek, Kazimierz and Magnuszewski, Łukasz,
"Analysis of data collected while CanSat mission," in 2018 19th International Carpathian Control Conference (ICCC), pp. 1-4, 2018, doi: 10.1109/CarpathianCC.2018.8399591.
10528162
Reddy, Ramya and Panangipalli, Vishwanath Kumar and Mala, Narayana and Bairu, Charan Sunny and Rumale, Rujul,
"AathreyaSat: A CanSat Model for Air Pollution Measurement in Competition," in 2024 IEEE Wireless Antenna and Microwave Symposium (WAMS), pp. 1-5, 2024, doi: 10.1109/WAMS59642.2024.10528162.
8994718
Islam, Tooba and Noureen, Ayesha and Mughal, Muhammad Rizwan and Nadeem, M. Asim,
"Design and Development of a Weather Monitoring Satellite, CanSat," in 2019 15th International Conference on Emerging Technologies (ICET), pp. 1-6, 2019, doi: 10.1109/ICET48972.2019.8994718.
8002947
Kizilkaya, Muhterem Özgür and Oğuz, Abdullah Ersan and Soyer, Süleyman,
"CanSat descent control system design and implementation," in 2017 8th International Conference on Recent Advances in Space Technologies (RAST), pp. 241-245, 2017, doi: 10.1109/RAST.2017.8002947.
10245113
Sinha, Madhurima and R, Lakshya and R, Pranitha and K, Srikanth and Raj, Kunal and K, Vasanth,
"Design of Trash Can Sized Satellite for Air Quality Monitoring at an Altitude of 300m Above Ground Level," in 2023 International Conference on Circuit Power and Computing Technologies (ICCPCT), pp. 88-95, 2023, doi: 10.1109/ICCPCT58313.2023.10245113.
7362931
Bautista-Linares, Efren and Morales-Gonzales, Enrique A. and Herrera-Cortez, Mario and Narvaez-Martinez, Esther A. and Martinez-Castillo, Jaime,
"Design of an Advanced Telemetry Mission Using CanSat," in 2015 International Conference on Computing Systems and Telematics (ICCSAT), pp. 1-4, 2015, doi: 10.1109/ICCSAT.2015.7362931.
10441654
Sharma, Harsh and Sehgal, Abhinav and Jindal, Harsh and Dutta, Aditi and Sharma, Bhawna,
"Designing and Developing a CanSat for Environmental Monitoring and Scientific Exploration," in 2023 International Conference on Advanced Computing & Communication Technologies (ICACCTech), pp. 358-363, 2023, doi: 10.1109/ICACCTech61146.2023.00065.
10436175
Rivadeneira, Franco and Godinez, Diego and Kiyan, Kioshi and Huayapa, Victor and Acosta, Sebastian and Perez, Nicole and Hinostroza, Abel and Arce, Diego,
"Run2Sat I: Design and Implementation of a CanSat with Autogyro System," in 2023 IEEE Colombian Caribbean Conference (C3), pp. 1-6, 2023, doi: 10.1109/C358072.2023.10436175.
9231031
Hasan Raian, F.M. Tanvir and Islam, H.M. Jahirul and Islam, Md. Saiful and Azam, Rafiul and Islam, H.M. Jahidul and Debnath, Sutapa,
"An Affordable CanSat Design and Implementation to Study Space Science for Bangladeshi Students," in 2020 IEEE Region 10 Symposium (TENSYMP), pp. 1205-1208, 2020, doi: 10.1109/TENSYMP50017.2020.9231031.
10398763
Chun, Carrington and Kihei, Billy and Chakravarty, Sumit and Tanveer, M. Hassan,
"Open-Source, Low-Cost, and Bimodal Amateur Radio Communication Paradigm for Scientific CanSats," in 2023 6th International Conference on Robotics, Control and Automation Engineering (RCAE), pp. 212-218, 2023, doi: 10.1109/RCAE59706.2023.10398763.
10374870
Chun, Carrington and Patel, Uday and Tanveer, M. Hassan and Swift, Tom and Dallesasse, Kevin and Chakravarty, Sumit,
"Crafting CanSats: A Novel Modular Design Paradigm for Scientific CanSats," in 2023 IEEE 20th International Conference on Smart Communities: Improving Quality of Life using AI, Robotics and IoT (HONET), pp. 68-72, 2023, doi: 10.1109/HONET59747.2023.10374870.
|
http://arxiv.org/abs/2409.03571v1 | 20240905142532 | K-polystability of Fano 4-folds with large Lefschetz defect | [
"Eleonora A. Romano",
"Saverio A. Secci"
] | math.AG | [
"math.AG",
"math.DG",
"14J45"
] |
K-polystability of Fano 4-folds with large Lefschetz defect
Eleonora A. Romano and Saverio A. Secci
September 9, 2024
==========================================================================================================================
§ ABSTRACT
In this paper we study K-stability on smooth complex Fano 4-folds having large Lefschetz defect, that is greater than or equal to 3, with a special focus on the case of Lefschetz defect 3. In particular, we determine whether these Fano 4-folds are K-polystable or not, and show that there are 5 families (out of 19) of K-polystable smooth Fano 4-folds with Lefschetz defect 3.
§ INTRODUCTION
The notion of K-stability was first introduced in <cit.> as a criterion to characterize the existence of a Kähler–Einstein metric on complex Fano varieties, and was later formulated in purely algebraic geometric terms in <cit.>. Nowadays, by the celebrated works <cit.>, it is well known that a complex smooth Fano variety admits a Kähler–Einstein metric if and only if it is K-polystable.
This correspondence links together differential and complex algebraic geometry, and it represents one of the main motivations to investigate K-polystability of Fano varieties. Moreover, the condition of K-stability has been successfully used to construct moduli spaces of Fano varieties, thus increasing its relevance within modern algebraic geometry (see <cit.> and references therein). We refer to <cit.> for the original definitions of K-stability involving ℂ^*-degenerations of Fano varieties, and for a survey on this topic from an algebro-geometric viewpoint. More recently, in <cit.> valuation methods have been introduced to reinterpret one-parameter group degenerations: these new techniques gave a fundamental development to the algebraic theory of K-stability, due to equivalent and easier ways to test K-stability notions in many situations, such as the computation of the beta invariant of divisors over the target variety (see <cit.>). Indeed, the beta invariant may be explicitly computed for many classes of Fano varieties whose structure of divisors in their birational models is well understood.
The situation is completely known for del Pezzo surfaces (see Corollary <ref>), while we refer to <cit.> for the case of Fano 3-folds and for a general and updated literature on this topic.
In this paper, we will use valuation methods to study K-polystability of some families of Fano 4-folds which have been first studied in <cit.> and then completely classified in <cit.>, that is Fano 4-folds X having Lefschetz defect δ_X=3; we refer to <cit.> for an introduction on this invariant and the first implications on the geometry of Fano varieties in the case δ_X≥ 3.
From the viewpoint of K-polystability, the case of Fano 4-folds X with δ_X≥ 4 easily follows from known results. Indeed, by <cit.> these varieties are products of two del Pezzo surfaces, and applying <cit.> (see also Lemma <ref>) we see that a product of Fano varieties is K-polystable if and only if both of its factors are (see Remark <ref> for details).
This motivates our study of the subsequent case of Fano 4-folds having Lefschetz defect δ=3: among the 19 possible families of such Fano 4-folds classified in <cit.> and <cit.>, we establish which ones are K-polystable. We state our conclusions in the following result.
Let X be a Fano 4-fold with δ_X ≥ 3. Denote by F' (resp. F) the blow-up of ℙ^2 along two (resp. three non-collinear) points. Then:
(i) if δ_X ≥ 4, then X is K-polystable if and only if X ≇ S ×𝔽_1 and X ≇ S × F', with S a del Pezzo surface having ρ_S=δ_X+1.
(ii) If δ_X=3, then X is K-polystable if and only if it is one of the following:
• X≅ℙ^2× F;
• X≅ℙ^1×ℙ^1 × F;
• X≅ F× F;
• X, the blow-up of ℙ^1×ℙ_ℙ^1×ℙ^1(𝒪⊕𝒪(1,-1)) along two surfaces isomorphic to ℙ^1×ℙ^1;
• X≅ℙ^1× Y, where Y is the blow-up of ℙ^3 along a disjoint union of a line and a conic, and along two non-trivial fibers of the exceptional divisor over the blown-up line.
Outline of the paper. After giving some preliminaries on K-polystability, on the Lefschetz defect δ and on the structure of Fano 4-folds having δ=3 in Sections <ref> and <ref>, we dedicate Section <ref> to the proof of Theorem <ref>. As we have already observed, proving (ii) will require the most effort. Note that in (ii) all but the last one are toric varieties.
To prove our result, we distinguish between the toric and the non-toric cases, proceeding in two different ways. The key point to study the toric case is a well-known criterion on K-polystability for toric Fano varieties (see Proposition <ref>). The non-toric case, on the other hand, consists of 5 possible families and is the more difficult to check: we will use the Fujita–Li valuative criterion (see Theorem <ref>). The strategy here is to show Proposition <ref> which gives an explicit formula to compute the beta invariant on a special exceptional divisor, denoted by D, that all non-toric Fano 4-folds with δ=3 contain. We introduce and describe D, as well as the geometry of its ambient variety, in <ref>. To deduce the formula in Proposition <ref>, we first determine the Zariski decomposition of -K_X-tD for t≥ 0 (Proposition <ref>) through some technical preliminary lemmas, which rely heavily on the birational geometry of Fano 4-folds with δ=3.
Finally, we deduce that four out of five families of non-toric Fano 4-folds with δ=3 are not K-polystable, as the beta invariant on D turns out to be negative. The remaining case (that is the fifth variety in our list (ii) from Theorem <ref>) is isomorphic to a product and it gives the only example of a non-toric K-polystable Fano variety with δ=3: we will again apply <cit.> to deduce its K-polystability. We summarize our conclusions in Table <ref> and Table <ref>.
Notations. We work over the field of complex numbers. Let X be a smooth projective variety.
• ∼ denotes linear equivalence for divisors. We often will not distinguish between a Cartier divisor D and its corresponding invertible sheaf _X(D).
• 𝒩_1(X) (resp. 𝒩^1(X)) is the ℝ-vector space of one-cycles (resp. divisors) with real coefficients, modulo numerical equivalence, and
ρ_X:=𝒩_1(X)=𝒩^1(X) is the Picard number of X. Sometimes we denote it simply by ρ.
• The pseudoeffective cone is the closure of the cone in 𝒩^1(X) generated by the classes of effective divisors on X; its interior is the big cone. An ℝ-divisor is called pseudoeffective if its numerical class belongs to the pseudoeffective cone.
• We denote by [C] the numerical equivalence class in 𝒩_1(X) of a one-cycle C of X.
• NE(X)⊂𝒩_1(X) is the convex cone generated by classes of effective curves.
• A contraction of X is a surjective morphism φ X→ Y with connected fibers, where Y is normal and projective.
• The relative cone NE(φ) of φ is the convex subcone of NE(X) generated by classes of curves contracted by φ.
• We denote by δ_X, or simply by δ, the Lefschetz defect of X.
• A small ℚ-factorial modification (SQM) among two normal projective ℚ-factorial varieties is a birational map g Y Z that is an isomorphism in codimension one.
§ PRELIMINARIES
This section includes the preliminaries on K-polystability, see <ref>, and on the Zariski decomposition in Mori dream spaces, see <ref>.
§.§ Fujita-Li's valuative criterion
In this subsection we recall the characterization of K-semistability using valuations and we collect some preliminary results that arise from this study.
A key definition is the invariant β(E) computed on a divisor E over X, that is a divisor on a normal birational model Y over X (see <cit.>). For our purposes, we will focus on smooth varieties, even if the treatment can be made more general, referring to ℚ-Fano varieties.
Let X be a smooth Fano variety and E a prime divisor on a normal birational model μ Y→ X. We define:
β(E)=A(E)- 1/(-K_X)^n∫_0^∞ vol(-μ^*K_X-tE) dt
where A(E) is the log-discrepancy of X along E, namely A(E):=1+ord_E(K_Y-μ^*(K_X)).
We refer to <cit.> for the definition of the volume function vol( - ). For simplicity, we set
S(E):=1/(-K_X)^n∫_0^∞ vol(-μ^*K_X-tE) dt
and notice that this integral takes values in a closed set [0, τ], where τ=τ(E) is the pseudoeffective threshold of E with respect to -K_X, namely:
τ(E)=sup{s∈ℚ_>0| -μ^*K_X-sE is big}.
Therefore, we have β(E)=A(E)-S(E).
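As a toy illustration of these definitions (not part of the arguments below), take X=ℙ^2 and E=H a line, so that A(H)=1 and (-K_X)^2=9; for 0≤ t≤ 3 the divisor -K_X-tH=(3-t)H is nef, hence its volume is (3-t)^2, and it is not big for t>3. A short symbolic check in Python then gives S(H)=1 and β(H)=0, consistent with the K-semistability of ℙ^2:

```python
from sympy import symbols, integrate, Rational

t = symbols('t', nonnegative=True)

# Toy case: X = P^2, E = H a line, so A(H) = 1 and (-K_X)^2 = 9.
# For 0 <= t <= 3 the divisor -K_X - tH = (3 - t)H is nef, so its
# volume equals its top self-intersection (3 - t)^2; it vanishes for t > 3.
vol = (3 - t)**2
S = Rational(1, 9) * integrate(vol, (t, 0, 3))
beta = 1 - S            # A(H) = 1
print(S, beta)          # expected: S(H) = 1, beta(H) = 0
```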
The importance of the β-invariant mainly arises from the following result, that is known as the valuative criterion for K-(semi)stability, and it is due to Fujita and Li, <cit.> to which we also refer for a more general statement.
Let X be a smooth Fano variety. Then X is K-semistable if and only if β(E)≥ 0 for all divisors E over X.
For our purposes, we will use that if X is not K-semistable, then it is not K-polystable by definition.
Using the valuative criterion, it is easy to deduce that among del Pezzo surfaces, 𝔽_1 and the blow-up of ℙ^2 at two points are not K-polystable (see for instance <cit.>). More precisely, we have the following:
<cit.> Let S be a del Pezzo surface. Then S is K-polystable if and only if S is neither isomorphic to 𝔽_1 nor to the blow-up of ℙ^2 at two points.
Many varieties that we are going to study are products, and so we recall the following result. We refer to <cit.> for a more general statement involving the other notions of K-stability.
<cit.> Let X_1, X_2 be Fano varieties and let X=X_1× X_2. Then X is K-polystable if and only if X_i is K-polystable for i=1,2.
Although the computation of the beta invariant involves the volume of divisors that are not necessarily nef (we will use the Zariski decomposition to this end, see <ref>), it may be possible to compute it explicitly for divisors whose structure in the birational models of their ambient variety is well known, thanks to the powerful tools from birational geometry. This will be our approach in the proof of Theorem <ref> for the non-toric Fano 4-folds of our classification (see Proposition <ref> and proof of Proposition <ref>).
§.§ Zariski decomposition in Mori dream spaces
A common approach to compute the beta invariant of an effective divisor on a Fano variety, thus its volume, is to determine its Zariski decomposition. In our case, we note that smooth Fano varieties are Mori dream spaces (MDS) by <cit.>. In fact, the existence of such a nice decomposition characterizes Mori dream spaces, and on such varieties the Zariski decomposition is unique, as observed in <cit.>. To make our exposition self-contained, we start with the following basic definition, see <cit.> for details.
Let X be a normal projective variety and D a pseudoeffective ℚ-Cartier divisor on X. A Zariski decomposition of D is given by a pair of ℚ-Cartier divisors P and N on X which satisfy the following properties:
* P is nef;
* N is effective;
* D is ℚ-linearly equivalent to P+N;
* for any sufficiently divisible m∈ℤ_>0 the multiplication map
H^0(X, 𝒪(mP))→ H^0(X, 𝒪(mD))
given by the tautological section of 𝒪(mN) is an isomorphism.
If X is a MDS, by <cit.> we know that there exist finitely many SQMs g_i X X_i and that the pseudoeffective cone of X is given by the union of finitely many Mori chambers 𝒞_i; each chamber is of the form 𝒞_i=g_i^* (X_i)+ℝ_≥ 0{E_1, …, E_k} with E_1, …, E_k prime divisors contracted by g_i, and where (X_i) denotes the nef cone of X_i.
We may now interpret such a result as an instance of Zariski decomposition, as done in <cit.>. Indeed, for every ℚ-Cartier divisor D on a MDS X, there exists a rational birational contraction g X Y (factorizing through an SQM and a birational contraction Xψ X^' Y) and ℚ-Cartier divisors P and N on X such that D is ℚ-linearly equivalent to P+N, P':=ψ_*P is nef on X^' and defines g' X'→ Y, N':=ψ_*N is g'-exceptional, and the multiplication map H^0(X', mP')→ H^0(X', m(ψ_*D)) is an isomorphism for m≫0; namely P' and N' give a Zariski decomposition of ψ_*D as a divisor in X^'. To see this, we simply set P:=g^*g_*D and N:=D-P.
§ FANO MANIFOLDS WITH LEFSCHETZ DEFECT 3
In this section we recap the classification (and construction) of smooth complex Fano varieties with Lefschetz defect δ=3.
The Lefschetz defect δ_X of a smooth Fano variety X is an invariant of X that depends on the Picard number of its prime divisors, and it was first introduced in <cit.>. We recall its definition below; see also <cit.> for a recent survey on this new invariant and its properties.
Let X be a complex smooth Fano variety, and D be a prime divisor on X. Consider the pushforward ι_*(D)→(X) induced by the inclusion and set (D,X):=ι_*((D)). The Lefschetz defect of X is
δ_X:=max{(D,X) | D a prime divisor in X}.
Smooth Fano varieties with high Lefschetz defect have been completely described in arbitrary dimension: indeed, X has a rigid geometry when δ_X ≥ 4, that is X is the product of Fano varieties of lower dimension (cf. <cit.>).
In particular, if X is a Fano 4-fold having δ_X ≥ 4, then X≅ S_1× S_2 with S_i del Pezzo surfaces, and applying <cit.> we may assume that ρ_S_1=δ_X+1. Then, by Corollary <ref> and Lemma <ref> we conclude that X is K-polystable if and only if S_2 is neither isomorphic to 𝔽_1 nor to the blow-up of ℙ^2 at two points.
Thus, we consider the next case, i.e. Fano 4-folds with δ=3. The strategy to prove Theorem <ref> is to compute the β-invariant on a particular divisor that these varieties carry. We will see that this invariant turns out to be negative in many examples, so that we understand when K-polystability fails thanks to Theorem <ref>. Although not necessarily a product, Fano varieties with δ=3 still have a very explicit description; indeed, by <cit.> they are obtained via two possible constructions that we recall below (cf. <cit.>).
Let X be a smooth Fano variety with δ_X=3. Then there exists a smooth Fano variety T with dim T = dim X-2 and a ℙ^2-bundle Z→ T, such that X is obtained by blowing up Z along three pairwise disjoint smooth, irreducible, codimension 2 subvarieties S_1,S_2,S_3; we will denote by h X→ Z the blow-up map and set σ:=h ∘φ X→ T. The ℙ^2-bundle Z→ T is the projectivization of a suitable decomposable vector bundle on T, and S_2 and S_3 are sections of . Instead, _|S_1 S_1→ T is finite of degree 1 or 2: this yields two distinct constructions depending on the degree of S_1 over T; when the degree is 1 we refer to it as Construction A, otherwise we get Construction B.
As a consequence, in <cit.> and <cit.> we get the complete classification in the case of dimension 4 and δ=3, as follows. In Theorem <ref> we are going to analyze the K-polystability for all of these families.
Let X be a Fano 4-fold with δ_X=3. Then 5≤ρ_X ≤ 8 and there are 19 families for X, among which 14 are toric.
* If ρ_X=8, then X≅ F× F, where F is the blow-up of ℙ^2 at 3 non-collinear points;
* if ρ_X=7, then X≅ F'× F, where F' is the blow-up of ℙ^2 at 2 points;
* if ρ_X=6, there are 11 families for X, among which 8 are toric;
* if ρ_X=5, there are 6 families for X, among which 4 are toric.
In view of <cit.>, the toric families of Theorem <ref> are exactly those arising via Construction A. More precisely, they correspond to the products F× F and F'× F if ρ≥ 7 and, following Batyrev's classification of smooth toric Fano 4-folds and its notation (see <cit.>), to the toric varieties of type U (eight possible families) if ρ=6, and to the toric varieties of type K (four possible families) if ρ=5. For most of these cases we will use a characterization result on K-polystability for toric varieties (see <ref>). Thus, the most effort will be required by the Fano 4-folds obtained via Construction B, that is the non-toric families. The two non-toric families with ρ=5 have been studied in <cit.>, while the remaining three families with ρ=6 are described in <cit.>.
§.§ Construction B: relative cone and relative contractions
Construction B is described in <cit.>, we summarize it in the following.
We have
φ Z≅_T(𝒪(N)⊕𝒪⊕𝒪) → T,
where N is a divisor on T such that h^0(T,2N)>0 and -K_T± N is ample. We denote by H a tautological divisor of Z. Let D:=ℙ(𝒪⊕𝒪)↪ Z be the divisor given by the projection 𝒪(N)⊕𝒪⊕𝒪→𝒪⊕𝒪, so that D≅ℙ^1× T and D∼ H -φ^*N. Let now S_2, S_3⊂ D, S_i≅{pt}× T⊂ D, be the sections corresponding to the projections 𝒪⊕𝒪→𝒪, while φ_| S_1 S_1→ T is a double cover ramified along Δ∈ |2N| (see <cit.>). There exists a unique smooth divisor H_0∈ |H| containing S_1 such that H_0≅ℙ_T(𝒪(N)⊕𝒪), H_| H_0 is a tautological divisor, and S_1 is linearly equivalent to 2H_| H_0. Moreover, the surfaces {S_1, S_2, S_3} are pairwise disjoint and fiber-wise in general position.
Let h X→ Z be the blow-up along {S_1, S_2, S_3}, and set σ:=h ∘φ X→ T. We denote by E_i the exceptional divisors over S_i, i=1,2,3, and by H_0 and D the strict transforms of H_0 and D in X.
We now recall the description of the relative cone (σ) and its elementary contractions, which are all divisorial. The corresponding exceptional divisors will be our key to study the K-polystability of the varieties obtained via Construction B. We refer to <cit.> for details.
Let t∈ T∖Δ, so that X_t:=σ^-1(t) is a smooth del Pezzo surface of degree 5 and a smooth σ-fiber. Denote by {p_1,p_1', p_2, p_3}∈ Z_t:=φ^-1(t) the points blown-up by h_|X_t X_t→ Z_t, where p_i= S_i ∩ Z_t for i=2,3, and {p_1, p_1'}=S_1 ∩ Z_t. The 5-dimensional cone (X_t) is generated by the classes of the ten (-1)-curves in X_t, given by the exceptional curves and the transforms of the lines through two blown-up points.
We denote by
e_i (respectively e_1') the exceptional curve over p_i (respectively p_1'), and ℓ_i,j
(respectively ℓ_1,1', ℓ_1',i for i=2,3)
the transform of the line p_ip_j
(respectively p_1p_1', p_1'p_i for i=2,3). Let ι X_t↪
X be the inclusion; by <cit.> one has that every relative elementary contraction of X/T restricts to a non-trivial contraction of X_t, and
ι_*(X_t)=(σ).
Figure <ref> shows the 3-dimensional polytope obtained as a hyperplane section of the 4-dimensional cone (σ), which has 7 extremal rays, and their generators.
By <cit.> we deduce that every relative elementary contraction of (σ) is the blow-up of a smooth variety along a smooth codimension 2 subvariety. The contraction corresponding to [e_1]=[e_1'] (resp. [e_2], [e_3]) is the blow-down of E_1 (resp. E_2, E_3), while the contractions corresponding to [ℓ_1,1'] and [ℓ_2,3] have respectively exceptional divisors H_0 and D.
Moreover, we denote by G_i the exceptional divisor of the contraction corresponding to [ℓ_1,i]=[ℓ_1',i] for i=2,3; by construction, G_i has a ^1-bundle structure over S_1 whose fibers are numerically equivalent to ℓ_1,i and ℓ_1',i for i=2,3.
Lastly, we observe that E_1 ≅ G_2 ≅ G_3 and that E_2≅ E_3≅ H_0.
§.§ Construction B: relations among exceptional divisors
In this section we refer to <cit.>.
By <cit.>, we know that σ X → T has three factorizations of the form X Z T, where h X → Z is the divisorial contraction of {E_1, E_2, E_3}, {G_2, E_3, H_0} or {G_3, E_2, H_0}, and ZT is isomorphic to the ^2-bundle from <ref>. In fact, there is a ℤ_3-action on the set of σ-exceptional divisors
{E_1, E_2, E_3, H_0, D, G_2, G_3}
induced by an automorphism of a general σ-fiber X_t (see <cit.> for the description of (X_t)), which in turn extends to an automorphism of X over T. This action corresponds to the permutation (1,2,3) on the triplets (E_1, G_2, G_3) and (E_2, E_3, H_0), while D is left invariant.
The symmetry on the σ-exceptional divisors given by the three factorizations of σ allows us, for instance, to deduce computations on E_3 and H_0 from computations on E_2. This will be a key tool for the computation in <ref>. Moreover, the unique behaviour of D among all σ-exceptional divisors led us to the computation of its β-invariant.
Recall that H_0 -φ^*N∼ D, S_1 ⊂ H_0 and S_2, S_3 ⊂ D, so that the pull-back h^* and the above Remark yield the following relations among the σ-exceptional divisors:
H_0 +E_1 -σ^*N ∼ D +E_2+E_3,
E_2 +G_2 -σ^*N ∼ D + H_0+E_3,
E_3 +G_3 -σ^*N ∼ D + H_0+E_2.
Moreover, E_2, E_3, H_0 and D are ^1-bundles over T, while E_1, G_2 and G_3 are ^1-bundles over S_1. We have that:
(i) H_0 ≅_T(-K_T ⊕ -K_T-N) and -K_X_| H_0 is the tautological divisor; the same holds for E_2 and E_3.
(ii) D≅_T(-K_T-N ⊕ -K_T-N) and -K_X_| D is the tautological divisor.
Note that E_2, E_3 and H_0 are pairwise disjoint, and that their intersection with D is a section {pt}× T of D. As divisors in D, these intersections correspond to surjections (-K_T-N) ⊕(-K_T-N) →(-K_T-N), while as divisors in E_2, E_3 and H_0 they correspond to the projection (-K_T) ⊕(-K_T-N) →(-K_T-N).
Finally, we can write -K_X as
-K_X ∼σ^*(-K_T+N)+H_0+2D+E_2+E_3.
§ PROOF OF THEOREM <REF>
In this section we show Theorem <ref>. We keep the notation introduced in the previous section. The case δ≥ 4 has been explained in Remark <ref>, thus from now on we consider the case δ=3.
§.§ Toric case
We recall from Theorem <ref> that there are 14 families of toric Fano 4-folds with δ=3, and from Remark <ref> that all of them arise via Construction A. The aim of this section is to deduce which ones among them are K-polystable. Our conclusion will be the following:
Let X be a toric Fano 4-fold with δ_X=3. Then it is K-polystable if and only if it is one of the following varieties:
* X≅ℙ^2× F, where F is the blow-up of ℙ^2 along three non-collinear points;
* X≅ℙ^1×ℙ^1 × F;
* X is the blow-up of ℙ^1×ℙ_ℙ^1×ℙ^1(𝒪⊕𝒪(1,-1)) along two surfaces isomorphic to ℙ^1×ℙ^1.
* X≅ F× F.
In order to prove the above result, we recall that Gorenstein toric Fano varieties correspond to reflexive lattice polytopes, that is those for which the dual
is also a lattice polytope. We will make use of the following characterization of K-polystability for toric Fano varieties.
<cit.>
Let X_P be a toric Fano variety associated to a reflexive polytope P. Then, X_P is K-polystable if
and only if 0 is the barycenter of P.
In the following proof we follow Batyrev's notation <cit.> for the toric Fano 4-folds of Theorem <ref> with ρ=5,6: type K for varieties of Theorem <ref> having ρ=5, and type U for the ones with ρ=6.
Assume that X is a product of surfaces. If ρ_X=5, then X=K_4≅ℙ^2× F is K-polystable by Corollary <ref> and Lemma <ref>. If ρ_X=6, then either X=U_4≅𝔽_1× F or X=U_5≅ℙ^1×ℙ^1 × F, and applying the same results we deduce that among them only U_5 is K-polystable. For the same reason, and by Theorem <ref>, we deduce that X is not K-polystable if ρ_X=7, while it is K-polystable if ρ_X=8, namely if X≅ F× F.
Assume now that X is not a product of surfaces. In view of Lemma <ref> we are left to check whether 0 corresponds to the barycenter of the polytopes corresponding to the remaining varieties of our classification. To this end, we use the Graded ring database (see <cit.>), giving the invariants of these varieties (computed in <cit.>) as inputs. It turns out that among them, the only K-polystable variety is U_8, that is the blow-up of ℙ^1×ℙ_ℙ^1×ℙ^1(𝒪⊕𝒪(1,-1)) along two surfaces isomorphic to ℙ^1×ℙ^1.
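To illustrate the barycenter criterion used above in the simplest relevant cases (a sketch for the reader's convenience, not part of the classification argument), one can compute the barycenter of the polygon spanned by the primitive ray generators for two del Pezzo surfaces: for F, the blow-up of ℙ^2 at three non-collinear points, the polygon is centrally symmetric and the barycenter vanishes, while for 𝔽_1 it does not. For these two examples the same qualitative conclusion holds for the dual polygon, so the outcome does not depend on which of the dual pair one associates to the variety.

```python
from fractions import Fraction

def barycenter(vertices):
    """Barycenter of a lattice polygon with the origin in its interior,
    vertices in counterclockwise order.  Split into triangles (0, v_i, v_{i+1}):
    each has centroid (v_i + v_{i+1})/3 and signed area (v_i x v_{i+1})/2."""
    bx = by = area = Fraction(0)
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        a = Fraction(x1 * y2 - x2 * y1, 2)
        bx += a * Fraction(x1 + x2, 3)
        by += a * Fraction(y1 + y2, 3)
        area += a
    return (bx / area, by / area)

# F = blow-up of P^2 at three non-collinear points: rays +-(1,0), +-(0,1), +-(1,1)
hexagon = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]
# F_1 = blow-up of P^2 at one point: rays (1,0), (1,1), (0,1), (-1,-1)
f1 = [(1, 0), (1, 1), (0, 1), (-1, -1)]

print(barycenter(hexagon))  # (0, 0): consistent with F being K-polystable
print(barycenter(f1))       # (1/6, 1/6): non-zero, and F_1 is not K-polystable
```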
§.§ Non-toric case
The purpose of this section is to prove that among the five possible families of non-toric Fano 4-folds having δ=3 (see Theorem <ref> and Remark <ref>), only one is K-polystable. More precisely, after our discussion we will deduce the following:
Let X be a non-toric Fano 4-fold having δ_X=3. Then X is K-polystable if and only if X≅ℙ^1× Y, where Y is the blow-up of ℙ^3 along a disjoint union of a line and a conic, and along two non-trivial fibers of the exceptional divisor over the blown-up line.
We recall by Remark <ref> that all non-toric Fano 4-folds of Theorem <ref> arise from Construction B. In particular, the variety X of Proposition <ref> is obtained via this construction, taking T≅ℙ^1×ℙ^1 and N∈ |𝒪_ℙ^1×ℙ^1(0,1)|.
In order to prove Proposition <ref>, the first objective is to compute the β-invariant of D (see <ref>), and to this end we will show the following result.
Set a=-K_X^4; b=N^2; c=(-K_T-N)^2; d=N · (-K_T-N); e=(-K_T+N)^2; f=N· (-K_T+N). Then:
β(D)=1/a(2/5b+8c+6d-4e+4f).
We start with some preliminary computations that follow from <ref> and <cit.>; we will use these to prove the lemmas below.
Recall from Remark <ref> that there is a symmetry among the exceptional divisors {E_2, E_3, H_0}. Denote by η a tautological divisor of ℙ_T(N⊕ N) and by ξ a tautological divisor of ℙ_T(N⊕). Then,
* D_|D=-η, so that D^3∼η·σ_| D^*(2N)- σ_| D^*(N)^2 and D^4=-3N^2;
* H_0_| H_0=-ξ, so that ( H_0)^3∼ξ·σ_|E_2^*(N) and ( H_0)^4=-N^2; the same holds for E_2 and E_3;
* -K_X_| D∼η + σ_| D^*(-K_T-2N);
* (-K_X_| D)^2∼-K_X_| D·σ_| D^*(-2K_T-2N) - σ_| D^*(-K_T-N)^2;
* -K_X_| H_0∼ξ+σ_|H_0^*(-K_T-N); the same holds for E_2 and E_3;
* ( H_0)^2· D∼ 0 and ( H_0)^3· D=0; the same holds for E_2 and E_3;
* (σ^*M)^i∼0 for all M∈(T) and i=3,4.
Notice that in all the examples of Fano 4-folds with δ=3 obtained via Construction B, the divisor N is nef (see <cit.>), therefore η and ξ are nef as well.
We will first describe the Zariski decomposition of the divisor -K_X-tD, where t≥ 0.
The restriction of -K_X-tD to H_0, E_2 and E_3 is nef for 0≤ t≤ 1, while (-K_X-tD)_|D is nef for t≥ 0.
Recall that by construction, -K_T± N is an ample divisor on T, so -K_T+ sN is ample for -1 ≤ s ≤ 1. Thus, (-K_X-tD)_|H_0∼ (1-t)ξ+σ_|H_0^*(-K_T+(t-1)N) is nef for t≤ 1.
Similarly, (-K_X-tD)_|D∼(1+t)(η-σ_| D^*N)+σ_| D^*(-K_T+(t-1)N); the claim follows since η-σ_| D^*N is the tautological divisor of ℙ_T(𝒪⊕𝒪), -K_T-N is ample, and N is nef.
Let Γ be an irreducible curve not contained in H_0∪ E_2 ∪ E_3 ∪D. If Γ is contracted by σ, then by construction H_0·Γ, E_2·Γ, E_3·Γ, D·Γ≥0 and at least one inequality is strict.
The divisor -K_X-tD is nef for 0≤ t≤ 1.
Assume t>1. Let ℓ be a fiber of the restriction σ_|H_0H_0→ T. Since ℓ is a fiber of the exceptional divisor of a smooth blow-up (see 2.4), one has (-K_X-tD)·ℓ=1-t<0. Thus, we may assume t≤ 1. If Γ is an irreducible curve contained in H_0∪ E_2 ∪ E_3 ∪D, then (-K_X-tD)·Γ≥ 0 by Lemma <ref>. Otherwise, using equation (<ref>) in <ref>, one has
(-K_X-tD)·Γ=[σ^*(-K_T+N)+(2-t)D+(H_0+E_2+E_3)]·Γ >0,
and this follows from Remark <ref> and the ampleness of -K_T+N.
The divisor -K_X-D is a supporting divisor of the birational contraction X→ W associated to the facet ⟨ [e_2],[e_3],[l_1,1'] ⟩ of (σ).
By <cit.> we know that the contraction X→ W is divisorial, and we show that -K_X-D is a supporting divisor. From Lemma <ref> and its proof, one has that -K_X-D is nef and not ample, and the curves on which it vanishes are contained in H_0∪ E_2 ∪ E_3 ∪D. Furthermore, we see from the proof of Lemma <ref> that -K_X-D has zero intersection only with the fibers of σ that are contained in H_0∪ E_2 ∪ E_3. This gives the claim.
The following result is a consequence of the above lemmas and of the discussion done in <ref>.
The Zariski decomposition of the divisor -K_X-tD is given by
P(t)=
-K_X-tD, t∈[0,1]
H(t), t∈(1,2]
where H(t)=[σ^*(-K_T+N)+(2-t)D]+(2-t)(H_0+E_2+E_3), and the pseudoeffective threshold of D with respect to -K_X is τ( D)=2.
In view of Lemma <ref>, we are left to understand the decomposition of -K_X-tD into positive and negative part for t≥1.
By Lemma <ref>, since H_0∪ E_2∪ E_3 is the exceptional locus of the divisorial contraction having -K_X-D as a supporting divisor, we need to determine a, b, c≥ 0 and all values of t≥ 1 such that
P(t)=-K_X-tD-aH_0-b E_2-c E_3.
Denote by h_0, e_2, e_3 the fibers of σ contained respectively in H_0, E_2, and E_3. Requiring that -K_X-tD-aH_0-b E_2-c E_3 has zero intersection with h_0, e_2, e_3, we get a=b=c=t-1.
Set H(t):= -K_X-tD-(t-1)(H_0+E_2+E_3). By equation (<ref>) in <ref>, we deduce that
H(t)=[σ^*(-K_T+N)+(2-t)D]+(2-t)(H_0+E_2+E_3).
Let Γ be a fiber of σ contained in D, so that H(t)·Γ=2(2-t); thus, H(t) is not nef for t>2. Finally, we see that H(t) is nef for t≤ 2, and this follows from Remark <ref> and the ampleness of -K_T+N.
Since H(2) ∼σ^* (-K_T+N) is a nef and not big divisor, we deduce that τ( D)=2, hence our claim.
Finally, we will use the following lemmas to compute S( D) (see <ref> for its definition).
Notation as in Proposition <ref>. Then,
∫_0^1 (-K_X-t D)^4 dt = a- 8/5b-8c-6d.
We compute (-K_X-t D)^4. By Remark <ref>, we have:
* -K_X^3· D=(-K_X_| D)^3=3(-K_T-N)^2;
* -K_X^2· D^2=(-K_X_| D)^2·(-η)=-(-K_T-N)^2-2N·(-K_T-N);
* -K_X· D^3=(-K_X_| D)·η^2=2N·(-K_T-N)+N^2.
Therefore,
(-K_X-t D)^4=a-12ct-6(c+2d)t^2-4(b+2d)t^3-3bt^4
and the claim follows.
Notation as in Proposition <ref>. Then,
∫_1^2 H(t)^4 dt = 6/5b-4f+4e.
We recall that H(t)=[σ^*(-K_T+N)+(2-t)D]+(2-t)(H_0+E_2+E_3). In order to obtain H(t)^4, we compute the intersections
(2-t)^i[σ^*(-K_T+N)+(2-t)D]^4-i·(H_0+E_2+E_3)^i,
for i=0,…,4. We use Remark <ref> for the following computations.
[σ^*(-K_T+N)+(2-t)D]^4=-6e(2-t)^2+8f(2-t)^3-3b(2-t)^4.
Indeed:
* σ^*(-K_T+N)^2· D^2=σ_| D^*(-K_T+N)^2·(-η)=-(-K_T-N)^2;
* σ^*(-K_T+N)· D^3=σ_| D^*(-K_T+N)·η^2= 2N·(-K_T+N).
[σ^*(-K_T+N)+(2-t)D]^3·( H_0+E_2+E_3)=9e(2-t)-9f(2-t)^2+3b(2-t)^3.
Recall that H_0∩ D is a section {pt}× T of D. By restricting to D we obtain:
* σ^*(-K_T+N)^2·D·( H_0+E_2+E_3)=3(-K_T+N^2);
* σ^*(-K_T+N)·D^2 ·( H_0+E_2+E_3)=-3N·(-K_T+N);
* D^3 ·( H_0+E_2+E_3) = 3N^2.
[σ^*(-K_T+N)+(2-t)D]^2·( H_0+E_2+E_3)^2=-3e.
Recall that E_2, E_3 and H_0 are pairwise disjoint. Thus:
* σ^*(-K_T+N)^2 ·[( H_0)^2+(E_2)^2+(E_3)^2]=-3(-K_T+N)^2;
* σ^*(-K_T+N)· D ·[( H_0)^2+(E_2)^2+(E_3)^2]=0;
* D^2 ·[( H_0)^2+(E_2)^2+(E_3)^2]=0.
[σ^*(-K_T+N)+(2-t)D]·( H_0+E_2+E_3)^3=3f.
Indeed:
* σ^*(-K_T+N)·[( H_0)^3+(E_2)^3+(E_3)^3]=3N· (-K_T+N);
* D ·[( H_0)^3+(E_2)^3+(E_3)^3]=0.
We conclude that
H(t)^4=6b(2-t)^4-16f(2-t)^3+12e(2-t)^2,
and the claim follows.
We compute β( D)=A( D)-S( D). Since D⊂ X is a prime divisor on X, we have A( D)=1. Moreover, due to Proposition <ref>, we can compute
a· S( D)= ∫_0^2 vol(-K_X-t D) dt
by splitting it as
∫_0^1 (-K_X-t D)^4 dt + ∫_1^2 H(t)^4dt.
Thus, the claim follows from Lemma <ref> and Lemma <ref>.
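The elementary but lengthy integrations above can be double-checked symbolically. The following sketch (a verification aid in Python with sympy, not part of the proof) rebuilds (-K_X-tD)^4 from the intersection numbers listed above, integrates the two pieces of the Zariski decomposition, and recovers the stated formula for β(D):

```python
from sympy import symbols, integrate, expand, simplify

a, b, c, d, e, f, t = symbols('a b c d e f t')

# Intersection numbers from the Remark above: (-K_X)^4 = a, (-K_X)^3.D = 3c,
# (-K_X)^2.D^2 = -(c + 2d), (-K_X).D^3 = 2d + b, D^4 = -3b.
vol_low = (a - 4*t*(3*c) + 6*t**2*(-(c + 2*d))
             - 4*t**3*(2*d + b) + t**4*(-3*b))                 # (-K_X - tD)^4
vol_high = 6*b*(2 - t)**4 - 16*f*(2 - t)**3 + 12*e*(2 - t)**2  # H(t)^4

I1 = integrate(vol_low, (t, 0, 1))    # expected: a - 8b/5 - 8c - 6d
I2 = integrate(vol_high, (t, 1, 2))   # expected: 6b/5 - 4f + 4e
beta_D = simplify(1 - (I1 + I2) / a)  # A(D) = 1
print(expand(I1), expand(I2))
print(beta_D)                         # (2b/5 + 8c + 6d - 4e + 4f)/a
```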
We now apply Proposition <ref> to conclude this section with the proof of Proposition <ref>. We keep the notation of such proposition.
Assume that X is a product. Then by the classification of Fano 4-folds having δ=3 (see Theorem <ref> and Remark <ref>) it follows that X≅ℙ^1× Y with Y being as in the statement. By <cit.> we know that Y is K-polystable, then using Lemma <ref> we conclude that X is K-polystable.
Suppose now that X is not a product. We will show that for all the remaining four families of varieties in our classification one has β(D)<0, so that we conclude by Theorem <ref> that they are not K-semistable, hence not K-polystable. Since a>0, in view of Proposition <ref> it suffices to show that λ:=2/5b+8c+6d-4e+4f<0.
Assume first that ρ_X=5. By Construction B, one has T=ℙ^2, and by the proof of <cit.> we know that either N=𝒪(1) or N=𝒪(2). Using the relevant numerical invariants of the corresponding varieties computed in <cit.>, in the first case one can check that λ=-18/5, while in the second case we get λ=-192/5.
Assume now that ρ_X=6. Construction B gives either T=𝔽_1 or T=ℙ^1×ℙ^1. In the first case, by the proof of <cit.> we know that N=π^*L where π𝔽_1→ℙ^2 is the blow-up, and L general line in ℙ^2. For this variety, using the numerical invariants of <cit.> one gets λ=-38/5. Otherwise, by the proof of the same proposition we have N=𝒪(1,1), and we obtain that λ=-96/5, hence our claim.
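For completeness, the values of λ quoted in the proof follow from elementary intersection theory on the base surface T, since b, c, d, e, f only involve (-K_T)^2, (-K_T)· N and N^2. A short numerical check (illustrative only; the input intersection numbers are computed from the stated pairs (T,N)) is:

```python
from fractions import Fraction

def lam(K2, KN, N2):
    """lambda = 2/5*b + 8c + 6d - 4e + 4f, where b = N^2, c = (-K_T - N)^2,
    d = N.(-K_T - N), e = (-K_T + N)^2, f = N.(-K_T + N), expressed through
    K2 = (-K_T)^2, KN = (-K_T).N and N2 = N^2."""
    b, c, d = N2, K2 - 2*KN + N2, KN - N2
    e, f = K2 + 2*KN + N2, KN + N2
    return Fraction(2, 5)*b + 8*c + 6*d - 4*e + 4*f

cases = {
    "T = P^2,     N = O(1)":   (9, 3, 1),
    "T = P^2,     N = O(2)":   (9, 6, 4),
    "T = F_1,     N = pi*L":   (8, 3, 1),
    "T = P1 x P1, N = O(1,1)": (8, 4, 2),
}
for name, data in cases.items():
    print(name, lam(*data))
# expected: -18/5, -192/5, -38/5, -96/5 -- all negative, so beta(D) < 0
```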
§.§ Conclusions and final table
We obtain the proof of Theorem <ref> as a direct consequence of Remark <ref>, of the classification theorem of Fano 4-folds having δ=3 (see Theorem <ref>, Remark <ref>) and Propositions <ref>, <ref>. We summarize our results in the following tables: Table <ref> gathers all Fano 4-folds with δ=3, while Fano 4-folds with δ≥ 4 appear in Table <ref>.
The notation in the tables is as follows. In the first column we use the description of Construction B from <ref> for the non-toric Fano 4-folds with δ=3, while we use the notation in <cit.> for the toric case when δ=3 and ρ =5,6, explicitly showing which 4-folds are products of surfaces. The second column contains the Picard number ρ, while in the last column with the symbol ✓ (resp. ✗) we mean that the 4-fold is K-polystable (resp. not K-polystable). Table <ref> contains an extra column, where we write whether β( D) is positive (+ve) or negative (-ve), when applicable.
Finally, we recall that F' (resp. F) is the blow-up of ℙ^2 along two (resp. three non-collinear) points.
Acknowledgements. Both authors are members of GNSAGA, INdAM. The second named author is grateful to the University of Genova for the kind hospitality and support provided during part of the preparation of this work. The first named author is supported by the MIUR Excellence
Department Project awarded to Dipartimento di Matematica, Università di Genova, CUP D33C23001110001. She dedicates this work to her eldest daughter, Miriam.
|
http://arxiv.org/abs/2409.02181v1 | 20240903180003 | Quasi-periodic X-ray eruptions years after a nearby tidal disruption event | [
"M. Nicholl",
"D. R. Pasham",
"A. Mummery",
"M. Guolo",
"K. Gendreau",
"G. C. Dewangan",
"E. C. Ferrara",
"R. Remillard",
"C. Bonnerot",
"J. Chakraborty",
"A. Hajela",
"V. S. Dhillon",
"A. F. Gillan",
"J. Greenwood",
"M. E. Huber",
"A. Janiuk",
"G. Salvesen",
"S. van Velzen",
"A. Aamer",
"K. D. Alexander",
"C. R. Angus",
"Z. Arzoumanian",
"K. Auchettl",
"E. Berger",
"T. de Boer",
"Y. Cendes",
"K. C. Chambers",
"T. -W. Chen",
"R. Chornock",
"M. D. Fulton",
"H. Gao",
"J. H. Gillanders",
"S. Gomez",
"B. P. Gompertz",
"A. C. Fabian",
"J. Herman",
"A. Ingram",
"E. Kara",
"T. Laskar",
"A. Lawrence",
"C. -C. Lin",
"T. B. Lowe",
"E. A. Magnier",
"R. Margutti",
"S. L. McGee",
"P. Minguez",
"T. Moore",
"E. Nathan",
"S. R. Oates",
"K. C. Patra",
"P. Ramsden",
"V. Ravi",
"E. J. Ridley",
"X. Sheng",
"S. J. Smartt",
"K. W. Smith",
"S. Srivastav",
"R. Stein",
"H. F. Stevance",
"S. G. D. Turner",
"R. J. Wainscoat",
"J. Weston",
"T. Wevers",
"D. R. Young"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.CO",
"astro-ph.GA",
"astro-ph.SR"
] |
Quasi-periodic X-ray eruptions years after a nearby tidal disruption event
M. Nicholl^10000-0002-2555-3192,
D. R. Pasham^20000-0003-1386-7861,
A. Mummery^3,
M. Guolo^40000-0002-5063-0751,
K. Gendreau^5,
G. C. Dewangan^6,
E. C. Ferrara^7,8,50000-0001-7828-7708,
R. Remillard^2,
C. Bonnerot^9,10,
J. Chakraborty^20000-0002-0568-6000,
A. Hajela^11,
V. S. Dhillon^12,13,
A. F. Gillan^10000-0003-4094-9408,
J. Greenwood^1,
M. E. Huber^14,
A. Janiuk^150000-0002-1622-3036,
G. Salvesen^160000-0002-9535-4914,
S. van Velzen^17,
A. Aamer^1,
K. D. Alexander^18,
C. R. Angus^1,
Z. Arzoumanian^5,
K. Auchettl^19,20,
E. Berger^21,
T. de Boer^14,
Y. Cendes^21,22,
K. C. Chambers^14,
T.-W. Chen^230000-0002-1066-6098,
R. Chornock^24,
M. D. Fulton^1,
H. Gao^14,
J. H. Gillanders^25,
S. Gomez^26,
B. P. Gompertz^9,10,
A. C. Fabian^27,
J. Herman^14,
A. Ingram^28,
E. Kara^2,
T. Laskar^29,30,
A. Lawrence^31,
C.-C. Lin^14,
T. B. Lowe^14,
E. A. Magnier^14,
R. Margutti^24,
S. L. McGee^9,10,
P. Minguez^14,
T. Moore^1,
E. Nathan^32,
S. R. Oates^33,
K. C. Patra^24,
P. Ramsden^1,9,100009-0009-2627-2884,
V. Ravi^32,
E. J. Ridley^9,10,
X. Sheng^1,
S. J. Smartt^25,1,
K. W. Smith^1,
S. Srivastav^25,1,
R. Stein^340000-0003-2434-0387,
H. F. Stevance^25,1,
S. G. D. Turner^350000-0002-8641-7231,
R. J. Wainscoat^14,
J. Weston^1,
T. Wevers^26,
D. R. Young^1
^ 1Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast,
Belfast BT7 1NN, UK
^ 2Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology,
Cambridge, MA, USA
^ 3Oxford Theoretical Physics, Beecroft Building, Clarendon Laboratory, Parks Road, Oxford,
OX1 3PU, UK
^ 4Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St.,
Baltimore MD 21218, USA
^ 5NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
^ 6Inter-University Centre for Astronomy and Astrophysics (IUCAA), PB No.4, Ganeshkhind,
Pune-411007, India
^ 7Department of Astronomy, University of Maryland, College Park, MD, 20742, USA
^ 8Center for Research and Exploration in Space Science & Technology II (CRESST II),
NASA/GSFC, Greenbelt, MD 20771, USA
^ 9School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT
^ 10Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham
B15 2TT
^ 11DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155, 2200 Copenhagen,
Denmark
^ 12Department of Physics and Astronomy, University of Sheffield, Sheffield, S3 7RH,
United Kingdom
^ 13 Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
^ 14Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI
96822, USA
^ 15Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotnikow 32/46,
02–668, Warsaw, Poland
^ 16Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos,
NM 87545, USA
^ 17Leiden Observatory, Leiden University,
Postbus 9513, 2300 RA Leiden, The Netherlands
^ 18Department of Astronomy and Steward Observatory, University of Arizona, 933 North
Cherry Avenue, Tucson, AZ 85721-0065, USA
^ 19School of Physics, The University of Melbourne, VIC 3010, Australia
^ 20Department of Astronomy and Astrophysics, University of California, Santa Cruz,
CA 95064, USA
^ 21Center for Astrophysics, Harvard & Smithsonian, 60 Garden Street, Cambridge,
MA 02138-1516, USA
^ 22Department of Physics, University of Oregon, Eugene, OR 97403, USA
^ 23Graduate Institute of Astronomy, National Central University, 300 Jhongda Road,
32001 Jhongli, Taiwan
^ 24Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA
^ 25Astrophysics sub-Department, Department of Physics, University of Oxford, Keble Road,
Oxford, OX1 3RH, UK
^ 26Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
^ 27Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge
CB3 0HA, UK
^ 28School of Mathematics, Statistics and Physics, Newcastle University, Herschel Building,
Newcastle upon Tyne, NE1 7RU, UK
^ 29Department of Physics & Astronomy, University of Utah, Salt Lake City, UT 84112, USA
^ 30Department of Astrophysics/IMAPP, Radboud University, P.O. Box 9010, 6500 GL,
Nijmegen, The Netherlands
^ 31Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill,
Edinburgh EH9 3HJ, UK
^ 32Cahill Center for Astronomy and Astrophysics, California Institute of Technology,
Pasadena, CA 91125, USA
^ 33Department of Physics, Lancaster University, Lancaster LA1 4YB, UK
^ 34Division of Physics, Mathematics, and Astronomy, California Institute of Technology,
Pasadena, CA 91125, USA
^ 35Department of Applied Mathematics and Theoretical Physics, University of
Cambridge, Wilberforce Road, Cambridge, CB3 0WA, UK
==========================================================================================================================
Quasi-periodic Eruptions (QPEs) are luminous bursts of soft X-rays from the nuclei of galaxies, repeating on timescales of hours to weeks <cit.>. The mechanism behind these rare systems is uncertain, but most theories involve accretion disks around supermassive black holes (SMBHs), undergoing instabilities <cit.> or interacting with a stellar object in a close orbit <cit.>. It has been suggested that this disk could be created when the SMBH disrupts a passing star <cit.>, implying that many QPEs should be preceded by observable tidal disruption events (TDEs). Two known QPE sources show long-term decays in quiescent luminosity consistent with TDEs <cit.>, and two observed TDEs have exhibited X-ray flares consistent with individual eruptions <cit.>. TDEs and QPEs also occur preferentially in similar galaxies <cit.>. However, no confirmed repeating QPEs have been associated with a spectroscopically confirmed TDE or an optical TDE observed at peak brightness. Here we report the detection of nine X-ray QPEs with a mean recurrence time of approximately 48 hours from AT2019qiz, a nearby and extensively studied optically-selected TDE <cit.>. We detect and model the X-ray, ultraviolet and optical emission from the accretion disk, and show that an orbiting body colliding with this disk provides a plausible explanation for the QPEs.
The TDE AT2019qiz was discovered by the Zwicky Transient Facility (ZTF) on 2019-09-19 UT (Universal Time), at Right Ascension 04:46:37.88 and Declination -10:13:34.90 (J2000.0 epoch), in the nucleus of a barred spiral galaxy at redshift z=0.0151 (luminosity distance of 65.6 Mpc). Its optical spectrum was typical of TDEs, with broad emission lines from hydrogen and ionised helium <cit.>, and it is a particularly well-studied event due to its proximity and early detection <cit.>. The ultraviolet (UV) and optical luminosity declined over a few months until reaching a steady, years-long plateau at ∼10^41 erg s^-1 <cit.>, consistent with an exposed accretion disk <cit.>. Highly ionized iron lines appeared at this phase, indicating a gas-rich environment ionized by the TDE <cit.>. The mass of the central SMBH has been estimated as a few ×10^6 M_⊙ (where M_⊙ is the solar mass) using various techniques (Extended Data Table <ref>).
We observed AT2019qiz on 2023-12-09 and 2023-12-10 UT (approximately 1500 days after its first optical detection) with the Chandra X-ray Observatory and on 2023-12-21 UT with the Hubble Space Telescope (HST) as part of a joint program to study TDE accretion disks. The data were obtained across three exposures of 15.4, 18.8 and 16.1 ks, shown in Fig. <ref>a. The average count rate in the broad band (0.5-7 keV) is more than an order of magnitude larger in the middle exposure than in the first and final exposures. The images show another X-ray source ≈ 7 arcseconds south-east (SE) of AT2019qiz, but the high spatial resolution of the images (∼0.5 arcseconds) allows us to definitively associate the increase in count rate with AT2019qiz. The count rate increases and then decreases over the course of the middle exposure, while no other source in the field (Extended Data Fig. <ref>) shows evidence for variability. By analysing the spectra of these sources, we find that reported X-rays from Swift/XRT during the initial optical flare in 2019-2020 <cit.> are instead detections of the nearby SE source, and we exclude these from any analysis in this work (Methods).
To probe the variability of AT2019qiz further, we obtained high-cadence observations using the Neutron Star Interior Composition Explorer (NICER) from 2024-02-29 to 2024-03-09 UT, the X-ray Telescope (XRT) on-board the Neil Gehrels Swift Observatory on 2024-03-12 UT, and starting on 2024-03-14 UT. The soft X-ray (0.3-1.0 keV) light curves from NICER showed repeating sharp increases in count rate followed by a return to quiescence, with six consecutive peaks detected in just over 10 days. Two more peaks were detected over the next four days with Swift/XRT and . The light curves are shown in Fig. <ref>b. The time between successive peaks ranges from 39 to 54 hours in the rest-frame, measured by fitting skewed Gaussian profiles (Extended Data Fig. <ref>). The mean recurrence time is 48.4±0.3 hours, with a standard deviation of 7.2 hours. Typical durations are 8-10 hours, with a consistent light curve shape exhibiting a fast rise and slower decay (Fig. <ref>c).
The combination of soft X-ray sensitivity and cadence in the NICER data allows us to perform time-resolved spectral fitting (Fig. <ref>, Extended Data Fig. <ref>). The nearby SE source detected by Chandra does not contribute significantly in the NICER bandpass (Methods). Single-temperature blackbody fits to the second peak (chosen for good temporal coverage and low background; Methods) show an increasing temperature as the luminosity rises, and a lower temperature for the same luminosity during the decay phase, due to an increase in the blackbody radius. The expanding emitting region is ∼1 solar radius (∼10^11 cm). The bolometric luminosity at peak reaches (1.8±0.1)×10^43 erg s^-1, with a temperature of 109±1 eV.
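For reference, the quoted size of the emitting region follows from the blackbody relation L = 4πR²σT⁴. The following sketch (an illustrative back-of-the-envelope check in Python, using the peak luminosity and temperature quoted above rather than any re-fit of the spectra) reproduces the ∼10^11 cm scale:

```python
import math

SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg s^-1 cm^-2 K^-4]
K_B_EV = 8.617e-5     # Boltzmann constant [eV K^-1]
R_SUN = 6.957e10      # solar radius [cm]

L_peak = 1.8e43       # peak bolometric luminosity quoted above [erg s^-1]
kT_eV = 109.0         # peak blackbody temperature quoted above [eV]

T = kT_eV / K_B_EV
R = math.sqrt(L_peak / (4.0 * math.pi * SIGMA_SB * T**4))
print(f"R = {R:.1e} cm = {R / R_SUN:.1f} R_sun")   # ~1e11 cm, of order 1 R_sun
```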
In the quiescent phase, spectral information could only be retrieved by stacking the data from Swift/XRT. This can be well modeled by a color-corrected disk model with maximum disk temperature kT_ p≈67±10 eV (Methods, Extended Data Fig. <ref>).
All of the above properties are consistent with the six known QPE sources repeating on timescales of hours to days <cit.> and the longer duration Swift J0230+28 <cit.>.
This includes the luminosity and temperature, both in eruption and quiescence, and the lack of any detected optical/UV variability (Extended Data Fig. <ref>). The `hysteresis loop' in the luminosity-temperature plane (Fig. <ref>c) is characteristic of QPE emission <cit.>. The recurrence time and eruption duration are towards the higher ends of their respective distributions (though well below Swift J0230+28), but their ratio of ≈0.2 is consistent with the duty cycle of 0.24±0.13 exhibited by other QPEs <cit.> (Fig. <ref>). Performing our own correlation analysis on duration versus recurrence time for the QPE population including AT2019qiz yields strong Bayesian evidence in favour of a correlation, with a mean duty cycle of 0.22^+0.11_-0.04 (Methods). The ≈15% variation in recurrence times in AT2019qiz is also similar to known QPEs. The variations in AT2019qiz appear somewhat irregular, but with a limited number of cycles we cannot establish robustly at this point whether or not there is an underlying pattern of alternating long and short recurrence times, as seen in some of the other QPE sources <cit.>.
We conclude that AT2019qiz is now exhibiting X-ray QPEs fully consistent with the known source population, and with an average recurrence time T_ QPE≈48 hours. Our result confirms theoretical predictions that at least some QPEs arise in accretion disks created by TDEs <cit.> (although we note that QPEs have also been discovered in galaxies with evidence for active nuclei <cit.>). It also increases confidence in the candidate QPEs following the TDEs AT2019vcb <cit.> and XMMSL1 J0249 <cit.>, and the proposed X-ray TDE in the QPE source GSN 069 <cit.>. We are unable to constrain when QPEs began in AT2019qiz, though data in the two months around optical peak exhibit no QPEs. XRT data obtained on 2022-01-13 (∼840 days after disruption) over a duration of 25 hours show the possible beginning of an eruption, but the duration of the observation is too short to confirm this (Methods; Extended Data Fig. <ref>).
Our HST imaging shows UV emission (effective wavelength 2357 Å) coincident with the nucleus of the host galaxy. At this distance the luminosity is ν L_ν=3.2×10^41 erg s^-1. This source is unresolved, indicating an angular size ≲0.08 arcseconds or 25 pc (Extended Data Fig. <ref>). The luminosity is consistent with a TDE accretion disk <cit.>, but not with a nuclear star cluster (Methods). We also detect far-UV emission (1480 Å) with . We model the UV and quiescent X-ray light curves, alongside 3.5 years of optical measurements from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) and ZTF, using a time-dependent relativistic thin disk <cit.> (Fig. <ref>, Methods). We find a SMBH mass log_10 M_∙/M_⊙ = 6.3^+0.3_-0.2, and an initial disk mass M_ disk/M_⊙ = 0.06^+0.04_-0.03 (Extended Data Fig. <ref>).
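The ∼25 pc scale quoted for the unresolved UV source is the small-angle conversion of the 0.08 arcsecond limit at the distance of the host. A minimal check (illustrative only; it uses the luminosity distance quoted above, and at z=0.0151 the distinction between distance measures changes the answer only at the few per cent level) is:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

d_pc = 65.6e6          # distance quoted in the text [pc]
theta_arcsec = 0.08    # angular size limit of the unresolved UV source

size_pc = theta_arcsec * ARCSEC_TO_RAD * d_pc
print(f"{size_pc:.0f} pc")   # ~25 pc
```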
The properties of the disk help to constrain the cause of the QPE emission. In models of disk pressure instability, the variability amplitude and recurrence timescale depend on the SMBH mass and accretion rate. With the SMBH mass well constrained, the late-time disk luminosity is (4±1)% of the Eddington luminosity. At this Eddington ratio, radiation pressure instability models can explain the amplitude of the eruptions, but predict a recurrence time of ∼years <cit.>. A disk that is dominated by magnetic (rather than radiation) pressure is expected to be stable for this mass and Eddington ratio <cit.>. We therefore examine models that can explain QPE emission on hour-day timescales within a stable disk. These models involve another body (a star or compact object) already on a close, decaying orbit around the SMBH (an extreme mass-ratio inspiral, or EMRI), that interacts with the spreading disk from the TDE once the disk is sufficiently radially extended.
The disk size is well constrained in our analysis by the UV and optical emission (Fig. <ref>), and is several times larger than an orbit with a 48.4 hour period (radius ≈200GM_∙/c^2). Since any orbiting body with this period is expected to cross the disk, this provides a promising explanation for the observed QPEs. The same argument applies also to a 98.6 hour orbit, required if interactions occur twice per orbit (Fig. <ref>).
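The quoted orbital scale follows from Kepler's third law expressed in units of the gravitational radius r_g = GM_∙/c². A simple Newtonian estimate (illustrative only, using the black-hole mass from the disk fit above; a full treatment would include relativistic corrections) is:

```python
import math

G = 6.674e-8          # [cm^3 g^-1 s^-2]
C = 2.998e10          # [cm s^-1]
M_SUN = 1.989e33      # [g]

M_bh = 10**6.3 * M_SUN          # SMBH mass from the disk fit above
r_g = G * M_bh / C**2           # gravitational radius [cm]

for P_hr in (48.4, 98.6):       # interactions once or twice per orbit
    P = P_hr * 3600.0
    a = (G * M_bh * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    print(f"P = {P_hr} hr -> a = {a / r_g:.0f} r_g")
# ~200 r_g for the 48.4 hr period, ~320 r_g for 98.6 hr
```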
The luminosity in this model can be produced by the ejection of shocked disk material <cit.>, shock breakout within the disk <cit.>, or a temporarily enhanced accretion rate <cit.>. The compact emitting radius and its expansion during the eruptions may favour the first of these mechanisms. As the density of expanding ejecta decreases, we would expect the photosphere (the surface of the optically thick region) eventually to recede, consistent with our findings in Fig. <ref>d.
In the simplest case of an EMRI crossing the disk twice per elliptical orbit, recurrence times would exhibit an alternating long-short pattern, as seen in a subset of the known QPE sources <cit.>. In the EMRI model, more complex timing behaviour <cit.> can be caused by relativistic precession of the disk if its rotational axis is misaligned with that of the SMBH <cit.>.
Significant precession over the course of a few cycles in AT2019qiz would require a dimensionless SMBH spin a_∙≳0.5-0.7; however, such a large spin would tend to align the disk and damp precession in ≪1000 days (Methods).
Changing gas dynamics following star-disk collisions has recently been proposed as an alternative way to explain QPE timing variations <cit.>. Continuing high-cadence observations of AT2019qiz will be required to better constrain the nature of its timing variations and enable more detailed comparisons to QPE models.
The serendipitous discovery of QPEs in the TDE AT2019qiz suggests that QPEs following TDEs may be common.
We find that the long-term accretion disk properties in AT2019qiz are consistent with the star-disk interaction model for QPEs, indicating that the fraction of TDEs with QPEs can be used to constrain the rate of EMRIs, an important goal for future gravitational wave detectors <cit.>.
The latest observational estimates of the QPE rate <cit.> are about one-tenth of the TDE rate <cit.>, consistent with recent theoretical predictions for the formation rate and lifetimes of EMRIs <cit.>.
The QPEs in AT2019qiz show that long-term, high-cadence X-ray follow-up of optical TDEs will be a powerful tool for future QPE discovery, without the need for wide-field X-ray time-domain surveys, providing a path to measure the EMRI rate directly through electromagnetic observations.
§ METHODS
§ OBSERVATIONS AND DATA ANALYSIS
§.§ X-ray data
§.§.§ Chandra
We downloaded processed images and event files and associated calibration data from the Chandra archive. Analysis was performed using ciao (version 4.16) <cit.> and CALDB version 4.11.0. We checked for pileup using the pileup_map task, finding a pileup fraction of ≈1% only for the central 4 pixels of the middle exposure. Therefore pileup has negligible impact on our analysis. Count rates were extracted using the srcflux task. We used a 2 arcsecond (4 pixel) circular radius and the default PSF model. The background was estimated using an annular region with inner and outer radii of 15 and 60 arcseconds centered on AT2019qiz. This excludes other point sources including the south-east source (see below). The ciao srcflux task includes the Bayesian Gregory-Loredo (G-L) algorithm <cit.> to determine the optimal number of bins for investigating a time-varying (or more formally, periodic) signal. The algorithm provides an odds ratio for variability (2.5 for AT2019qiz) and a light curve with the number of bins that maximises this odds ratio. None of the other five sources in Extended Data Fig. <ref> show an odds ratio >1.
We extract the spectrum both in eruption and quiescence (see below) using the specextract task. The spectrum of the eruption is soft, and can be reasonably fit with a blackbody of ≈ 100 eV. We perform a more detailed spectral analysis of using the later eruptions and quiescent-phase data from instruments with greater sensitivity to softer (0.3-0.7 keV) X-rays (sections <ref>, <ref>).
§.§.§ The nature of the SE X-ray source
The images show a nearby source ≈7 arcsecs to the southeast (labeled `SE source'; Fig. <ref>). It overlaps with the point-spread function of in all instruments other than . We extracted individual X-ray (0.5-7.0 keV) spectra from all three obsIDs to characterize the SE source. We perform spectral analysis with the Bayesian X-ray Analysis software (BXA) version 4.0.7 <cit.>, which connects the nested sampling algorithm UltraNest <cit.> with the fitting environment version 12.13.0c <cit.>, in its Python version . To improve the quality of the spectrum, we jointly fit all 3 obsIDs. The source can be fit with a simple power-law model with foreground absorption (tbabs×cflux(pow)) and is consistent with being constant over all three obsIDs. The neutral column density was fixed at the Milky Way value of 6.6×10^20 cm^-2. The 0.5-3.0 keV flux in the model is 2.1^+1.6_-0.9× 10^-14 erg s^-1 cm^-2 (90% posterior), and the photon index of the power-law is Γ=1.8 ± 0.5 (90% posterior). The fit is shown in Extended Data Fig. <ref>a.
§.§.§ /XRT and the quiescent spectrum of
We obtained Target of Opportunity Time to follow up with the X-Ray Telescope (XRT) on-board the Neil Gehrels Observatory (). 11 observations were obtained from 2024-03-12 through 2024-03-14, with a typical exposure time of ≈1200 s per visit and cadence of 4.5 hours. We clearly detect one eruption in the new data (Fig. <ref>). We also re-analysed all previous XRT data for this source obtained under previous programs, using the online tools available through the UK Science Data Centre <cit.>.
Due to the better sensitivity at soft energies compared to , we are able to model the underlying disk spectrum using the XRT observations during the quiescent phase. For this we use a color-corrected thermal disk model (tdediscspec) <cit.>, to be consistent with the full spectral energy distribution fit (<ref>).
Given the larger PSF of XRT, we simultaneously model the and the SE source contributions to the total spectrum. We use the model tbabs×(zashift(tdediscspec) + cflux(pow)), where zashift(tdediscspec) is the contribution from and cflux(pow) is the contribution from the SE source. The fit does not require a redshifted absorption component. We employ and . For the disk parameters (i.e. ) we assume flat priors; however, for the SE source we use the posteriors from fitting its spatially resolved spectrum (<ref>) as the priors. Extended Data Fig. <ref>b shows their individual contributions to the observed spectrum, confirming dominates at energies below ≃ 1.0 keV. The posteriors of the fit indicate a peak disk temperature kT_p=67 ± 10 eV (90% posterior), in agreement with the bulk TDE population <cit.>.
§.§.§ Archival data from /XRT
The X-ray spectrum of observed by /XRT in 2019-2020 was reported to be hard <cit.>, suggesting a possible contribution from the SE source. To test this, we fit the combined spectrum (MJD 58714 to 59000) with the same power-law plus disk model. We again use our power-law fit posteriors for the SE source from as a prior in , and this time fix the temperature of the disk component while letting its flux vary freely. The early-time XRT spectrum is entirely consistent with the SE source, with no statistically significant contribution from the disk component (Extended Data Fig. <ref>c). This results in a 3σ upper limit on the flux (0.3-1.0 keV) from at early times of ≤ 1.4 × 10^-14 erg s^-1 cm^-2, or a luminosity ≤ 7.2 × 10^39 erg s^-1.
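The flux-to-luminosity conversion behind this limit is a standard luminosity-distance calculation; a sketch assuming a flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3), since the exact parameters are not stated in this section:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)   # assumed cosmology

z = 0.0151
d_L = cosmo.luminosity_distance(z).to(u.cm)

flux_limit = 1.4e-14 * u.erg / u.s / u.cm**2      # 3-sigma upper limit, 0.3-1.0 keV
L_limit = (4 * np.pi * d_L**2 * flux_limit).to(u.erg / u.s)
print(f"L(0.3-1.0 keV) < {L_limit:.1e}")          # of order 7e39 erg/s
```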
In contrast, is brighter and detected at high significance in data from 2022 onwards, with a spectrum dominated by the thermal component <cit.>.
's luminosity measured during all quiescent phases with XRT and is ≈10^41 , more than an order of magnitude fainter than the eruptions. Extended Data Fig. <ref> shows the observation from 2022 in bins of 5 ks. The final bin shows an increase in flux, but the temporal baseline is too short to confirm or rule out that this represents the onset of a QPE (see also Fig. <ref>). The spectral fit from Ref. <cit.> is consistent with a blackbody with kT_BB=130±10 eV, dominated by the final bin. We use the blackbody spectrum to calculate the luminosity in the final bin, and exclude this bin from the disk model fit in Fig. <ref>a. We stack the remaining counts in a single bin and compute the quiescent luminosity using the fit from Extended Data Fig. <ref>.
§.§.§
The Neutron Star Interior Composition Explorer () <cit.> observed in two distinct campaigns, first at early times (around optical peak) from 2019-09-25 to 2019-11-05 and another at late times (∼ 1600 days after optical peak) from 2024-02-29 to 2024-03-09.
The cleaned events lists were extracted using the standard Data Analysis Software (HEASoft 6.33.2) tasks using the following filters: nicersaafilt =YES, saafilt =NO, trackfilt =YES, ang_dist =0.015, st_valid =YES, cor_range =“*-*”, min_fpm=38, underonly_range= 0-80, overonly_range=“0.0-1.0”, overonly_expr=“1.52*COR SAX**(-0.633)”, elv =30 and br_earth=40. The entire dataset was acquired during orbit night time and hence the daytime optical light leak (<https://heasarc.gsfc.nasa.gov/docs/nicer/data_analysis/nicer_analysis_tips.html#lightleakincrease>) does not apply to our data analysis. The latest calibration release xti20240206 (06 February 2024) was used. Light curves in the 0.3-1.0 keV range were extracted using the task with a time bin size of 100 seconds and the SCORPEON background model.
The data obtained in the first campaign show no evidence for QPEs. Although the cadence is lower than that of the late-time data, it should be sufficient to detect QPEs occurring with the same frequency and duration as at late times, with a probability of detecting no QPEs of ≈0.02 (using binomial statistics with a 20% duty cycle). We can therefore likely rule out QPEs within the first ≈ 2 months after TDE fallback commenced (estimated to have occurred around 2019-09-11 <cit.>). However, we note that one would not expect QPEs during this phase in any model, as was found to have an extended debris atmosphere <cit.>, which remained optically thick to X-rays until much later <cit.>.
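The quoted non-detection probability follows from simple binomial statistics; a sketch assuming the 20% duty cycle and an illustrative number of independent early-time snapshots (the actual number of visits is not listed in this section, so it is chosen here to reproduce the quoted value):

```python
from scipy.stats import binom

duty_cycle = 0.20     # fraction of time the source spends in eruption
n_visits = 17         # illustrative number of independent early-time visits

p_no_detection = binom.pmf(0, n_visits, duty_cycle)
print(f"P(no QPE caught) = {p_no_detection:.3f}")   # ~0.02 for ~17 visits
```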
During the second observing campaign, we clearly detect QPEs.
The field of view of is shown in Extended Data Fig. <ref>, overlaid on the image. All of the sources detected by have intensities (at energies less than 1 keV) that are more than a factor of 10 below the measured peak of the QPE. Any contributions from these sources to the spectra are further diminished by their offset angles from the centre of the field. We conclude that the counts during eruptions are completely dominated by .
The six consecutive eruptions detected by were modeled using a skewed Gaussian fit to each peak (Extended Data Fig. <ref>). We measure rest-frame delay times of 39.3±0.3, 56.3±0.3, 42.1±0.3, 51.2±0.2, and 53.5±0.2 hours between successive eruptions.
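A minimal version of this per-eruption timing fit might use a skew-normal profile on top of a constant quiescent rate; the parameterization below is an illustrative stand-in for the exact model, shown with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def eruption_profile(t, amp, loc, scale, alpha, base):
    """Skewed Gaussian eruption on top of a constant quiescent count rate."""
    return base + amp * skewnorm.pdf(t, alpha, loc=loc, scale=scale)

def fit_eruption(t, rate, rate_err, guess):
    popt, pcov = curve_fit(eruption_profile, t, rate, p0=guess,
                           sigma=rate_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))

# Synthetic light-curve segment (time in hours relative to an arbitrary reference)
t = np.linspace(-10, 10, 200)
truth = eruption_profile(t, amp=5.0, loc=-1.0, scale=2.0, alpha=3.0, base=0.1)
rate = truth + np.random.default_rng(3).normal(0, 0.05, t.size)
popt, perr = fit_eruption(t, rate, 0.05 * np.ones_like(t), guess=(4, 0, 2, 1, 0.1))
print(f"location parameter = {popt[1]:.2f} +/- {perr[1]:.2f} hr")
```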
Given the high count rate and good coverage, we extracted time-resolved X-ray spectra from the second eruption (Fig. <ref>) in the 0.3-0.9 keV band. We created Good Time Intervals (GTIs) with the for four intervals representing the Rise, Peak and Decay (two phases) of the eruption. We extracted these spectra using task, and produced SCORPEON background spectra in 'file mode' (bkgmodeltype=scorpeon, bkgformat=file) for each of the four GTIs. We simultaneously fit the four spectra using and , assuming the model tbabs×zashift(bbody). We fixed the redshift to z=0.0151 and included foreground absorption, with the neutral hydrogen column density fixed to n_H=6.6×10^20 cm^-2 <cit.>. We initially included a redshifted absorber, but the model preferred zero contribution from this component, so we excluded it for simplicity. The full posteriors of the parameters are shown in Extended Data Fig. <ref>.
§.§.§ /SXT
We observed with <cit.> for four days starting on 2024-03-12 UT using the Soft X-ray Telescope (SXT) <cit.> and the Ultra-Violet Imaging Telescope (UVIT) <cit.>. We used the level2 SXT data processed at the Payload Operation Center using sxtpipeline v1.5. We merged the orbit-wise level2 data using SXTMerger.jl. We extracted the source in 200 s bins using a circular region of 12 arcmin. The broad PSF of the SXT does not leave any source-free regions for simultaneous background measurement. However, the background is low (0.025±0.002 counts s^-1) and steady. As the quiescent flux measured by is below the SXT detection limit, we take this count rate as our background estimate and subtract it from the light curve. SXT detected one eruption (MJD 60383.548).
§.§ Optical/UV Observations
§.§.§
We observed using on 2023-12-21 UT (MJD 60299.55), obtaining one orbit with the Wide-Field Camera 3 (WFC3) UVIS channel in the F225W band. We downloaded the reduced, drizzled and charge-transfer corrected image from the archive. We clearly detect a UV source coincident with the nucleus of the host galaxy.
We verify this source is consistent with a point source both by comparing the profile to other point sources in the image using the RadialProfile task in photutils, and by confirming that the fraction of counts within apertures of 3 and 10 pixels are consistent with published encircled energy fractions in the UVIS documentation.
We perform aperture photometry using a 10 pixel (0.396 arcsecond) circular aperture, measuring the galaxy background per square arcsecond using a circular annulus between 20-40 pixels and subtracting this from the source photometry. Although we cannot measure the galaxy light at the precise position of , having no UV images free from TDE light, the estimated background within our aperture is <2% of the transient flux, so our results are not sensitive to this approximation. We correct to an infinite aperture using the encircled energy fraction of 85.8% recommended for F225W. The zeropoint is derived from the image header, including a chip-dependent flux correction. We measure a final magnitude of 20.63±0.03 (AB).
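A condensed version of this measurement with photutils is sketched below; the file name, pixel position and zeropoint are placeholders, while the aperture radii, background annulus and 85.8% encircled-energy correction follow the values quoted above:

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

# Placeholders: image file, source pixel position and AB zeropoint from the header
data = fits.getdata("f225w_drz.fits")
position = [(512.0, 512.0)]
ZP_AB = 24.0

src_ap = CircularAperture(position, r=10)              # 10 pix = 0.396" source aperture
bkg_ann = CircularAnnulus(position, r_in=20, r_out=40)

# Galaxy background per pixel from the annulus, subtracted from the source sum
bkg_per_pix = aperture_photometry(data, bkg_ann)["aperture_sum"][0] / bkg_ann.area
net = aperture_photometry(data, src_ap)["aperture_sum"][0] - bkg_per_pix * src_ap.area

# Correct to an infinite aperture (85.8% encircled energy) and convert to AB magnitude
net /= 0.858
mag_ab = -2.5 * np.log10(net) + ZP_AB
print(f"m(F225W) = {mag_ab:.2f} AB")
```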
While the angular scale of ∼ 25 pc is not small enough to rule out a nuclear star cluster (NSC), the UV source is an order of magnitude brighter than known NSCs <cit.>. Moreover, NSCs are generally red <cit.> and many magnitudes fainter than their host galaxies in bluer bands. The magnitude of the source we detect is comparable to the total UV magnitude of the galaxy <cit.>. An unresolved nuclear source was also detected in the QPE source GSN 069 <cit.>.
§.§.§ Ground-based photometry
Numerous observations of this galaxy have been obtained by all-sky optical surveys both before and after the TDE. The optical emission was independently detected by ZTF <cit.>, the Asteroid Terrestrial Impact Last Alert System (ATLAS) <cit.>, the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) <cit.> and the Gaia satellite <cit.>.
Pan-STARRS reaches a typical limiting magnitude of ∼22 in the broad w filter (effective wavelength of 6286 Å) in each 45 s exposure. All observations are processed and photometrically calibrated with the PS image processing pipeline <cit.>. We downloaded and manually vetted all w-band observations of since September 2019, and in most cases confirm a clean subtraction of the host galaxy light. We also retrieved ZTF forced photometry <cit.> in the r-band (with a similar effective wavelength of 6417 Å). Due to the shallower limiting magnitude of ∼20.5, we stack the fluxes in 7 day bins. Both surveys clearly detect an ongoing plateau, persisting for >1000 days with a luminosity ν L_ν∼7×10^40 . All Pan-STARRS and ZTF photometry was measured after subtraction of pre-TDE reference images using dedicated pipelines, and hence include only light from .
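The 7-day stacking of the forced-photometry fluxes can be reproduced with a short numpy routine; inverse-variance weighting is one reasonable choice (the exact weighting used is not specified here), and the inputs below are synthetic:

```python
import numpy as np

def stack_fluxes(mjd, flux, flux_err, bin_days=7.0):
    """Inverse-variance-weighted mean flux in fixed-width time bins."""
    edges = np.arange(mjd.min(), mjd.max() + bin_days, bin_days)
    idx = np.digitize(mjd, edges) - 1
    rows = []
    for i in range(len(edges) - 1):
        sel = idx == i
        if not sel.any():
            continue
        w = 1.0 / flux_err[sel] ** 2
        rows.append((edges[i] + 0.5 * bin_days,            # bin centre (MJD)
                     np.sum(w * flux[sel]) / np.sum(w),    # weighted mean flux
                     1.0 / np.sqrt(np.sum(w))))            # propagated uncertainty
    return np.array(rows)

# Synthetic single-epoch fluxes (arbitrary units)
rng = np.random.default_rng(7)
mjd = np.sort(rng.uniform(59580, 59680, 60))
flux = rng.normal(20.0, 5.0, mjd.size)
print(stack_fluxes(mjd, flux, np.full(mjd.size, 5.0))[:3])
```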
While the optical light curves show scatter consistent with noise, they do not appear to exhibit the intense flaring behaviour seen in the X-rays. An order-of-magnitude flare in the optical would easily be detected even in the unbinned ZTF photometry. Assuming a duty cycle of 20%, and conservatively restricting to data since January 2022 (when we first see signs of day-timescale X-ray variability with XRT), the probability of never detecting an eruption simply due to gaps in cadence is ≲10^-13.
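The quoted probability can be checked with a small Monte Carlo that slides a strictly periodic eruption pattern (20% duty cycle) in phase and asks how often a set of visit times misses every eruption window; the visit list here is synthetic, whereas the real calculation uses the actual survey epochs:

```python
import numpy as np

rng = np.random.default_rng(0)

period = 48.4 / 24.0                          # recurrence time in days
duration = 0.2 * period                       # eruption duration for a 20% duty cycle
visits = np.sort(rng.uniform(0, 730, 200))    # synthetic ~2 yr of survey epochs

n_trials = 100_000
missed = 0
for _ in range(n_trials):
    phase = rng.uniform(0, period)
    # A visit catches an eruption if it falls within `duration` of a cycle start
    if not (((visits - phase) % period) < duration).any():
        missed += 1

print(f"fraction of random phases missing all eruptions: {missed / n_trials:.1e}")
# Analytically ~(1 - 0.2)**N for N independent epochs, i.e. <1e-13 for N >~ 135.
```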
To test for optical variability on shorter timescales, we conducted targeted observations with the 1.8 m Pan-STARRS2 telescope in Hawaii on 2024-02-11, with the IO:O instrument on the 2.0 m Liverpool Telescope <cit.> (LT) in La Palma on 2024-02-15, and with ULTRACAM <cit.> on the 3.5 m New Technology Telescope at the European Southern Observatory (La Silla) in Chile on 2024-02-10. Pan-STARRS images were obtained in the w band (50×200 s exposures) and LT in the r band (32×120 s), while ULTRACAM observed simultaneously in u_s,g_s,r_s bands <cit.> (384×20 s, with only 24 ms between exposures). All images were reduced through standard facility pipelines. For Pan-STARRS, this included subtraction of a pre-TDE reference image and forced photometry at the position of . In the case of LT and ULTRACAM, we performed photometry using psf <cit.>, an open-source python wrapper for photutils and other image analysis routines. We excluded 17 ULTRACAM images affected by poor seeing. We attempted manual subtraction of the Pan-STARRS reference images using psf, however we found that the additional noise introduced by the subtraction was larger than any detectable variability. As shown in Extended Data Fig. <ref>, there is no strong evidence for variability on timescales ∼hours.
§.§.§ /UVOT
UV observations were taken with /UVOT in the uvm2 filter contemporaneously with the XRT observations. We used the package to measure the UV photometry, using an aperture of 12”. We subtracted the host galaxy contribution by fitting archival photometry data with stellar population synthesis models using Prospector <cit.>. This standard procedure has been used to analyse previous UVOT observations of TDEs <cit.>. We apply Galactic extinction correction to all bands using E(B-V) value of 0.094 <cit.>.
The UVOT photometry is shown in Extended Data Fig. <ref>. Although lacking the resolution of to separate the central point source from the host light, the mean measured magnitude of 20.1 is ∼0.5 mag brighter than the host level estimated by SED modeling <cit.>. The individual measurements exhibit root-mean-square variation of 0.27 mag (Extended Data Fig. <ref>), possibly indicating variability that would further exclude a nuclear star cluster. The timing of the XRT QPE is marked, coinciding with a possible (but not statistically significant) dip in UV flux as seen in the QPE candidate XMMLS1 J0249 <cit.>.
§.§.§ /UVIT
We observed AT2019qiz with the UV Imaging Telescope (UVIT) using the broad filter CaF2 (F148W) <cit.>. We processed the level1 data with the CCDLAB pipeline <cit.>, and generated orbit-wise images, detecting a bright nuclear source. We performed aperture photometry using UVITTools.jl package and the latest calibration <cit.>, in a circular region of 20 pixels (8.2 arcsec). We also extracted background counts from a source-free area of the image. The background-corrected count rate in the merged image corresponds to a flux density f_λ=3.16±0.97× 10^-16 erg cm^-2 s^-1 Å^-1 or magnitude m = 20.49±0.03 (AB). We found no statistically significant FUV variability between the orbit-wise images. We do not attempt to remove host galaxy flux for the UVIT data, as the field has not been covered by previous FUV surveys. SED modelling would require a large extrapolation. Regardless, we expect that the galaxy flux should be negligible at these wavelengths <cit.>.
§ ANALYSIS
§.§ Assessing variability
We perform two checks that the X-ray variability corresponds to QPEs rather than random variation. First we compare to physically-motivated models of stochastic variability. Ref. <cit.> demonstrated a mechanism to produce order-of-magnitude X-ray variability through Wien-tail amplification of accretion disk perturbations. Their Figure 3 shows the X-ray light curve of a model with a SMBH mass of 2×10^6 , consistent with . The light curves are of a visibly different character to our data, with random variability rather than flares of consistent duration, and no obvious `quiescent' level. We ran additional simulations using their model, and never found a light curve segment resembling .
We also take a model-agnostic approach and assume the null hypothesis that the times of the X-ray peaks are random. Drawing a list of 10^5 delay times from a flat probability distribution between 0-60 hours, and examining every consecutive sequence of eight, we `measure' the standard deviation in delay times to be ≤15% of the mean in only ≲0.1% of trials. This is not sensitive to where we place the upper and lower bounds of the distribution. Therefore we can exclude random peak times at >3σ confidence.
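This null-hypothesis test is straightforward to reproduce; a sketch of the calculation described above, with the 0-60 hr bounds and the 15% regularity threshold taken from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

delays = rng.uniform(0, 60, size=100_000)     # random delay times in hours

# std/mean for every consecutive sequence of eight delays
windows = np.lib.stride_tricks.sliding_window_view(delays, 8)
ratio = windows.std(axis=1) / windows.mean(axis=1)

frac = np.mean(ratio <= 0.15)
print(f"fraction of sequences as regular as observed: {frac:.1e}")  # ~1e-3 or below
```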
§.§ QPE duration-recurrence time correlation
The data in Fig. <ref>a show an apparent correlation between the mean duration and mean recurrence time of QPEs from a given source <cit.>. An equivalent statement is that QPEs appear to show a constant duty cycle across the population, with previous work indicating a duty cycle of 0.24±0.13 <cit.>. We reanalyse this correlation including by performing Bayesian regression with a linear model T_duration = α T_recurrence + β. We find α = 0.22^+0.11_-0.04 (95% credible range), consistent with previous findings <cit.>. Comparing this model to the null hypothesis (α=0) we find a change in the Bayesian Information Criterion Δ BIC ≈ 50, indicating a strong preference for a positive linear correlation over the null hypothesis of no correlation.
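The regression and the ΔBIC comparison can be sketched as follows with a Gaussian likelihood; the per-source durations and recurrence times below are rough placeholders for the compiled QPE sample, not the actual measurements:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder sample: mean recurrence time and mean duration per QPE source (hours)
t_rec = np.array([9.0, 13.0, 18.5, 528.0, 48.4])
t_dur = np.array([1.0, 4.0, 6.5, 100.0, 10.0])
t_err = 0.2 * t_dur                                  # assumed uncertainties

def neg_log_like(params, slope_free=True):
    alpha, beta = params if slope_free else (0.0, params[0])
    model = alpha * t_rec + beta
    return 0.5 * np.sum(((t_dur - model) / t_err) ** 2 + np.log(2 * np.pi * t_err ** 2))

fit1 = minimize(neg_log_like, x0=[0.2, 0.0], method="Nelder-Mead")             # linear model
fit0 = minimize(neg_log_like, x0=[10.0], args=(False,), method="Nelder-Mead")  # constant model

n = len(t_rec)
bic1 = 2 * fit1.fun + 2 * np.log(n)   # two free parameters
bic0 = 2 * fit0.fun + 1 * np.log(n)   # one free parameter
print(f"slope = {fit1.x[0]:.2f},  Delta BIC = {bic0 - bic1:.1f}")
```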
§.§ Disk modeling
We use the time dependent relativistic thin disk model developed in Refs. <cit.>. This computes the spectrum of an evolving accretion flow, produced at early times by the circularisation of some fraction of the TDE stellar debris. To generate light curves we follow the procedure of Ref. <cit.> (their Figure 2). The important input parameters are the mass and spin of the SMBH, the initial disk mass, the disk-observer inclination angle, and the turbulent evolutionary timescale. In addition, there are nuisance parameters relating to the initial surface density profile of the disk, which is generally unknown and has minimal effect on the late-time behaviour. As this initial condition is so poorly constrained, we simply consider an initial ring of material (as in Ref. <cit.>).
For each set of parameters {Θ}, we compute the total (log-)likelihood
L(Θ) = - ∑_ bands, i∑_ data, j(O_i, j - M_i, j)^2 /E_i, j^2 ,
where O_i, j, M_i, j and E_i, j are the observed flux, model flux and flux uncertainty of the j^ th data point in the i^ th band respectively. For the X-ray data we compute the integrated 0.3-1 keV flux using the best-fit models to the quiescent /XRT and data, while for optical/UV bands we compute the flux at the effective frequency of the band. We correct all data for foreground extinction/absorption <cit.>.
The early optical and UV observations do not probe direct emission from the accretion flow, either due to reprocessing <cit.> or shock emission from streams <cit.>. We add an early time component to model out this decay <cit.>, with functional form
L_ early = L_0 exp(-t/τ_ dec) ×B(ν, T)/B(ν_0, T) ,
where B(ν, T) is the Planck function, and ν_0 = 6 × 10^14 Hz is a reference frequency. We fit the amplitude L_0, temperature T and decay timescale τ_ dec in addition to the disk parameters. We only include data taken after the peak of the optical light curves.
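The early-time component has a simple closed form; a direct transcription using h and k_B from scipy.constants, with the Planck function written only up to a frequency-independent prefactor since it cancels in the ratio:

```python
import numpy as np
from scipy.constants import h, k as k_B     # SI values

def planck_shape(nu, T):
    """Planck function B_nu(T) up to a frequency-independent prefactor."""
    return nu**3 / np.expm1(h * nu / (k_B * T))

def early_component(t, nu, L0, T, tau_dec, nu0=6e14):
    """Exponentially decaying early-time luminosity with a blackbody colour term."""
    return L0 * np.exp(-t / tau_dec) * planck_shape(nu, T) / planck_shape(nu0, T)

# Example with assumed values: L0 = 1e43 erg/s, T = 2e4 K, tau_dec = 30 d, at 50 d post-peak
print(f"{early_component(50.0, 6.3e14, 1e43, 2.0e4, 30.0):.2e}")
```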
The fit was performed using Markov Chain Monte Carlo techniques, employing the emcee formalism <cit.>. To speed up computations, analytic solutions of the relativistic disk equations <cit.> were used. The model satisfactorily reproduces all data. The model X-ray light curve shows a slow rise, however this is completely unconstrained by data and is therefore very sensitive to the uncertain initial conditions of the simulation. After a few hundred days (by the time of the earliest X-ray data in Fig. <ref>), the disk has spread to large radii and is no longer sensitive to initial conditions. We present the posterior distributions of the physically relevant free parameters in Extended Data Fig. <ref>. The best-fitting SMBH mass is consistent with all other observational constraints.
We note that a dimensionless SMBH spin parameter a_∙>0 is favoured by the model (though see caveats below), with a peak in the posterior around a_∙∼0.4-0.5. This constraint originates from the relative amplitudes of the optical/UV and X-ray luminosities, as is highlighted in Extended Data Fig. <ref>. As the optical and UV light curves are well separated in frequency, the properties of the disk at scales r ≳ 20 r_g are tightly constrained. The amplitude of the X-ray luminosity is controlled by the temperature of the inner disk, close to the innermost stable circular orbit (ISCO). For a given large-scale structure, this radius is determined by a_∙.
Our disk model parameterizes the color correction factor f_ col in terms of the local disk temperature <cit.>, but our posteriors do not marginalize over its unknown uncertainty. Recognizing that modest uncertainties in f_ col lead to substantial uncertainties in spin (for non-maximal black hole spins) <cit.>, we do not claim a spin measurement here, but simply note that a modest spin is consistent with our data. The spin estimates in this model also assume a planar disk that is aligned with the SMBH spin, which is not true in the case of a precessing disk (see next section).
While the disk temperature profile (and therefore the location of the disk's outer edge) is tightly constrained from the multi-band late-time observations, it is well known that disk temperature constraints only probe the product W^r_ϕΣ, where W^r_ϕ is the turbulent stress and Σ is the surface mass density. As the functional form of the turbulent stress cannot be derived from first principles, and must be specified by hand, there is some uncertainty in the mid-disk density slope. Our choice of W^r_ϕ parameterisation is optimized for computational speed <cit.>, and is given by W^r_ϕ = w = constant. Rather than fit for w, we fit for the evolutionary timescale of the disc (which has a more obvious physical interpretation), given by t_ evol≡2√(GM_∙ r_0^3) / 9w. We emphasise that this uncertainty has no effect on our constraints on the size of the disk.
With this choice of parameterization for the turbulent stress, the disk density profile (Fig. <ref>) can be approximated as Σ∝ r^-ζ, with ζ = 1/2, for r=(2-600)GM_∙/c^2. The density slope is not very sensitive to modelling assumptions, with the (potentially) more physical radiation pressure dominated α-disk model having ζ = 3/4.
§.§ Precession timescales
If the SMBH is rotating, any orbit or disk that is misaligned with the spin axis will undergo Lense-Thirring precession. This is a possible cause of timing variations in QPEs <cit.>. Changes in QPE timing in are seen over the course of ≲8 observed cycles, which would require that the precession timescale is T_ prec∼ few× T_ QPE, where T_ QPE≈48.4 hr is the QPE recurrence time.
The precession timescale can be calculated following <cit.>:
T_\mathrm{prec} = \frac{8\pi G M_\bullet (1+2\zeta)}{c^3 (5-2\zeta)}\, \frac{r_\mathrm{out}^{5/2-\zeta}\, r_\mathrm{in}^{1/2+\zeta}\left[1-\left(r_\mathrm{in}/r_\mathrm{out}\right)^{5/2-\zeta}\right]}{a_\bullet\left[1-\left(r_\mathrm{in}/r_\mathrm{out}\right)^{1/2+\zeta}\right]},
where r_ in and r_ out are the inner and outer radii of the disk or orbit, in Schwarzschild units (see also <cit.>). We assume log(M_∙/M_⊙)=6.3, and investigate the plausible precession period for different values of a_∙.
The nodal precession timescale for an orbiting body can be estimated by calculating T_ prec at the orbital radius (setting R_ in≈ R_ out≈ R_ orb). For a_∙=0.1-0.9, this gives T_ prec,orbit≈ (10^3-10^4) × T_ QPE, independent of ζ. Therefore in the EMRI model, nodal precession is too slow to account for changes in QPE timing over a few orbits.
The precession timescale of the disk can be calculated by assuming it behaves as a rigid body with r_ in = 2GM_∙/c^2, r_ out = 600GM_∙/c^2 and a density slope ζ=1/2 from our disk model. We use the equation above to find T_ prec,disk≈ (70-200) × T_ QPE (for the same range of spins). With a steeper density profile having ζ=1, this would reduce to T_ prec,disk≈ (8-70) × T_ QPE (since more mass closer to the SMBH enables stronger precession).
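These numbers follow directly from the precession-timescale expression above; a sketch evaluating both the orbital and rigid-disk cases, assuming log(M_∙/M_⊙) = 6.3 and T_QPE = 48.4 hr, with radii supplied in units of GM/c² and converted to Schwarzschild units internally (the exact quoted ranges depend on which spin values are evaluated):

```python
import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33           # cgs units
M_bh = 10**6.3 * M_sun                                # assumed SMBH mass
t_g = G * M_bh / c**3                                 # GM/c^3 in seconds
T_qpe = 48.4 * 3600.0                                 # QPE recurrence time in seconds

def t_prec(r_in, r_out, a_spin, zeta=0.5):
    """Rigid-body Lense-Thirring precession timescale in seconds.

    r_in and r_out are given in units of GM/c^2 and converted to Schwarzschild
    units (2GM/c^2) inside the function, as assumed by the expression above.
    """
    ri, ro = r_in / 2.0, r_out / 2.0
    x = ri / ro
    pref = 8.0 * np.pi * t_g * (1.0 + 2.0 * zeta) / (5.0 - 2.0 * zeta)
    return pref * ro**(2.5 - zeta) * ri**(0.5 + zeta) * (1.0 - x**(2.5 - zeta)) \
           / (a_spin * (1.0 - x**(0.5 + zeta)))

for a in (0.1, 0.5, 0.9):
    orbit = t_prec(200.0, 200.001, a)        # r_in ~ r_out ~ r_orb for the EMRI orbit
    disk = t_prec(2.0, 600.0, a, zeta=0.5)   # rigid disk with the fitted density slope
    print(f"a = {a:.1f}: orbit ~ {orbit / T_qpe:.0f} T_QPE, disk ~ {disk / T_qpe:.0f} T_QPE")
```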
Therefore precession can explain detectable changes in QPE timing over the course of a few orbits only in the case of a rapidly spinning SMBH (a_∙≳ 0.5-0.7) and a steep disk density profile.
With these constraints, attributing the timing residuals primarily to disk precession becomes challenging. The larger the SMBH spin magnitude, the faster an initially inclined disk will come into alignment with the BH spin axis, damping precession on a timescale ≲100 days for a_∙>0.6 and M_∙∼10^6 <cit.>. To maintain precession for over 1000 days requires a spin a_∙≲0.2, in which case the precession is not fast enough to fully explain the timing variations in our data.
We also note that the disk inner radius used in our precession calculation was derived from a planar disk model. In a tilted disk around a spinning SMBH, the radius of the ISCO will differ from the equatorial case. Understanding the effect of disk precession in will likely require both continued monitoring to better understand the QPE timing structure, and a self-consistent model of an evolving and precessing disk that can explain both the multi-wavelength light curve and timing residuals.
§.§ Constraints on QPE models
Many models have been proposed to explain QPEs. Disk tearing due to Lense-Thirring precession has been suggested <cit.>. This effect has plausibly been detected in the TDE AT2020ocn <cit.>; however, its X-ray light curve did not resemble that of or those of other QPEs. As discussed above, it is also unclear whether strong precession will persist until such late times. The X-ray variability in AT2020ocn occurred only in the first months following the TDE.
Gravitational lensing of an accretion disk by a second SMBH in a tight binary could cause periodic X-ray peaks for the right inclination <cit.>. However, in the case of no signs of gravitational self-lensing were detected during the initial TDE. In this model a QPE magnification by a factor ≳10 requires an extremely edge-on view of the disk, which leads to a shorter duration of the QPE flares. This was already problematic for previous QPEs <cit.>, and is more so for the longer-duration flares in . Moreover, finding a TDE around a close SMBH binary within a very narrow range of viewing angles (≳ 89.5^∘) is very unlikely within the small sample of known TDEs, so a strong TDE-QPE connection is not expected in this model.
Limit-cycle instabilities are an appealing way to explain recurrent variability <cit.>, <cit.>. The recurrence timescale for disk pressure instabilities depends on whether the disk is dominated by radiation pressure or magnetic fields <cit.>, as well as the accretion rate. Our disk model, which is well constrained by the multi-wavelength data, gives an Eddington ratio Ṁ/Ṁ_Edd ≈ L/L_Edd = 0.04±0.01. Ref. <cit.> gives formulae to interpolate the recurrence time for radiation pressure instabilities, for a given amplitude relative to quiescence. We assume a peak-to-quiescence luminosity ratio of 60, though our analysis is not sensitive to this. Using either the prescription for intermediate-mass BHs (their equation 33) or SMBHs (their equation 34), we find a recurrence time of ∼ 5000 days.
In the magnetic case, we use equation 14 from Ref. <cit.>. Matching the observed recurrence time requires a dimensionless magnetic pressure scaling parameter p_0∼10. However, at this Eddington ratio the disk should be stable <cit.> if p_0 ≳1. This leaves no self-consistent solution in which magnetic pressure instabilities cause the QPEs in . The possibility of a long-short cycle in recurrence time, and the asymmetric profile of the eruptions <cit.>, also disfavour pressure instabilities. We also note that in disk instability models, the recurrence time of the instability correlates with SMBH mass. For the known QPEs, there is no apparent correlation in recurrence time with mass (Fig. <ref>).
The final class of models to explain QPEs involves an orbiting body (EMRI) either transferring mass to an accretion disk or colliding with it repeatedly <cit.>, <cit.>. Note that this is very unlikely to be the same star that was disrupted during the TDE: if a bound remnant survived the disruption, it is expected to be on a highly eccentric orbit with a much longer period than the QPEs <cit.>. The fundamental requirement for star-disk collisions to explain QPEs is that the disk is wider than the orbit of the EMRI. The size of the disk in is well constrained by our analysis, and the posteriors of our fit fully satisfy this requirement, at least in the case of a circular disk.
For an orbit with the QPE period to avoid intersecting the disk would require a disk ellipticity e>0.7 (assuming the semi-major axis of the disk is fixed) and an appropriately chosen orbital inclination. While some TDE spectra support a highly elliptical disk in the tens of days after disruption <cit.>, most can be explained with an approximately circular disk <cit.>. Simulations of TDE accretion disks show a high ellipticity in the days after disruption <cit.>, but shocks are expected to circularise the disk over the course of a few debris orbital periods <cit.> (days to weeks) whereas we observe QPEs on timescales of years after . An initially highly eccentric disk becomes only mildly elliptical (e∼0.6) on timescales of a few days <cit.>. Once significant fallback has ceased (before the plateau phase), no more eccentricity will be excited in the disk, while turbulence will act to further circularise it, so we expect the disk in will be circular to a good approximation.
The case of an EMRI interacting with a TDE disk was specifically predicted by Refs. <cit.>. The formation rate of EMRIs by the Hills mechanism is ∼10^-5 yr^-1 galaxy^-1, about one tenth of the TDE rate. Since the time for inspiral via gravitational wave emission (∼10^6 yr) is longer than the time between TDEs (∼10^4 yr), theory predicts that ≳1 in 10 TDEs could host an EMRI capable of producing QPEs <cit.>. This is consistent with recent observational constraints on the QPE rate <cit.>.
§ REFERENCES
Miniutti2019
authorMiniutti, G. et al.
titleNine-hour X-ray quasi-periodic eruptions from a low-mass black hole galactic nucleus.
journal volume573, pages381–384 (year2019).
Giustini2020
authorGiustini, M., authorMiniutti, G. & authorSaxton, R. D.
titleX-ray quasi-periodic eruptions from the galactic nucleus of RX J1301.9+2747.
journal volume636, pagesL2 (year2020).
Arcodia2021
authorArcodia, R. et al.
titleX-ray quasi-periodic eruptions from two previously quiescent galaxies.
journal volume592, pages704–707 (year2021).
Arcodia2024a
authorArcodia, R. et al.
titleThe more the merrier: SRG/eROSITA discovers two further galaxies showing X-ray quasi-periodic eruptions.
journal volume684, pagesA64 (year2024).
Guolo2024
authorGuolo, M. et al.
titleX-ray eruptions every 22 days from the nucleus of a nearby galaxy.
journal (year2024).
Pan2022
authorPan, X., authorLi, S.-L., authorCao, X., authorMiniutti, G. & authorGu, M.
titleA Disk Instability Model for the Quasi-periodic Eruptions of GSN 069.
journal volume928, pagesL18 (year2022).
Sniegowska2023
authorŚniegowska, M., authorGrzȩdzielski, M., authorCzerny, B. & authorJaniuk, A.
titleModified models of radiation pressure instability applied to 10, 10^5, and 10^7 M_ accreting black holes.
journal volume672, pagesA19 (year2023).
Kaur2023
authorKaur, K., authorStone, N. C. & authorGilbaum, S.
titleMagnetically dominated discs in tidal disruption events and quasi-periodic eruptions.
journal volume524, pages1269–1290 (year2023).
Dai2010
authorDai, L. J., authorFuerst, S. V. & authorBlandford, R.
titleQuasi-periodic flares from star-accretion-disc collisions.
journal volume402, pages1614–1624 (year2010).
Xian2021
authorXian, J., authorZhang, F., authorDou, L., authorHe, J. & authorShu, X.
titleX-Ray Quasi-periodic Eruptions Driven by Star-Disk Collisions: Application to GSN069 and Probing the Spin of Massive Black Holes.
journal volume921, pagesL32 (year2021).
Linial2023
authorLinial, I. & authorMetzger, B. D.
titleEMRI + TDE = QPE: Periodic X-Ray Flares from Star-Disk Collisions in Galactic Nuclei.
journal volume957, pages34 (year2023).
Miniutti2023
authorMiniutti, G. et al.
titleRepeating tidal disruptions in GSN 069: Long-term evolution and constraints on quasi-periodic eruptions' models.
journal volume670, pagesA93 (year2023).
Chakraborty2021
authorChakraborty, J. et al.
titlePossible X-Ray Quasi-periodic Eruptions in a Tidal Disruption Event Candidate.
journal volume921, pagesL40 (year2021).
Quintin2023
authorQuintin, E. et al.
titleTormund's return: Hints of quasi-periodic eruption features from a recent optical tidal disruption event.
journal volume675, pagesA152 (year2023).
Wevers2022
authorWevers, T., authorPasham, D. R., authorJalan, P., authorRakshit, S. & authorArcodia, R.
titleHost galaxy properties of quasi-periodically erupting X-ray sources.
journal volume659, pagesL2 (year2022).
Nicholl2020
authorNicholl, M. et al.
titleAn outflow powers the optical rise of the nearby, fast-evolving tidal disruption event AT2019qiz.
journal volume499, pages482–504 (year2020).
Hung2021
authorHung, T. et al.
titleDiscovery of a Fast Iron Low-ionization Outflow in the Early Evolution of the Nearby Tidal Disruption Event AT 2019qiz.
journal volume917, pages9 (year2021).
Patra2022
authorPatra, K. C. et al.
titleSpectropolarimetry of the tidal disruption event AT 2019qiz: a quasi-spherical reprocessing layer.
journal volume515, pages138–145 (year2022).
Mummery2024
authorMummery, A. et al.
titleFundamental scaling relationships revealed in the optical light curves of tidal disruption events.
journal volume527, pages2452–2489 (year2024).
vanVelzen2019
authorvan Velzen, S. et al.
titleLate-time UV Observations of Tidal Disruption Flares Reveal Unobscured, Compact Accretion Disks.
journal volume878, pages82 (year2019).
Short2023
authorShort, P. et al.
titleDelayed appearance and evolution of coronal lines in the TDE AT2019qiz.
journal volume525, pages1568–1587 (year2023).
Evans2023
authorEvans, P. A. et al.
titleMonthly quasi-periodic eruptions from repeated stellar disruption by a massive black hole.
journal volume7, pages1368–1375 (year2023).
Arcodia2022
authorArcodia, R. et al.
titleThe complex time and energy evolution of quasi-periodic eruptions in eRO-QPE1.
journal volume662, pagesA49 (year2022).
Arcodia2024
authorArcodia, R. et al.
titleCosmic hide and seek: the volumetric rate of X-ray quasi-periodic eruptions.
journalarXiv e-prints pagesarXiv:2403.17059 (year2024).
Mummery2020
authorMummery, A. & authorBalbus, S. A.
titleThe spectral evolution of disc dominated tidal disruption events.
journal volume492, pages5655–5674 (year2020).
Grzdzielski2017
authorGrzędzielski, M., authorJaniuk, A., authorCzerny, B. & authorWu, Q.
titleModified viscosity in accretion disks. Application to Galactic black hole binaries, intermediate mass black holes, and active galactic nuclei.
journal volume603, pagesA110 (year2017).
Tagawa2023
authorTagawa, H. & authorHaiman, Z.
titleFlares from stars crossing active galactic nucleus discs on low-inclination orbits.
journal volume526, pages69–79 (year2023).
Sukova2021
authorSuková, P., authorZajaček, M., authorWitzany, V. & authorKaras, V.
titleStellar Transits across a Magnetized Accretion Torus as a Mechanism for Plasmoid Ejection.
journal volume917, pages43 (year2021).
Stone2012
authorStone, N. & authorLoeb, A.
titleObserving Lense-Thirring Precession in Tidal Disruption Flares.
journal volume108, pages061302 (year2012).
Franchini2023
authorFranchini, A. et al.
titleQuasi-periodic eruptions from impacts between the secondary and a rigidly precessing accretion disc in an extreme mass-ratio inspiral system.
journal volume675, pagesA100 (year2023).
Yao2024
authorYao, P. Z., authorQuataert, E., authorJiang, Y.-F., authorLu, W. & authorWhite, C. J.
titleStar-Disk Collisions: Implications for QPEs and Other Transients Near Supermassive Black Holes.
journalarXiv e-prints pagesarXiv:2407.14578 (year2024).
Babak2017
authorBabak, S. et al.
titleScience with the space-based interferometer LISA. V. Extreme mass-ratio inspirals.
journal volume95, pages103012 (year2017).
Sazonov2021
authorSazonov, S. et al.
titleFirst tidal disruption events discovered by SRG/eROSITA: X-ray/optical properties and X-ray luminosity function at z < 0.6.
journal volume508, pages3820–3847 (year2021).
Yao2023
authorYao, Y. et al.
titleTidal Disruption Event Demographics with the Zwicky Transient Facility: Volumetric Rates, Luminosity Function, and Implications for the Local Black Hole Mass Function.
journal volume955, pagesL6 (year2023).
Linial2024
authorLinial, I. & authorMetzger, B. D.
titleCoupled Disk-Star Evolution in Galactic Nuclei and the Lifetimes of QPE Sources.
journalarXiv e-prints pagesarXiv:2404.12421 (year2024).
Fruscione2006
authorFruscione, A. et al.
editorSilva, D. R. & editorDoxsey, R. E. (eds) titleCIAO: Chandra's data analysis system.
(eds editorSilva, D. R. & editorDoxsey, R. E.) booktitleObservatory Operations: Strategies, Processes, and Systems, Vol. volume6270 of seriesSociety of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages62701V (year2006).
Gregory1992
authorGregory, P. C. & authorLoredo, T. J.
titleA New Method for the Detection of a Periodic Signal of Unknown Shape and Period.
journal volume398, pages146 (year1992).
Buchner2014
authorBuchner, J. et al.
titleX-ray spectral modelling of the AGN obscuring region in the CDFS: Bayesian model selection and catalogue.
journal volume564, pagesA125 (year2014).
Buchner2019
authorBuchner, J.
titleCollaborative Nested Sampling: Big Data versus Complex Physical Models.
journal volume131, pages108005 (year2019).
xspec
authorArnaud, K. A.
editorJacoby, G. H. & editorBarnes, J. (eds) titleXSPEC: The First Ten Years.
(eds editorJacoby, G. H. & editorBarnes, J.) booktitleAstronomical Data Analysis Software and Systems V, Vol. volume101 of seriesAstronomical Society of the Pacific Conference Series, pages17 (year1996).
Evans2007
authorEvans, P. A. et al.
titleAn online repository of Swift/XRT light curves of -ray bursts.
journal volume469, pages379–385 (year2007).
Evans2009
authorEvans, P. A. et al.
titleMethods and results of an automatic analysis of a complete sample of Swift-XRT observations of GRBs.
journal volume397, pages1177–1201 (year2009).
Mummery2021
authorMummery, A.
titleTidal disruption event discs are larger than they seem: removing systematic biases in TDE X-ray spectral modelling.
journal volume507, pagesL24–L28 (year2021).
Guolo2023
authorGuolo, M. et al.
titleA systematic analysis of the X-ray emission in optically selected tidal disruption events: observational evidence for the unification of the optically and X-ray selected populations.
journalarXiv e-prints pagesarXiv:2308.13019 (year2023).
Gendreau2012
authorGendreau, K. C., authorArzoumanian, Z. & authorOkajima, T.
editorTakahashi, T., editorMurray, S. S. & editorden Herder, J.-W. A. (eds) titleThe Neutron star Interior Composition ExploreR (NICER): an Explorer mission of opportunity for soft x-ray timing spectroscopy.
(eds editorTakahashi, T., editorMurray, S. S. & editorden Herder, J.-W. A.) booktitleSpace Telescopes and Instrumentation 2012: Ultraviolet to Gamma Ray, Vol. volume8443 of seriesSociety of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages844313 (year2012).
Arzoumanian2014
authorArzoumanian, Z. et al.
editorTakahashi, T., editorden Herder, J.-W. A. & editorBautz, M. (eds) titleThe neutron star interior composition explorer (NICER): mission definition.
(eds editorTakahashi, T., editorden Herder, J.-W. A. & editorBautz, M.) booktitleSpace Telescopes and Instrumentation 2014: Ultraviolet to Gamma Ray, Vol. volume9144 of seriesSociety of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages914420 (year2014).
HI4PI2016
authorHI4PI Collaboration et al.
titleHI4PI: A full-sky H I survey based on EBHIS and GASS.
journal volume594, pagesA116 (year2016).
Singh2014
authorSingh, K. P. et al.
editorTakahashi, T., editorden Herder, J.-W. A. & editorBautz, M. (eds) titleASTROSAT mission.
(eds editorTakahashi, T., editorden Herder, J.-W. A. & editorBautz, M.) booktitleSpace Telescopes and Instrumentation 2014: Ultraviolet to Gamma Ray, Vol. volume9144 of seriesSociety of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages91441S (year2014).
Singh2017
authorSingh, K. P. et al.
titleSoft X-ray Focusing Telescope Aboard AstroSat: Design, Characteristics and Performance.
journalJournal of Astrophysics and Astronomy volume38, pages29 (year2017).
Tandon2017
authorTandon, S. N. et al.
titleIn-orbit Calibrations of the Ultraviolet Imaging Telescope.
journal volume154, pages128 (year2017).
Tandon2020
authorTandon, S. N. et al.
titleAdditional Calibration of the Ultraviolet Imaging Telescope on Board AstroSat.
journal volume159, pages158 (year2020).
Antonini2013
authorAntonini, F.
titleOrigin and Growth of Nuclear Star Clusters around Massive Black Holes.
journal volume763, pages62 (year2013).
Turner2012
authorTurner, M. L. et al.
titleThe ACS Fornax Cluster Survey. VI. The Nuclei of Early-type Galaxies in the Fornax Cluster.
journal volume203, pages5 (year2012).
Patra2023
authorPatra, K. C. et al.
titleConstraints on the narrow-line region of the X-ray quasi-periodic eruption source GSN 069.
journalarXiv e-prints pagesarXiv:2310.05574 (year2023).
Bellm2019
authorBellm, E. C. et al.
titleThe Zwicky Transient Facility: System Overview, Performance, and First Results.
journal volume131, pages018002 (year2019).
vanVelzen2021
authorvan Velzen, S. et al.
titleSeventeen Tidal Disruption Events from the First Half of ZTF Survey Observations: Entering a New Era of Population Studies.
journal volume908, pages4 (year2021).
Tonry2018
authorTonry, J. L. et al.
titleATLAS: A High-cadence All-sky Survey System.
journal volume130, pages064505 (year2018).
Chambers2016
authorChambers, K. C. et al.
titleThe Pan-STARRS1 Surveys.
journalarXiv e-prints pagesarXiv:1612.05560 (year2016).
Fabricius2016
authorFabricius, C. et al.
titleGaia Data Release 1. Pre-processing and source list creation.
journal volume595, pagesA3 (year2016).
Magnier2020
authorMagnier, E. A. et al.
titleThe Pan-STARRS Data-processing System.
journal volume251, pages3 (year2020).
Magnier2020a
authorMagnier, E. A. et al.
titlePan-STARRS Pixel Analysis: Source Detection and Characterization.
journal volume251, pages5 (year2020).
Waters2020
authorWaters, C. Z. et al.
titlePan-STARRS Pixel Processing: Detrending, Warping, Stacking.
journal volume251, pages4 (year2020).
Masci2023
authorMasci, F. J. et al.
titleA New Forced Photometry Service for the Zwicky Transient Facility.
journalarXiv e-prints pagesarXiv:2305.16279 (year2023).
Steele2004
authorSteele, I. A. et al.
editorOschmann, J., Jacobus M. (ed.) titleThe Liverpool Telescope: performance and first results.
(ed.editorOschmann, J., Jacobus M.) booktitleGround-based Telescopes, Vol. volume5489 of seriesSociety of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages679–692 (year2004).
Dhillon2007
authorDhillon, V. S. et al.
titleULTRACAM: an ultrafast, triple-beam CCD camera for high-speed astrophysics.
journal volume378, pages825–840 (year2007).
Dhillon2021
authorDhillon, V. S. et al.
titleHiPERCAM: a quintuple-beam, high-speed optical imager on the 10.4-m Gran Telescopio Canarias.
journal volume507, pages350–366 (year2021).
Nicholl2023
authorNicholl, M. et al.
titleAT 2022aedm and a New Class of Luminous, Fast-cooling Transients in Elliptical Galaxies.
journal volume954, pagesL28 (year2023).
Johnson_21
authorJohnson, B. D., authorLeja, J., authorConroy, C. & authorSpeagle, J. S.
titleStellar Population Inference with Prospector.
journal volume254, pages22 (year2021).
Schlafly2011
authorSchlafly, E. F. & authorFinkbeiner, D. P.
titleMeasuring Reddening with Sloan Digital Sky Survey Stellar Spectra and Recalibrating SFD.
journal volume737, pages103 (year2011).
Postma2017
authorPostma, J. E. & authorLeahy, D.
titleCCDLAB: A Graphical User Interface FITS Image Data Reducer, Viewer, and Canadian UVIT Data Pipeline.
journal volume129, pages115002 (year2017).
Mummery2024b
authorMummery, A. & authorTurner, S. G. D.
titleThe turbulent variability of accretion discs observed at high energies.
journal (year2024).
Dai2018
authorDai, L., authorMcKinney, J. C., authorRoth, N., authorRamirez-Ruiz, E. & authorMiller, M. C.
titleA Unified Model for Tidal Disruption Events.
journal volume859, pagesL20 (year2018).
Shiokawa2015
authorShiokawa, H., authorKrolik, J. H., authorCheng, R. M., authorPiran, T. & authorNoble, S. C.
titleGeneral Relativistic Hydrodynamic Simulation of Accretion Flow from a Stellar Tidal Disruption.
journal volume804, pages85 (year2015).
Foreman-Mackey2013
authorForeman-Mackey, D., authorHogg, D. W., authorLang, D. & authorGoodman, J.
titleemcee: The MCMC Hammer.
journal volume125, pages306 (year2013).
Mummery2023
authorMummery, A.
titleAsymptotic Green's function solutions of the general relativistic thin disc equations.
journal volume518, pages1905–1916 (year2023).
Done2012
authorDone, C., authorDavis, S. W., authorJin, C., authorBlaes, O. & authorWard, M.
titleIntrinsic disc emission and the soft X-ray excess in active galactic nuclei.
journal volume420, pages1848–1860 (year2012).
SalvesenMiller2021
authorSalvesen, G. & authorMiller, J. M.
titleBlack hole spin in X-ray binaries: giving uncertainties an f.
journal volume500, pages3640–3666 (year2021).
Chakraborty2024
authorChakraborty, J. et al.
titleTesting EMRI Models for Quasi-periodic Eruptions with 3.5 yr of Monitoring eRO-QPE1.
journal volume965, pages12 (year2024).
Franchini2016
authorFranchini, A., authorLodato, G. & authorFacchini, S.
titleLense-Thirring precession around supermassive black holes during tidal disruption events.
journal volume455, pages1946–1956 (year2016).
Raj2021
authorRaj, A. & authorNixon, C. J.
titleDisk Tearing: Implications for Black Hole Accretion and AGN Variability.
journal volume909, pages82 (year2021).
Pasham2024
authorPasham, D. R. et al.
titleLense-Thirring Precession after a Supermassive Black Hole Disrupts a Star.
journalarXiv e-prints pagesarXiv:2402.09689 (year2024).
Ingram2021
authorIngram, A., authorMotta, S. E., authorAigrain, S. & authorKarastergiou, A.
titleA self-lensing binary massive black hole interpretation of quasi-periodic eruptions.
journal volume503, pages1703–1716 (year2021).
Cannizzo1993
authorCannizzo, J. K.
titleThe Accretion Disk Limit Cycle Model: Toward an Understanding of the Long-Term Behavior of SS Cygni.
journal volume419, pages318 (year1993).
King2020
authorKing, A.
titleGSN 069 - A tidal disruption near miss.
journal volume493, pagesL120–L123 (year2020).
Krolik2022
authorKrolik, J. H. & authorLinial, I.
titleQuasiperiodic Erupters: A Stellar Mass-transfer Model for the Radiation.
journal volume941, pages24 (year2022).
Lu2023
authorLu, W. & authorQuataert, E.
titleQuasi-periodic eruptions from mildly eccentric unstable mass transfer in galactic nuclei.
journal volume524, pages6247–6266 (year2023).
Wevers2022b
authorWevers, T. et al.
titleAn elliptical accretion disk following the tidal disruption event AT 2020zso.
journal volume666, pagesA6 (year2022).
Holoien2019
authorHoloien, T. W. S. et al.
titlePS18kh: A New Tidal Disruption Event with a Non-axisymmetric Accretion Disk.
journal volume880, pages120 (year2019).
Short2020
authorShort, P. et al.
titleThe tidal disruption event AT 2018hyz - I. Double-peaked emission lines and a flat Balmer decrement.
journal volume498, pages4119–4133 (year2020).
Hung2020
authorHung, T. et al.
titleDouble-peaked Balmer Emission Indicating Prompt Accretion Disk Formation in an X-Ray Faint Tidal Disruption Event.
journal volume903, pages31 (year2020).
Andalman2022
authorAndalman, Z. L., authorLiska, M. T. P., authorTchekhovskoy, A., authorCoughlin, E. R. & authorStone, N.
titleTidal disruption discs formed and fed by stream-stream and stream-disc interactions in global GRHD simulations.
journal volume510, pages1627–1648 (year2022).
Bonnerot2020
authorBonnerot, C. & authorLu, W.
titleSimulating disc formation in tidal disruption events.
journal volume495, pages1374–1391 (year2020).
Curd2021
authorCurd, B.
titleGlobal simulations of tidal disruption event disc formation via stream injection in GRRMHD.
journal volume507, pages3207–3227 (year2021).
Kormendy2013
authorKormendy, J. & authorHo, L. C.
titleCoevolution (Or Not) of Supermassive Black Holes and Host Galaxies.
journal volume51, pages511–653 (year2013).
Gultekin2009
authorGültekin, K. et al.
titleThe M- and M-L Relations in Galactic Bulges, and Determinations of Their Intrinsic Scatter.
journal volume698, pages198–221 (year2009).
McConnell2013
authorMcConnell, N. J. & authorMa, C.-P.
titleRevisiting the Scaling Relations of Black Hole Masses and Host Galaxy Properties.
journal volume764, pages184 (year2013).
Guillochon2018
authorGuillochon, J. et al.
titleMOSFiT: Modular Open Source Fitter for Transients.
journal volume236, pages6 (year2018).
Mockler2019
authorMockler, B., authorGuillochon, J. & authorRamirez-Ruiz, E.
titleWeighing Black Holes Using Tidal Disruption Events.
journal volume872, pages151 (year2019).
Nicholl2022
authorNicholl, M. et al.
titleSystematic light-curve modelling of TDEs: statistical differences between the spectroscopic classes.
journal volume515, pages5604–5616 (year2022).
Ryu2020
authorRyu, T., authorKrolik, J. & authorPiran, T.
titleMeasuring Stellar and Black Hole Masses of Tidal Disruption Events.
journal volume904, pages73 (year2020).
Data Availability:
All , and data presented here are public and can be found in the NASA archives at the following URL: <https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>. data are public via the MAST archive: <https://archive.stsci.edu/missions-and-data/hst>. The reduced light curve data from Figs. <ref> and <ref> are available via this repository: <https://github.com/mnicholl/AT2019qiz>.
Code Availability:
Data reduction and X-ray spectral fitting were performed using standard publicly available codes (Methods). Code used for the relativistic disk model is described by Refs. <cit.>. Author AM is working towards releasing a user-friendly version of this code publicly via GitHub; the current version will be shared on request.
Acknowledgments:
We thank two anonymous referees for their insightful and helpful comments.
We thank the Swift, AstroSat and NICER teams for scheduling our DDT requests.
We thank the participants of the Kavli Institute for Theoretical Physics `TDE24' meeting and Chris Done for helpful discussions.
MN, AA, CA and XS are supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 948381) and by UK Space Agency Grant No. ST/Y000692/1.
DRP was funded by NASA grant 80NSSC19K1287. This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014].
ECF is supported by NASA under award number 80GSFC21M0002.
AH is supported by Carlsberg Foundation Fellowship Programme 2015.
VSD and ULTRACAM are funded by the Science and Technology Facilities Council (grant ST/Z000033/1).
AJ is supported by grant No. 2023/50/A/ST9/00527 from the Polish National Science Center.
EJR and PR are supported by Science and Technology Facilities Council (STFC) studentships.
KDA acknowledges support from the National Science Foundation through award AST-2307668.
KA is supported by the Australian Research Council Discovery Early Career Researcher Award (DECRA) through project number DE230101069.
TWC acknowledges the Yushan Young Fellow Program by the Ministry of Education, Taiwan for the financial support.
RC benefitted from interactions with Theory Network participants
that were funded by the Gordon and Betty Moore Foundation through Grant GBMF5076.
KCP is funded in part by generous support from Sunil Nagaraj, Landon Noll and Sandy Otellini.
EN acknowledges support from NASA theory grant 80NSSC20K0540. AI acknowledges support from the Royal Society.
SGDT acknowledges support under STFC Grant ST/X001113/1.
AFG acknowledges support from the Department for the Economy (DfE) Northern Ireland postgraduate studentship scheme.
This research was supported in part by grant NSF PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP).
This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa. The AstroSat mission is operated by the Indian Space Research Organisation (ISRO), the data are archived at the Indian Space Science Data Centre (ISSDC). The SXT data-processing software is provided by the Tata Institute of Fundamental Research (TIFR), Mumbai, India. The UVIT data were checked and verified by the UVIT POC at IIA, Bangalore, India.
We acknowledge the use of public data from the Swift data archive.
The Pan-STARRS telescopes are supported by NASA Grants NNX12AR65G and NNX14AM74G.
ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Oskar Klein Center at Stockholm University, the University of Maryland, University of California, Berkeley , the University of Wisconsin at Milwaukee, University of Warwick, Ruhr University, Cornell University, Northwestern University and Drexel University. Operations are conducted by COO, IPAC, and UW.
The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council.
Author contributions:
MN was PI of the , , and LT programs, performed data analysis, and led the writing and overall project.
DRP triggered the and follow-up, performed X-ray data reduction and analysis, and wrote parts of the manuscript.
AM performed the accretion disk modeling and wrote parts of the manuscript.
MG performed X-ray data reduction and spectral analysis, the comparison to other QPE sources, and wrote parts of the manuscript.
KG and ECF coordinated the observations. KG is the PI of .
GCW coordinated observations and reduced the data.
RR performed data reduction.
CB and JC contributed to the precession analysis.
AH performed data reduction.
VSD organised and reduced the ULTRACAM observations.
AFG analysed the PSF.
JG analysed the Pan-STARRS plateau.
MEH obtained the Pan-STARRS observations.
AJ contributed to pressure instability models.
GS analysed the SMBH spin systematics.
SvV contributed the ZTF forced photometry.
AA, KDA, KA, EB, YC, RC, SG, BPG, TL, AL, RM, SLM, SRO, EJR, XS contributed to the +program.
ZA, ACF, ECF, KG, EK, RR are members of the team.
ZA and KG carried out the observations.
AA, CRA, TWC, MDF, JHG, TM, PR, XS, SJS, KWS, SS, HFS, JW, DRY are members of the Pan-STARRS transients team.
TdB, KCC, HG, JH, CCL, TBL, EAM, PM, SJS, KWS, RJW, DRY contribute to the operation of Pan-STARRS.
AI, EN, SGDT provided theoretical expertise.
RC, RM, KCP, VR, RS are members of the ZTF TDE group.
TW provided expertise.
All authors provided feedback on the manuscript.
Competing interests: The authors declare that there are no competing interests.
Additional information: Correspondence and requests for materials should be addressed to Matt Nicholl ([email protected]).
|
http://arxiv.org/abs/2409.02469v1 | 20240904063711 | UAV-Mounted Movable Antenna: Joint Optimization of UAV Placement and Antenna Configuration | [
"Xiao-Wei Tang",
"Yunmei Shi",
"Yi Huang",
"Qingqing Wu"
] | math.NA | [
"math.NA",
"cs.NA"
] |
UAV-Mounted Movable Antenna: Joint Optimization of UAV Placement and Antenna Configuration
Xiao-Wei Tang, Yunmei Shi, Yi Huang, and Qingqing Wu
Xiao-Wei Tang, Yunmei Shi, Yi Huang ({xwtang, ymshi, and huangyi718b}@tongji.edu.cn) are with the Department of Information and Communication Engineering, Tongji University, Shanghai, China.
Qingqing Wu ([email protected]) is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
3 September 2024
======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Recently, movable antennas (MAs) have garnered immense attention due to their capability to favorably alter channel conditions through agile movement. In this letter, we delve into a spectrum sharing system enabled by unmanned aerial vehicle (UAV) mounted MAs, thereby introducing a new degree of freedom vertically alongside the horizontal local mobility for MAs. Our objective is to maximize the minimum beamforming gain for secondary users (SUs) while ensuring that interference to the primary users (PUs) remains below a predefined threshold, which necessitates a joint optimization involving the UAV's height, the antenna weight vector (AWV), and the antenna position vector (APV). However, the formulated optimization problem is non-convex and challenging to solve optimally. To tackle this issue, we propose an alternating optimization algorithm that optimizes the UAV's height, APV and AWV in an iterative manner, thus yielding a near-optimal solution. Numerical results demonstrate the superiority of the proposed scheme as well as its ability to deliver full beamforming gain to SUs with reduced computational complexity.
MA, UAV, beamforming, spectrum sharing.
§ INTRODUCTION
Recently, moveable antenna (MA), also referred to as fluid antenna, has emerged as a pivotal technology endowed with the capability to be dynamically repositioned in response to changing environmental conditions or communication demands via the flexible movement of antennas <cit.>. This versatility bestows upon MA several distinct advantages, including enhanced sensing accuracy <cit.>, improved system capacity <cit.>, and reduced network interference <cit.>. Consequently, the investigation into MA can thoroughly unveil the full potential of future communication systems, particularly in dynamic and unpredictable environments.
Preliminary studies have demonstrated the superiority of MAs from various perspectives. MA-enhanced multiuser communication was investigated in <cit.>, aiming to minimize the total transmit power of users via jointly optimizing the positions of MAs, the transmit power of each user and the receive combining matrix of the base station, while adhering to a minimum-achievable-rate requirement for each user. The work in <cit.> developed a field-response model for the MA-based multi-path channel by leveraging the amplitude, phase, and angle of arrival/angle of departure, based on which the achievable maximum channel gain could be largely improved compared to the conventional fixed-position antennas (FPAs) case. MA-assisted spectrum sharing was studied in <cit.>, where the beamforming design and MA positions were jointly optimized to maximize the received signal power at a secondary user (SU) subject to constraints on its imposed co-channel interference power with multiple primary users (PUs). MA-based multi-beamforming <cit.> focused on maximizing the minimum beamforming gain over multiple desired directions through the collaborative optimization of the antenna position vector (APV) and antenna weight vector (AWV), taking into account the restrictions on the maximum interference power over undesired directions. Although that work claims that full beamforming gains can be achieved over all desired directions, our investigation finds that there is a discrepancy between the actual and asserted beamforming gains, which may arise from neglecting the non-negativity requirement of the beamforming gain when relaxing the constraint associated with the APV.
The aforementioned studies mainly exploit the local movement of MAs to create favorable channel conditions. Nevertheless, by mounting the MA array onto a UAV, we introduce an additional degree of freedom beyond UAV's mobility, which allows for dynamic adjustments in the relative positions between the MAs and the users <cit.>. Inspired by <cit.>, we delve into multi-beamforming using a UAV-mounted MA (UMA) array to facilitate spectrum sharing services for multiple ground-based SUs. In contrast to conventional UAV-mounted base station <cit.>, which prioritizes reducing distance-dependent pathloss, the investigated UMA system aims to enhance phase-sensitive beamforming gain by strategic UAV position adjustments. Specifically, by jointly optimizing the UAV placement and antenna configuration, we aim to maximize the minimum beamforming gain for SUs while ensuring their maximum interference to PUs. The formulated optimization problem exhibits non-convexity with respect to (w.r.t.) the UAV height, the APV, and the AWV, posing a significant challenge for solving it. To overcome this, we devise a low-complexity alternating algorithm that iteratively refines one of these variables while fixing the others. Numerical results demonstrate that the proposed algorithm enables SUs to harness the full potential of beamforming gain while effectively mitigating interference towards PUs concurrently.
Notations: (·)^*, (·)^T, and (·)^H are used to denote conjugate, transpose, and conjugate transpose, respectively. The real part of vector a is denoted by Re{a}. Tr(A) denotes the trace of matrix A. f'(x) and f”(x) represent the first-order and second-order derivatives of f(x), respectively.
§ SYSTEM MODEL AND PROBLEM FORMULATION
As shown in Fig. <ref>, we consider a spectrum sharing system enabled by UAV-mounted MAs, which consists of K PUs and L SUs. The locations of PUs and SUs are fixed and denoted by pΔ = [p_1, … ,p_K]^T and sΔ = [s_1, …, s_L]^T, respectively. We assume that there are N MAs, which can be flexibly deposited along the x-axis within a line region of length D. Let 𝒩Δ = {1,…, N} denote the set of the MAs and x_n∈ [-D/2,D/2] denote the n-th MA's position. Then, the APV can be denoted by xΔ = [x_1,x_2, …,x_N]^T. The UAV is hovering over the origin point of x-axis, i.e., x = 0, and its height h can be dynamically adjusted as needed along the z-axis. As a result, the steering angles over the k-th PU and the l-th SU, denoted by θ_k ∈ [0,π], ∀ k and ϕ_l∈ [0,π], ∀ l, can be respectively expressed as
θ _k = arccos( -p_k/√(h^2+p_k^2) ), and ϕ _l = arccos( -s_l/√(h^2+s_l^2) ).
Hence, the steering vector can be expressed as
α (x,θ_k) = [ e^j(2π/λ)x_1cos (θ_k), … , e^j(2π/λ)x_Ncos (θ_k)]^T, ∀ k,
α (x,ϕ_l) = [ e^j(2π/λ)x_1cos (ϕ_l), … , e^j(2π/λ)x_Ncos (ϕ_l)]^T, ∀ l,
where λ is the wavelength. Let wΔ = [w_1,w_2, …,w_N]^T∈ℂ^N denote the AWV. Thus, the beamforming gain over the steering angles θ_k and ϕ_l can be respectively represented as
G(w,x,θ_k) = | w^Hα (x,θ_k )|^2,∀ k,
G(w,x,ϕ_l) = | w^Hα (x,ϕ_l )|^2, ∀ l.
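To make the model concrete, the steering vectors and beamforming gains defined above are straightforward to evaluate numerically. The short Python sketch below is only an illustration (the function names and the chosen UAV height are ours; N, λ and the SU location are taken from the simulation setup in Section IV):

```python
import numpy as np

def steering_vector(x, angle, wavelength):
    """alpha(x, angle): per-antenna phase terms e^{j(2*pi/lambda) x_n cos(angle)}."""
    return np.exp(1j * 2 * np.pi / wavelength * x * np.cos(angle))

def beamforming_gain(w, x, angle, wavelength):
    """G(w, x, angle) = |w^H alpha(x, angle)|^2."""
    return np.abs(np.vdot(w, steering_vector(x, angle, wavelength))) ** 2

# Illustrative setup: N = 8 MAs at half-wavelength spacing, uniform AWV
wavelength, N, h = 0.1, 8, 12.0              # lambda and N from Sec. IV; h is an arbitrary choice
x = (np.arange(N) - (N - 1) / 2) * wavelength / 2
w = np.ones(N) / np.sqrt(N)                  # satisfies the power constraint ||w||_2 <= 1
s_l = 5.77                                   # one SU location (metres) from Sec. IV
phi_l = np.arccos(-s_l / np.sqrt(h**2 + s_l**2))
print(beamforming_gain(w, x, phi_l, wavelength))
```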
In this letter, we aim to maximize the minimum beamforming gain over SUs, denoted by δ, via jointly optimizing the UAV's height h, the APV x, and the AWV w, subject to constraints on the distance between adjacent MAs, the interference towards PUs, the total available power as well as the minimum hovering height. Accordingly, the problem is formulated as
(P1) max_h,w,x,δ δ
s.t. x_n - x_n - 1≥D_0,∀ n ∈𝒩\ 1,
G(w,x,ϕ _l) ≥δ ,∀ l ∈ℒ,
G(w,x,θ _k) ≤η ,∀ k ∈𝒦,
‖w‖_2 ≤ 1,
- D/2 ≤x_n≤ D/2,∀ n ∈𝒩,
h ≥H_0,
where D_0 is the minimum distance for each two adjacent MAs, η is a pre-defined interference threshold towards PUs, and H_0 is the minimum hovering height for the UAV. Specifically, (<ref>) ensures that there is no coupling among the MAs. (<ref>) guarantees that the beamforming gain over any SU is maintained above δ. Conversely, (<ref>) imposes a constraint that the interference towards any PU must not exceed a predefined threshold, i.e., η. (<ref>) specifies that the normalized power of MAs is no larger than 1. (<ref>) ensures that the MAs should be adjusted within the confined region. (<ref>) ensures that the UAV hovers above the minimum safe height. (P1) is a formidable optimization problem, primarily due to the non-convex nature of (<ref>), (<ref>) and (<ref>) w.r.t. h, w or x. This intricacy is further compounded by the intricate interdependence among these variables and thus significantly increases the complexity for solving (P1).
§ PROPOSED ALGORITHM
In this section, we divide (P1) into three subproblems and solve them iteratively in a sequential manner, where each subproblem is dedicated to optimizing either h, w, or x.
§.§ Optimization of h with Given w and x
With given w and x, we aim to optimize h in (P1), thereby formulating the following subproblem:
(P1.2) max_h,δ δ
s.t. (<ref>), (<ref>), (<ref>),
where (<ref>) and (<ref>) are non-convex w.r.t. h. Hence, we relax them by adopting the successive convex approximation (SCA) technique <cit.>. For ease of exposition, we denote the n-th element of w by w_n = |w_n|e^j∠w_n with amplitude |w_n| and phase ∠w_n. Furthermore, we define χ _n,mΔ = 2π/λ(x_n - x_m) and ϖ _n,mΔ = ∠w_n - ∠w_m. Thus, G(w,x,ϕ _l) can be further expressed as
G(w,x,ϕ _l) = ∑_n = 1^N ∑_m = 1^N κ _n,mcos (γ̂_n,m,l(h)) , ∀ l,
where κ _n,mΔ = |w_n||w_m| and γ̂_n,m,l(h) Δ = χ _n,mcos (ϕ _l) - ϖ _n,m. Since G(w,x,ϕ _l) is neither convex nor concave w.r.t. h, we construct a surrogate function to locally approximate it based on the second-order Taylor expansion. Specifically, for a given point ℓ_0 ∈ℝ, the second-order Taylor expansion of cos(f(ℓ)) can be expressed as
cos(f(ℓ)) ≈ cos(f(ℓ_0)) - sin(f(ℓ_0))f'(ℓ_0)(ℓ - ℓ_0) - 1/2( cos(f(ℓ_0))(f'(ℓ_0))^2 + sin(f(ℓ_0))f”(ℓ_0) )(ℓ-ℓ_0)^2.
Since (ℓ-ℓ_0)^2≥0 and cos(f(ℓ_0))(f'(ℓ_0))^2+sin (f(ℓ_0))f”(ℓ_0)≤√((f'(ℓ_0))^4+f”(ℓ_0)^2) according to the Cauchy-Schwartz inequality, we can construct the concave surrogate function ħ̂(ℓ|ℓ_0) to approximate cos(f(ℓ)) as
cos (f(ℓ)) ≥ħ̂(ℓ|ℓ_0)
Δ=cos (f(ℓ_0)) -sin (f(ℓ_0))f'(ℓ_0)(ℓ-ℓ_0)-1/2ψ̂(ℓ_0)(ℓ-ℓ_0)^2,
where ψ̂(ℓ_0) Δ = √((f'(ℓ_0))^4 + (f”(ℓ_0))^2). Then, for the given h^i in the i-th iteration of SCA, by letting f(ℓ) ←γ̂_n,m,l(h) and f(ℓ_0) ←γ̂_n,m,l(h^i), we can obtain f'(ℓ_0) ←γ̂_n,m,l'(h^i) Δ = χ _n,ms_lh^i/((h^i)^2 + s_l^2)^3/2 and f”(ℓ_0) ←γ̂_n,m,l”(h^i) Δ = χ _n,ms_l(s_l^2 - 2(h^i) ^2)/((h^i)^2 + s_l^2)^5/2. As a result, the surrogate function that provides a global lower-bound for G(w,x,ϕ _l) can be constructed as
G(w,x,ϕ _l)≥ ∑_n = 1^N ∑_m = 1^N κ _n,mħ̂(γ̂_n,m,l(h)|γ̂_n,m,l(h^i))
Δ = â_lh^2 + b̂_lh + ĉ_l, ∀ l,
where â_l, b̂_l, ĉ_l∈ℝ, ∀ l are given by
â_l=- 1/2∑_n = 1^N ∑_m = 1^N κ _n,mψ̂_n,m,l(h^i),
b̂_l=-∑_n = 1^N∑_m = 1^N κ _n,m[β̂_n,m,l(h^i)-h^iψ̂_n,m,l(h^i)],
ĉ_l=∑_n = 1^N ∑_m = 1^N κ _n,m[cos (γ̂_n,m,l(h^i))
+ β̂_n,m,l(h^i)h^i - 1/2ψ̂_n,m,l(h^i)(h^i)^2],
with ψ̂_n,m,l(h^i) Δ = √(γ̂_n,m,l'(h^i)^4 + γ̂_n,m,l”(h^i)^2) and β̂_n,m,l(h^i) Δ = sin (γ̂_n,m,l(h^i))γ̂_n,m,l'(h^i).
Additionally, since (<ref>) has a similar structure as (<ref>), we can relax it by modifying the procedure of constructing the relaxed form of (<ref>). Specifically, since (ℓ-ℓ_0)^2≥ 0 and cos (f(ℓ_0))(f'(ℓ_0))^2 + sin (f(ℓ_0))f”(ℓ_0) ≥-√((f'(ℓ_0))^4+f”(ℓ_0)^2) according to the Cauchy-Schwartz inequality, a global upper-bound for G(w,x,θ _k) can be approximated as
G(w,x,θ _k)≤ ∑_n = 1^N ∑_m = 1^N κ _n,mħ̅(γ̅_n,m,k(h)|γ̅_n,m,k(h^i))
Δ = a̅_kh^2 + b̅_kh + c̅_k, ∀ k,
where a̅_k, b̅_k, c̅_k∈ℝ, ∀ k are given by
a̅_k= 1/2∑_n = 1^N ∑_m = 1^N κ _n,mψ̅_n,m,k(h^i),
b̅_k =-∑_n = 1^N ∑_m = 1^N κ _n,m[β̅_n,m,k(h^i) -h^iψ̅_n,m,k(h^i)],
c̅_k = ∑_n = 1^N ∑_m = 1^Nκ _n,m[cos (γ̅_n,m,k(h^i))+β̅_n,m,k(h^i)
+ 1/2ψ̅_n,m,k(h^i)(h^i)^2],
with ψ̅_n,m,k(h^i)Δ = √(γ̅_n,m,k'(h^i)^4 + γ̅_n,m,k”(h^i)^2) and β̅_n,m,k(h^i) Δ = sin (γ̅_n,m,k(h^i))γ̅_n,m,k'(h^i).
Therefore, in the i-th iteration of SCA, h can be optimized by solving the following optimization problem:
(P1.2.1) max_h, δ δ
s.t. â_lh^2 + b̂_lh + ĉ_l≥δ ,∀ l,
a̅_k h^2 + b̅_kh + c̅_k≤η ,∀ k,
(<ref>).
Since (<ref>) and (<ref>) are convex quadratic constraints and (<ref>) is a
linear constraint w.r.t. h, (P1.2.1) is a convex problem and can be efficiently solved by existing solvers, e.g., CVX.
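As an illustration of how (P1.2.1) can be posed as a disciplined convex program, a minimal CVXPY sketch is given below. It assumes the coefficient arrays have already been computed from the surrogate expressions above (so that each â_l ≤ 0 and each a̅_k ≥ 0), and the function name and default values (η = 0.1, H_0 = 10 m, as in Section IV) are our own choices rather than code from the letter:

```python
import cvxpy as cp

def solve_height_subproblem(a_hat, b_hat, c_hat, a_bar, b_bar, c_bar,
                            eta=0.1, H0=10.0):
    """One SCA iteration of (P1.2.1): maximize delta over the UAV height h.
    a_hat[l] <= 0 (concave SU lower bounds) and a_bar[k] >= 0 (convex PU
    upper bounds) are assumed, as required for DCP compliance."""
    h, delta = cp.Variable(), cp.Variable()
    constraints = [h >= H0]
    for l in range(len(a_hat)):   # SU beamforming-gain lower bounds >= delta
        constraints.append(a_hat[l] * cp.square(h) + b_hat[l] * h + c_hat[l] >= delta)
    for k in range(len(a_bar)):   # PU interference upper bounds <= eta
        constraints.append(a_bar[k] * cp.square(h) + b_bar[k] * h + c_bar[k] <= eta)
    problem = cp.Problem(cp.Maximize(delta), constraints)
    problem.solve()
    return h.value, delta.value
```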
§.§ Optimization of w with Given h and x
With given h and x, we aim to optimize w in (P1), which leads to the following subproblem:
(P1.3) max_w,δ δ
s.t. (<ref>), (<ref>),(<ref>),
where (<ref>) is non-convex w.r.t. w. Thus, we adopt the SCA technique to relax it. Specifically, for the given w^i ∈ℂ^N in the i-th iteration of SCA, since G(w,x,ϕ _l) is convex w.r.t. w, we can construct the following linear surrogate function G̅(w,x,ϕ _l|w^i) to globally approximate G(w,x,ϕ _l) by applying the first-order Taylor expansion at w^i:
G(w,x,ϕ _l) ≥G̅(w,x,ϕ _l|w^i)
Δ = 2 Re{(w^i)^Hα(x, ϕ _l)α(x,ϕ _l)^Hw}- G(w^i,x,ϕ _l),∀ l.
Hence, for the given w^i ∈ℂ^N in the i-th iteration of SCA, w can be optimized by solving the following problem:
(P1.3.1) max_w,δ δ
s.t. G̅(w,x,ϕ _l|w^i) ≥δ ,∀ l,
(<ref>), (<ref>),
where (<ref>) is a linear constraint and (<ref>) and (<ref>) are convex quadratic constraints w.r.t. w. Thus, (P1.3.1) is a convex problem, which can be solved via existing solvers, e.g., CVX.
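For reference, the linear surrogate G̅(w,x,ϕ_l|w^i) used in (P1.3.1) is simple to evaluate; the helper below is our own sketch, not code from the letter:

```python
import numpy as np

def linearized_gain(w, w_i, a):
    """First-order surrogate of G = |w^H a|^2 expanded around w_i:
    2*Re{w_i^H a a^H w} - |w_i^H a|^2, a global lower bound on G."""
    return 2.0 * np.real(np.vdot(w_i, a) * np.vdot(a, w)) - np.abs(np.vdot(w_i, a)) ** 2
```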
§.§ Optimization of x with Given h and w
With given h and w, we aim to optimize x in (P1), thus yielding the following subproblem:
(P1.4) max_x,δ δ
s.t. (<ref>),(<ref>),(<ref>),(<ref>),
where (<ref>) and (<ref>) are non-convex constraints w.r.t. x. Hence, we relax them by adopting the SCA technique. For ease of exposition, we define ϑ _lΔ = 2π/λcos (ϕ _l). Therefore, G(w,x, ϕ _l) can be further expressed as
G(w,x,ϕ _l)=∑_n = 1^N ∑_m = 1^N κ _n,mcos (f_l(x_n,x_m)), ∀ l,
where f_l(x_n,x_m) Δ = ϑ _l(x_n - x_m) - (∠w_n - ∠w_m).
Since G(w,x,ϕ _l) is neither convex nor concave w.r.t. f_l(x_n,x_m), we can construct a surrogate function to locally approximate it based on the second-order Taylor expansion. Specifically, for a given ℓ_0 ∈ℝ, the second-order Taylor expansion of cos(ℓ) can be expressed as
cos(ℓ) ≈ cos(ℓ_0) - sin(ℓ_0)(ℓ-ℓ_0) - 1/2cos(ℓ_0)(ℓ-ℓ_0)^2.
Since cos (ℓ_0) ≤ 1 and (ℓ-ℓ_0)^2≥ 0, we can construct the concave surrogate function ρ̂(ℓ|ℓ_0) to approximate cos(ℓ) as
cos (ℓ) ≥ρ̂(ℓ|ℓ_0) Δ = cos (ℓ_0)-sin (ℓ_0)(ℓ-ℓ_0) -1/2(ℓ-ℓ_0)^2.
Since G(w,x,ϕ _l) is neither convex nor concave w.r.t. x, we can construct a concave surrogate function to locally approximate it based on the second-order Taylor expansion similar to Section III-A. Then, for the given x^iΔ = [x_1^i,x_2^i,...,x_N^i]^T in the i-th iteration of SCA, by letting ℓ←f_l(x_n,x_m) and ℓ_0←f_l(x_n^i,x_m^i) in ρ̂(ℓ|ℓ_0) as shown in (<ref>), the surrogate function that provides a global lower-bound for G(w,x,ϕ _l) can be constructed as
G(w,x,ϕ _l) ≥ ∑_n = 1^N ∑_m = 1^N κ _n,mρ̂(f_l(x_n,x_m)|f_l(x_n^i,x_m^i))
Δ = 1/2x^TA_lx + b_l^Tx + c_l,∀ l,
where A_l ∈ℝ^N× N, b_l ∈ℝ^N, and c_l ∈ℝ are given by
A_lΔ = - 2ϑ_l^2(γdiag(w̅) - w̅w̅^T), ∀ l,
b_l[n] Δ = 2ϑ _l^2∑_m = 1^N κ _n,m(x_n^i - x_m^i)
- 2ϑ _l∑_m = 1^N κ _n,msin (f_l(x_n^i,x_m^i)), ∀ l,
c_lΔ = ∑_n = 1^N ∑_m = 1^N κ _n,mcos (f_l(x_n^i,x_m^i))
+ ϑ _l∑_n = 1^N ∑_m = 1^N κ _n,msin (f_l(x_n^i,x_m^i))(x_n^i - x_m^i)
- 1/2ϑ _l^2∑_n = 1^N ∑_m = 1^N κ _n,m(x_n^i - x_m^i)^2, ∀ l,
with w̅Δ = [|w_1|,|w_2|,...,|w_N|]^T and γΔ = ∑_n = 1^N |w_n|. Note that A_l can be proven to be a negative semi-definite (NSD) matrix <cit.>. Thus, (<ref>) is relaxed to be convex w.r.t. x, that is,
1/2x^TA_lx + b_l^Tx + c_l≥δ, ∀ l.
On the other hand, since (<ref>) has a similar structure as (<ref>), we can relax it by modifying the procedure of constructing the relaxed convex constraint as given in (<ref>). As a result, the surrogate function that provides a global upper-bound for G(w,x,θ _k) can be constructed as
G(w,x,θ _k) ≤ ∑_n = 1^N ∑_m = 1^N κ _n,mρ̃( f_k(x_n,x_m)|f_k(x_n^i,x_m^i))
Δ = 1/2x^TÃ_kx + b̃_k^Tx + c̃_k, ∀ k,
where Ã_k∈ℝ^N × N, b̃_k∈ℝ^N, and c̃_k ∈ℝ are given by
Ã_kΔ = 2φ _k^2(γdiag(w̅) - w̅w̅^T), ∀ k,
b̃_k[n] Δ = - 2φ _k^2 ∑_m = 1^N κ _n,m(x_n^i - x_m^i)
- 2φ _k∑_m = 1^N κ _n,msin (f_k(x_n^i,x_m^i)), ∀ k,
c̃_kΔ = ∑_n = 1^N ∑_m = 1^N κ _n,mcos (f_k(x_n^i,x_m^i))
+ φ _k∑_n = 1^N ∑_m = 1^N κ _n,msin (f_k(x_n^i,x_m^i))(x_n^i - x_m^i)
+ 1/2φ _k^2∑_n = 1^N ∑_m = 1^N κ _n,m(x_n^i - x_m^i)^2, ∀ k.
Note that Ã_k can be rigorously proven to be a positive semi-definite matrix <cit.>. Thus, (<ref>) can be relaxed as a convex constraint w.r.t. x:
1/2x^TÃ_kx + b̃_k^Tx + c̃_k≤η, ∀ k.
Notice that G(w,x,θ _k) must be non-negative. However, the relaxed form of G(w,x,θ _k) as given in (<ref>) cannot guarantee this requirement. Thus, the following constraint should be satisfied:
1/2x^TȦ_kx + ḃ_k^Tx + ċ_k≥ 0, ∀ k.
where Ȧ_k∈ℝ^N × N, ḃ_k∈ℝ^N, and ċ_k ∈ℝ are given by
Ȧ_kΔ = -2φ _k^2(γdiag(w̅) - w̅w̅^T), ∀ k,
ḃ_k[n] Δ = 2φ _k^2 ∑_m = 1^N κ _n,m(x_n^i - x_m^i)
- 2φ _k∑_m = 1^N κ _n,msin (f_k(x_n^i,x_m^i)), ∀ k,
ċ_kΔ = ∑_n = 1^N ∑_m = 1^N κ _n,mcos ( f_k(x_n^i,x_m^i))
+ φ _k∑_n = 1^N ∑_m = 1^N κ _n,msin (f_k(x_n^i,x_m^i))(x_n^i - x_m^i)
- 1/2φ _k^2∑_n = 1^N ∑_m = 1^N κ _n,m(x_n^i - x_m^i)^2, ∀ k.
Note that Ȧ_k can also be proven to be a NSD similar to A_l. Therefore, in the i-th iteration of SCA, x can be optimized by solving the following optimization problem:
(P1.4.1) max_x,δ δ
s.t. (<ref>),(<ref>), (<ref>), (<ref>),(<ref>).
Since (<ref>) and (<ref>) are linear constraints and (<ref>), (<ref>) and (<ref>) are convex quadratic constraints w.r.t. x, (P1.4.1) is a convex problem and can be efficiently solved by CVX.
§.§ Overall Algorithm and Complexity Analysis
The overall algorithm for solving (P1) is summarized in Algorithm 1. Let I_h, I_w and I_x denote the number of iterations for solving (P1.2.1), (P1.3.1), and (P1.4.1), respectively. In each iteration, h, w and x are alternatively optimized using the interior-point method, and thus their individual complexity can be represented as O(I_h(L+K+1)^3ln(1/ς)), O(I_w(L+K+1)N^3.5ln(1/ς)) and O(I_x(2N+2K+L-1)N^3.5ln(1/ς)), respectively, with ς being the pre-specified precision. Hence, the total computational complexity is O(I(I_h(L+K+1)^3+I_w(L+K+1)N^3.5+I_x(2N+2K+L-1)N^3.5)log(1/ς)) with I denoting the number of iterations for iteratively solving (P1.2.1), (P1.3.1), and (P1.4.1).
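A schematic of the outer loop of Algorithm 1 is sketched below. The three solver calls are hypothetical stand-ins for the SCA-based convex subproblems of Sections III-A to III-C, and the stopping rule on δ is one reasonable choice rather than the exact criterion used in the letter:

```python
def alternating_optimization(h0, w0, x0, max_iter=50, tol=1e-3):
    """Alternately refine the UAV height h, the AWV w and the APV x until the
    max-min beamforming gain delta stops improving (sketch of Algorithm 1).
    solve_P1_2_1, solve_P1_3_1 and solve_P1_4_1 are placeholders for the
    SCA-based convex subproblems solved with, e.g., CVX/CVXPY."""
    h, w, x = h0, w0, x0
    delta_prev = float("-inf")
    for _ in range(max_iter):
        h, delta = solve_P1_2_1(w, x)       # height step (Sec. III-A)
        w, delta = solve_P1_3_1(h, x)       # AWV step (Sec. III-B)
        x, delta = solve_P1_4_1(h, w)       # APV step (Sec. III-C)
        if abs(delta - delta_prev) < tol:   # convergence of the objective
            break
        delta_prev = delta
    return h, w, x, delta
```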
§ NUMERICAL RESULTS
In the simulation, unless otherwise specified, we set N=8, K=2, L=2, η = 0.1, H_0 = 10m, λ = 0.1m, D_0 = λ/2, D = 8D_0, and ς = 10^-3. The locations of SUs and PUs are set to s = [-11.91,5.77]^Tm and p = [-56.71,17.32]^Tm. w^0 and x^0 can be initialized by referring to <cit.>, while h^0 can be initialized via h^0 = ∑_n = 1^N∑_m = 1^N∑_l = 1^L h_n,m,l^0/LN^2. Specifically, h_n,m,l^0 can be obtained via solving the following equality:
tan ( - χ _n,ms_l/√(z_n,m,l) - ϖ _n,m) = (s_l^2 - 2(h_n,m,l^0)^2)(z_n,m,l)^1/2/χ _n,ms_l(h_n,m,l^0)^2,
where z_n,m,lΔ = (h_n,m,l^0)^2+s_l^2. In addition, we compare the proposed scheme with three benchmarks, named UMA-AH, FPA, and MA, whose details are given as follows: 1) UMA-AH: The initial UAV height of the proposed algorithm is set arbitrarily, i.e., h^0 = H_0; 2) FPA: The height of the UAV and the positions of the MAs are fixed; 3) MA<cit.>: The height of the UAV is fixed while the AWV and APV are alternatively optimized.
In Fig. <ref>, we show the max-min beamforming gain versus the number of iterations for the proposed scheme and its variants. Here, `AH' and `AW' denote an arbitrary height and an arbitrary AWV, respectively. Notably, the proposed UMA scheme initiates from an exceedingly close-to-optimal starting point and exhibits swift convergence towards the full beamforming gain across different η values. For example, the proposed UMA scheme requires only about one-fifth as many iterations as the UMA-AWAH scheme under η = 0.1. The UMA-AW approach experiences a marginal decrement in max-min beamforming gain under η = 0.05 and a slightly prolonged convergence period, which, however, still surpasses the UMA-AH and UMA-AHAW methods in terms of convergence speed and overall system performance. Consequently, we deduce from Fig. <ref> that the proposed height initialization technique (e.g., UMA and UMA-AW) contributes significantly to enhancing convergence speed and elevating the max-min beamforming gain.
In Fig. <ref>, we demonstrate the beamforming gain for the 1-st SU versus UAV height with N=6 and N=10 for the FPA and MA benchmarks. The beamforming gain fluctuates sharply with the UAV height in the FPA scheme, which, however, is much more gentle in the MA's case. This is because the varying UAV height changes the relative positions between the UAV and SU, thus affecting the steering vector as defined in (<ref>). It is also observed from Fig. <ref> that the MA scheme can achieve the full beamforming gain for the 1-st SU within a specific height range, e.g., h ∈ [10.5, 12.5] for N = 10, while the FPA scheme can only achieve the beamforming gain at a certain point, e.g., h = 12.5 for N = 10. This indicates that by adjusting the UAV height based on the MA scheme, it is highly likely to identify an optimal UAV position, thereby enabling all SUs to achieve the full beamforming gain.
Fig. <ref> presents the comparison of beam patterns with different benchmarks. We can see from Fig. <ref> that the beamforming gain for the two PUs can be well restrained under the pre-determined threshold for the four considered schemes. Moreover, we can see that the proposed scheme achieves the full beamforming gain for the two SUs (i.e., G(w,x,ϕ _l)=8, l ∈{1,2}) while the UMA-AH, MA, FPA counterparts achieve a beamforming gain of 7.75, 7.57, and 4.47, respectively.
This is because the proposed UMA scheme can flexibly adjust the steering vector by exploiting the new degree of freedom provided by the UAV height adjustment.
§ CONCLUSIONS
In this letter, we investigated a UMA system to enhance the achievable beamforming gain for SUs by exploiting the UAV mobility and local MA movement. A low-complexity alternating optimization algorithm was devised to obtain a near-optimal solution to the formulated non-convex optimization problem. Numerical results demonstrated that the proposed UMA scheme outperformed its UMA-AH, MA and FPA counterparts, which could achieve the full beamforming gain for all SUs while mitigating the interference towards PUs simultaneously with reduced computational complexity thanks to the proposed UAV height initialization technique.
99
Wang
C. Wang et al., “AI-empowered fluid antenna systems: Opportunities, challenges, and future directions,” IEEE Wireless Commun., early access, pp. 1-8, Jul. 2024.
Ma3
W. Ma, L. Zhu, and R. Zhang, “Movable antenna enhanced wireless sensing via antenna position optimization,” 2024, arXiv:2405.01215.
Ma2
W. Ma, L. Zhu, and R. Zhang, “MIMO capacity characterization for movable antenna systems,” IEEE Trans. Wireless Commun., vol. 23, no. 4, pp. 3392-3407, Apr. 2024.
Wang2
H. Wang et al., “Movable antenna enabled interference network: Joint antenna position and beamforming design,” IEEE Wireless Comm. Lett., early access, pp. 1-5, Jul. 2024.
Zhu2
L. Zhu et al., “Movable-antenna enhanced multiuser communication via antenna position optimization,” IEEE Trans. Wireless Commun., vol. 23, no. 7, pp. 7214-7229, Jul. 2024.
Zhu3
L. Zhu, W. Ma, and R. Zhang, “Modeling and performance analysis for movable antenna enabled wireless communications,” IEEE Trans. Wireless Commun., vol. 23, no. 6, pp. 6234-6250, Jun. 2024.
Wei
X. Wei et al., “Joint beamforming and antenna position optimization for movable antenna-assisted spectrum sharing,” IEEE Wireless Commun. Lett., early access, pp. 1-5, Jul. 2024.
Ma
W. Ma et al., “Multi-beam forming with movable-antenna array,” IEEE Commun. Lett., vol. 28, no. 3, pp. 697-701, Mar. 2024.
Tang
X. -W. Tang et al., “3D trajectory planning for real-time image acquisition in UAV-assisted VR,” IEEE Trans. Wireless Commun., vol. 23, no. 1, pp. 16-30, Jan. 2024.
Wu
Q. Wu et al., “A comprehensive overview on 5G-and-beyond networks with UAVs: From communications to sensing and intelligence,” IEEE J. Sel. Areas Commun., vol. 39, no. 10, pp. 2912-2945, Oct. 2021.
|
http://arxiv.org/abs/2409.03558v1 | 20240905141816 | Magnetic Field Alignment Relative to Multiple Tracers in the High-mass Star-forming Region RCW 36 | [
"Akanksha Bij",
"Laura M. Fissel",
"Lars Bonne",
"Nicola Schneider",
"Marc Berthoud",
"Dennis Lee",
"Giles A. Novak",
"Sarah I. Sadavoy",
"Thushara G. S. Pillai",
"Maria Cunningham",
"Paul Jones",
"Robert Simon"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
Akanksha Bij (ORCID: 0000-0001-7505-5223)
Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
Laura M. Fissel (ORCID: 0000-0002-4666-609X)
Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
Lars Bonne (ORCID: 0000-0002-0915-4853)
SOFIA Science Center, NASA Ames Research Center, Moffett Field, CA 94045, USA
Nicola Schneider (ORCID: 0000-0003-3485-6678)
I. Physik. Institut, University of Cologne, Zülpicher Str. 77, D-50937 Cologne, Germany
Marc Berthoud
Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), 1800 Sherman Avenue, Evanston, IL 60201, USA
Engineering + Technical Support Group, University of Chicago, Chicago, IL 60637, USA
Dennis Lee (ORCID: 0000-0002-3455-1826)
Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), 1800 Sherman Avenue, Evanston, IL 60201, USA
Department of Physics & Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
Giles A. Novak (ORCID: 0000-0003-1288-2656)
Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), 1800 Sherman Avenue, Evanston, IL 60201, USA
Department of Physics & Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
Sarah I. Sadavoy (ORCID: 0000-0001-7474-6874)
Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
Thushara G. S. Pillai (ORCID: 0000-0003-2133-4862)
MIT Haystack Observatory, 99 Millstone Road, Westford, MA, 01827
Maria Cunningham (ORCID: 0000-0001-7020-6176)
School of Physics, University of New South Wales, Sydney NSW 2052, Australia
Paul Jones (ORCID: 0000-0001-9429-9135)
School of Physics, University of New South Wales, Sydney NSW 2052, Australia
Robert Simon (ORCID: 0000-0003-2555-4408)
I. Physik. Institut, University of Cologne, Zülpicher Str. 77, D-50937 Cologne, Germany
Corresponding author: Akanksha Bij ([email protected])
§ ABSTRACT
We use polarization data from SOFIA HAWC+ to investigate the interplay between magnetic fields and stellar feedback in altering gas dynamics within the high-mass star-forming region RCW 36, located in Vela C. This region is of particular interest as it has a bipolar region powered by a massive star cluster which may be impacting the surrounding magnetic field. To determine if this is the case, we apply the Histogram of Relative Orientations (HRO) method to quantify the relative alignment between the inferred magnetic field and elongated structures observed in several datasets such as dust emission, column density, temperature, and spectral line intensity maps. The HRO results indicate a bimodal alignment trend, where structures observed with dense gas tracers show a statistically significant preference for perpendicular alignment relative to the magnetic field, while structures probed by photo-dissociation region (PDR) tracers tend to align preferentially parallel relative to the magnetic field. Moreover, the dense gas and PDR associated structures are found to be kinematically distinct such that a bimodal alignment trend is also observed as a function of line-of-sight velocity. This suggests that the magnetic field may have been dynamically important and set a preferred direction of gas flow at the time that RCW 36 formed, resulting in a dense ridge developing perpendicular to the magnetic field. However, on filament scales near the PDR, feedback may be energetically dominating the magnetic field, warping its geometry and the associated flux-frozen gas structures, causing the observed preference for parallel relative alignment.
§ INTRODUCTION
Observations and simulations suggest that star formation occurs when density fluctuations undergo gravitational collapse in molecular clouds <cit.>. The interstellar magnetic field is thought to influence the structure and evolution of these molecular clouds through regulating the rate and efficiency at which gas is converted into pre-stellar structures by providing support against collapse and/or directing gas flow <cit.>. In the vicinity of massive stars, stellar feedback in the form of winds, outflows, and radiation pressure can further alter the dynamical and chemical evolution of the molecular cloud <cit.>.
Stellar feedback has also been observed to reshape magnetic field geometries around expanding ionized bubbles <cit.>, which is consistent with predictions from magnetohydrodynamic (MHD) simulations <cit.>. However the combined impact of both magnetic fields and stellar feedback in high-mass star forming regions remains poorly understood due to various constraints. For instance, simulating the effect of stellar feedback on the parent molecular cloud requires complex sub-grid physics and demanding computational resources <cit.>. Furthermore measuring the magnetic field strength through observations is challenging.
While numerous observational techniques such as Zeeman splitting <cit.>, Faraday rotation <cit.> and the Davis-Chandrasekhar-Fermi (DCF) method <cit.> applied to polarized light have been used in the past <cit.>, each technique has limitations and/or only provides partial information about the magnetic field structure.
Of all methods to study magnetic fields, polarized dust emission is the most commonly used observational tracer in dense molecular clouds. The plane-of-sky magnetic field orientation can be inferred from the linearly polarized emission of non-spherical dust grains, which are thought to align their long axes perpendicular to the local magnetic field lines on average <cit.>. Dust polarization angle maps can therefore be used as a proxy for the magnetic field orientation weighted by density, dust grain efficiency, and dust opacity. Various comparisons between the orientation of the magnetic field lines to the orientation of molecular cloud structures have been studied to gain insight into the role of the magnetic field in the star-forming process <cit.>.
A numerical method known as the Histogram of Relative Orientations (HRO) was developed by <cit.> to statistically characterize this comparison by measuring the relative alignment between the magnetic field orientation, and the orientation of iso-contours of elongated structures measured from a gradient field. Several HRO studies have found that the relative alignment between interstellar structures and the plane-of-sky magnetic field orientation is dependent on density and column density <cit.>. Most notably, <cit.> implemented the HRO method for 10 nearby (< 400 pc) molecular clouds using polarimetry data with a resolution of 10 at 353 GHz from the Planck satellite. They found that the overall alignment of elongated structures transitioned from either random or preferentially parallel relative to the magnetic field at lower column densities, to preferentially perpendicular at higher column densities, with the switch occurring at different critical column densities for each cloud. In simulations this signature transition to perpendicular alignment for an increasing column density has been seen for strong magnetic fields that are significant relative to turbulence and able to influence the gas dynamics (i.e. dynamically important)
<cit.>.
The HRO method has since been applied to younger and more distant giant molecular clouds, such as Vela C (distance of ∼ 900 pc), whose magnetic field morphology was inferred by the BLASTPol (Balloon-borne Large-Aperture Submillimetre Telescope for Polarimetry) instrument at 250, 350 and 500 μm <cit.>. The BLASTPol-inferred magnetic field was compared to column densities derived from Herschel observations by <cit.>, and to the integrated intensities of molecular gas tracers by <cit.>. Both studies found a similar tendency for elongated structures to align preferentially parallel relative to the magnetic field for low column densities or low-density gas tracers, which then switched to preferentially perpendicular for higher column densities or high-density gas tracers. However with a full width at half maximum (FWHM) resolution of ∼3, the BLASTPol observations are only able to probe the Vela C magnetic field geometry on cloud-scales (> 1 pc).
In this work, we extend these HRO studies to filament scales (∼0.1–1 pc) by using higher resolution polarimetry data from the Stratospheric Observatory for Infrared Astronomy (SOFIA) High-resolution Airborne Wide-band Camera (HAWC+) instrument at 89 μm (Band C) and 214 μm (Band E), with angular resolution of 7.8 (0.03 pc) and 18.2 (0.08 pc), respectively <cit.>. Moreover, we focus on the role of magnetic fields in high-mass star formation by targeting the densest region within Vela C, known as RCW 36 (visual extinction of > 100 mag), which is within a parsec of an ionizing young (∼ 1 Myr) OB cluster responsible for powering a bipolar nebula within the region <cit.>.
Previous HRO studies have mostly compared the relative orientation of the magnetic field to molecular gas structures. Since this study aims to understand the role of stellar feedback, we wish to additionally apply the HRO analysis to structures associated with the photodissociation region (PDR). The PDR is the interfacing boundary between the ionized region and surrounding molecular cloud where far-ultraviolet (FUV) photons with energies in the range of 6–13.6 eV dominate and dissociate H_2 and CO molecules <cit.>. Several recent region studies have used observations of the [CII] 158 μm line since it traces the PDR <cit.>. For RCW 36, the bipolar region was investigated by <cit.>, who examined the kinematics of [CII] 158 μm and [OI] 63 μm data from the SOFIA legacy project FEEDBACK <cit.>.
We build upon these studies by applying the HRO method to multiple complementary observations tracing column density, temperature, molecular gas, as well as the PDR, to construct a more complete picture of how the magnetic field is affecting star formation within RCW 36. The paper is organized as follows: Section <ref> describes the observations. Section <ref> details the physical structure of the RCW 36 and its magnetic field morphology, including noteworthy regions. Section <ref> discusses the HRO method, with the results presented in Section <ref>. In Section <ref>, we interpret the results and compare this work to other studies. Finally in Section <ref>, we summarize our main conclusions.
§ DATA
In this study we use several types of data for the HRO analysis, which can be separated into three categories: polarized emission from dust (Section <ref>), thermal continuum emission from dust (Section <ref>), and spectroscopic lines (Section <ref>), each of which are summarized in Table <ref>. The observations from SOFIA HAWC+ and ALMA are presented for the first time. Figure <ref> shows the new HAWC+ and ALMA data and Figure <ref> shows archival data. A detailed overview of the each dataset is provided in the following subsections.
§.§ Dust Polarization Maps
The linearly polarized intensity P can be found from the linear polarization Stokes parameters Q and U using:
P = √(Q^2 + U^2),
while the polarization fraction is given by p = P/I, where I is the total intensity.
The polarized intensity and polarization fraction are both constrained to be positive quantities which results in a positive bias at low total intensities <cit.>. This can be corrected for with `debiasing' <cit.>, using:
P_db = √(Q^2 + U^2 - 1/2(δ Q^2 + δ U^2)),
where δ Q and δ U are the measurement uncertainties in Q and U, respectively. The polarization angle Ê can be calculated using Ê=0.5 arctan(U,Q). The orientation of the plane-of-sky magnetic field is then inferred to be orthogonal to Ê such that:
B̂_⊥=1/2arctan(U,Q) + π/2,
where an angle of 0^∘ points towards Equatorial north and increases eastward.
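As a practical illustration, the debiasing and field-angle relations above can be applied pixel-by-pixel to the Stokes maps. The sketch below is our own; in particular, clipping negative debiased values to zero is an added assumption, not a step prescribed in the text:

```python
import numpy as np

def debiased_polarization(I, Q, U, dQ, dU):
    """Debiased polarized intensity P_db, polarization fraction p = P_db/I,
    and the inferred plane-of-sky field angle B = 0.5*arctan2(U, Q) + pi/2
    (radians, increasing eastward from north)."""
    P_db = np.sqrt(np.clip(Q**2 + U**2 - 0.5 * (dQ**2 + dU**2), 0.0, None))
    p = P_db / I
    B_angle = 0.5 * np.arctan2(U, Q) + np.pi / 2
    return P_db, p, B_angle
```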
§.§.§ SOFIA HAWC+
In this paper we publish, for the first time, observations of RCW 36 using publicly available archival data from HAWC+ <cit.>, the far-IR polarimeter onboard SOFIA. The RCW 36 region was observed by SOFIA/HAWC+ on 6 and 14 June 2018 as part of the Guaranteed Time Observing (GTO) program (AOR: 70_0609_12), in both Band C (89 μm) and Band E (214 μm), at nominal angular resolutions at FWHM of 7.8 and 18.2, respectively. The observations were done using the matched-chop-nod method <cit.> with a chopping of frequency of 10.2 Hz, chop angle of 112.4^∘, nod angle of -67.5^∘ and chop throw of 240. Each observing block consisted of four dithered positions, displaced by 12 for Band C and 27 for Band E. The total observation time for Band C and Band E was 2845s and 947s, respectively. The total intensity maps for each band are shown in the top row of Figure <ref>.
To reduce this data, we used the HAWC+ Data Reduction Pipeline which is described in detail in <cit.> and <cit.> and summarized here as follows. The pipeline begins by demodulating the data and discarding any data points affected by erroneous telescope movements or other data acquisition errors. This demodulated data is then flat-fielded to calibrate for gain fluctuations between pixels and combined into four sky images per independent pointing. The final Stokes I, Q and U maps are generated from these four maps after performing flux calibrations accounting for the atmospheric opacity and pointing offsets. Next, the polarization is debiased (see Equation <ref>) and the polarization percentage and the polarization angle are calculated.
A χ^2 statistic is then computed by comparing the consistency between repeated measurements to estimate additional sources of uncertainties such as noise correlated across pixels which can lead to an underestimation of errors in the IQU maps <cit.>. This underestimation can be corrected for using an Excess Noise Factor (ENF) given by ENF = √(χ^2/χ_theo) where χ_theo is theoretically expected and χ^2 is measured. The ENF is estimated in the HAWC+ Reduction Pipeline by fitting two parameters I_0 and C_0 using
ENF = C_0√(1 + (I/5I_0)^2),
where I is the total intensity (Stokes I) of a pixel in units of Jy pixel^-1. The errors of the IQU maps are then multiplied by the ENF. The values of I_0, C_0 used for I, Q, and U maps in Band C and E are summarized in Table <ref>. We note that the Stokes I errors for Band E were a special case where the Pipeline ENF fitting routine failed (giving a value of I_0=0 which resulted in a non-physical ENF), likely due to large intensities from bright emission. In this case we forced the ENF to be about 1 (by manually setting I_0 to be a sufficiently large number such as 100 and C_0=1) such that the errors were neither under nor overestimated.
After correcting the errors, the pipeline rejects any measurements falling below the 3-σ cutoff in the degree of polarization p to associated uncertainty σ_p (p < 3σ_p) which roughly corresponds to a 10^∘ uncertainty in the polarization angle.
After running the pipeline, we also applied a 3-σ signal-to-noise threshold on the total intensity flux and polarized flux to further remove noisy polarization vectors. As a final diagnostic we checked for potential contamination from the reference beam position due to dithering the data in Chop-Nod polarization observations <cit.>. For this test we used Herschel far-IR intensity maps (described in Section <ref>) since they cover both the RCW 36 region and the surrounding Vela C cloud-scale region, which include the HAWC+ chop reference beam positions that are outside the HAWC+ maps. We used Herschel maps of comparable wavelengths to the HAWC+ data (PACS 70 μm to compare with HAWC+ 89 μm and PACS 160 and SPIRE 250 μm to compare with HAWC+ 214 μm) and found the ratio of the total intensity of the HAWC+ region compared to the chop reference beam region in the Herschel data. To estimate the ratio of polarization flux, we conservatively assume that there is a 10% polarization at the reference beam positions and remove points where the estimated polarized flux in the reference beam is more than 1/3 times the polarized flux in the HAWC+ map.
§.§ Dust Emission Maps
§.§.§ ALMA
In this work, we present two new interferometric data sets from the Atacama Large Millimeter/submillimeter Array (ALMA) and Atacama Compact Array (ACA).
The first dataset includes observations of dust continuum and line emission in Band 6 (1.1–1.4 mm) using the ACA with 7-m dishes (ID 2018.1.01003.S, Cycle 6, PI: Fissel, Laura). The observations took place from 22 April to 14 July of 2019 with a continuum sensitivity of 2.15 mJy beam^-1. The configuration resulted in a minimum baseline of 9 m and a maximum baseline of 48 m. The angular resolution is approximately 4.9 and the maximum recoverable scale is 28, which corresponds to spatial scales ranging from ∼0.02 to 0.12 pc (4410 to 25200 au) using a distance estimate of ∼ 900 pc for Vela C from <cit.>. The imaged area was 108 × 324, with 87 mosaic pointings and a mosaic spacing of 21.8. The average integration time was 20 minutes per mosaic pointing.
The second ALMA program used the 12-m array in the C43-1 configuration to observe both polarization mosaics of some of the dense cores identified in the ACA data, as well as larger spectroscopic and continuum observations with the same correlator configuration as the ACA observations (ID 2021.1.01365.S, Cycle 8, PI: Bij, Akanksha). The spectroscopic and continuum observations took place on 19–24 March 2022 with a continuum sensitivity of 0.2 mJy beam^-1. The configuration resulted in a minimum baseline of 14.6 m and a maximum baseline of 284 m. The angular resolution is approximately 1.2 and the maximum recoverable scale is 11.2, which corresponds to ∼0.0052 to 0.05 pc (1080 to 10080 au) resolution at the distance to Vela C. The imaged area was 42 × 85 in size, with 26 mosaic pointings and a mosaic spacing of 12.3. The average integration time was 9.5 minutes per mosaic pointing.
In this work, we present only the total intensity dust continuum maps from both datasets, as delivered by the QA2 (Quality Assurance 2) process, with no further data reduction performed. These maps are shown in the bottom row of Fig. <ref>. We do not analyze the spectral data and polarization mosaics from these observations in this work, as further data reduction is required and left for future studies.
§.§.§ Herschel SPIRE and PACS
To study the cloud structures probed by thermal dust emission, we use publicly available archival maps from the Herschel Space Observatory, which observed Vela C on 18 May 2010 <cit.> as part of the Herschel OB Young Stars (HOBYS) key programme <cit.>. The observations were conducted using the SPIRE instrument at 500, 350, and 250 μm <cit.>, and the PACS instrument at 70 and 160 μm <cit.>. Additionally, we include a Herschel-derived temperature map at an angular resolution of 36 and a 18 column density map. The column density map is derived from a spectral energy distribution fit to the 160, 250, 350 and 500 μm flux maps, following the procedure described in detail in Appendix A of <cit.>. We show the 250 μm, 70 μm, and temperature maps in panels b, c, and d of Figure <ref>, respectively.
§.§.§ Spitzer IRAC
To trace warmer dust grains, we use archival mid-IR maps from the Spitzer Space Telescope, obtained from the publicly available ISRA NASA/IPAC Infrared Science Archive[Available at https://irsa.ipac.caltech.edu/about.html]. Spitzer observed RCW 36 in May 2006, employing the four-channel camera IRAC to capture simultaneous broadband images at channels 1–4, covering bands centered at 3.6, 4.5, 5.8, and 8.0 μm, respectively <cit.>. IRAC uses two 5.2 × 5.2 fields of view, where one field simultaneously images at 3.6 and 5.8 μm and the other at 4.5 and 8.0 μm. All four detector arrays are 256 × 256 pixels, with 1.2 square pixels. This dataset was published in <cit.>. In this work, we only use data from 3.6 μm (channel 1) and 4.5 μm (channel 2) for our HRO analysis, as channels 3 and 4 have artifacts and saturation. Channels 1 and 2 have resolutions of 1.66 and 1.78, respectively. We show the Spitzer 3.6 μm map in panel j of Figure <ref>.
§.§ Atomic and Molecular Line Maps
The gas structure of the region is also of significant interest in understanding the dynamic importance of the magnetic field and how it has been affected by stellar feedback. To this end, we use a myriad of archival spectroscopic line data to probe different chemical, thermal and density conditions within the RCW 36 region. Table <ref> summarizes the lines of interest, including their transitions, rest frequencies, velocity resolution, and the channels used to make the integrated intensity maps. For our analysis of the spectroscopic data, we use both an integrated intensity map for a wide velocity range (Column 5 of Table <ref>) as well as channel maps over narrower velocity ranges (Column 6 of Table <ref>). Our spectral data cube analysis is described in Sections <ref> and <ref>.
§.§.§ SOFIA upGREAT FEEDBACK
To trace the PDR, we use a [CII] 158 μm map of the ^3P_3/2→ ^3P_1/2 transition (at native resolution of 14.1) and an [OI] 63 μm map of the ^3P_1→ ^3P_2 transition (at native resolution of 6.3) from the SOFIA FEEDBACK C^+ legacy survey <cit.>. The survey was conducted by the upGREAT (upgraded German REceiver for Astronomy at Terahertz frequencies) heterodyne spectrometer <cit.> onboard the SOFIA aircraft <cit.> on 6 June 2019 from New Zealand. The upGREAT receivers use a low frequency array (LFA) to cover the 1.9-2.5 THz band with 14 pixels and a high frequency array (HFA) covering the 4.7 THz line with 7 pixels. The on-the-fly observing strategy and data reduction for the survey is discussed in <cit.>. In this work, we use the data reduced by <cit.>, who smoothed the [CII] map to 20 and the [OI] map to 30 to reduce noise. They also applied principal component analysis (PCA) to identify and remove systematic components of baseline variation in the spectra. Both maps are 14.4 × 7.2 in size, with a spectral binning of 0.2 km/s, for which the typical rms noise is 0.8-1.0 K for [CII] and ∼0.8-1.5 K for [OI] <cit.>. The [CII] and [OI] integrated intensity maps, integrated from -20 to +20 km/s, are shown in panels k and l of Fig. <ref>, respectively.
§.§.§ APEX LAsMA
To trace the molecular gas regions in RCW 36, we use observations of ^12CO (3–2) and ^13CO (3–2) obtained on 27 September 2019 and 21 July 2021 with the heterodyne spectrometer LAsMA (Large APEX Sub-Millimetre Array), which is a 7 pixel receiver on the APEX telescope <cit.>. The maps were scanned in total power on-the-fly mode and are sized 20 × 15, with a beamsize of 18.2 at 345.8 GHz. <cit.> reduced the data to produce the baseline-subtracted spectra presented here, which have a spectral resolution of 0.2 km/s, pixel size of 9.1 and rms noise of 0.45 K. The ^12CO and ^13CO integrated intensity maps, integrated from -20 to +20 km/s, are shown in panels e and f of Fig. <ref>, respectively.
§.§.§ Mopra
In our analysis, we also utilize complementary molecular line surveys from the 22-m Mopra Telescope, which observed the large-scale dense gas over the entire Vela C molecular cloud from 2009 to 2013. In this work, we use only the (1–0) transitions of C^18O as well as HNC and N_2H^+ which were originally presented in <cit.>. The C^18O observations were performed by scanning long rectangular strips of 6 height in the galactic latitude and longitude directions using Mopra's fast-scanning mode. The HNC and N_2H^+ observations used overlapping 5 square raster maps. The data reduction procedure, performed by <cit.>, includes bandpass correction using off-source spectra, bandpass polynomial fitting and Hanning smoothing in velocity. The resulting FWHM angular resolution and velocity resolution is 33 and 0.18 km/s for C^18O, and 36 and ∼0.2 km/s for both HNC and N_2H^+. The integrated intensity maps for HNC, N_2H^+ and C^18O have been integrated over the velocity range given in Column 5 of Table <ref> and are shown in panels g–i of Fig. <ref>, respectively.
§ RCW 36 STRUCTURE
In this section, we give an overview of the morphological structure and magnetic field geometry of RCW 36 on varying spatial scales based on previous studies of the region, as well as inferences based on our observations. Figure <ref> showcases various continuum and polarization observations for RCW 36 that will be used to describe its general structure in the following subsections.
§.§ Cloud-Scale
On cloud scales of > 1 pc, the RCW 36 region is located within the Vela C giant molecular cloud. Vela C consists of a network of filaments, ridges and nests, which were identified by <cit.> using Herschel data. The densest and most prominent of the ridges is the Centre-Ridge, with column densities of > 30, and a length of roughly ∼10 pc <cit.>. The Center-Ridge contains the RCW 36 region. It has a bipolar nebula morphology <cit.> with two fairly symmetric lobes oriented in the east-west direction that are traced well by the green PACS 70 μm emission in the lower-left panel of Figure <ref> (see also the dotted green ellipses). This bipolar nebula is roughly centered around a young (1.1 ± 0.6 My) massive cluster with two late-type O-type stars and ∼350 members <cit.>. The position of the most massive star (spectral type O9V) is indicated by a white star-shaped marker in Fig. <ref>. The ionizing radiation from this cluster is powering an expanding gas shell traced by Hα (shown in blue, Fig. <ref>, left panel) <cit.>.
Bipolar regions are of great interest because, though they have been observed in other high-mass star forming regions such as S106 <cit.> and G319.88+00.79 <cit.>, they seem to be more rare than single bubbles <cit.>.
Within the bipolar cavities, <cit.> identified blue-shifted shells with a velocity of 5.2±0.5 km/s, likely driven by stellar winds from the massive cluster. Additionally, they find diffuse X-ray emission (observed with the Chandra X-ray Observatory) in and around the RCW 36 region which is tracing hot plasma created by the winds. However, <cit.> estimate that the energy of the hot plasma is 50–97% lower than the energy injected by stellar winds and reason that the missing energy may be due to plasma leakage, as has been previously suggested for RCW 49 <cit.>.
The magnetic field geometry on > 1 pc cloud scales is traced by the BLASTPol 500 μm polarization map <cit.> which has a FWHM 2.5 resolution, corresponding to 0.65 pc at the distance of Vela C. The BLASTPol magnetic field orientation is shown by vectors in top-left panel of Fig. <ref>, which follow a fairly uniform east-west morphology that is mostly perpendicular to the orientation of the dense ridge. However, around the north and south `bends' of the bipolar structure, the magnetic field lines also appear to bend inward towards the center, following the bipolar shape of the structure.
§.§ Filament-Scale
The middle panels of Figure <ref> highlight structures on filament scales of ∼0.1–1 pc and the cyan contours in all panels represent Herschel-derived column densities to show the filament. At the waist of the bipolar nebula, <cit.> identify a ring-like structure that extends ∼1 pc in radius and is oriented perpendicular to the bipolar nebula lobes, in the north-south direction (labeled in yellow in both the left and middle panels of Fig. <ref>). The majority of the dense material, as traced by the column density contours, is contained within this ring. <cit.> also model the kinematics of the ring and find that the north-eastern (NE) half is mainly blue-shifted while the south-western (SW) half is red-shifted, consistent with an expanding cloud with speeds of 1–2 km/s.
To trace the ionized gas, we use archival Hα data from the SuperCOSMOS H-alpha Survey <cit.>. From the SuperCOSMOS map, we note that the eastern side of the ring is seen in absorption, signifying that it is in front of the ionized gas and associated massive star cluster. In contrast, emission is seen across the western region of the ring, and therefore this part of the ring is likely behind the cluster.
The highest column density contours are observed within the SW half of the ring, where most of the next-generation star formation appears to be taking place. We henceforth refer to this region as the `Main-Fil' (labeled in white, middle panel, Fig. <ref>). <cit.> estimate that the mass per unit length of the Main-Fil region is 400±85 M_⊙ pc^-1. The Main-Fil is seen to host multiple star-forming cores and/or clumps which are shown by the black ALMA Band 6 continuum contours in the middle panel of Fig. <ref>.
Several diffuse filamentary structures traced by ^12CO and [CII] (shown in red and blue, respectively, in the middle panel of Fig. <ref>) can also be observed in the ambient cloud surrounding the ring. <cit.> have reasoned that due to the curved shape of these filaments, they are not part of the larger expanding shells but may have been low density filaments originally converging towards the centre dense ridge (similar to the converging filaments seen in Musca, B211/3, and DR 21; <cit.>) that have instead been swept away at velocities > 3 km/s due to stellar feedback.
The magnetic field geometry on filament scales, traced by SOFIA/HAWC+ 214 μm, is fairly consistent with the E-W geometry seen on cloud scales, with some interesting exceptions. The most striking deviation is the region located just east of the ionizing stars, hereafter referred to as `Flipped-Fil' (labeled in Fig. <ref>, middle and right panels). Here the magnetic field morphology, as traced by SOFIA/HAWC+, deviates from the E-W trend and abruptly flips almost by 90^∘ to follow a more N-S configuration. This geometry appears to follow the elongation of a lower density filament traced by ^12CO (red, middle panel), [CII] (blue, middle panel), and ^13CO emission (blue, right panel).
§.§ Core-Scale
The right panels of Figure <ref> show emission on sub-clump and core scales (< 0.1 pc) in RCW 36. These data reveal complex substructure within the Main-Fil region. The Main-Fil is clumpy with several bright rims, voids, and pillar-like structures identified by <cit.> (see their Figure 3). This matches our ALMA band 6 continuum observations as shown by the five near-round clumps and associated elongated pillars, as seen in the right panel Fig. <ref>.
<cit.> recognized that the bright rims appear near the end of the pillar-like structures. The bright rims are traced in the right panel of Fig. <ref> by Spitzer/IRAC 3.6 μm emission, which mainly traces hot dust, found at the edges of the PDR <cit.>. These rims appear to wrap around the cold ALMA clumps, without covering them completely, in a manner resembling bow-shocks (though actual bow-shocks are unlikely in this region). These bright rims are of great interest in this work and are therefore collectively labeled as `Bent-Fils' as they will be referred to in later sections. There is a prominent northern Bent-Fil and southern Bent-Fil shown by the two yellow-dotted ovals in the lower right panel of Fig. <ref>. The curved morphology of the Bent-Fils is noted by <cit.> to be likely due to tracing the inner border of the dense ring which is being progressively photoionized by the star cluster. Interestingly, the HAWC+ magnetic field morphology seems to follow the Bent-Fil features, deviating once again from the general E-W cloud scale magnetic field.
§ METHODS
§.§ Histogram of Relative Orientations
In this section we discuss the procedure of the Histogram of Relative Orientation (HRO) method <cit.> which computes the relative angle
ϕ(x,y) between the gradient vector field of a structure map M(x,y) and the plane-of-sky magnetic field orientation at each pixel. The steps for this procedure are outlined in the following subsections.
§.§ Preparing the Structure Map
In this work, we apply two different methods to obtain a two-dimensional structure map M(x,y). In the first approach, henceforth referred to as Single Map HRO (described further in <ref>), we compare the orientation of local structure at every location using one map M(x,y), for each of the datasets listed in Table <ref>, to the magnetic field orientation measured by dust polarization as was done by previous HRO studies <cit.>. In the second approach, applied only to the spectroscopic data cubes listed in Table <ref>, we slice the spectral line cube into multiple velocity slabs v_i and compare the orientation of structures in the integrated intensity of each slab M(x,y)_i to the inferred magnetic field. This quantifies the relative alignment as a function of line-of-sight velocity, and will therefore be referred to as the Velocity Dependent HRO (see <ref>).
§.§.§ Single Map HRO
The dust emission, column density, and temperature maps are already in the 2-D spatial format and are thus used directly as the M(x,y) structure map in the Single Map HRO analysis. For the spectroscopic cubes, we generate a single integrated line intensity map as M(x,y) (following the procedure of <cit.>). To calculate the integrated line intensity, a velocity range v_0–v_1 is selected for each molecular line over which the radiation temperature T_R in a given velocity channel v is integrated, using
M(x,y) = ∫_v_0^v_1 T_R dv.
The velocity ranges used to calculate the integrated intensity map in the Single Map HRO analysis for each spectral cube are specified in Column 5 of Table <ref>.
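As an illustration of this step, a minimal sketch in Python (the language of the software packages used in this work) is given below; it collapses a spectral cube with axes (velocity, y, x) into an integrated line intensity map over a chosen velocity range. The array layout, the function name, and the use of plain NumPy are our assumptions for illustration rather than the actual pipeline.

import numpy as np

def integrated_intensity(cube, velocities, v0, v1):
    """Integrate T_R over [v0, v1]; velocities gives the channel velocities."""
    sel = (velocities >= v0) & (velocities <= v1)
    dv = np.abs(np.median(np.diff(velocities)))      # channel width in km/s
    # Sum of T_R * dv over the selected channels, per spatial pixel
    return np.nansum(cube[sel, :, :], axis=0) * dv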
§.§.§ Velocity Dependent HRO
Additionally for the molecular line data, we also perform a Velocity Dependent HRO analysis. Here we slice a selected velocity range v_0–v_1 (specified in Column 6 of Table <ref>) into narrower velocity slabs with a width of 1 km/s. We increment the center velocities v_i of each slab by 0.5 km/s, such that v_i={v_0+0.5, v_0+1, …, v_1-1, v_1-0.5}. For every velocity v_i in the set, we generate an integrated intensity map M(x,y)_i using:
M(x,y)_i = ∫_v_i-0.5^v_i+0.5 T_R dv.
The 1 km/s width of the slabs is chosen to be roughly a factor of 5 larger than the ∼0.2 km/s velocity resolution of the data cubes (listed in Column 4 of Table <ref>), such that enough velocity channels are included in each integrated intensity map. This ensures that there is sufficient signal-to-noise in each slab and that small local fluctuations are averaged over. The HRO analysis is then repeated for each M(x,y)_i in the set.
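The slab construction described above can be sketched as follows, assuming the same cube layout as the previous sketch; the function name and keyword defaults are illustrative only.

import numpy as np

def velocity_slabs(cube, velocities, v0, v1, width=1.0, step=0.5):
    """Return slab centre velocities and the 1 km/s integrated intensity maps."""
    dv = np.abs(np.median(np.diff(velocities)))
    centres = np.arange(v0 + width / 2, v1 - width / 2 + 1e-6, step)
    slabs = []
    for vi in centres:
        sel = (velocities >= vi - width / 2) & (velocities <= vi + width / 2)
        slabs.append(np.nansum(cube[sel, :, :], axis=0) * dv)
    return centres, slabs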
§.§ Projection and Masking
To directly compare the structure map M(x,y) and the plane-of-sky magnetic field map, the next step is to ensure that both maps share the same spatial coordinate grid, such that there is a one-to-one mapping between the pixels. To do this, the map with the coarser pixel scale is projected onto the grid of the map with the finer pixel scale (i.e., if the M(x,y) map has lower resolution than the magnetic field map, then M(x,y) is projected onto the magnetic field grid). The pixel sizes of each dataset are given in Table <ref> for comparison. When the polarization data have the lower resolution of the two, rather than directly projecting the orientation of the magnetic field lines inferred from Eq. <ref>, we instead project the Stokes Q and U intensity maps separately and then recalculate the inferred field orientation. This avoids an incorrect assignment of vector orientation angles to the resampled pixels.
Next, a 3-σ signal-to-noise cut is applied to the data points in the structure map M(x,y). For the Single Map HRO analysis, all of the M(x,y) maps of the RCW 36 region are above this threshold for every point that overlaps with the polarization data, with the exception of the ALMA continuum maps, for which the signal is concentrated near the dense clumps. For the Velocity Dependent HRO analysis, the M(x,y)_i integrated intensity map of each velocity slab is masked individually.
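The regridding of Stokes Q and U, rather than of the derived angles, could look like the following sketch. The use of the reproject package, the angle convention (polarization angle χ = 0.5 arctan2(U, Q), rotated by 90° to give the inferred field orientation), and the function name are our assumptions; the actual pipeline and sign conventions may differ.

import numpy as np
from reproject import reproject_interp

def regrid_field_orientation(q_hdu, u_hdu, target_header):
    """Project Q and U onto the finer grid defined by target_header, then
    recompute the inferred plane-of-sky field orientation in degrees."""
    q_new, _ = reproject_interp(q_hdu, target_header)
    u_new, _ = reproject_interp(u_hdu, target_header)
    pol_angle = 0.5 * np.degrees(np.arctan2(u_new, q_new))   # E-vector angle
    return (pol_angle + 90.0) % 180.0                         # rotate to B-field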
§.§ Calculating the Relative Orientation Angle
To determine the orientation of elongated structures in M(x,y), we calculate the direction of the iso-contours ψ (x,y) (which is by definition perpendicular to the gradient vector field ∇M), given by:
ψ=arctan( δ M / δ x/δ M / δ y),
where ψ is calculated at each pixel (x,y). The partial derivatives are calculated by convolving M(x,y) with Gaussian derivative kernels G, using
δ M/δ x = M(x,y) ∗δ/δ x G(x,y),
and similarly δM/δ y = M(x,y) ⋆δ G(x,y) / δ y. This reduces noise and avoids erroneous relative angle measurements due to map pixelization. The size of the Gaussian kernels in angular units θ_G is chosen to be one third of the FWHM angular resolution θ_beam of the M(x,y) map, using θ_G = θ_beam / 3. If this kernel size θ_G is less than 3 pixels, then a minimum kernel size of 3 pixels is used instead. A summary of all the kernel sizes in angular units θ_G and pixel units l_G is provided in Columns 5 and 6, respectively, of Table <ref>. The same smoothing lengths listed in Table <ref> for the molecular line data are applied for the Velocity Dependent HRO analysis.
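A minimal sketch of this gradient calculation, using SciPy's Gaussian derivative filters, is given below. Treating the quoted kernel size as a FWHM (converted to a Gaussian standard deviation) is an assumption of the sketch; only the orientation of the resulting angle matters for the HRO.

import numpy as np
from scipy.ndimage import gaussian_filter

def iso_contour_angle(M, kernel_fwhm_pix):
    """Orientation psi (degrees) of the iso-contours of M, per pixel."""
    sigma = kernel_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    dM_dx = gaussian_filter(M, sigma, order=[0, 1])   # derivative along x (axis 1)
    dM_dy = gaussian_filter(M, sigma, order=[1, 0])   # derivative along y (axis 0)
    return np.degrees(np.arctan2(dM_dx, dM_dy))       # psi = arctan(dM/dx / dM/dy)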
The relative angle ϕ(x,y) between the iso-contour direction ψ(x,y) and the plane-of-sky magnetic field can then be computed with:
ϕ≡arctan( |ψ×B̂_⊥|/ψ·B̂_⊥).
The resulting ϕ falls within the range [0^∘, 180^∘], but since ϕ measures only an orientation and not a direction, the angles ϕ and 180^∘-ϕ correspond to the same degree of alignment. The range can therefore be wrapped onto [0^∘, 90^∘], as we are only concerned with the angular distance, such that ϕ=0^∘ (and equivalently ϕ=180^∘ before wrapping) corresponds to the local structures being aligned parallel to the magnetic field orientation, while ϕ=90^∘ corresponds to perpendicular relative alignment. A histogram can then be used to combine the relative angle measurements across all pixels in order to summarize the overall trend within the map. We place the ϕ(x,y) measurements into 20 bins over [0^∘, 90^∘], each 4.5^∘ wide.
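The wrapping and binning described above could be implemented as in the following sketch; the field orientation map is assumed to be in degrees on the same pixel grid as ψ, and the function names are ours.

import numpy as np

def relative_angles(psi_deg, bfield_deg):
    """Relative orientation angle phi in [0, 90] degrees, per pixel."""
    phi = np.abs(psi_deg - bfield_deg) % 180.0       # orientation difference
    return np.where(phi > 90.0, 180.0 - phi, phi)    # wrap onto [0, 90]

def hro_histogram(phi):
    """20 bins of 4.5 degrees each over [0, 90] degrees."""
    return np.histogram(phi[np.isfinite(phi)], bins=20, range=(0.0, 90.0))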
§.§ Projected Rayleigh Statistic
While the histogram is a useful tool for checking whether there is a preference towards a particular relative angle, we can go a step further and quantify the statistical significance of such a preference by calculating the Projected Rayleigh statistic (PRS) (as described in <cit.>). The PRS is a modified version of the classic Rayleigh statistic, which tests for a uniform distribution of angles using a random walk. The classic Rayleigh statistic characterizes the displacement from the origin if one were to take unit-sized steps in the direction determined by each angle: Z is the squared length of that displacement, normalized by the number of steps. Given a set θ_i of n angles within the range [0^∘, 360^∘], Z can be calculated as follows:
Z = (Σ_i^n cosθ_i)^2 + (Σ_i^n sinθ_i)^2/n,
where n is the number of data samples. To use the set of relative angles ϕ(x,y) in the range [0^∘, 90^∘] determined from the HROs, we map each angle to θ=2ϕ. The range of possible Z is then [0, n], where values of Z near 0 are expected if the angles θ_i are distributed uniformly at random, and Z=n is obtained if all angles are the same. Any significant deviation from the origin would signify that the angles θ_i have a directional preference and are non-uniform. While this statistic is useful for testing for uniformity, it cannot differentiate between a preference for parallel versus perpendicular alignment, which is what we would like to measure in the context of HROs. To achieve this, <cit.> modify this statistic by calculating only the horizontal displacement in the hypothetical random walk:
Z'_x = Σ_i^n cosθ_i/√(n/2).
Now a parallel relative angle ϕ_i=0 will map to θ=2ϕ=0 and give a positive cos(0)=1 contribution to Z'_x, while a perpendicular relative angle ϕ_i=π/2 will map to θ=2ϕ=π and give a negative cos(π)=-1 contribution to Z'_x. Therefore, a statistic of Z'_x ≫ 0 indicates strong parallel alignment, while Z'_x ≪ 0 indicates strong perpendicular alignment. Since the value of Z'_x is within the range [-√(2n), +√(2n)], the statistic can be normalized by √(2n) to give a measure of the degree of alignment:
Z'_x,norm = Z'_x/√(2n),
where a value of Z'_x,norm=±1 would correspond to perfectly parallel or perpendicular alignment.
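A compact sketch of the uncorrected and normalized PRS, following the equations above (function name ours):

import numpy as np

def projected_rayleigh(phi_deg):
    """Return (Z'_x, Z'_x,norm) from relative angles phi in degrees."""
    theta = 2.0 * np.radians(phi_deg[np.isfinite(phi_deg)])
    n = theta.size
    z_prime = np.sum(np.cos(theta)) / np.sqrt(n / 2.0)
    return z_prime, z_prime / np.sqrt(2.0 * n)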
If the n data points are independent, then Z'_x should have an uncertainty of 1. However, most of the maps used in this HRO study are over-sampled and adjacent pixels are not entirely independent. Since the magnitude of Z'_x scales with the number of data points as n^1/2 and the relative alignment ϕ is measured at each pixel (x,y), oversampling within the map can result in a misleadingly large |Z'_x|. To determine the statistical significance of Z'_x, we follow the methodology in <cit.> and correct for oversampling by repeating the HRO analysis on 1000 independent white noise maps M_WN, smoothed to the same resolution as M(x,y) and compared to the magnetic field orientation.
The white noise maps M_WN are generated to be the same size as the real data (M(x,y)), using Gaussian noise with a mean and standard deviation of M(x,y). In these maps the orientation of the gradient will be a uniformly random distribution. The white noise M_WN map then follows the same projection procedure that was applied to M(x,y) in Section <ref> followed by the same mask which was applied to the real data M(x,y). We calculate the corresponding PRS, Z'_WN for each white noise map and determine the mean ⟨ Z'_WN⟩ and standard deviation σ_Z'_WN of the PRS in the 1000 runs. The value of σ_Z'_WN estimates the amount of oversampling in the map. We can then correct the PRS for oversampling using:
Z_x = Z^'_x/σ_Z^'_WN.
After correction, the uncertainty on the corrected PRS is unity, and a magnitude of | Z_x| > 3 is considered statistically significant. The number of independent samples can be estimated as
n_ind = n/(σ_Z'_WN)^2.
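The oversampling correction could be sketched as follows, reusing the functions defined in the sketches above. Drawing the white noise from a Gaussian with the mean and standard deviation of M(x,y), and smoothing it to the beam with a single Gaussian kernel, are simplifications of the procedure described in the text.

import numpy as np
from scipy.ndimage import gaussian_filter

def corrected_prs(M, bfield_deg, mask, kernel_fwhm_pix, beam_fwhm_pix,
                  n_noise=1000, seed=0):
    """Return (Z_x, sigma_Z'_WN) for structure map M and field angles bfield_deg."""
    rng = np.random.default_rng(seed)
    psi = iso_contour_angle(M, kernel_fwhm_pix)
    z_prime, _ = projected_rayleigh(relative_angles(psi, bfield_deg)[mask])

    sigma_beam = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    z_noise = []
    for _ in range(n_noise):
        noise = rng.normal(np.nanmean(M), np.nanstd(M), size=M.shape)
        noise = gaussian_filter(noise, sigma_beam)            # smooth to the beam
        psi_n = iso_contour_angle(noise, kernel_fwhm_pix)
        z_n, _ = projected_rayleigh(relative_angles(psi_n, bfield_deg)[mask])
        z_noise.append(z_n)
    sigma_wn = np.std(z_noise)
    return z_prime / sigma_wn, sigma_wn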
For the Velocity Dependent HRO analysis, a corrected PRS Z_x_i is measured for each integrated intensity map M(x,y)_i found at the velocity slab centered at the velocity v_i. This generates a PRS as a function of velocity.
§ RESULTS
§.§ HRO between Band C and Band E
In this section, we use the HRO method to compare the magnetic field orientations inferred by SOFIA at 89 μm (Band C) and 214 μm (Band E). Figure <ref> shows the relative angles between the Band C (B̂_⊥ C) and Band E (B̂_⊥ E) magnetic field vectors at each pixel, calculated using Equation <ref>. We find that the relative angles are near-parallel (ϕ∼ 0^∘) at most locations in RCW 36, signifying that the magnetic field orientations in the two different bands are highly consistent. This gives a mean relative angle of ⟨ϕ⟩ = 6.6^∘ as well as a large positive PRS of Z_x=69.8 and a normalized statistic of Z'_x,norm=0.95, indicating a strong preference for parallel relative alignment. A discussion of this result is given in Section <ref>.
Since the magnetic field orientations in the two bands are very similar, we present only the Band C HRO results in the main text and defer the Band E results to Appendix <ref>. The Band C Single Map and Velocity Dependent HRO results are presented in Sections <ref> and <ref>, respectively.
§.§ Single Map HRO Results
Table <ref> summarizes the Single Map HRO results, where the SOFIA/HAWC+ Band C data have been used to infer the magnetic field orientation. Most tracers have a negative PRS (Z_x < 0), indicating a statistical preference for perpendicular alignment. There are also some notable exceptions that have a positive PRS, indicating a preference for parallel alignment. We discuss the results from the various tracers below.
The single-dish dust emission maps show a distinct variation in the magnitude of the PRS with wavelength. The top panel of Figure <ref> shows the oversampling-corrected Z_x values, which indicate the statistical significance of the PRS, while the bottom panel shows the normalized, uncorrected values, which indicate the degree of alignment. Both panels show that the corrected and normalized statistics are negative for the sub-mm and far-IR Herschel and SOFIA data, indicating a preference for perpendicular alignment, and positive for the mid-IR Spitzer data, indicating a preference for parallel alignment. All the maps have statistically significant values (i.e., | Z_x| > 3, where the uncertainty on Z_x is 1). A notable trend is seen for the normalized statistic, whose values roughly increase (i.e., become less negative) as the wavelength decreases, suggesting that successively more structures within the maps align parallel to the magnetic field at shorter wavelengths. In comparison, the oversampling-corrected statistic decreases from 500–250 μm and peaks in magnitude at 214 μm. This is because the magnitude of the corrected statistic scales with the number of independent samples in the masked area, and lower values are expected for the same degree of alignment at longer wavelengths with larger beams, where there are fewer independent samples over the same area (see Equation <ref>). We also ran Monte Carlo simulations to test whether measurement uncertainties in the magnetic field orientation could affect our measured PRS values. We find that the uncertainty in the relative angles ϕ(x,y) has a negligible effect on the PRS, resulting in an uncertainty of ±0.2 for the corrected statistic and ±0.002 for the normalized statistic. These tests and a discussion of our error propagation methods are described in Appendix <ref>.
Figure <ref> summarizes the oversampling-corrected and normalized PRS for the total integrated intensity spectral line maps. Unlike the values for the dust maps shown in Fig. <ref>, many of the spectral line intensity maps show no overall preference for alignment, or only a statistically insignificant alignment trend. These maps contain structures with different alignment preferences relative to the magnetic field in different regions, or overlapping along the line of sight. In Section <ref>, we show that some of the structures that overlap in the integrated intensity maps can be decomposed into different line-of-sight velocity channels. In some cases, particularly for the Mopra observations, the low PRS magnitudes are also in part due to the lower resolution and higher noise levels of the spectroscopic data. Overall, we note that all gas tracers have a negative Z_x, signifying a preference for perpendicular alignment, with the exception of one tracer associated with the PDR (discussed below), which has a positive Z_x.
In Figure <ref> we identify which structures are aligned with the magnetic field for a select number of datasets. Similar plots for the remaining datasets in Table <ref> are shown and discussed in Appendix <ref>, Figures <ref>–<ref>. The right column shows the structure map M(x,y) overlaid with the magnetic field orientation. The middle column shows the relative angle ϕ(x,y) calculated at each location in the region, where purple (ϕ∼0^∘) is associated with local parallel alignment and orange (ϕ∼90^∘) is associated with local perpendicular alignment relative to the magnetic field. The left column summarizes the alignment trend using a histogram of the relative angles.
From Fig. <ref>, we first compare the relative angle maps for dust emission at wavelengths of 500 μm and 89 μm. We note that for both maps, the majority of the relative angles are near-perpendicular and are concentrated at the left and right sides of the dust map, which correspond to the east and west halves of the dense molecular ring labeled in Figure <ref>. This is consistent with the visual observation that the ring is elongated approximately along the north-south direction, which is oriented roughly perpendicular to the mostly east-west magnetic field morphology from the HAWC+ Band C observations. Both the 500 μm and 89 μm maps also show near-parallel relative angle measurements (ϕ∼ 0^∘) within the roughly N-S oriented Flipped-Fil structure, which is oriented parallel to the local N-S magnetic field. The main difference between the 500 μm and 89 μm maps, however, appears to be the south Bent-Fil structure (labeled in Figure <ref>). This structure is traced at the shorter 89 μm wavelength, but not at 500 μm. Since the Bent-Fil structures are elongated E-W, parallel to the local magnetic field, this results in the 89 μm map having an overall lower |Z_x| magnitude, indicating less of a statistical preference for perpendicular alignment relative to the magnetic field. This general trend is noted for all other sub-mm and far-IR dust maps from 350–70 μm as well (see Appendix <ref>, Fig. <ref>), where the emergence of the southern Bent-Fil structure at wavelengths < 160 μm results in less negative Z_x values, as observed in Fig. <ref>.
However, unlike the far-IR and sub-mm dust maps, the 3.6 μm Spitzer data are less sensitive to the high column density ring structure and instead predominantly trace emission near the north and south Bent-Fils, which are oriented parallel to the E-W magnetic field. The lack of perpendicular relative angles from the dense ring results in an overall positive Z_x.
The ALMA continuum maps show no significant preference for either parallel or perpendicular alignment. This is likely because the ALMA interferometer resolves out much of the large-scale dense ring, Main-Fil, Flipped-Fil and Bent-Fil structures (see the full discussion in Appendix <ref> and Figure <ref>).
Examining the molecular gas maps, we find that ^13CO, which is sensitive to intermediate density gas, is able to trace both the dense molecular ring and the south Bent-Fil structure, resulting in a Z_x value comparable to that of the 89 μm dust map.
For the gas tracer with a positive Z_x noted above, the relative angle map shows that the E-W Bent-Fil structures contribute about the same number of parallel relative orientations (ϕ∼ 0^∘) as the perpendicular (ϕ∼ 90^∘) relative orientations near the N-S ring. This results in a PRS (Z_x=+0.5) that is close to 0, with no statistical preference for either perpendicular or parallel alignment relative to the magnetic field. Distinctively, its histogram of relative angles does not peak at near-parallel or near-perpendicular angles, but rather close to ϕ∼ 40^∘. Though it is not statistically significant, this positive result is interesting as it contrasts with the negative results of all other spectral line tracers, which predominantly trace the dense molecular ring (Fig. <ref>). Furthermore, we see that the integrated intensity of this tracer correlates with the Spitzer emission, which probes warm dust likely found near PDRs. It is therefore noteworthy that the results for both this tracer and the Spitzer maps are positive, indicating that structures associated with PDRs have a preference towards parallel alignment relative to the magnetic field.
In summary, we find a fairly bimodal trend in the Single Map HRO results. Maps which predominantly trace the high-column density ridge and ring structure (such as longer wavelength dust maps and high-density gas tracers) show an overall preference for perpendicular alignment relative to the magnetic field, whereas maps which trace more diffuse structures near or within the PDR (such as mid-infrared dust maps and PDR-tracing lines) show more of a tendency towards parallel alignment relative to the magnetic field. Maps which show a combination of the two types of structures (such as 160–70 μm dust maps and low-to-intermediate density gas tracers) show both regions of parallel and perpendicular alignment relative to the local magnetic field, resulting in a final PRS of lower magnitude. We discuss some caveats and considerations of our HRO analysis in Appendix <ref>. In the next section, we discuss the HRO analysis for different line-of-sight velocity ranges in the spectral line cubes. We use this Velocity Dependent HRO approach to examine the relationship between the orientation of the different line-of-sight gas structures and the magnetic field.
§.§ Velocity Dependent HRO
In this section we present the results of the Velocity Dependent HRO analysis, which measures the PRS of a spectral cube as a function of line-of-sight velocity using the method described in Section <ref>. Figure <ref> shows the corrected PRS results for the different spectral lines as a function of velocity. We note that while the magnitude of Z_x is not always statistically significant (< 3σ) over the velocity range for all tracers, the overall trend is interesting and consistent with the Single Map HRO results. The intermediate and dense gas tracers (C^18O, HNC, and N_2H^+) typically have statistically significant negative Z_x values, especially around 5–8 km/s, implying a preference for perpendicular alignment. This velocity range matches the mean line-of-sight velocity of the cloud of around 7 km/s <cit.>. In contrast, the PRS results for other tracers switch from negative values around 5–6 km/s to statistically significant positive values at 9–11 km/s, indicating a preference for parallel alignment at higher line-of-sight velocities.
Figure <ref> also demonstrates the limitations of using a single integrated intensity map for a spectroscopic cube in the HRO analysis, as was done in Section <ref>. Since there can be multiple overlapping elongated structures at different line-of-sight velocities, measuring the PRS from only one integrated intensity map may result in a loss of information on the alignment preferences of kinematically distinct structures. To differentiate which structures are being observed at different velocities, Figures <ref>–<ref> show channel emission maps from 4–10 km/s.
In the ^13CO channel maps (shown in Fig. <ref>), we notice that emission from the northeastern half of the ring structure is most prominent at ∼4–6 km/s, while the southwestern section of the ring is most prominent at ∼6–8 km/s. Since the ring, including the Main-Fil, is oriented N-S, approximately perpendicular to the E-W magnetic field, the alignment at these velocities is preferentially perpendicular and the overall Z_x is therefore negative. However, the area of the northeastern region of the ring is smaller and contains fewer HAWC+ Band C polarization detections, leading to fewer ϕ(x,y)∼90^∘ pixels at 4–5 km/s compared to the larger southwestern component of the ring, and resulting in a less negative Z_x. At line-of-sight velocities > 8 km/s, the gas traces the Flipped-Fil and north Bent-Fil, causing a weak preference for parallel alignment around 10 km/s.
Similarly, in the channel maps shown in Fig. <ref>, we notice that the eastern half of the ring can be seen in the 4–5 and 5–6 km/s maps, followed by the western half of the ring at 6–8 km/s along with the southern Bent-Fil, all of which result in a preferentially perpendicular alignment and a negative Z_x for these velocity channels. At higher velocities we see emission from the Flipped-Fil and northern Bent-Fil at 8–10 km/s, resulting in an overall positive Z_x in these velocity channels. Since the Bent-Fil features are more prominent in this tracer than in the other gas tracers, it has the largest positive Z_x magnitude.
In contrast, HNC (Fig. <ref>), which traces denser gas, mostly shows emission from the dense ring (eastern half at 4–6 km/s and western half at 6–8 km/s) and has a mostly negative Z_x. The channel maps for the rest of the molecular lines are shown in Appendix <ref>, Figures <ref>–<ref>, all of which show emission from the dense ring from 4–8 km/s, and the Flipped-Fil and Bent-Fil features from 8–10 km/s.
This switch from negative to positive PRS as a function of line-of-sight velocity is consistent with our Single Map HRO findings. In Section <ref> we noted a bimodal trend in the PRS, where high column density gas tracers showed a preference for perpendicular relative alignment while tracers associated with the PDR and warmer dust showed more parallel relative orientations. From the Velocity Dependent HRO analysis, we now learn that the dense gas and PDR structures are also kinematically distinct, such that the same PRS bimodality is observed as a function of line-of-sight velocity. In the next section, we interpret these results and suggest potential physical mechanisms that may be causing the observed trends.
§ DISCUSSION
The goal of this section is to better understand the physical processes behind the observed magnetic field geometry and morphology of the star-forming structures within the RCW 36 region. We are particularly interested in understanding the energetic impact of the magnetic field and stellar feedback in shaping the gas dynamics. To do this, we examine the magnetic field observations inferred from HAWC+ in Section <ref>, followed by an interpretation of the HRO results and discussion of the origins of the Flipped-Fil structure in Sections <ref> and <ref>, and finally comment of the energetic balance of the region in Section <ref>.
§.§ The Magnetic Field Structure of RCW 36
In this section we discuss the polarization data from SOFIA/HAWC+ in more detail to try and infer the density scales for which the RCW 36 magnetic field is being traced i.e., the population of the dust grains which contribute to the majority of the polarized emission. To do this, we first estimate the optical depth of the dust emission in Section <ref> and compare the polarization data and magnetic field morphologies at the different HAWC+ wavelengths in Section <ref>.
§.§.§ Optical Depth of Dust Emission
To better understand the location of the dust grains contributing to the SOFIA/HAWC+ polarized emission maps, we estimate the optical depth τ_ν to check whether the magnetic field is being inferred from the average column of material along the entire line of sight or whether it traces only the outer surface layer of an opaque dust cloud. The full method is discussed in Appendix <ref> and summarized here. We use τ_ν = N_H_2μ m_H R_dgκ_ν, where N_H_2 is the column density, μ is the mean molecular weight, m_H is the mass of hydrogen, R_dg is the dust-to-gas ratio, and κ_ν is the dust opacity. We adopt the same dust opacity law (given in Appendix <ref>) as in previous HOBYS and Herschel Gould Belt Survey <cit.> studies <cit.>. The opacity law is independent of temperature and assumes a dust-to-gas fraction of 1%. We use a Herschel column density map derived by <cit.>, which has an angular resolution of 36″ and is different from the 18″ column density map listed in Table <ref> that is used for the HRO analysis. We choose the 36″ column density map since its resolution matches that of the temperature map. The assumed dust opacity law from <cit.> and spectral index are also consistent with <cit.> and <cit.>.
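For illustration, a hedged sketch of this optical depth estimate is given below. The specific opacity normalization (10 cm^2 g^-1 of dust at 300 μm with β = 2, i.e., 0.1 cm^2 g^-1 of gas+dust for R_dg = 1%) stands in for the opacity law referenced above and is an assumption of this sketch, as is the use of the proton mass for m_H.

import astropy.units as u
import astropy.constants as const

def tau_dust(N_H2, wavelength, mu=2.8, R_dg=0.01,
             kappa300=10.0 * u.cm**2 / u.g, beta=2.0):
    """tau_nu = N_H2 * mu * m_H * R_dg * kappa_nu (dimensionless)."""
    kappa = kappa300 * (300.0 * u.um / wavelength) ** beta   # dust opacity
    return (N_H2 * mu * const.m_p * R_dg * kappa).decompose().value

# Example: tau_dust(1e23 / u.cm**2, 89 * u.um)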
We estimate that the dust emission at 214 μm (Band E) is optically thin (τ_ν ≪ 1) everywhere within RCW 36 (see Fig. <ref> in Appendix <ref>). We also find that the emission is fairly optically thin (τ_ν < 1) at 89 μm (Band C) for most regions, except for certain locations within the Main-Fil where τ_ν can reach values of ∼1.4. It should be noted that these optical depth estimates are uncertain due to the difference in resolution and the possibility of emission from very small dust grains (see Appendix <ref> for details). As a first approximation, however, we find that for most regions in RCW 36 the dust emission should be optically thin in HAWC+ Bands C and E, and we should therefore be able to probe the full dust column.
§.§.§ Magnetic Field Comparison
In this section, we discuss the wavelength dependence of the polarization data from HAWC+ since the Band C (89 μm) and Band E (214 μm) may be sensitive to different dust grain populations. Emission at 89 μm is typically more sensitive to warm (T ≥ 25 K) dust grains and less sensitive to cold (T ≤ 15 K) dust grains. This is in contrast to 214 μm, which can also probe the magnetic field orientation in colder, more shielded dust columns. However, in Section <ref>, we used the HRO method to statistically show that the magnetic field morphologies inferred from Band C and Band E are almost identical.
This high degree of similarity could suggest that the Band E observations may be measuring polarized radiation mostly emitted by warm dust grains, similar to Band C. Alternatively, the magnetic field morphology in regions where the dust grains are warmer (T > 25 K) may be similar to the field morphology over a wider range of dust grain temperatures. In either case, the Band C polarization data trace dust grains with higher temperatures which, in a high-mass star-forming region like RCW 36, are likely being heated by the radiation of the massive stellar cluster. This warm dust is therefore probably located near the expanding gas shell and associated PDR. This is also supported by the observation that the dust polarized intensity in Band C appears to correlate with PDR-tracing emission and anti-correlate with the ALMA ACA continuum, which traces cold dense cores (as shown in Appendix <ref>). Therefore the HAWC+ magnetic field is likely weighted towards the surface of the cloud near the PDR, rather than towards the colder, denser dust structures.
Aside from dust temperature, the similarity between Band C and Band E magnetic field orientations may further indicate that the magnetic fields are likely being traced at comparable scales and densities in the two bands. Moreover, a consistent magnetic field morphology can be expected at the different wavelengths if the dust emission is optically thin, as previously suggested.
One noteworthy difference between the Band C and Band E datasets, however, was found by comparing the total polarized intensity given by Eq. <ref> in Band C (P_C; smoothed to the resolution of Band E) to Band E (P_E). Figure <ref> shows that the ratio of P_C/P_E is close to unity for the majority of RCW 36 except for certain regions. These regions have higher polarized intensities in the Band C map than they do in Band E by a factor of ∼2–4. Interestingly, the regions also overlap with where the HAWC+ magnetic field is seen to deviate from the general E-W trend of the BLASTPol magnetic field, i.e the Flipped-Fil and north Bent-Fil. This may be because the dust grains traced by the Band C map produce radiation with a higher degree of linear polarization, due to higher grain alignment efficiency based on a change in temperature and/or emissivity. A similar analysis has been performed by <cit.> and <cit.> who also compared the polarization ratio in different bands. Radiative alignment torques, which are thought to be responsible for the net alignment of the dust grains with their short axes parallel to the magnetic field, require anisotropic radiation fields from photons of wavelengths comparable or less than the grain size <cit.>. In this case, we may expect to see the polarization efficiency increase towards regions where the dust has been heated by the young star cluster, such as the PDR as was noted for the Bent-Fils. Another possibility is that the magnetic field lines are more ordered in the gas traced by the warm dust which is being preferentially traced in Band C. More ordered fields could mean less cancellation of the polarized emission and therefore a higher polarized intensity in comparison to a sight-line with more tangled fields. The geometry of the region is also a consideration. The warm dust structures could be inclined at a different angle compared to the cooler layers, as the dust polarized emission is only sensitive to the plane-of-sky magnetic field component.
§.§ Interpretation of HRO Results
§.§.§ Preferentially Perpendicular Alignment for Dense Tracers
In Section <ref>, we found that the structure maps M(x,y) which predominantly trace dense structures such as the ring and Main-Fil showed a statistically significant preference for perpendicular alignment relative to the filament-scale (7.8″ FWHM) magnetic field probed by HAWC+ Band C. This result is consistent with previous large-scale HRO studies which compared the alignment of structures within the Centre-Ridge of Vela C relative to the cloud-scale magnetic field probed by BLASTPol at 250, 350 and 500 μm (2.5′–3′ FWHM) <cit.>. These studies found that the relative alignment between large-scale structures in the Vela C cloud and the magnetic field is column density and density dependent.
<cit.> showed that for both the entire Vela C cloud and the Centre-Ridge region (containing RCW 36), the relative alignment trend transitions from preferentially parallel or no preferential alignment at low column densities, to preferentially perpendicular at high column densities. They find that the transition occurs at a threshold column density of N_H∼ 10^21.8 cm^-2. Additionally, <cit.> compared the orientation of the magnetic field inferred from BLASTPol 500 μm to integrated line intensity maps of different molecular lines tracing low, intermediate and high density structures averaged over the entire Vela C cloud. They found that the low density gas tracers were more likely to align parallel to the magnetic field while intermediate to high density tracers were more likely to align perpendicular, with the transition occurring at a density of n_H_2∼ 10^3 cm^-3. This signature transition to preferentially perpendicular alignment at a critical column density has been observed for several other molecular clouds as well <cit.>.
In this work, we do not see such a transition as a function of column density, and we only observe a preference for perpendicular relative alignment across the different column density bins. This could be because the RCW 36 region is the densest region within Vela C (with N_H ≳ 10^22.4 cm^-2) and most of its structures are above the critical column density.
We also compare our work to magnetic field studies done on comparable small-scales. <cit.> find that the Musca filament is oriented roughly perpendicular to the surrounding magnetic field morphology, as traced by SOFIA/HAWC+ 214 μm observations. Moreover, <cit.> applied the HRO method to dense cores in the Ophiuchus molecular cloud and found similar results of preferential perpendicular alignment between high column density, elongated filament and core-scale structures in ρ Oph A and ρ Oph E relative to the magnetic field traced by SOFIA/HAWC+ 154 μm observations. The prevalence of this perpendicular relative alignment trend across different star-forming regions in varying molecular cloud environments suggests that shared physical processes may be underlying the observations.
Possible interpretations of such processes have been explored by comparing observations to simulations. For instance, <cit.> propose that the degree of magnetization of a cloud impacts the trend in relative alignment, where the high-magnetization case specifically reproduces the transition from preferentially parallel to preferentially perpendicular at a critical density. Other studies such as <cit.> reason that the preferentially parallel relative alignment occurs in magnetically dominated (sub-Alfvénic) gas, while preferentially perpendicular relative alignment occurs in turbulence dominated (super-Alfvénic) gas with the transition occurring when the kinetic energy due to gravitational contraction becomes larger than the magnetic energy. This connection to the energy balance is consistent with <cit.>, who demonstrate that a transition from parallel to perpendicular relative alignment can occur as a result of convergent velocity flows, which could be due to gravitational collapse. They also find that the transition in alignment can occur when the large-scale magnetic field is strong enough to impose an anisotropic velocity field and set an energetically preferred direction in the gas flow. However, simulations also caution that projection effects are an important consideration in the interpretation of HRO results as <cit.> showed that the relative orientation trends also strongly dependent on viewing angle.
Based on these studies, we propose that the large-scale magnetic field surrounding RCW 36 may have been dynamically important during its formation, allowing gas to flow preferentially parallel to the E-W magnetic field lines. This may have resulted in the formation of an elongated molecular gas sheet or filament (currently the Centre-Ridge), since material could have been inhibited from collapsing perpendicular to the magnetic field lines in the N-S direction. As the region went on to form stars, <cit.> suggest that ionizing radiation from the massive star cluster would have then reshaped the surrounding gas into a bipolar nebula, forming a ring of dense material at the center as an H II region expanded into the elongated structure. Both the BLASTPol and HAWC+ maps show that the magnetic field lines pinch near the waist of the bipolar nebula, which could be evidence that the ram pressure may be overpowering the local magnetic pressure in that region, as the magnetic field lines are being warped along with the flux-frozen gas. So while the magnetic field may have set a preferred direction of gas flow during the formation of the Centre-Ridge and Main-Fil, it may no longer be energetically significant across all of the RCW 36 region since the birth of the massive stars.
§.§.§ Parallel Alignment for PDR Tracers
Section <ref> also showed that some regions and tracers had a preferential parallel alignment between elongated structures and the inferred magnetic field. The decrease in the statistic magnitude |Z_x| at dust map wavelengths shorter than 214 μm was found to be largely due to the gradual emergence of the north and south Bent-Fil features (labeled in Fig. <ref>), which are elongated along the orientation of the HAWC+ Band C magnetic field lines. The emergence of the Bent-Fils towards shorter wavelength (70–214 μm) Herschel and SOFIA dust maps implies that these features likely trace cloud structures with warmer dust populations near the PDR. The north and south Bent-Fils are also traced by the Spitzer mid-infrared 3.6–4.5 μm maps, which are sensitive to emission from hot dust found near the PDR.
The observation of a preferential parallel relative alignment between the direction of elongation of the Bent-Fils and the local Band C magnetic field orientation can be explained by the coupling of the gas and the magnetic field. We propose that stellar feedback in the form of ionizing radiation from the high-mass cluster may be warping the flux-frozen gas, thereby dragging the magnetic field lines along with it. The higher resolution and/or shorter wavelengths of the SOFIA/HAWC+ observations are able to trace the regions where the magnetic field orientation appears to be altered from the otherwise uniform east-west geometry traced by the 500 μm BLASTPol observations. The altered field lines follow the warped morphology seen for the bright-rimmed Bent-Fil regions traced by hot dust and other PDR tracers.
Furthermore, the Velocity Dependent HRO results (see Section <ref>) show that the Flipped-Fil and Bent-Fil structures had a line-of-sight velocity of ∼8–10 km/s, while the ring and Main-Fil structures were seen at velocities of ∼5–7 km/s. If the Bent-Fil features are in fact being warped by expansion pressures from the ionization front, then it may be expected that these features have different velocities compared to the dense structures, which may be more shielded. <cit.> estimate an expansion velocity of 1–2 km/s for the dense molecular ring, and expanding shells in the bipolar cavities with velocities of ∼5 km/s. However, expansion is only one explanation, and there are other plausible reasons why the dense ring and PDR regions could have different line-of-sight velocities, such as rotation, tidal forces, etc. If the magnetic field lines are indeed being altered by the radiation from the massive stars, then this may suggest that the magnetic field pressure is not sufficient to support the cloud structures against the kinetic energy injected by stellar feedback.
While the Flipped-Fil also shows a strong preference for parallel alignment relative to the local magnetic field and similar line-of-sight velocities as the Bent-Fils, it is not as clear at this stage whether the Flipped-Fil is an irradiated structure associated with warped gas near the PDR. Unlike the Bent-Fils, the Flipped-Fil is not preferentially observed at shorter wavelength dust maps but rather, appears faintly in dust emission across the wavelengths 500–70 μm (see Fig. <ref> and Appendix <ref>, Fig. <ref>). Furthermore, the Flipped-Fil is not traced by the Spitzer maps (see Fig. <ref> and Appendix <ref>, Fig. <ref>), which may be expected if the structure was associated with warmer dust grains. A full discussion of the origins of the Flipped-Fil is presented in the next section.
§.§ Origins of the Flipped-Fil
One region of particular interest throughout this study has been the Flipped-Fil (labeled in Fig. <ref>) due to the N-S orientation of the magnetic field lines locally within the filament, which is in stark contrast with the general E-W orientation of the surrounding HAWC+ Band C magnetic field morphology. While the magnetic field lines appear to deviate slightly from the E-W trend in several regions such as that Bent-Fils, the Flipped-Fil region is the most striking feature as the magnetic field lines appear to change direction more abruptly and are almost orthogonal to the magnetic field of the surroundings.
Observational effects like projection may be contributing to the abrupt 90^∘ change in 2-D orientation, which may not be as drastic in 3-D. A change in the grain alignment mechanism of the dust grains could also cause the near-discontinuous behaviour of the Flipped-Fil if the reference direction for grain alignment changed from the magnetic field to the radiation field, as has been theorized for other high-mass star forming regions such as the Orion Bar <cit.>.
The change in magnetic field orientation within the Flipped-Fil can also be explained through physical origins. One plausible formation scenario was presented by <cit.> for the Pillars of Creation in M16, whose magnetic field morphology resembles that of the Flipped-Fil. The scenario of <cit.> is summarized here. An ionization front fueled by photon flux from a massive radiating star or cluster is envisioned to approach molecular gas, which may have regions of varying density. The gas being dragged by the ionization front may bow around an over-density to form an elongated pillar. The flux-frozen magnetic field lines within the pillar would then follow the gas motion and end up perpendicular relative to the background magnetic field orientation. Such a structure could remain stable, as the compressed magnetic field lines would provide support against radial collapse since gas flow perpendicular to the field lines would be inhibited. The pillar may gradually erode in the length-wise direction, however, as gas flow parallel to the field lines would still be allowed.
While such a physical model may be applicable to a structure similar to the Flipped-Fil, there are obvious differences between our observations and the Pillars of Creation. In the spectral line data, the Flipped-Fil is observed at line-of-sight velocities of ∼8–10 km/s, which is red-shifted compared to the northern and southern halves of the dense ring. This arrangement could have occurred if the expansion of the H II region swept up the Flipped-Fil and pushed it behind the massive cluster, such that it is currently at a greater distance from us and thus receding at a faster line-of-sight velocity than the main ridge. It is also difficult to distinguish from a 2-D projection on the plane of the sky whether the Flipped-Fil is indeed a pillar or column-like structure or whether it is a ridge of material. Moreover, while the Pillars of Creation are photoionized columns, it is not immediately obvious whether the Flipped-Fil is directly associated with the PDR, as it is not seen in the Spitzer maps which trace hot dust, but is seen in integrated line intensity maps that trace irradiated dense gas. The lack of mid-infrared emission towards the Flipped-Fil is likely not due to absorption from foreground structures, as the region is associated with line emission (see Fig. <ref>). Additionally, at a column density of ∼13, the Flipped-Fil is traced by low and intermediate density gas tracers such as ^12CO and ^13CO, indicating that it has a molecular gas component, but it is not quite dense or cold enough to be traced as clearly in N_2H^+ and HNC.
The Flipped-Fil thus shows clear differences from the Bent-Fils, which are likely associated with the warm dust structures and traced by shorter wavelength maps (3.5–160 μm) as well as PDR tracers. The Pillars of Creation formation scenario suggested for the Flipped-Fil can also be applicable to the Bent-Fils. In this picture, the star-forming clumps seen in the ALMA ACA continuum data (see Fig. <ref>) could be the over-dense structures envisioned in Figure 5 of <cit.>, around which the bright-rimmed Bent-Fil structures are being bowed. The orientation of the bow shapes may then suggest the direction of these ionization fronts. This model is similar to that of <cit.>, who suggest from comparisons with the numerical simulations of <cit.> that the bright rims, or what we call `Bent-Fils', are the result of density enhancements in thin shells due to gas compression around the pillar-like structures.
While this pillar formation and similar origin scenarios for the Flipped-Fil and Bent-Fils are certainly plausible, there is insufficient evidence for it to be the favoured explanation. Higher resolution infrared observations may help distinguish these structures better, giving more insight to their morphology and origin.
§.§ Energetic Balance
In this section we examine the energetic balance of the RCW 36 region in light of the new HAWC+ polarization observations presented in this work.
The suggestion that the flux-frozen magnetic field lines are transitioning from their mostly E-W cloud-scale geometry to align parallel with feedback-associated structures implies that the magnetic field pressure must be less than the ram pressure. This change in morphology indicates that the magnetic field is being altered by feedback as it is unable to support the cloud structures from warping. Setting the ram pressure in equilibrium with the magnetic field pressure would therefore give an upper-limit on the magnetic field strength. <cit.> estimate the ram pressure energy density within the ring (labeled in Fig. <ref>) to be u_ram = 0.41–3.67× 10^-10 erg cm^-3. Assuming equipartition, we set the ram pressure energy density u_ram equal to the magnetic energy density u_B so that:
u_ram ≥ u_B = B^2/(8π)  (cgs),
and the upper limit on the magnetic field strength is estimated to be B=33–99 μG. This is lower than the magnetic field strength of 120 μG estimated by <cit.> for the Centre-Ridge using the Davis-Chandrasekhar–Fermi method. Furthermore, our estimate is also lower in comparison to similar high-mass star-forming regions. For instance, DR 21 is measured to have a magnetic field strength of 130 μG <cit.> and RCW 120 is estimated to have 100 μG <cit.>. Since our upper limit is crude and based on the assumption that the feedback is ram pressure dominated, we may be underestimating the magnetic field strength.
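This upper limit follows directly from the equipartition condition above; a short numerical check is given below (the result agrees with the quoted 33–99 μG range to within the rounding of the energy densities).

import numpy as np

for u_ram in (0.41e-10, 3.67e-10):              # erg cm^-3, range quoted above
    B_gauss = np.sqrt(8.0 * np.pi * u_ram)      # equipartition field, Gauss
    print(f"u_ram = {u_ram:.2e} erg/cm^3  ->  B <= {B_gauss * 1e6:.0f} microGauss")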
<cit.> also calculate that the turbulent energy density within the ring is u_turb= 4.1–5.1× 10^-10 erg cm^-3 which is comparable to the ram pressure energy density and our estimated upper-limit for the inferred magnetic field energy density. However, the magnetic field energy is likely not much weaker than the turbulent energy since a fairly ordered (rather than tangled) magnetic field geometry is observed in RCW 36, which is a signature of sub- or trans-Alfvénic conditions (where u_B≥ u_turb) <cit.>. If the turbulent energy was dominant, we would expect the magnetic field orientation to have more random variations, which would decrease any alignment trend and result in values with smaller magnitudes than our current measurements. Alternatively, the effects of turbulence on the magnetic field morphology may not be visible on the spatial scales probed by SOFIA/HAWC+ if the size of the turbulent eddies are smaller than the size of the beam, such that the polarization component from turbulent motion cancels out along the line-of-sight to give the appearance of low dispersion, as demonstrated in <cit.>. On filament-scales, the ordered magnetic field observations from HAWC+ suggest a near-equipartition between the magnetic field energy and turbulent energy, with the ram pressure from stellar feedback dominating in certain regions.
This interpretation is different from the large-scale HRO results using BLASTPol, which suggested that the magnetic field may have been dominating the energetic balance, setting a preferred direction of gas motion on cloud scales for the dense Centre-Ridge to form preferentially perpendicular relative to the magnetic field (see Section <ref>). This indicates that the dynamic importance of the magnetic field may be scale dependent in this region, or that the energetic balance has changed since the formation of the original generation of stars which is currently driving the feedback within RCW 36. It should also be noted that, since the filament-scale magnetic field traced by HAWC+ Band C is weighted towards warm dust, the magnetic field may only be less dynamically important near the PDR. Whether this is the case within the cold dense star-forming clumps remains unclear. Presumably gravitational in-fall will also be a strong contributing force to the energetic budget on core scales. A more in-depth analysis of the energetic balance within the cores and clumps, using polarization data at longer wavelengths and higher resolution, such as the polarization mosaics from our ALMA 12-m Band 6 program, is needed. We leave the analysis of that dataset for future work.
§ CONCLUSION
The motivation of this work was to better understand the combined influence of stellar feedback and magnetic fields on high-mass star formation. To do this, we targeted the extensively studied region RCW 36 in the Vela C giant molecular cloud, which has been previously observed using many different tracers. Adding to this suite of complementary data, we presented new, higher resolution observations of the magnetic field morphology inferred from SOFIA/HAWC+ linearly polarized dust emission maps at 89 and 214 μm at filament scales, as well as ALMA Band 6 continuum 1.1–1.4 mm data at clump scales.
We then employed the Histogram of Relative Orientations (HRO) method to compare the orientation of the HAWC+ magnetic field to the orientation of physical structures in RCW 36 as traced by 7 spectral lines and dust emission and continuum maps ranging from 3.6 μm to 1.4 mm, for a multi-scale, multi-wavelength study. Comparing our HRO results to previous larger cloud-scale studies and simulations, we discussed the implications of our findings on the energetic importance of the magnetic field in RCW 36. The main conclusions of this analysis are:
* We find that the inferred filament-scale magnetic field from HAWC+ generally matches the east-west morphology of the cloud-scale magnetic field inferred from BLASTPol, except for a few notable regions of interest. One exception we call the Flipped-Fil region, where the field switches to a roughly north-south orientation and the other exception we call the Bent-Fils region, where the field follows a bent shape around star forming clumps. We also find that the magnetic field morphologies inferred by Band C (89 μm) and Band E (214 μm) are highly similar, indicating that they may be tracing similar dust grain populations, scales and densities.
* The HRO analysis between the inferred magnetic field and single intensity maps shows differences in orientation between dense gas tracers and PDR tracers. Structures observed in dense gas tracers show a preference for perpendicular alignment relative to the magnetic field, whereas the tracers of warm dust and the PDR show a preference for parallel relative alignment. The aforementioned Flipped-Fil region, however, tends to be preferentially parallel in most tracers for which it is well detected, indicating that this is a special case.
* Repeating the HRO analysis for different line-of-sight velocities in the spectroscopic data cubes shows that the relative alignment of structures also varies with velocity. Structures associated with dense gas show a preference for perpendicular alignment relative to the magnetic field at line-of-sight velocities of 4–7 km/s, while structures associated with the PDR show a preference for parallel alignment at velocities of 8–11 km/s. This technique allows us to disentangle otherwise overlapping structures in the single integrated intensity map.
* The finding that the dense ridge of RCW 36 is oriented perpendicular to the magnetic field is consistent with previous cloud-scale HRO studies of the Centre-Ridge within Vela C <cit.>. Comparing this result to studies which applied the HRO method to synthetic observations of MHD simulations <cit.> suggests that the magnetic field may have been dynamically important on cloud scales when the dense ridge of RCW 36 first formed; however, this may no longer be the case after the formation of the massive stars.
* The HRO results from the warm dust and PDR tracers suggest that the magnetic field lines are perhaps being altered near the ionization front such that they align parallel relative to gas warped by stellar feedback. This could indicate that ram pressure and radiation from the nearby massive cluster may be dominating the energetic balance on filament-scales. This is potentially causing the flux-frozen magnetic fields to be bent in directions which follow the elongation of the bright-rimmed Bent-fil structures. The parallel relative alignment observed for the Flipped-Fil may have resulted from a formation scenario similar to what has been suggested by <cit.> for the Pillars of Creation where gas bows around an over-density creating a pillar-like structure in which the local magnetic field is rotated orthogonally in comparison to the background magnetic field orientation.
In conclusion, the SOFIA/HAWC+ polarization data provided new insights into the RCW 36 region, particularly regarding how the magnetic field may have been altered near the PDR region due to ionization from the massive stellar cluster. The filament-scale HRO analysis highlighted structures showing parallel alignment relative to the local magnetic field which were not observed in previous HRO cloud-scale studies. This altered magnetic field near the PDR may impact the formation of next-generation stars by influencing gas dynamics. Thus comparing the magnetic field from higher resolution, shorter wavelength polarization data to PDR tracers may offer useful insight when studying the impact of feedback on the magnetic field in other high mass star-forming regions as well.
§ ACKNOWLEDGMENTS
We thank the referee for their discerning feedback which has greatly improved the presentation of this work. This research has made use of data from the Herschel Gould Belt survey (HGBS) project (http://gouldbelt-herschel.cea.fr). The HGBS is a Herschel Key Programme jointly carried out by SPIRE Specialist Astronomy Group 3 (SAG 3), scientists of several institutes in the PACS Consortium (CEA Saclay, INAF-IFSI Rome and INAF-Arcetri, KU Leuven, MPIA Heidelberg), and scientists of the Herschel Science Center (HSC). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2018.1.01003.S, ADS/JAO.ALMA#2021.1.01365.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSTC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This study was partly based on observations made with the NASA/DLR SOFIA. SOFIA is jointly operated by the Universities Space Research Association Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI), under DLR contract 50 OK 0901 to the University of Stuttgart. upGREAT is a development by the MPIfR and the KOSMA/University Cologne, in cooperation with the DLR Institut für Optische Sensorsysteme. Financial support for FEEDBACK at the University of Maryland was provided by NASA through award SOF070077 issued by USRA. The FEEDBACK project is supported by the BMWI via DLR, project number 50 OR 2217 (FEEDBACK-plus). The BLASTPol telescope was supported by through grant numbers NNX13AE50G, 80NSSC18K0481,
NAG5-12785, NAG5-13301, NNGO-6GI11G, NNX0-9AB98G, the Illinois Space Grant Consortium, the Canadian Space Agency, the Leverhulme Trust through the Research Project Grant F/00 407/BN, the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, the Ontario Innovation Trust, and the US National Science Foundation Office of Polar Programs. L.M.F acknowledges support from the National Science and Engineering Research Council (NSERC) through Discovery Grant RGPIN/06266-2020, and funding through the Queen’s University Research Initiation Grant. G.N. is grateful for financial support for this work from NASA via award SOF06-0183 issued by USRA to Northwestern University. N.S. and R.S. acknowledge support by the SFB 1601, sub-project B2, funded by the DFG. T.G.S.P gratefully acknowledges support by the National Science Foundation under grant No. AST-2009842 and AST-2108989 and by NASA award #09-0215 issued by USRA.
SOFIA, ALMA, BLAST, APEX, Herschel, Spitzer, Mopra
Astropy <cit.>, NumPy <cit.>, Matplotlib <cit.>, Scipy <cit.>, APLpy <cit.>, Spectral-Cube <cit.>
§ HRO RESULTS USING HAWC+ BAND E
In this section, we present the HRO Single Map results (method described in Section <ref>) using the HAWC+ Band E data to infer the magnetic field orientation. Table <ref> gives the corrected and uncorrected PRS values for the different tracers. We find that the general trend of the Single Map results from the HAWC+ Band E data are fairly similar to the results found for Band C (see Table <ref>). The consistency of the results is due to the similarity of the magnetic field morphologies traced in Band C and Band E (see Figure <ref>).
Figure <ref> shows the PRS (Z_x) for the different single-dish dust map wavelengths. In both the Band C and Band E HRO analyses, the resulting sign (positive or negative) of Z_x as a function of dust wavelength is the same. The longer wavelength (500–70 μm) dust maps show a statistically significant (| Z_x| > 3) preference for perpendicular alignment, while the Spitzer maps show a statistically significant preference for parallel alignment. Similar to Band C, the Band E HRO results also show insignificant Z_x values for the ALMA continuum data. Furthermore, the Band E Single Map HRO results for the column density, temperature, and atomic and molecular lines are all consistent with the Band C results. Most gas tracers show a preference for perpendicular alignment, with the exception of ^12CO and two other tracers, which have an overall statistically insignificant Z_x. The only difference between the Band C and Band E results is the magnitude of the Z_x values.
Thus the Band E HRO results similarly find that tracers which are mostly sensitive to the N-S dense molecular ring and Main-Fil features show a preference for perpendicular alignment, while the tracers mostly sensitive to the E-W Bent-Fil features show a preference for parallel alignment. The interpretations of the results for Band C presented in Section <ref> are therefore also applicable to Band E.
§ UNCERTAINTY ESTIMATION FOR THE PRS
In Section <ref>, we discussed how the oversampling-corrected Projected Rayleigh statistic is expected to have an uncertainty of 1 <cit.>. These uncertainties do not, however, account for the measurement uncertainties in the magnetic field orientation and the structure map M(x,y), which may contribute additional sources of error in the HRO analysis. In this section, we perform Monte Carlo tests to propagate measurement uncertainties to the , , and calculations.
To estimate the impact of measurement uncertainties on the PRS, we repeat the HRO analysis for 1000 magnetic field map iterations (+ ), where a magnetic field orientation error term is added to the measured HAWC+ 89 μm (Band C) polarization angles. The error is drawn from a normal distribution centered at 0 with a standard deviation equal to the polarization angle error, which is estimated from the HAWC+ Data Reduction Pipeline (discussed in Section <ref>). The uncertainty of the uncorrected statistic is then determined from the distribution of values. We perform this test for two selected maps, the HAWC+ Band C intensity and the integrated intensity map. We choose these maps since they have different alignment trends. The Band C intensity has a strong preference for perpendicular alignment (Z_x=-9.3), while has no clear statistical preference for parallel or perpendicular alignment (Z_x=0.5). For both maps, the Monte Carlo analysis with angle uncertainties results in distributions with standard deviations of σ_Z'_x=0.2 and σ_Z'_x=0.002, respectively. The means of the distributions are the same as the Single Map HRO values obtained without measurement uncertainties (given in Table <ref>).
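A minimal sketch of such an angle Monte Carlo is given below. It is an illustration only, not the pipeline used in this work: the input arrays phi (measured relative angles) and sigma_psi (per-pixel polarization-angle errors) are hypothetical, and the normalization Z_x = Σ_i cos 2φ_i / √(n/2) is the commonly used form of the PRS, which may differ in detail from the statistic quoted here.

import numpy as np

def prs(phi):
    # projected Rayleigh statistic, assuming the common normalization sum(cos 2phi)/sqrt(n/2)
    phi = np.asarray(phi, dtype=float)
    return np.sum(np.cos(2.0 * phi)) / np.sqrt(phi.size / 2.0)

def prs_angle_mc(phi, sigma_psi, n_iter=1000, seed=0):
    # propagate Gaussian polarization-angle errors into the PRS; perturbing the inferred
    # B-field angle shifts the relative angle by the same amount
    rng = np.random.default_rng(seed)
    draws = np.array([prs(phi + rng.normal(0.0, sigma_psi, size=np.size(phi)))
                      for _ in range(n_iter)])
    return draws.mean(), draws.std()

# toy usage: a mostly perpendicular relative-angle distribution with ~10 deg angle errors;
# for an oversampling-corrected statistic, keep only one pixel per beam area before calling prs()
rng = np.random.default_rng(1)
phi = np.deg2rad(90.0 + 15.0 * rng.standard_normal(5000))
print(prs(phi), prs_angle_mc(phi, np.deg2rad(10.0)))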
To test whether measurement errors σ_M in the intensity maps M(x,y) could increase the uncertainty of our HRO analysis, we ran a Monte Carlo simulation that generates 1000 structure map iterations (M(x,y) + σ_M). We select the Band C intensity map for M(x,y) since the uncertainties in total intensity have been estimated by the HAWC+ Data Reduction pipeline. We then smooth the maps by the Gaussian gradient kernel (using the method described in Section <ref> with the same kernel sizes listed in Table <ref>) and calculate the relative angles for each iteration. We find the standard deviations of the PRS values for this test to be σ_Z'_x=0.8 and σ_Z'_x=0.006. We note that these uncertainties are slightly larger than those from the polarization angle error propagation. However, neither the structure map uncertainties nor the polarization angle uncertainties have a large impact on the final results.
The Monte Carlo tests presented in this section thus far have assumed that each pixel samples the probability distribution function of independently from neighbouring pixels. However, since the FWHM beam area spans many pixels, the measurement errors are correlated between adjacent pixels. To estimate the uncertainties on the oversampling-corrected , we re-calculate the PRS using only independent relative angle pixels (i.e., one pixel per FWHM beam area). Using this approach, our Monte Carlo test with measurement uncertainties () gives a standard deviation of σ_Z_x=0.2 for both the Band C and intensity maps, which is the same as the distribution found in the over-sampled case. The Monte Carlo test with M(x,y) measurement uncertainties (σ_M) gives a standard deviation of σ_Z_x=0.3 for the Band C intensity. In all cases, , and have standard deviations less than 1. Therefore, the uncertainties in the PRS statistics are primarily due to the distribution in the relative orientation angles sampled at different locations in the map, rather than to measurement uncertainties in the maps or the inferred magnetic field orientation.
§ SINGLE MAP HRO
§.§.§ Dust Emission
In this section, we show the remainder of the relative angle maps and histograms from the Band C (89 μm) Single Map HRO analysis that were not presented in Figure <ref>.
Figure <ref> shows the results for the 350–70 μm dust maps. We see that all maps trace the east and west halves of the N-S oriented dense ring, which contribute most of the perpendicular ϕ(x,y) measurements. Comparing the different wavelength maps, it can be seen that the emission from the longer wavelengths at 350 μm (first row) and 250 μm (second row) traces the ring structure most closely, particularly the denser western half which includes the Main-Fil, as outlined by the column density contours. This results in a higher degree of perpendicular alignment relative to the magnetic field, as signified by the higher-magnitude values for the longer wavelength dust maps in Table <ref>.
One similarity for all dust maps at 350–70 μm in Fig. <ref> is the alignment measured for the Flipped-Fil (labeled in Fig. <ref>). All relative angle maps find ϕ∼ 0^∘ within the Flipped-Fil structure, even though the region is not always particularly noticeable in the intensity maps. This result of a mostly parallel relative alignment within the Flipped-Fil is consistent with the visual observation of the filament being elongated approximately in the direction of the N-S local magnetic field orientation, as seen in the middle and right panels of Fig. <ref>. This is in contrast to the otherwise E-W orientation of the overall magnetic field morphology in the surrounding RCW 36 region.
The main difference between the different wavelengths is the emergence of the bright-rimmed Bent-Fil structures (labeled in Fig. <ref>), particularly the south Bent-Fil, which begins to appear over the south-western region of the dense ring at 160 μm and becomes increasingly prominent at 70 μm. The Band C HAWC+ magnetic field lines are observed to largely follow the geometry of these Bent-Fils in the direction of their E-W elongation, resulting in increasingly parallel relative orientations at 160 μm and 70 μm, which decreases the overall magnitude of the PRS, making it less negative than at 350–214 μm. This trend can also be seen in the left column HROs, which show a decreasing histogram density near ϕ∼ 90 ^∘ for the shorter wavelengths.
We contrast these results to the Spitzer data at 4.5 μm, shown in Figure <ref>. Similar to the 3.6 μm map shown in Fig. <ref>, the 4.5 μm map is also highly sensitive to the E-W Bent-Fil structures, but does not trace the dense molecular ring. Since the Bent-Fils are oriented roughly parallel to the E-W magnetic field, this results in a statistically significant positive . In Section <ref>, we made note of the apparent correlation between the emission of the Spitzer 3.6 μm map and the total integrated intensity. Here, we once again note the similarities between the Spitzer emission and the total integrated intensity map (shown in the second row of Fig. <ref>), which is also a PDR tracer like . The HRO results for the data are further discussed in Section <ref>.
We now analyze the HRO results of the Herschel-derived column density and temperature maps, shown in Figure <ref>. Similar to the longer wavelength dust maps, the column density map is also mostly sensitive to the dense molecular ring elongated in the roughly N-S direction, particularly the western half containing the Main-Fil. This results in a majority of locally perpendicular ϕ(x,y) angles relative to the E-W magnetic field morphology, giving an overall large negative . In contrast, the temperature map shows only a slight preference for perpendicular relative alignment as it appears to be mostly tracing the bipolar morphology of the region. The HAWC+ magnetic field follows the curvature of the bipolar nebula and thus results in more parallel relative angles between the magnetic field and structures in the temperature map and a lower magnitude .
Figure <ref> shows the results for the ALMA data. The HRO analysis for both the ALMA 12-m and ACA maps finds a statistically insignificant PRS of Z_x∼ 0, implying that the structures traced by ALMA do not have a preferred direction of orientation relative to the HAWC+ Band C inferred magnetic field. ALMA being an interferometer, resolves out much of the large scale structure such as the dense ring, Main-Fil, Flipped-Fil and Bent-Fils, which are the main features observed by the other dust maps. The HAWC+ Band C data is also likely not probing the magnetic field within the dense cores detected by ALMA as is further discussed in Appendix <ref>. This may explain why there is no correlation between the structures traced by the ALMA data and the magnetic field orientation inferred by SOFIA/HAWC+. Furthermore, there were not enough ALMA data points which were above a 3-σ signal-to-noise threshold that also overlapped with the HAWC+ vectors, to produce a robust PRS measurement. An improved HRO study would compare the structures traced by the ALMA continuum maps to the magnetic field on similar core-scales, such as that inferred from ALMA polarization mosaics. This is outside the scope of this paper.
§.§.§ Spectral lines
Figure <ref> shows the Single Map HRO for the different molecular gas tracers. We compare these results with the atomic gas data of in Figure <ref> and in Figure <ref>. For the atomic gas, the and data from SOFIA probe the transition from molecular to ionized gas in the PDR. The regions of parallel (purple) and perpendicular (orange) relative orientation angles in the relative angle map are similar to . The main difference between the two is that <cit.> traces more of the E-W oriented Bent-Fils in the west half of the ring as compared to the . This may be because the Bent-Fil features in the western half are more diffuse, while tends to trace denser PDR regions <cit.> and hotter gas (typically ∼200 K). As such, emission appears to better trace the N-S Main-Fil, resulting in a slight overall preference for perpendicular alignment as compared to .
Next, we examine the Single Map HRO results for the molecular line data shown in Figure <ref>. The high-density gas tracers such as HNC and N_2H^+ <cit.> clearly trace the densest N-S region in the west half of the ring, where the alignment of the gas structures with respect to the magnetic field is mostly perpendicular within the Main-Fil column density contours, resulting in a negative value. The C^18O data traces intermediate densities <cit.> similar to ^13CO, but the C^18O Mopra data has a lower signal-to-noise ratio and lower resolution, resulting in a lower than the APEX ^13CO data. Interestingly, we find that the south Bent-Fil traced by ^13CO and the HAWC+ 89 μm intensity (shown in Fig. <ref>) can also be somewhat seen in the maps of C^18O and HNC, but is not seen in the N_2H^+ map, which tends to trace only high-density and colder molecular gas. These observations are contrasted with the ^12CO integrated intensity map, which traces even lower-density molecular gas and shows very different structure compared to the other spectral lines. While it also maps parts of the ring, ^12CO shows bright emission around the Flipped-Fil and the north Bent-Fil, which are elongated along the direction of the magnetic field lines, resulting in an overall weak preference for perpendicular alignment relative to the magnetic field (Z_x = -2.2). Though ^12CO is detected throughout the entire RCW 36 region, it is optically thick at the densest regions. Spectra of ^12CO, ^13CO, and for the Flipped-Fil are given in Appendix <ref> for reference (for spectra in other regions, see Fig. 2 of ).
§.§.§ Caveats and Considerations
Finally, we make note of some important considerations in our HRO analysis. We note that smoothing can reduce the number of data points near the boundaries of the map. For instance, in the HAWC+ 89 μm map (shown in Fig. <ref>), the Gaussian kernel smoothing removes some of the relative angle measurements near the south edge of the map boundary, which is not the case for the Herschel 70 μm map, which covers a larger area on the sky (see Figure <ref>). Another caveat to note for all our HRO analysis plots is that some of the ϕ∼0^∘ relative angle points are due to the gradient amplitude approaching zero as the gradient changes direction at the peak of the iso-contours. For example, in Fig. <ref> at the center of the highest column density contour within the Main-Fil, a thin row of purple ϕ∼0^∘ pixels can be seen along the N-S direction where we would expect the gradient to change direction. This is most obvious for the 350 μm and 250 μm relative angle maps. Since only a small percentage of the total ϕ(x,y) pixels in the relative angle map are subject to this effect, the impact on the resulting PRS is insignificant. Furthermore, it should be mentioned that, in addition to the hot dust detected by Spitzer, the instrument is also clearly detecting starlight from the massive stellar cluster. Since this emission is not extended but rather comes from point sources, it is generally not elongated in a particular direction and therefore does not significantly affect the PRS.
§ VELOCITY DEPENDENT HRO
In this section, we show the velocity channel maps for the gas tracers not included in Figures <ref>–<ref>. For reference, Figure <ref> shows the spectra for ^12CO, ^13CO, and at the Flipped-Fil region. Figures <ref>–<ref> show the integrated intensities for 1 km/s wide velocity slabs from 4–10 km/s for ^12CO, , C^18O, and N_2H^+, respectively. Similar to the main text, we note that the dense ring is traced at line-of-sight velocities of 4–7 km/s, while the Bent-Fils and Flipped-Fil are seen at 8–10 km/s.
§ OPTICAL DEPTH OF HAWC+ DATA
Dust emission can be optically thin (τ≪ 1), such that the plane-of-sky magnetic field orientation is inferred from the average emission by all dust grains along the line of sight. Alternatively, the emission can be optically thick (τ > 1), such that only the flux from the outer surface layer of the cloud is traced. Understanding the optical depth helps identify the location of the dust grains dominating the emission, whether they are within a translucent cloud, or on the surface of an opaque cloud. To estimate the optical depth of RCW 36 at the HAWC+ wavelengths, 89 μm and 214 μm we use Equation <ref> from the main text.
The Herschel-derived column density (36″ FWHM) map is used for N_H_2. The 36″ column density map is chosen over the 18″ version as this resolution matches the temperature map. We use the dust opacity law from <cit.> that was applied by <cit.> to originally derive the Vela C column density map, which is:
κ_λ = 0.1 ×(λ/300 μ m)^-β
where we have used β=2 to match <cit.>. The resulting estimates for the optical depth τ are shown in Figure <ref>. From the colorbar, we see that at the longer HAWC+ wavelength of 214 μm (Band E), the dust emission is optically thin (τ < 1) everywhere in the region. At the shorter HAWC+ wavelength of 89 μm (Band C), we estimate the emission to be roughly optically thin everywhere, except at the brightest peaks near the clumps, where τ∼ 1.4. While optically thick emission at an observing wavelength of 89 μm would not be unexpected, in this case the maximum optical depth is still relatively close to the τ∼ 1 surface, indicating that the emission is only moderately thick, rather than very thick (e.g., τ∼ 10). This means that we are missing some flux at the brightest peaks, but not too much.
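As a rough illustration of this estimate (our own sketch, not necessarily the exact expression referenced above as Equation <ref>), the opacity law can be combined with an assumed mean molecular weight of μ_H2 ≈ 2.8 per H_2 molecule, so that τ_λ ≈ κ_λ μ_H2 m_H N_H2:

import numpy as np

M_H = 1.6735575e-24   # g, hydrogen atom mass
MU_H2 = 2.8           # assumed mean molecular weight per H2 molecule

def kappa(lam_um, beta=2.0):
    # dust opacity [cm^2 per g of gas], kappa = 0.1 (lambda / 300 um)^-beta
    return 0.1 * (lam_um / 300.0) ** (-beta)

def tau(lam_um, N_H2):
    # optical depth for a molecular hydrogen column density N_H2 [cm^-2]
    return kappa(lam_um) * MU_H2 * M_H * np.asarray(N_H2, dtype=float)

# toy usage: a bright column-density peak of a few times 1e23 cm^-2
for lam in (89.0, 214.0):
    print(lam, tau(lam, 2.5e23))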
However, we note that since we are using a 36″ column density map to extrapolate the optical depth of the Band C emission, which has a native resolution of 8″, the actual optical depth at 8″ could be higher than what is estimated from the 36″ FWHM map (shown in Fig. <ref>) at the smallest scales closest to the brightest peaks. Therefore, an optical depth of τ∼ 1.4 at these points may be an underestimation. Another caveat is that a single mass-weighted average dust temperature is assumed when generating the Herschel column density map <cit.>. This does not account for temperature gradients along the line of sight. If the 89 μm dust is probing a population of warmer dust grains that only exist near the surface of the region, then that dust will not be probing the entire column traced by the Herschel column density map. Furthermore, the emission at 89 μm may also be tracing very small dust grains (VSGs) <cit.>, which do not emit at sub-millimeter wavelengths but can emit at 70–100 μm <cit.>. These grains are stochastically heated and are not in equilibrium, which makes inferring their properties difficult. The 160–500 μm emission used to generate the column density map is likely tracing emission from the larger dust grains, meaning the column density map derived from these wavelengths will not include VSGs.
§ CORRELATION OF POLARIZED DUST EMISSION WITH OTHER TRACERS
In this section, we compare the 89 μm polarized emission to the other dust emission, column density, temperature and molecular line maps listed in Table <ref>. These tracers probe different physical properties of the gas, and a strong correlation between the polarized emission and a particular dataset may imply the magnetic field is primarily being traced in regions with similar density, temperature, chemical and excitation conditions.
We do this by individually overlaying contours of the different tracers on the Band C total polarization intensity map and visually comparing the emission. Figure <ref> shows that the 89 μm polarized emission correlates well with the integrated intensity, shown in the left panel. This further reinforces our previous assertion that the polarization data is preferentially tracing the magnetic field from the warm dust located near the PDR, where [CII] is abundant. This is contrasted to the apparent anti-correlation of the Band C polarized data observed for the ALMA ACA continuum data, as can be seen in the right panel of Fig. <ref>. The ALMA clumps appear to be located in areas where there is a lack of polarized intensity. This finding is consistent with Band C emission being sensitive to warmer dust, rather than the cold dense structures traced by ALMA. Polarization measurements at longer wavelengths and higher resolution (e.g. with ALMA) would be needed to probe the magnetic field within these colder dense structures.
|
http://arxiv.org/abs/2409.03552v1 | 20240905141257 | Associated varieties of simple affine VOAs $L_k(sl_3)$ and $W$-algebras $W_k(sl_3,f)$ | [
"Cuipo Jiang",
"Jingtian Song"
] | math.QA | [
"math.QA",
"math.RT",
"17B67, 17B69"
] |
Associated varieties of simple affine VOAs L_k(sl_3) and W-algebras W_k(sl_3,f) [Supported by China NSF grants No.12171312.]
Cuipo Jiang
and Jingtian Song
School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
§ ABSTRACT
In this paper we first prove that the maximal ideal of the universal affine vertex operator algebra V^k(sl_n) for k=-n+(n-1)/q is generated by two singular vectors of conformal weight 3q if n=3, and by one singular vector of conformal weight 2q if n≥ 4. We next determine the associated varieties of the simple vertex operator algebras L_k(sl_3) for all the non-admissible levels k=-3+2/(2m+1), m≥ 0.
The varieties of the associated simple affine W-algebras W_k(sl_3,f), for nilpotent elements f of sl_3, are also determined.
§ INTRODUCTION
Let V be a vertex algebra. Recall from <cit.> that the
Zhu's C_2-algebra of V is by definition the quotient space
R_V=V/C_2(V) of V,
where C_2(V)=span_{a_-2b| a,b∈ V}.
Then R_V has the Poisson algebra structure given by
a̅·b̅=a_-1b, {a̅,b̅}=a_0b,
for a,b ∈ V, where a̅ := a+C_2(V) is the image of a in R_V.
The associated variety X_V of V is the reduced scheme
X_V=Specm(R_V)
of R_V <cit.>.
This fundamental invariant of V has been extensively studied in <cit.>.
Let be a complex simple finite-dimensional Lie algebra and V^k()
the universal affine vertex algebra at level k ∈. By PBW Theorem, R_V^k() is isomorphic to the polynomial algebra [^*] over ^*.
Thus the variety X_V is the affine space ^* with Kirillov-Kostant Poisson structure.
For a graded quotient V
of V^k(), the variety X_V is a Poisson subscheme of ^*
which is G-invariant and conic, where G is the adjoint group of .
Denote by L_k() the simple quotient of V^k().
Let h be the Coxeter number of , h^∨ the dual Coxeter number, and r^∨ the lacing number of .
It has been proved in <cit.> that
L_k()=V^k() if and only if X_L_k() = ^*. In <cit.>, Gorelik and Kac proved
that V^k()
is not simple if and only if
r^∨(k+h^∨) ∈ℚ_≥ 0\{1/m| m∈ℤ_≥ 1}.
Therefore,
X_L_k()⊊^* if and only if (<ref>) holds.
It is known <cit.> that X_L_k()={0}
if and only if k is a non-negative integer.
For an admissible L_k() <cit.>,
or equivalently, for an admissible number k which by definition satisfies
k+h^∨=p/q, p,q∈ℤ_≥ 1, (p,q)=1, p≥ h^∨ if (r^∨,q)=1,
p≥ h if (r^∨,q)≠ 1,
the variety X_L_k() is the closure of some nilpotent orbit in . In particular, if k is a non-degenerate admissible number, that is,
q≥ h if (r^∨,q)=1,
q≥ r^∨h^∨ if (r^∨,q)=r^∨,
then X_L_k() is the nilpotent cone 𝒩() <cit.>.
If k=-h^∨,
it follows from <cit.> that
X_L_k() is also the nilpotent cone.
It was observed in <cit.>
that
there are
cases when L_k() is
non-admissible
and
X_L_k() is the closure of some nilpotent orbit,
which provide examples for the conjecture in physics <cit.> that,
in view of the 4D/2D duality,
there should be
a large list of non-admissible simple affine vertex algebras
whose associated varieties are the closures of some nilpotent orbits.
There are also cases where X_L_k()
is neither ^* nor contained in the nilpotent cone 𝒩().
It was proved in <cit.> that X_L_-1(sl_n), n≥ 4, X_L_-m(sl_2m), m≥ 2, X_L_2-r(so_2r), r∈ 2_++1, are closures of sheets. In <cit.>, it was shown that the associated variety of a simple affine vertex algebras is contained in the closure of the Dixmier sheet when a chiralization of generalized Grothendieck's simultaneous resolution exists.
The variety X_L_-5/2(sl_4) of L_-5/2(sl_4) was given in <cit.>, which turns out to be the closure of a Jordan class of sl_4.
In general,
the problem of determining X_L_k() is widely open. Up to now, the variety X_L_k() has been completely determined for all k∈ only when =sl_2.
In this paper, we will determine X_L_k(sl_3) for all non-admissible numbers k=-3+2/(2m+1), m≥ 0. Then together with the known results on X_L_k()
for the critical case and admissible cases given in <cit.>, we will obtain a complete characterization of X_L_k(sl_3) for all k∈.
Another question we are concerned about in this paper is the characterization of the maximal ideal of V^k(sl_n), where k=-n+(n-1)/q, n≥ 3, q≥ 2, (n-1, q)=1.
If q=1, that is, k=-1, the maximal ideal of V^-1(sl_3) is generated by two singular vectors of conformal weight 3 <cit.>, and the maximal ideal of V^-1(sl_n) is generated by one singular vector of conformal weight 2 <cit.>.
By using the character formulas given in <cit.>, and the results for the k=-1 case in <cit.>, we prove our first main result as follows (also see Theorem <ref>).
Let k=-n+(n-1)/q with n≥ 3, q≥ 2, (n-1,q)=1. Then
R chL_k(sl_n)=
∑_w∈ W∑_γ∈ Q
(γ|Λ̅_n-1)≥ 0, (γ|Λ̅_1)≥ 0(-1)^l(w)e^wt_qγ(kΛ_0+≀)-≀.
Furthermore, we have
* If k=-3+2/(2m+1) with m≥ 1, then the maximal ideal of V^k(sl_3) is generated by two singular vectors v^1 and v^2 of weights kΛ_0-3(2m+1)δ+2α_1+α_2 and kΛ_0-3(2m+1)δ+α_1+2α_2, respectively.
* If k=-n+(n-1)/q with n≥ 4, q≥ 2, and (q,n-1)=1, then the maximal ideal of V^k(sl_n) is generated by one singular vector w^1 of weight kΛ_0-2qδ+_1+2_2+⋯+2_n-2+_n-1.
Let θ be the highest root of .
Our second main result is (also see Theorem <ref>):
The variety X_L_-1(sl_3) is the closure of the Dixmier sheet 𝕊_l_1=G.ℂ^*λ, i.e.,
X_L_-1(sl_3)=𝕊_ l_1= G.ℂ^*λ=G.ℂ^*λ∪𝒪_min,
where l_1=h⊕ e_θ⊕ f_θ, λ=h_1-h_2, and 𝒪_min is the minimal non-zero nilpotent orbit of sl_3(ℂ). In particular, X_L_-1(sl_3) is irreducible and
dim X_L_-1(sl_3)=5.
For k=-3+2/(2m+1), m≥ 1, we have our third main result as follows (see also Theorem <ref>).
For k=-3+2/(2m+1), m≥ 1, the variety
X_L_k(sl_3)= G.(ℂ^*(h_1-h_2)+f_θ).
The proof of Theorem <ref> is much more complicated than in the k=-1 case. The main difficulty is that, although we can prove by Theorem <ref> that the maximal ideal of V^k(sl_3) is generated by two singular vectors v^1 and v^2 of conformal weight 3(2m+1), it is almost impossible to write these singular vectors out as sums of PBW terms for general k=-3+2/(2m+1), m≥ 1. This means that we cannot use their structure, as we do in the k=-1 case, to obtain the zero locus of the graded ideal ℐ_k of the C_2-algebra R_V^k(sl_3) determined by these two singular vectors. Our main idea is to analyze the coefficients of some critical leading terms. We then further prove that
X_L_k(sl_3)= G.(ℂ^*(h_1-h_2)+f_θ). Our method here provides a possible way to determine the variety of a simple affine vertex operator algebra without knowing the exact structure of its singular vectors. We will study the associated varieties of the simple vertex operator algebras L_k(sl_n) for n≥ 4 and general non-admissible numbers k in subsequent papers.
Representations of L_k(sl_3) have been extensively studied in <cit.>, etc.
We here have the following conjecture for L_k(sl_3) (also see Conjecture <ref>).
Let k=-3+2/(2m+1) with m≥ 0. Then
{L(tΛ̅_1-2i/3Λ̅_2), L(tΛ̅_2-2i/3Λ̅_1), L(Λ̅_1-(t+2i/3+1)Λ̅_2), t∈, i=0,1,⋯, 2m}
provides a complete list of irreducible A(L_k(sl_3))-modules in the category 𝒪.
When m=0, the conjecture is true by <cit.>. When m=1, we could verify the conjecture by computer programming.
Let f be a nilpotent element of 𝔤=sl_3. Then f is either minimal or regular. For k∈ℂ, let W^k(sl_3,f) be the universal affine W-vertex algebra associated to f <cit.>. Denote by W_k(sl_3,f) the simple quotient of W^k(sl_3,f).
Let f be a minimal nilpotent element of 𝔤=sl_3. If k=-3+(2m+1)/2, m≥ 1, it was shown in <cit.> that X_W_k(sl_3,f)={0} and W_k(sl_3,f) is rational. If k=-1, it has been proved in <cit.> that W_-1(sl_3,f) is isomorphic to the rank one Heisenberg vertex operator algebra. So X_W_-1(sl_3,f) is one-dimensional.
We have the following result for k=-3+2/(2m+1), m≥ 1 (also see Theorem <ref>).
Let k=-3+2/(2m+1), m≥ 1, and f a minimal nilpotent element of sl_3. Then
X_W_k(sl_3,f)={[[ a b 3(μ^2-a^2); 0 -2aμ c; 1 0 a ]]| μ, a,b,c∈, bc=2(a-μ)(2a+μ)^2 }.
In particular, dim X_W_k(sl_3,f)=3. Furthermore, W_k(sl_3, f) is not quasi-lisse.
Let f be a regular nilpotent element of 𝔤=sl_3. If k=-3+p/q, p,q≥ 3, (p,q)=1, by <cit.>, X_W_k(sl_3)={0} and W_k(sl_3,f) is rational. Our fifth main result is given as follows (also see Theorem <ref>).
Let k+3=2/(2m+1) or (2m+1)/2, m≥ 1, and f a regular nilpotent element of sl_3. Then
X_W_k(sl_3)={[[ 0 3/4μ^2 1/2μ^3; 2 0 3/4μ^2; 0 2 0 ]]| μ∈}.
In particular, dim X_W_k(sl_3)=1. Furthermore, W_k(sl_3,f) is not quasi-lisse.
The rest of this paper is organized as follows. In Section 2, we recollect some concepts and results that will be needed later. In Section 3, we study singular vectors and maximal ideals of V^k(sl_n) for k=-n+n-1/q with n≥ 3, q≥ 1, (n-1,q)=1. Section 4 is dedicated to determine the associated varieties of L_k(sl_3) for k=-3+2/2m+1, m≥ 0. In Section 5, we obtain the associated varieties of the simple affine W-algebras W_k(sl_3,f).
§.§ Acknowledgements
We would like to thank Tomoyuki Arakawa, Dražen Adamović, and Anne Moreau for valuable discussions and comments. In particular, the first author is grateful to Dražen Adamović for showing her the very helpful result of Theorem 8.2 in his paper <cit.> joint with Pierluigi Möseneder Frajria and Paolo Papi, and to Tomoyuki Arakawa for telling her about the papers <cit.> and <cit.>, and for the discussions.
§ PRELIMINARIES
§.§ C_2-algebras and associated varieties
Let V be a vertex algebra over ℂ <cit.>. Recall that a subset S of V is called a strongly generating set of V if V is linearly spanned by elements of the form:
{x^1_-m_1⋯ x^s_-m_s 1| x^i∈ S, m_i∈ℤ_+, s≥ 0, 1≤ i≤ s}.
Furthermore, if S is a finite set, then V is called finitely strongly generated by S.
For a vertex algebra V, define C_2(V)=span_{a_-2b| a,b∈ V} and
R_V= V/C_2(V).
Then R_V is a Poisson algebra <cit.> with relations:
a̅·b̅ = a_-1 b, {a̅, b̅}= a_0b,
for a, b ∈ V, where a̅ = a+C_2(V). It is easy to see that V is finitely strongly generated
if and only if R_V is finitely generated. V is called C_2-cofinite if R_V is finite-dimensional <cit.>.
Let V be a finitely strongly generated vertex algebra. By definition <cit.>, the
associated variety of V
is
the reduced scheme
X_V= Specm(R_V).
Recall from <cit.> that V is called lisse if X_V={0}. We have the following result.
Let V be a finitely strongly generated vertex algebra. Then V is lisse if and only if V is C_2-cofinite.
Since R_V is a finitely generated Poisson algebra, the variety X_V is a union of symplectic leaves <cit.>.
Let V be a vertex algebra. Then V is
called quasi-lisse if X_V has finitely many symplectic leaves.
Let V be a finitely strongly generated vertex algebra. If V is quasi-lisse, then V has finitely many ordinary modules.
§.§ Affine vertex algebras
Let be a finite-dimensional simple Lie algebra over ℂ with the following normalized bilinear form:
(·|·)=1/(2h^∨)× Killing form of 𝔤.
Let
=[t,t^-1]⊕ K be the associated affine Kac-Moody algebra
with the commutation relations:
[x⊗ t^m,y⊗ t^n]=[x,y]⊗ t^m+n+m(x|y)δ_m+n,0K,
[K,]=0,
where x,y∈ and m,n∈.
For x ∈ and m ∈, we will write x(m) for x
⊗ t^m.
For k ∈, set
V^k()=U()⊗ _U( [t]⊕ K)_k,
where _k is the one-dimensional representation of [t]⊕ K
on which K acts as multiplication by k and [t] acts trivially.
By PBW
Theorem,
we have
V^k()≅ U(⊗ t^-1[t^-1]) = U(t^-1[t^-1]).
The space V^k() is naturally graded,
V^k() =⊕_n∈_≥ 0V^k() _n,
where the grading is defined by
(x^i_1(-n_1)… x^i_r(-n_r) 1) = ∑_i=1^r n_i,
r ≥ 0, x^i_j∈,
with 1 the image of 1⊗ 1 in V^k().
We have V^k()_0= 1,
and we may identify with V^k()_1 via the linear isomorphism
defined by x↦ x(-1) 1.
It is well-known that V^k() has a unique vertex algebra structure
such that 1 is the vacuum vector,
x(z)= Y(x⊗ t^-1,z) =∑_n ∈ x(n) z^-n-1,
and
[T,x(z)]=∂_z x(z)
for x ∈,
where T is the translation operator.
Notice that x(n) acts on V^k() by left multiplication. Then one can view x(n) as an endomorphism of V^k().
The vertex algebra
V^k() is called the universal affine vertex algebra
associated with at level k <cit.>.
If k+h^∨≠0, the vertex algebra
V^k() is a vertex operator algebra by the Sugawara construction.
More specifically, set
S=1/2∑_i=1^d
x_i(-1) x^i(-1) 1,
where d = dim 𝔤, and {x_i | i=1,…,d} is the dual
basis of a basis {x^i | i=1,…,d} of 𝔤
with respect to the bilinear form (·|·).
Then the vector
ω=S/(k+h^∨)
is a conformal vector of V^k()
with central charge
c(k)=k dim 𝔤/(k+h^∨).
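For instance (a direct substitution, included here only as an illustration), for 𝔤=sl_3 one has dim 𝔤=8 and h^∨=3, so at the levels k=-3+2/(2m+1) considered later,
c(k)=8k/(k+3)=8(-3+2/(2m+1))/(2/(2m+1))=4(2-3(2m+1))=-24m-4,
giving c=-4 for k=-1 (m=0) and c=-28 for k=-7/3 (m=1).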
Any
graded quotient of V^k()
as -module
has the structure of a quotient vertex algebra.
In particular,
the unique simple graded quotient L_k()
is a vertex algebra,
and is called the simple affine vertex algebra associated with at level k.
§.§ Zhu's algebra of affine vertex algebras
For a _≥ 0-graded vertex algebra V=⊕_n≥ 0V_n, let O(V) be the subspace of V linearly spanned by elements
u∘ v=∑_j=0^∞\binom{n}{j} u_j-2v,
for u∈ V_n, n∈ℤ_≥ 0, v∈ V. The Zhu's algebra <cit.>
A(V)=V/O(V)
is a unital associative algebra with the multiplication induced from
u*v=∑_j=0^∞\binom{n}{j} u_j-1v,
for u∈ V_n, n∈ℤ_≥ 0, v∈ V. It is known from <cit.> that the Zhu's algebra of V^k() is the universal enveloping algebra U(𝔤) of 𝔤. Let ℐ_k be the maximal ideal of V^k(). Then
L_k()=V^k()/ℐ_k.
Denote by I_k the image of ℐ_k in A(V^k()), then
A(L_k())=U()/I_k.
Let 𝔥 be the Cartan subalgebra of 𝔤, and 𝔤=n^+⊕𝔥⊕ n^- be a fixed triangular decomposition of 𝔤. Set
U()^={u∈ U() | [h, u]=0, for all h∈}
and let
p: U()^→ U()
be the Harish-Chandra projection map, which is the restriction of the projection map: U()=U()⊕ (^-U()⊕ U()^+)→ U() to U()^. It is known that p is an algebra homomorphism <cit.>. For a two-sided ideal I of U(), the characteristic variety of I is defined as <cit.>
𝒱(I)={λ∈^*|p(λ)=0 for all p∈p(I∩ U()^)}.
For λ∈^*, let L(λ) be the irreducible module of V^k() induced from the simple highest weight -module L(λ̅). Then by <cit.>, L(λ) is a module of L_k() if and only if I_k(L(λ̅))=0. We have the following result from <cit.>.
L(λ) is an L_k()-module if and only if λ̅∈𝒱(I_k).
§.§ Associate graded vertex Poisson algebras of affine vertex algebras
It is known by Li <cit.>
that any vertex algebra V admits a canonical filtration F^∙ V,
called the Li filtration of V.
For a quotient V of V^k(),
F^∙ V is described as follows.
The subspace
F^p V is spanned by the elements
y_1(-n_1-1)⋯ y_r(-n_r-1)1
with y_i∈,
n_i∈_≥ 0, n_1+⋯ +n_r≥ p.
We have
V=F^0V⊃ F^1V⊃⋯, ⋂_pF^pV=0,
TF^pV⊂ F^p+1V,
a_nF^qV⊂ F^p+q-n-1V for a∈ F^pV, n∈,
a_nF^qV⊂ F^p+q-nV for a∈ F^pV, n≥ 0.
Here we set F^pV=V for p<0.
Let gr^FV=⊕_p F^pV/F^p+1V be the associated graded vector space.
The space gr^FV is a vertex Poisson algebra by
σ_p(a)σ_q(b)=σ_p+q(a_-1b),
Tσ_p(a)=σ_p+1(Ta),
σ_p(a)_nσ_q(b)=σ_p+q- n(a_nb)
for a,b∈ V,
n≥ 0,
where σ_p F^p(V)→ F^pV/F^p+1V is the principal symbol map.
In particular, the space
gr^F V is a [t]-module by the correspondence
[t]∋ x(n)⟼σ_0(x)_∈(gr^F V)
for x∈, n≥ 0.
The filtration F^∙ V
is compatible with the grading:
F^pV=⊕_n∈_≥ 0F^pV_n,
where
F^p V_n := V_n ∩ F^p V.
§.§ Zhu's C_2-algebras and associated varieties
of affine vertex algebras
Let F^∙V be given as above.
We have <cit.>
F^p V = span_{ a_-i-1 b a ∈ V, i ≥ 1, b ∈ F^p-i V }
for all p ≥ 1. In particular,
F^1 V = C_2(V), R_V = V/C_2(V) = F^0 V / F^1 V ⊂ gr^FV.
It is easily seen that
F^1 V^k() = C_2(V^k()) = t^-2[t^-1] V^k().
The following map defines an isomorphism of Poisson algebras
[ [^*] ≅ S() ⟶ R_V^k(),; ∋ x ⟼ x (-1) 1
+ t^-2[t^-1] V^k(). ]
So
R_V^k()≅[^*]
and
X_V^k()≅^*.
More generally, let V be a quotient of V^k()
by an ideal N, then
we have
R_V≅[^*]/N
as Poisson algebras,
where N is the image of N in R_V^k()=[^*].
Then X_V is just the zero locus of N in
^*.
It is a closed G-invariant conic subset of ^*. Identifying ^* with via the bilinear form (·|·),
one may view X_V as a subvariety of .
§.§ Sheet and Jordan classes
In this subsection, we recall Jordan classes and sheets of a semisimple Lie algebras following <cit.>.
Let g be as above. Denote by 𝒮 (resp. 𝒩) the set of semisimple (resp. nilpotent) elements of g.
For x∈g, set 𝔤^x={y∈g|[x, y]=0}, and denote by x_s and x_n its semisimple and nilpotent components, respectively.
For x,y∈g, we say that x and y are G-Jordan equivalent if there exists α∈ G such that
g^y_s=g^α(x_s)=α(g^x_s), y_n=α(x_n).
This defines an equivalence relation on g. The equivalence class of x, denoted by J_G(x), is called the Jordan class of x in g. A Jordan class is clearly a G-stable set.
For any Lie subalgebra u⊂g, set
z(u)={x∈u|[x, y]=0, y∈u}.
Denote by u^reg the set of elements y∈u such that the dimension of ^y is minimal. We have the following lemma.
Let x∈g.
* We have
J_G(x)=G(z(g^x_s)^reg+x_n).
* J_G(x) is irreducible in g.
* J_G(x) is locally closed in g, so it is a subvariety of g.
To a Jordan class J, let x∈ J, and l=g^x_s. Then l is a Levi subalgebra of g. Let 𝕆_ l be the nilpotent orbit in l of x_n. The pair (l, 𝕆_l) does not depend on x∈ J up to G-conjugacy, and there is a one-to-one correspondence between the set of pairs (l, 𝕆_l) and the set of Jordan classes.
Let n∈ℕ. Denote
g^(n)={x∈g| dim g^x=n}.
For n∈, an irreducible component of g^(n) is called a sheet of g.
A sheet of is a disjoint union of Jordan classes. So a sheet 𝕊 contains a unique dense open Jordan class J. The datum (l, 𝕆_ l) of the Jordan class J is called the datum of 𝕊. Then
𝕊̅=J̅, 𝕊=(J̅)^reg.
Let 𝕊 be a sheet with datum (l, 𝕆_ l), then the induced nilpotent orbit Ind_ l^𝔤(𝕆_ l) of 𝔤 from 𝕆_ l in l is the unique orbit contained in 𝕊. The rank of 𝕊 with datum (l, 𝕆_ l) is by definition
r(𝕊):=dim𝕊-dim( Ind_ l^𝔤(𝕆_ l))=dim z( l).
A sheet 𝕊 with datum (l, 𝕆_ l) is called a Dixmier sheet if 𝕆_ l={0}.
§.§ Affine W-algebras
Let f be a nilpotent element of . By the Jacobson-Morozov Theorem,
there is an sl_2-triple (e,h,f) of .
Recall that the Slodowy slice 𝒮_f is the affine space f+^e.
It has a natural Poisson structure induced from that of ^* <cit.>.
The embedding span_{e,h,f}≅sl_2 ↪
exponentiates to a homomorphism
SL_2 → G. By restriction to the one-dimensional
torus consisting of diagonal matrices, we obtain
a one-parameter subgroup ρ^* → G.
For t∈^* and x∈, set
ρ̃(t)x := t^2ρ(t)(x).
We have
ρ̃(t)f=f, and
the ^*-action of ρ̃ stabilizes 𝒮_f.
Moreover, it contracts to f on 𝒮_f, that is, for all x∈^e,
lim_t→ 0ρ̃(t)(f+x)=f.
The following proposition is well-known.
The morphism
θ_f G ×𝒮_f ⟶,
(g,x) ⟼ g.x
is smooth onto a dense open subset of ^*.
Let W^k(,f) be the affine W-algebra associated with
a nilpotent element f of
defined by the generalized quantized Drinfeld-Sokolov reduction:
W^k(,f)=H^0_DS,f(V^k()).
Here H^∙_DS,f(M) denotes the BRST
cohomology of the generalized quantized Drinfeld-Sokolov reduction
associated with f ∈𝒩() with coefficients in
a V^k()-module M <cit.>.
Recall that we have a natural isomorphism
R_W^k(,f)≅[𝒮_f] of Poisson algebras <cit.>, so that
X_W^k(,f)= 𝒮_f.
We write W_k(,f) for the unique simple (graded) quotient of
W^k(,f). Then X_W_k(,f)
is a ^*-invariant Poisson
subvariety of the Slodowy slice 𝒮_f.
Let 𝒪_k be the category 𝒪 of
at level k.
We have a functor
𝒪_k⟶ W^k(,f)-Mod
, M⟼
H^0_DS,f(M),
where
W^k(,f)-Mod denotes the category
of W^k(,f)-modules.
The full subcategory of 𝒪_k consisting of
objects M on which acts locally finitely is denoted by KL_k.
Note that both V^k() and L_k() are objects of KL_k.
Let
f_θ be a root vector of the highest root θ of . Then
* H_DS,f_θ^i(M)=0 for all i≠ 0, M∈𝒪_k.
In particular, the functor
𝒪_k⟶ W^k(,f_θ)-Mod
, M⟼
H^0_DS,f_θ(M),
is exact.
* H^0_DS,f_θ(L(λ))≠ 0 if and only if λ(α_0^∨)∉ℤ_≥ 0, where α_0^∨=-θ^∨+K. If this is the case, H^0_DS,f_θ(L(λ)) is a simple W^k(,f_θ)-module.
Let k, f be arbitrary. Then
* H_DS,f^i(M)=0 for all i≠ 0, M∈KL_k.
In particular, the functor
KL_k⟶ W^k(,f)-Mod
, M⟼
H^0_DS,f(M),
is exact.
* For any quotient V of V^k(),
X_H^0_DS,f(V)=X_V∩𝒮_f.
In particular,
H_DS,f^0(V)≠ 0 if and only if
G.f⊂ X_V.
§ MAXIMAL IDEALS OF V^K(SL_N) FOR K+N=(N-1)/Q, N≥ 3, Q≥ 2, (Q,N-1)=1
Let , and V^k() be the same as in Section 2. We first recall the following result from <cit.>.
For k∈,
V^k()
is not simple if and only if
r(k+h^∨) ∈ℚ_≥ 0\{1/m| m∈ℤ_≥ 1},
where
r is the lacing number of .
By Theorem <ref>, V^k(sl_n) is not simple if and only if k=-n+p/q, where p∈ℤ_≥ 2, q∈ℤ_≥ 1, and (p,q)=1.
In this section, we will prove that, for k=-3+2/(2m+1), m∈ℤ_≥ 1, the maximal ideal of V^k(sl_3) is generated by two singular vectors of conformal weight 3(2m+1), and for n≥ 4, k=-n+(n-1)/q, q≥ 2, (q,n-1)=1, the maximal ideal of V^k(sl_n) is generated by a singular vector of conformal weight 2q. We reach our results by using the Kashiwara-Tanisaki characters <cit.> (see also <cit.>), the character formulas for the k=-1 case given in <cit.>, and the results established in <cit.>.
§.§ Kashiwara-Tanisaki Characters
In this subsection, we will introduce the Kashiwara-Tanisaki characters following <cit.>. Let Δ, Δ be the root systems of and with respect to and , respectively. Denote by δ the positive
imaginary root such that any imaginary root is an integral multiple of δ.
Let W and be the Weyl groups of and , and Q and the root lattices of and , respectively. Let {_1,…,_l} be the simple root system of , and {_0,_1,…,_l} the simple root system of . For a real root ∈, set ^∨=2/(|). Let ρ∈ h^* and ≀∈^* be such that
(ρ|α_i^∨)=1 and (≀|α_i^∨)=1 for 1≤ i≤ l, and (≀|α_0^∨)=1.
Recall that the twisted (or shifted) action of on ^* is defined as follows
w∘λ=w(λ+≀)-≀,
where w∈ and λ∈^*. Notice that if w∈ W, w∘λ=w(λ+ρ)-ρ.
For λ∈^*, set
(λ)={α∈^re|(λ+≀|^∨)∈},
_0(λ)={α∈^re|(λ+≀|^∨)=0}.
Notice that (λ) and _0(λ) are subsystems of ^re. Denote the set of positive roots, the set of negative roots, the set of simple roots and the Weyl group for (λ) by
^+(λ), ^-(λ), Π(λ) and (λ), respectively. Denote those for _0(λ) by ^+_0(λ), ^-_0(λ), Π_0(λ) and _0(λ).
For a real root ∈, denote by s_∈ the corresponding reflection. Then Π(λ) is the set of ∈^+(λ) such that s_(^+(λ)\{})=^+(λ)\{}, and ((λ), S(λ)) is a Coxeter group, where S(λ)={s_: ∈Π(λ)}.
For w∈(λ), denote by l_λ(w) the length of w. Denote the Bruhat ordering of (λ) by ≥_λ. For y,w∈(λ), denote by P^λ_y,w(q)∈[q] the corresponding Kazhdan-Lusztig polynomial <cit.>, and by Q^λ_y,w(q)∈[q] the inverse Kazhdan-Lusztig polynomial defined by
∑_x≤_λy≤_λz(-1)^l_λ(y)-l_λ(x)Q^λ_x,y(q)P^λ_y,z(q)=δ_x,z,
for any x,z∈(λ). Set
𝒞={λ∈^*| (δ|λ+≀)≠ 0 },
𝒞^+={λ∈𝒞|(λ+≀|^∨)≥ 0, for any ∈^+(λ)},
and
𝒞^-={λ∈𝒞|(λ+≀|^∨)≤ 0, for any ∈^+(λ)}.
Let λ∈𝒞. Then _0(λ) is a finite group <cit.>.
For λ∈^*, let M(λ) (resp. L(λ)) be the Verma module (resp. simple module) of with highest weight λ. We have the following result from <cit.>.
<cit.>
* Let λ∈𝒞^+, then for any w∈(λ) which is the longest element of w_0(λ),
ch(L(w∘λ))=∑_w≤_λy∈(λ)(-1)^l_λ(y)-l_λ(w)Q^λ_w,y(1) ch(M(y∘λ)).
* Let λ∈𝒞^-, then for any w∈(λ) which is the shortest element of w_0(λ),
ch(L(w∘λ))=∑_w≥_λy∈(λ)(-1)^l_λ(w)-l_λ(y)P^λ_y,w(1) ch(M(y∘λ)).
§.§ Maximal ideals of V^k(sl_n) for k=-n+(n-1)/q (q≥ 2, (q,n-1)=1)
In this subsection, we assume that
=sl_n(), and k=-n+n-1/q, q≥ 1, (q, n-1)=1. It is obvious that
Π(kΛ_0)={β_0=qδ-θ, _1, ⋯, _n-1}.
Since (kΛ_0+≀|β_0)=0, it follows that
Π_0(kΛ_0)={β_0},
and
_0(kΛ_0)={e, s_β_0}.
In particular, if q=1, then k=-1 and β_0=_0. It follows that
(-Λ_0)=, _0(-Λ_0)={e, s__0}.
Let S={s__0, s__1, ⋯, s__n-1}, then (, S) is a Coxeter group.
We have the following lemma.
For any k=-n+n-1/q, q≥ 1, n≥ 3, we have
((kΛ_0), S(kΛ_0))≅ (, S),
as Coxeter groups by σ: (, S)→ ((kΛ_0), S(kΛ_0))
such that
σ s__0=s_β_0, σ(s__i)=σ(s__i), i=1,2,⋯,n-1.
Recall that the defining relations of (, S) are <cit.>
s__i^2=1, i=0,1,⋯,n-1, (s__is__j)^m_ij=1 i,j=0,1,⋯, n-1,
such that
m_01=m_10=m_0,n-1=m_n-1,0=3, m_0j=m_j0=2, j=2,⋯,n-2,
m_i,i+1=m_i+1,i=3, m_ij=m_ji=2, 1≤ i,j≤ n-1, j≠ i+1.
Notice that S={s__0, s__1, ⋯, s__n-1}, S(kΛ_0)={s_β_0, s__1, ⋯, s__n-1}.
Thus it is enough to prove that
(s_β_0s__1)^3=(s__1s_β_0)^3=(s_β_0s__n-1)^3=(s__n-1s_β_0)^3=1,
and
(s_β_0s__j)^2=(s__js_β_0)^2=1, j=2,⋯, n-2.
We only check (s_β_0s__1)^3=1. The computation for
the other cases in (<ref>)-(<ref>) is similar. Notice that
s_β_0s__1s_β_0=s_s_β_0(_1)=s_qδ-θ+_1,
and
s__1s_qδ-θ+_1s__1=s_s__1(qδ-θ+_1)=s_qδ-θ=s_β_0.
Then we have
(s_β_0s__1)^3=s_β_0s__1s_β_0s__1s_β_0s__1=s_β_0s__1s_qδ-θ+_1s__1=s_β_0s_β_0=1.
For γ∈^*, let t_γ∈ End^* be defined by <cit.>
t_γ(λ)=λ+(λ|δ)γ-((λ|γ)+1/2(λ|δ)(γ|γ))δ,
for λ∈^*. Recall from <cit.>, for any x∈, there exists y∈ W and γ∈ Q such that x=yt_γ. We have the following lemma.
For γ∈ Q, σ(t_γ)=t_qγ.
We first consider γ=θ. Notice that for λ∈^*,
s_β_0s_θ(λ)=s_β_0(λ-(λ|θ)θ)=λ+(λ|δ)qθ-((λ|θ)q+((λ|δ)q^2)δ=t_qθ(λ).
This deduces that
s_β_0s_θ=t_qθ.
In particular, if q=1, s__0s_θ=t_θ (see also <cit.>). So by Lemma <ref>,
σ(t_θ)=t_qθ.
Notice that
s__1t_θs__1=t_s__1(θ)=t_θ-_1
and
s__1t_qθs__1=t_s__1(qθ)=t_q(θ-_1).
Thus
σ(t_θ-_1)=t_q(θ-_1).
Considering s__2t_θ-_1s__2 and s__2t_q(θ-_1)s__2, we deduce that
σ(t_θ-_1-_2)=t_q(θ-_1-_2).
Similarly, we have for 4≤ j≤ n-1,
σ(t__j+⋯_n-1)=t_q(_j+⋯_n-1).
Notice that for , β∈ Q, t_t_β=t_+β. Then the lemma follows from (<ref>)-(<ref>).
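The identity s_β_0s_θ=t_qθ used above can also be checked numerically. The following sketch is an illustration only, assuming the standard conventions (Λ_0|δ)=1, (Λ_0|Λ_0)=(δ|δ)=0 for ŝl_3, with an affine weight stored as its finite part in the basis (α_1,α_2), its level, and its δ-coefficient; the test weight is arbitrary.

import numpy as np

GRAM = np.array([[2.0, -1.0], [-1.0, 2.0]])   # (alpha_i | alpha_j) for sl_3
THETA = np.array([1.0, 1.0])                  # theta = alpha_1 + alpha_2 in the alpha-basis

def pair(x, y):
    return x @ GRAM @ y

class Wt:
    """Affine weight lam = lam_bar + k*Lambda_0 + d*delta, lam_bar in the alpha-basis."""
    def __init__(self, bar, k, d):
        self.bar, self.k, self.d = np.asarray(bar, dtype=float), float(k), float(d)
    def __eq__(self, other):
        return (np.allclose(self.bar, other.bar) and np.isclose(self.k, other.k)
                and np.isclose(self.d, other.d))

def t(gamma, lam):
    """Translation t_gamma: lam + k*gamma - ((lam_bar|gamma) + k*(gamma|gamma)/2) * delta."""
    g = np.asarray(gamma, dtype=float)
    return Wt(lam.bar + lam.k * g, lam.k,
              lam.d - pair(lam.bar, g) - 0.5 * lam.k * pair(g, g))

def s(alpha_bar, n, lam):
    """Reflection in the real root alpha = alpha_bar + n*delta (here (alpha|alpha) = 2)."""
    a = np.asarray(alpha_bar, dtype=float)
    c = pair(lam.bar, a) + n * lam.k          # (lam | alpha^vee)
    return Wt(lam.bar - c * a, lam.k, lam.d - c * n)

q = 5
lam = Wt([0.37, -1.2], k=-3.0 + 2.0 / q, d=0.8)   # a generic test weight
lhs = s(-THETA, q, s(THETA, 0, lam))              # s_{q*delta - theta} s_theta
rhs = t(q * THETA, lam)                           # t_{q*theta}
print(lhs == rhs)                                 # True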
Let Λ_i∈^* be such that
Λ_i(_j^∨)=δ_ij, i,j=0,1,⋯,n-1,
We have the following character formula from <cit.>, <cit.>.
Let n≥ 3, and let L(Λ) be an irreducible level k=-1 sl_n-module with highest weight
Λ=-(1+s)Λ_0+sΛ_n-1 ( resp. Λ=-(1+s)Λ_0+sΛ_1), s∈_≥ 0.
Then the character of L(Λ) is given by the following formula:
R chL(Λ)=
∑_w∈ W∑_γ∈ Q
(γ|Λ̅_n-1(resp. Λ̅_1))≥ 0(-1)^l(w)e^wt_γ(Λ+≀)-≀.
where
R=∏_α∈Δ̂_+(1-e^-α)^dim𝔤̂_α.
In particular, if Λ=-Λ_0,
R chL(-Λ_0)=
∑_w∈ W∑_γ∈ Q
(γ|Λ̅_n-1)≥ 0, (γ|Λ̅_1))≥ 0(-1)^l(w)e^wt_γ(-Λ_0+≀)-≀.
We also need the following result from <cit.>, <cit.>, <cit.>, <cit.>.
Let λ∈𝒞^+, and w∈(λ). If L(μ) is a subquotient of M(w∘λ), then there exists y∈(λ) such that μ=yw∘λ.
Recall the following results from <cit.>, <cit.>, and <cit.>.
* <cit.> Let
[ u^1= [-e__1(-1)^2e__2(-1)+e__1(-1)e__1+_2(-1)h_2(-1)+e__1+_2(-1)^2f__2(-1)] 1,; ; u^2= [e__1(-1)e__2(-1)^2+e__2(-1)e__1+_2(-1)h_1(-1); -2e_α_2(-1)e__1+_2(-2)
-e__1+_2(-1)^2f__1(-1)] 1. ]
Then u^1, u^2 are singular vectors of V^-1(sl_3).
* <cit.> Let
u=e_θ(-1)e__2+⋯_n-2(-1) 1-e__1+⋯_n-2(-1)e__2+⋯_n-1(-1) 1.
Then u is a singular vector of V^-1(sl_n), n≥ 4.
* <cit.> The ideal of V^-1(sl_3) generated by u^1 and u^2 is the maximal ideal of V^-1(sl_3).
* <cit.> The ideal generated by u in (2) is the maximal ideal of V^-1(sl_n), n≥ 4.
We are now in a position to state the main result of this section.
Let k=-n+(n-1)/q, n≥ 3, q≥ 2, (n-1,q)=1. Then
R chL_k(sl_n)=
∑_w∈ W∑_γ∈ Q
(γ|Λ̅_n-1)≥ 0, (γ|Λ̅_1)≥ 0(-1)^l(w)e^wt_qγ(kΛ_0+≀)-≀.
Furthermore, we have
* If k=-3+2/(2m+1), m≥ 1, then the maximal ideal of V^k(sl_3) is generated by two singular vectors v^1 and v^2 of weights kΛ_0-3(2m+1)δ+2α_1+α_2 and kΛ_0-3(2m+1)δ+α_1+2α_2, respectively.
* If k=-n+(n-1)/q satisfying n≥ 4, q≥ 2, and (q,n-1)=1, then the maximal ideal of V^k(sl_n) is generated by one singular vector w^1 of weight kΛ_0-2qδ+_1+2_2+⋯+2_n-2+_n-1.
Let =sl_n, n≥ 3.
Notice that if q=1, then k=-1 and ((-Λ_0), S(-Λ_0))=(, S). So ((-Λ_0), S(-Λ_0))≅ ((kΛ_0), S(kΛ_0)), k=-n+n-1/q, q≥ 2, (q,n-1)=1.
By Lemma <ref>,
and the definitions of P^λ_y,w(q) and Q^λ_y,w(q) <cit.>, we have for y,w∈,
w≥_-Λ_0y if and if σ(w)≥_kΛ_0σ(y),
,
l_-Λ_0(y)=l_kΛ_0σ(y), l_-Λ_0(w)=l_kΛ_0σ(w),
and
P^-Λ_0_y,w(1)=P^kΛ_0_σ(y),σ(w)(1), Q^-Λ_0_y,w(1)=Q^kΛ_0_σ(y),σ(w)(1).
It is obvious that -Λ_0, kΛ_0∈𝒞^+, for k=-n+n-1/q, q≥ 2, (q,n-1)=1. Notice that for any w∈(-Λ_0) which is the longest element of w_0(-Λ_0), σ(w) is the longest element of σ(w)_0(kΛ_0). Then by Theorem <ref> and (<ref>)-(<ref>),
ch(L(σ(w)∘ kΛ_0))=∑_w≤_-Λ_0y∈(-Λ_0)(-1)^l_-Λ_0(y)-l_-Λ_0(w)Q^-Λ_0_w,y(1) ch(M(σ(y)∘ (kΛ_0))),
which is equivalent to
R ch(L(σ(w)∘ kΛ_0))=∑_w≤_-Λ_0y∈(-Λ_0)(-1)^l_-Λ_0(y)-l_-Λ_0(w)Q^-Λ_0_w,y(1)e^σ(y)∘ kΛ_0.
Then by Lemma <ref>, Lemma <ref>, and Theorem <ref>,
R chL(kΛ_0)=
∑_w∈ W∑_γ∈ Q
(γ|Λ̅_n-1)≥ 0, (γ|Λ̅_1))≥ 0(-1)^l(w)e^wt_qγ(kΛ_0+≀)-≀.
Furthermore, (<ref>) together with Lemma <ref> means that L(y∘ (-Λ_0)) for some y∈(-Λ_0) is isomorphic to a subquotient of M(w∘ (-Λ_0)) if and only if L(σ(y)∘ kΛ_0) is a subquotient of M(σ(w)∘ kΛ_0).
Case 1 n=3, k=-3+2/2m+1, m≥ 1. Let u^1 and u^2 be the singular vectors given in (1) of Theorem <ref>. The weights of u^1 and u^2 with respect to are -Λ_0-3δ+2_1+_2 and -Λ_0-3δ+_1+2_2, respectively. It is easily checked that
s__2∘ (t__1(-Λ_0+≀)-≀)=-Λ_0-3δ+2_1+_2,
and
s__1∘ (t__2(-Λ_0+≀)-≀)=-Λ_0-3δ+_1+2_2.
Thus V^k(sl_3) has two singular vectors v^1 and v^2 of wights s__2∘ (t_(2m+1)_1(kΛ_0+≀)-≀) and s__1∘ (t_(2m+1)_2(kΛ_0+≀)-≀), respectively. It is easy to check that
s__2∘ (t_(2m+1)_1(kΛ_0+≀)-≀)=kΛ_0-3(2m+1)δ+2_1+_2,
and
s__1∘ (t_(2m+1)_2(kΛ_0+≀)-≀)=kΛ_0-3(2m+1)δ+_1+2_2,
By (3) of Theorem <ref>, the ideal generated by u^1 and u^2 is the maximal ideal of V^-1(sl_3). It follows that the ideal generated by v^1 and v^2 is the maximal ideal of V^k(sl_3). We complete the proof of (1).
Case 2 n≥ 4, k=-n+n-1/q, q≥ 2, (q, n-1)=1. Let u be the singular vector in (2) of Theorem <ref>. The weight of u with respect to the Cartan subalgebra is
-Λ_0-2δ+θ+β, where β=_2+⋯ +_n-2. It is easily checked that
-Λ_0-2δ+θ+β=(s_θs__1s__n-1)∘ (t_-β(-Λ_0+≀)-≀).
Then V^k(sl_n) has a singular vector w^1 of weight (s_θs__1s__n-1)∘ (t_-qβ(kΛ_0+≀)-≀).
It can be checked directly that
(s_θs__1s__n-1)∘ (t_-qβ(kΛ_0+≀)-≀)=kΛ_0-2qδ+θ+β.
By (4) of Theorem <ref>, the ideal generated by u is the maximal ideal of V^-1(sl_n). It follows that the ideal generated by w^1 is the maximal ideal of V^k(sl_n).
The method here can be used to the subregular case for of other types.
§ VARIETIES OF L_K(SL_3) FOR K=-3+2/(2M+1), M≥ 0
Let and V^k() be as above in Section 3.
For x∈ V^k(g), we denote by x̅ the image of x in R_V^k(). Recall that R_V^k()≅[^*] by
x_1(-1)⋯ x_n(-1) 1↦ x_1⋯ x_n,
for x_1,⋯,x_n∈.
As in Section 2, we denote by ℐ_k the maximal ideal of V^k(), then
L_k()=V^k()/ℐ_k
and
R_L_k()=[^*]/ℐ_k,
where ℐ_k is the image of ℐ_k in R_V^k().
§.§ The associated variety of L_-1(sl_3)
Let Δ be the root system of sl_3(ℂ) and
{h_1, h_2, e__1, e__2, e__1+_2, f__1,f__2, f__1+_2}
be a Chevalley basis of sl_3(ℂ) with structure constants c_,, for ,∈Δ such that
c__1,_2=1.
Then we have
c__2,-_1-_2=c_-_1-_2,_1=1.
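For concreteness (our own quick check, using the standard matrix realization e_α_1=E_12, e_α_2=E_23, e_α_1+α_2=E_13 with f_α the transpose of e_α), these normalizations can be verified directly:

import numpy as np

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

e1, e2, e12 = E(1, 2), E(2, 3), E(1, 3)   # e_{alpha_1}, e_{alpha_2}, e_{alpha_1+alpha_2}
f1, f2, f12 = E(2, 1), E(3, 2), E(3, 1)   # f_{alpha_1}, f_{alpha_2}, f_{alpha_1+alpha_2}

def bracket(x, y):
    return x @ y - y @ x

print(np.array_equal(bracket(e1, e2), e12))    # c_{alpha_1, alpha_2} = 1
print(np.array_equal(bracket(e2, f12), f1))    # c_{alpha_2, -alpha_1-alpha_2} = 1
print(np.array_equal(bracket(f12, e1), f2))    # c_{-alpha_1-alpha_2, alpha_1} = 1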
Recall from Theorem <ref> <cit.> that
[ u^1= [-e__1(-1)^2e__2(-1)+e__1(-1)e__1+_2(-1)h_2(-1)+e__1+_2(-1)^2f__2(-1)] 1,; ; u^2= [e__1(-1)e__2(-1)^2+e__2(-1)e__1+_2(-1)h_1(-1); -2e_α_2(-1)e__1+_2(-2)
-e__1+_2(-1)^2f__1(-1)] 1. ]
are singular vectors of V^-1(sl_3). Then we have
There exist x,y∈ℐ_-1 such that
[ x≡ -e__1+_2(-1)h_1(-1)h_2(-1) 1-e__1(-1)e__1+_2(-1)f__1(-1) 1; ; -e__1+_2(-1)^2f__1+_2(-1) 1
+e__1(-1)e__2(-1)(2h_1+h_2)(-1) 1; ; +2e__2(-1)e__1+_2(-1)f__2(-1) 1
( F^1(V^-1(sl_3))); ; y≡ h_1(-1)h_2(-1)(h_1(-1)+h_2(-1)) 1+e__1(-1)e__2(-1)f__1+_3(-1) 1; ; -3e__1+_2(-1)f__1(-1)f__2(-1) 1-e__1(-1)f__1(-1)h_1(-1) 1; ; +e__1+_2(-1)f__1+_2(-1)(h_1(-1)+h_2(-1)) 1; ; -e__2(-1)f__2(-1)h_2(-1) 1 ( F^1(V^-1(sl_3))). ]
Let x=f__1(0)u^1, and y=f__1(0)f__1+_2(0)u^1∈ℐ_-1. Then the lemma follows.
Let
[ p_1= -e__1^2e__2+e__1e__1+_2h_2+e__1+_2^2f__2,; ; p_2= e__1e__2^2+e__2e__1+_2h_1-e__1+_2^2f__1,; ; p_3= -e__1+_2h_1h_2-e__1e__1+_2f__1-e__1+_2^2f__1+_2
+e__1e__2(2h_1+h_2)
+2e__2e__1+_2f__2,; ; p_4= h_1h_2(h_1+h_2)+e__1e__2f__1+_3
-3e__1+_2f__1f__2-e__1f__1h_1,; ; +e__1+_2f__1+_2(h_1+h_2)-e__2f__2h_2. ]
Then by Lemma <ref>, p_i∈ℐ_-1, 1≤ i≤ 4.
Denote λ=h_1-h_2. Then
X_L_-1(sl_3)=𝕊_ l_1= G.ℂ^*λ=G.ℂ^*λ∪𝒪_min,
which is the closure of the Dixmier sheet 𝕊_ l_1=G.ℂ^*λ, where l_1=h⊕ e_θ⊕ f_θ. In particular, X_L_-1(sl_3) is irreducible and
dim X_L_-1(sl_3)=5,
where 𝒪_min is the minimal non-zero nilpotent orbit of sl_3(ℂ).
It is known that for any simple Lie algebra over , is the finite disjoint union of its Jordan classes <cit.>. In particular, for =sl_3(), we have
=G· (^*λ)∪ G· (^*λ+f_θ)∪ G· ( h^reg)∪ G· f_θ∪ G· (f__1+f__2)∪{0},
where G is the connected adjoint group of , and θ=_1+_2.
Notice that
p_1(f__1+f__2)=-1, p_3(tλ+f_θ)=-t^2.
This implies that G· (f__1+f__2) and G· (^*λ+f_θ) could not belong in X_L_-1(sl_3).
Furthermore, for regular semisimple vector h, all
(h_1|h), (h|h_2), (h|h_1+h_2) are non-zero.
So
p_4(h)=(h|h_1)(h|h_2)(h|h_1+h_2)≠ 0.
We see that for any h∈ h^reg, h∉ X_L_-1(sl_3).
We deduce that
X_L_-1(sl_3)⊆G.ℂ^*λ=G.ℂ^*λ∪𝒪_min.
By (3) of Theorem <ref>, the ideal generated by u^1 and u^2 is the maximal ideal of V^-1(sl_3). This means that
X_L_-1(sl_3)= G.ℂ^*λ=G.ℂ^*λ∪𝒪_min.
It follows that X_L_-1(sl_3) is irreducible and X_L_-1(sl_3)=5.
§.§ Associated varieties of L_k(sl_3) for k=-3+2/(2m+1), m≥ 1
In this subsection we always assume that k=-3+2/(2m+1), m≥ 1.
Recall from Theorem <ref> that
V^k(sl_3) has two singular vectors v^1 and v^2 with weights kΛ_0-3(2m+1)δ+2α_1+α_2 and kΛ_0-3(2m+1)δ+α_1+2α_2, respectively.
Then for i=1,2,
e__1(0)v^i=e__2(0)v^i=f__1(1)v^i=f__2(1)v^i=0.
Notice that each element in V^k(sl_3) is a linear combination of elements of the following form:
z = z^(+) z^(-) z^(0) 1,
with
z^(+) := e__1(-1)^a_11⋯
e__1 (- r_1)^a_1r_1e__2(-1)^a_21⋯
e__2 (- r_2)^a_2r_2
e__1+_2(- 1)^a_31⋯ e__1+_2(- r_3)^a_3r_3 ,
z^(-) : = f__1(-1)^b_11⋯ f__1(- s_1)^b_1s_1f__2(-1)^b_21⋯ f__2(- s_2)^b_2s_2
f__1+_2(- 1)^b_31⋯ f__1+_2(- s_3)^b_3s_3 ,
z^(0) := h_1(- 1)^c_11⋯ h_1(- t_1)^c_1t_1
h_2(- 1)^c_21⋯ h_2(- t_2)^c_2t_2,
where r_1,r_2,r_3, s_1,s_2,s_3,,t_1,t_2
are positive integers, and a_lp,b_ln,c_ij, for l =1,2,3,
p=1,…,r_l, n=1,…,s_l, i = 1, 2, j=1,…,t_i,
are nonnegative integers such
that at least one of them is non-zero. Recall that
depth(z^(+))=∑_i=1^3∑_j=1^r_i-1(j-1)a_ij, depth(z^(-))=∑_i=1^3∑_j=1^s_i-1(j-1)b_ij,
depth(z^(0))=∑_i=1^2∑_j=1^t_i-1(j-1)c_ij;
deg(z^(+))=∑_i=1^3∑_j=1^r_ia_ij, deg(z^(-))=∑_i=1^3∑_j=1^s_ib_ij, deg(z^(0))=∑_i=1^2∑_j=1^t_ic_ij;
depth(z)=depth(z^(+))+depth(z^(-))+depth(z^(0));
deg(z)=deg(z^(+))+deg(z^(-))+deg(z^(0)).
Let V^1 be the subspace of V^k(sl_3) linearly spanned by elements z=z^(+) z^(-) z^(0) 1 of weight kΛ_0-3(2m+1)δ+2_1+_2 such that deg(z^(0))≤ 6m-2 or depth(z^(0))≥ 1. Then we may assume that
[ v^1= ∑_i=0^6m+1a_ie__1(-1)e__1+_2(-1)h_1(-1)^ih_2(-1)^6m+1-i 1; ; +∑_i=0^6m[x_ie__1(-1)^2e__2(-1)+y_ie__1+_2(-1)^2f__2(-1)+b_ie__1(-2)e__1+_2(-1); ; +c_ie__1(-1)e__1+_2(-2)]h_1(-1)^ih_2(-1)^6m-i 1; ; +∑_i=0^6m-1[d_ie__1+_2(-1)e__1+_2(-2)f__2(-1)
+z_ie__1(-1)e__2(-1)e__1+_2(-1); ; f__2(-1)
+l_ie__1(-1)^2e__1+_2(-1)f__1(-1)
+n_ie__1(-2)e__1+_2(-2); ; +k_ie__1(-1)^2e__2(-2)+g_ie__1(-1)e__1(-2)e__2(-1)+p_ie__1+_2(-1)^2f__2(-2); ; +q_ie__1(-1)e__1+_2(-1)^2f__1+_2(-1)+m_ie__1(-1)e__1+_2(-3); ; +r_ie__1(-3)e__1+_2(-1)]h_1(-1)^ih_2(-1)^6m-1-i 1+u^1, ]
where u^1∈ V^1.
The following result comes from Lemma 3.1 of <cit.>.
Not all a_i, x_j, 0≤ i≤ 6m+1, 0≤ j≤ 6m in v^1 are zero.
We now have the following lemma.
a_i-y_i=0, a_6m+1=0, 0≤ i≤ 6m,
-2(i+1)a_i+1+(6m+1-i)a_i+x_i+l_i-1=0, 0≤ i≤ 6m,
(i+1)a_i+1-2(6m+1-i)a_i-2x_i+z_i=0, 0≤ i≤ 6m-1,
(6m+1)a_6m+1-2a_6m-2x_6m=0,
(-4+2/2m+1)a_i+2y_i-b_i-1=0, 0≤ i≤ 6m,
(-4+2/2m+1)a_6m+1-b_6m=0, (-4+2/2m+1)a_0+2y_0=0,
(-4+2/2m+1)a_i-2x_i-c_i-1-c_i=0, 0≤ i≤ 6m,
a_0=y_0=0,
a_i+x_i=0, 0≤ i≤ 6m,
-(i+1)a_i+1-(6m-i)a_i+q_i+q_i-1=0, 0≤ i≤ 6m.
We consider e__1(0)v^1, e__2(0)v^1, f__1(1)v^1, and f__1+_2(1)v^1. By the definition of V^1, there are no monomials
z=z^(+) z^(-) z^(0) 1 in e__1(0)u^1, e__2(0)u^1, f__1(1)u^1, f__2(1)u^1, and f__1+_2(1)u^1 such that deg(z^(0))≥ 6m, depth(z)=0,
and z^(-)=1. Then it is easy to deduce that
the coefficients of e__1(-1)^2e__1+_2(-1)h_1(-1)^ih_2(-1)^6m-i 1 in e__1(0)v^1, 0≤ i≤ 6m, are
-2(i+1)a_i+1+(6m+1-i)a_i+x_i+l_i-1
, 0≤ i≤ 6m.
Then (<ref>) holds.
The coefficients of e__1+_2(-1)^2h_1(-1)^ih_2(-1)^6m+1-i 1 in e__2(0)v^1, 0≤ i≤ 6m, are
-a_i+y_i, 0≤ i≤ 6m,
and the coefficient of e__1+_2(-1)^2h_1(-1)^6m+1-i 1 is a_6m+1. Then (<ref>) follows.
The coefficients of e__1(-1)e__2(-1)e__1+_2(-1)h_1(-1)^ih_2(-1)^6m-i 1, 0≤ i≤ 6m-1, in e__2(0)v^1 are
(i+1)a_i+1-2(6m+1-i)a_i-2x_i+z_i, 0≤ i≤ 6m-1,
and the coefficient of e__1(-1)e__2(-1)e__1+_2(-1)h_1(-1)^6m 1 in e__2(0)v^1 is
(6m+1)a_6m+1-2a_6m-2x_6m.
Then (<ref>) and (<ref>) hold.
The coefficients of e__1+_2(-1)h_1(-1)^ih_2(-1)^6m+1-i 1, 0≤ i≤ 6m+1,
in f__1(1)v^1 are
(-4+2/2m+1)a_i+2y_i-b_i-1, 1≤ i≤ 6m,
and
(-4+2/2m+1)a_6m+1-b_6m(i=6m+1), (-4+2/2m+1)a_0+2y_0(i=0).
Then (<ref>)and (<ref>) hold.
The coefficients of e__1(-1)h_1(-1)^ih_2(-1)^6m+1-i 1, 0≤ i≤ 6m, in f__1+_2(1)v^1 are
(-4+2/2m+1)a_i-2x_i-c_i-1-c_i, 0≤ i≤ 6m.
This implies (<ref>).
By (<ref>), (<ref>), and the fact that m≥ 1,
a_0=y_0=0.
Then (<ref>) follows.
Notice that (_2|2_1+_2)=0 and e__2(0)v^1=0. It follows that
f__2(0)v^1=0.
Considering the coefficients of e__1(-1)^2h_1(-1)^ih_2(-1)^6m+1-i, 0≤ i≤ 6m, we obtain that
a_i+x_i=0, 0≤ i≤ 6m,
which is (<ref>).
The coefficients of e__1(-1)e__1+_2(-1)^2h_1(-1)^ih_2(-1)^6-i 1, 0≤ i≤ 6m, in e_+_2(0)v^1 are
-(i+1)a_i+1-(6m+1-i)a_i+y_i+q_i+q_i-1, 0≤ i≤ 6m.
Then (<ref>) follows from (<ref>).
By Lemma <ref>, not all a_i, x_j, 0≤ i≤ 6m+1, 0≤ j≤ 6m are zero, and by Lemma <ref>, a_i=-x_i, 0≤ i≤ 6m, a_6m+1=0. Then not all a_i, 0≤ i≤ 6m+1 are zero. By (<ref>), a_0=0. Thus we may assume that
a_0=⋯=a_r-1=0, a_r≠ 0,
for some r≥ 1.
We have the following lemma.
For k=-3+2/2m+1, m≥ 1,
r=2m.
It can be checked directly that the coefficient of
e__1(-1)e__1+_2(-1)h_1(-1)^r-1h_2(-1)^6m+1-r
in h_1(1)v^1
is
-5ra_r+2r(-3+2/2m+1)a_r+4x_r-1+2y_r-1+2b_r-1+c_r-1+c_r-2-z_r-1+4l_r-2+2(q_r-1+q_r-2),
which should be zero.
By (<ref>) and (<ref>),
x_r-1=y_r-1=0.
Then by (<ref>) and (<ref>), we have
l_r-2=2ra_r, z_r-1=-ra_r.
By (<ref>) and (<ref>),
b_r-1=-4m/2m+1a_r.
By (<ref>) and (<ref>),
c_0=0, c_j+c_j-1=0, 1≤ j≤ r-1.
Then we have
c_0=⋯=c_r-1=0.
By (<ref>),
q_r-1+q_r-2=ra_r.
Thus (<ref>) becomes
(4r/2m+1-8m/2m+1)a_r=0.
This deduces that r=2m.
Let U^0 be the sl_3-module generated by v^1. The following lemma is easy to check.
With respect to the Cartan subalgebra h= h_1⊕ h_2, the weight zero subspace of U^0 is one-dimensional, which is linearly spanned by f__1(0)f__1+_2(0)v^1.
Let W^1 be the subspace of V^k(sl_3) linearly spanned by monomials z=z^(+)z^(-)z^(0) 1 such that z∈ F^0(V^k(sl_3)) and z^(-)≠ 1. Then it can be checked directly that
f__1(0)f__1+_2(0)v^1
=∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1+w^1,
for some w^1∈ W^1+F^1(V^k(sl_3)). We denote
v^3=f__1(0)f__1+_2(0)v^1,
and without loss of generality we may assume that
a_2m=1.
We have the following lemma.
∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1=h_1(-1)^2m+1h_2(-1)^2m+1(h_1+h_2)(-1)^2m+1 1.
That is,
[ v^1-e__1(-1)e__1+_2(-1)h_1(-1)^2mh_2(-1)^2m+1(h_1+h_2)(-1)^2m; ; +e__1(-1)^2e__2(-1)h_1(-1)^2mh_2(-1)^2m(h_1+h_2)(-1)^2m∈ W^1+F^1(V^k(sl_3)). ]
and
v^3=h_1(-1)^2m+1h_2(-1)^2m+1(h_1+h_2)(-1)^2m+1 1+w^1,
where w^1 is the same as in (<ref>).
It suffices to prove that
∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1=h_1(-1)^2m+1h_2(-1)^2m+1(h_1+h_2)(-1)^2m+1 1.
Denote
π_2=exp f__2(0)exp(-e__2(0))exp f__2(0),
π_3
=exp f__1+_2(0)exp(-e__1+_2(0))exp f__1+_2(0).
Then by Lemma <ref>, there exists 0≠ c_i∈, i=2,3 such that
π_i(v^3)=c_iv^3.
On the other hand, it is obvious that
π_i(W^1+F^1(V^k(sl_3)))⊆ W^1+F^1(V^k(sl_3)), i=2,3.
So for i=2,3, we have
[ π_i(∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1); ; = c_i(∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1). ]
Notice that
[ π_2(∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1); ; = ∑_i=2m^6m+1a_i(h_1+h_2)(-1)^i+1(-h_2(-1))^6m+1-ih_1(-1) 1. ]
This means that
∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1
has a factor (h_1+h_2)(-1)^2m+1. Also
[ π_3(∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1); ; = -∑_i=2m^6m+1a_ih_2(-1)^i+1h_1(-1)^6m+1-i(h_1+h_2)(-1) 1. ]
This implies that ∑_i=2m^6m+1a_ih_1(-1)^i+1h_2(-1)^6m+1-i(h_1+h_2)(-1) 1 has a factor h_2(-1)^2m+1. Thus (<ref>) holds.
For the singular vector v^2, we can similarly obtain the following lemma.
There exists a non-zero number c such that
[ cv^2-e__2(-1)e__1+_2(-1)h_1(-1)^2m+1h_2(-1)^2m(h_1+h_2)(-1)^2m; ; -e__1(-1)e__2(-1)^2h_1(-1)^2mh_2(-1)^2m(h_1+h_2)(-1)^2m∈ W^1+F^1(V^k(sl_3)). ]
We are now in a position to give the second main result of this section.
For k=-3+2/2m+1, m≥ 1, X_L_k(sl_3)= G.(ℂ^*(h_1-h_2)+f_θ), where θ=_1+_2.
Notice that
=G·^*(h_1-h_2)∪ G· (^*(h_1-h_2)+f_θ)∪ G· ( h^reg)∪𝒩,
and
G·^*(h_1-h_2)∪𝒩⊆G· (^*(h_1-h_2)+f_θ).
By Theorem <ref>, the ideal generated by v^1 and v^2 is the maximal ideal of V^k(sl_3). Recall that W^1 is the subspace of V^k(sl_3) linearly spanned by monomials z=z^(+)z^(-)z^(0) 1 such that z∈ F^0(V^k(sl_3)) and z^(-)≠ 1. For v∈ V^k(sl_3), denote by v̅ the image of v in R_V^k(sl_3). Then for any w∈ W^1, the
value of w̅ at h_1-h_2+f_θ is zero.
Let h be a semisimple regular element in the Cartan subalgebra h= h_1+ h_2. Then
all the (h|h_1), (h|h_2), and (h|h_1+h_2) are non-zeros. Thus by Lemma <ref>,
h∉ X_L_k(sl_3).
Then by Lemmas <ref>-<ref>, the zero locus of ℐ_k⊆ R_V^k(sl_3) is exactly G· (^*(h_1-h_2)+f_θ). It follows that
X_L_k(sl_3)=G· (^*(h_1-h_2)+f_θ).
Recall that if k∈_≥ 0, X_L_k(sl_3)={0} <cit.>, X_L_k(sl_3)=𝒪_min if k=-3+2m+1/2, m≥ 1 <cit.>, and X_L_k(sl_3)=𝒩 if k=-3 or k=-3+p/q, p,q≥ 3, (p,q)=1 <cit.>, <cit.>. Then we have
* For k∈, the associated variety X_L_k(sl_3) of L_k(sl_3) is one of the following:
^*, {0}, 𝒩, 𝒪_min, G·^*(h_1-h_2), G· (^*(h_1-h_2)+f_θ).
* Let h be a regular semi-simple vector of sl_3, then there is no k∈ such that
X_L_k(sl_3)= G·^*h.
For k=-3+2/2m+1, m≥ 0, let ℐ_k be the maximal ideal of V^k(sl_3), and 𝒱(I_k) be defined as in Section 2. We have the following conjecture.
𝒱(I_k)={tΛ̅_1-2i/3Λ̅_2, tΛ̅_2-2i/3Λ̅_1, tΛ̅_1-(t+2i/3+1)Λ̅_2, t∈, i=0,1,⋯, 2m}.
If m=0, that is k=-1, the conjecture is true by Proposition 5.5 of <cit.>. When m=1, we could verify the conjecture by computer programming.
§ VARIETIES OF SIMPLE AFFINE W-ALGEBRAS W_K(SL_3,F)
We first have the following results from <cit.> and <cit.>.
Let be a simple complex Lie algebra,
f a regular nilpotent element of , and k=-h^∨+p/q a non-degenerate admissible number. Then the simple W-algebra W_k(g, f) is rational and lisse.
Let f be a minimal nilpotent vector of sl_3 and k=-3+2m+1/2, m≥ 1. Then the simple W-algebra W_k(sl_3) is rational and lisse.
Let k=-1 and f a minimal nilpotent element of sl_3. It was proved in <cit.> that H^0_DS,f(L_-1(sl_3))=W_-1(sl_3,f), and is isomorphic to the rank one Heisenberg algebra M(1). So its associated variety is one-dimensional.
Let k=-3+2/2m+1, m≥ 1, and f a minimal nilpotent vector of sl_3, then
*
X_W_k(sl_3,f)={[[ a b 3(μ^2-a^2); 0 -2a c; 1 0 a ]]| μ, a,b,c∈, bc=2(a-μ)(2a+μ)^2 }.
In particular,
dim X_W_k(sl_3,f)=3.
* W_k(sl_3) is not quasi-lisse.
By Theorem <ref>, for a minimal vector f∈ sl_3, and k+3=2/2m+1, m∈_≥ 0,
H^0_DS,f(L_k(sl_3))=W_k(sl_3, f).
By Theorem <ref> and (<ref>),
X_W_k(sl_3,f)=X_L_k(sl_3)∩𝒮_f,
for k+3=2/2m+1, m∈_≥ 0. We may assume that f=f_θ. Then
𝒮_f=f_θ+^e_θ,
and
^e_θ=(_1-_2)⊕ e_θ⊕ e__1⊕ e__2.
Then we have
f_θ+^e_θ={[[ a b d; 0 -2a c; 1 0 a ]]| a,b,c,d∈}.
It is obvious that for μ∈,
A_μ=[[ μ 0 0; 0 -2μ 0; 1 0 μ ]]∈G· (^*(h_1-h_2)+f_θ).
In general,
B=[[ a b d; 0 -2a c; 1 0 a ]]∈G· (^*(h_1-h_2)+f_θ)∩𝒮_f_θ
if and only if B is similar to A_μ for some μ∈. Then we deduce that
d=3(μ^2-a^2),
and
bc=2(a-μ)(μ+2a)^2.
Thus (<ref>) and (<ref>) hold.
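The two constraints just obtained can be double-checked mechanically by matching characteristic polynomials. The following sketch is an independent sanity check, not part of the original argument; it uses SymPy (assumed available) to verify that an element B of the slice with d=3(μ^2-a^2) and bc=2(a-μ)(2a+μ)^2 imposed has the same characteristic polynomial as A_μ, which is the necessary condition used above for similarity.

```python
import sympy as sp

a, b, mu, lam = sp.symbols('a b mu lam')

# Element of the slice f_theta + g^{e_theta} with the two claimed constraints imposed
B = sp.Matrix([[a, b, 3*(mu**2 - a**2)],
               [0, -2*a, 2*(a - mu)*(2*a + mu)**2 / b],
               [1, 0, a]])
# Representative mu*(h_1 - h_2) + f_theta
A = sp.Matrix([[mu, 0, 0],
               [0, -2*mu, 0],
               [1, 0, mu]])

pB = (lam*sp.eye(3) - B).det()
pA = (lam*sp.eye(3) - A).det()
print(sp.simplify(pB - pA))   # 0: B and A_mu share the same characteristic polynomial
```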
By Theorem 8.2 of <cit.>, L(tΛ̅_1) and L(tΛ̅_2), t∈_≥ 0, are irreducible ordinary modules of L_k(sl_3). This means that L_k(sl_3) has infinitely many irreducible ordinary modules.
Then by Theorem 5.4 of <cit.>,
for t∈_≥ 0, the minimal quantum hamiltonian reductions ℋ_j_t+1, 1Δ_t+1,1 and ℋ_j_1,t+1Δ_1,t+1
of L(tΛ̅_1) and L(tΛ̅_2) are irreducible ordinary W_k(sl_3,f)-modules,
where
j_t+1,1=t/3, j_1,t+1=2t/3, Δ_t+1,1=t^2+3t/3(k+3)-2t/3, Δ_1,t+1=t^2+3t/3(k+3)-t/3.
This means that W_k(sl_3,f) has infinitely many irreducible modules. By Theorem <ref>, W_k(sl_3,f) is not quasi-lisse.
We now assume that f is a regular nilpotent vector of sl_3. We may assume that
f=[[ 0 0 0; 2 0 0; 0 2 0 ]], e=[[ 0 1 0; 0 0 1; 0 0 0 ]], h=[[ 2 0 0; 0 0 0; 0 0 -2 ]].
Then
^e= e⊕ e_θ,
and
f+^e={[[ 0 a b; 2 0 a; 0 2 0 ]]| a,b∈}.
Denote the associated universal affine W-algebras and the simple quotients by W^k(sl_3) and W_k(sl_3), respectively.
Let k+3=2/2m+1 or 2m+1/2, then
X_W_k(sl_3)={[[ 0 3/4μ^2 1/2μ^3; 2 0 3/4μ^2; 0 2 0 ]]| μ∈}.
In particular, dim X_W_k(sl_3)=1. Furthermore, W_k(sl_3) is not quasi-lisse.
Since k=-3+2/2m+1, m≥ 1, it follows that
(kΛ_0+≀|)∉, for ∈{-_1+δ, -_2+δ, -(_1+_2)+δ, -(_1+_2)+2δ}.
Then by Theorem 9.1.4 of <cit.>,
H^0_DS,f(L_k(sl_3))=W_k(sl_3, f).
Thus
X_W_k(sl_3)=X_L_k(sl_3)∩𝒮_f.
By the Feigin-Frenkel Langlands duality <cit.>, <cit.>,
W^k(sl_n)≅ W^k'(sl_n),
for k+n=p/q and k'+3=q/p. In particular, for k+3=2/2m+1 and k'+3=2m+1/2,
W_k(sl_3)≅ W_k'(sl_3).
Let
B=[[ 0 a b; 2 0 a; 0 2 0 ]]∈ f+^e.
Then
f_B(λ)=|λ I-B|=|[ λ -a -b; -2 λ -a; 0 -2 λ ]|=λ^3-4aλ-4b.
Notice that
X_L_k(sl_3)= G· (^*(h_1-h_2)+f_θ).
So B∈ X_W_k(sl_3) if and only if B is similar to
A_μ=μ(h_1-h_2)+f_θ=[[ -μ 0 0; 0 2μ 0; 1 0 -μ ]].
for some μ∈. Then it is easy to deduce that B∈ X_W_k(sl_3) if and only if
a=3/4μ^2, b=1/2μ^3.
Thus
X_W_k(sl_3)={[[ 0 3/4μ^2 1/2μ^3; 2 0 3/4μ^2; 0 2 0 ]]| μ∈}.
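The same characteristic-polynomial comparison recovers the two coefficient conditions in the regular nilpotent case. The SymPy sketch below is again an independent check, not taken from the source; matching f_B(λ)=λ^3-4aλ-4b against the characteristic polynomial of A_μ returns the stated values of a and b.

```python
import sympy as sp

a, b, mu, lam = sp.symbols('a b mu lam')

B = sp.Matrix([[0, a, b], [2, 0, a], [0, 2, 0]])          # element of f + g^e
A = sp.Matrix([[-mu, 0, 0], [0, 2*mu, 0], [1, 0, -mu]])   # mu*(h_1 - h_2) + f_theta

pB = sp.expand((lam*sp.eye(3) - B).det())   # lam**3 - 4*a*lam - 4*b
pA = sp.expand((lam*sp.eye(3) - A).det())   # lam**3 - 3*mu**2*lam - 2*mu**3

sol = sp.solve(sp.Poly(pB - pA, lam).coeffs(), [a, b], dict=True)
print(sol)   # [{a: 3*mu**2/4, b: mu**3/2}]
```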
Let L(tΛ̅_1) and L(tΛ̅_2), t∈_≥ 0 be the irreducible ordinary modules of L_k(sl_3) as in the proof of Theorem <ref>, then by Theorem 5.4 of <cit.>,
for t∈_≥ 0, the principal quantum hamiltonian reductions 𝒲_h_t+1, 1w_t+1,1 and 𝒲_h_1,t+1w_1,t+1
of L(tΛ̅_1) and L(tΛ̅_2) are irreducible ordinary W_k(sl_3)-modules,
where
h_t+1,1=t^2+3t/3(k+3)=h_1,t+1,
and
w_t+1,1=-2t√(3)/3(k+3)^3(2t/3-k-2)(t/3-k-2)=-w_1,t+1.
It follows from Theorem <ref> that W_k(sl_3) is not quasi-lisse.
|
http://arxiv.org/abs/2409.02271v1 | 20240903200329 | CCAT: Nonlinear effects in 280 GHz aluminum kinetic inductance detectors | [
"Cody J. Duell",
"Jason Austermann",
"James R. Burgoyne",
"Scott C. Chapman",
"Steve K. Choi",
"Abigail T. Crites",
"Rodrigo G. Freundt",
"Anthony I. Huber",
"Zachary B. Huber",
"Johannes Hubmayr",
"Ben Keller",
"Lawrence T. Lin",
"Alicia M. Middleton",
"Colin C. Murphy",
"Michael D. Niemack",
"Thomas Nikola",
"Darshan Patel",
"Adrian K. Sinclair",
"Ema Smith",
"Gordon J. Stacey",
"Anna Vaskuri",
"Eve M. Vavagiakis",
"Michael Vissers",
"Samantha Walker",
"Jordan Wheeler"
] | astro-ph.IM | [
"astro-ph.IM"
] |
The Wanderer: Charting WASP-77A b's Formation and Migration Using a System-Wide Inventory of Carbon and Oxygen Abundances
Maleah Rhem
September 9, 2024
=========================================================================================================================
§ ABSTRACT
Prime-Cam, a first-generation science instrument for the Atacama-based Fred Young Submillimeter Telescope, is being built by the CCAT Collaboration to observe at millimeter and submillimeter wavelengths using kinetic inductance detectors (KIDs). Prime-Cam’s 280 GHz instrument module will deploy with two aluminum-based KID arrays and one titanium nitride-based KID array, totaling ∼10,000 detectors at the focal plane, all of which have been fabricated and are currently undergoing testing. One complication of fielding large arrays of KIDs under dynamic loading conditions is tuning the detector tone powers to maximize signal-to-noise while avoiding bifurcation due to the nonlinear kinetic inductance. For aluminum-based KIDs, this is further complicated by additional nonlinear effects which couple tone power to resonator quality factors and resonant frequencies. While both nonequilibrium quasiparticle dynamics and two-level system fluctuations have been shown to give rise to qualitatively similar distortions, modeling these effects alongside nonlinear kinetic inductance is inefficient when fitting thousands of resonators on-sky with existing models. For this reason, it is necessary to have a detailed understanding of the nonlinear effects across relevant detector loading conditions, including how they impact on on-sky noise and how to diagnose the detector’s relative performance. We present a study of the competing nonlinearities seen in Prime-Cam’s 280 GHz aluminum KIDs, with a particular emphasis on the resulting distortions to the resonator line shape and how these impact detector parameter estimation.
§ INTRODUCTION
Kinetic inductance detectors (KIDs), a type of frequency-division-multiplexed superconducting resonators <cit.>, have become an increasingly popular choice in recent years for observing at millimeter and submillimeter wavelengths <cit.>. The CCAT collaboration's Prime-Cam instrument <cit.> will use KIDs for observing at the Atacama-based Fred Young Submillimeter Telescope (FYST), beginning with three arrays (one TiN and two Al) at 280 GHz. This first module will be followed in the near-term by imaging modules at 350 GHz and 850 GHz, and a spectrometer module, EoR-Spec<cit.>. As we approach deployment, one of the practical challenges for operation is the distinctive nonlinear response of aluminum KIDs. Resonator nonlinearity occurs when the underlying circuit parameters are sensitive to the internal energy of the resonator. When nearing resonance under these conditions, the increasing absorption of microwave probe tone photons leads to a changing resonator profile even at fixed tone powers.
While many strategies for tone power optimization are built around the well-understood nonlinear kinetic inductance of TiN<cit.>, aluminum KIDs can exhibit additional competing nonlinearity as a result of nonequilibrium quasiparticle dynamics <cit.>. In Prime-Cam's 280 GHz aluminum KIDs, these competing nonlinearities distort the resonator line shapes even well-below the onset of bifurcation, skewing resonator fits and increasing the noise penalty for sub-optimal tone powers or tone placement. By using information from the full resonator sweep, we can observe the evolution of this detector nonlinearity and better understand the impact on resonator fits and tone optimization.
§ BACKGROUND
As previously described<cit.>, the first KID arrays for Prime-Cam have already completed production and begun in-lab characterization. The data described here was taken using three witness pixels (five total detectors) fabricated on the same wafer as the first completed Al array in a set-up that allows for measurement with a cryogenic black-body source at a range of base temperatures down to ∼ 58 mK. An extensive amount of data has been acquired with the Al witness pixels under a variety of bath temperature, optical loading, and probe tone power conditions.
All five measured resonators are between 500 MHz and 901 MHz. Measured coupling quality factors (Q_c) are in the range of 18,000 to 36,000, and under designed loading conditions, the total quality factor (Q) values are expected to be in the range of 8,000 to 20,000. As is discussed further in section <ref>, tone power optimization has a significant impact on both the measurable resonator parameters and the observed shape of the resonance circle, particularly at tone powers that are most relevant for operation. Since these effects on the line shape are not easily modeled, they significantly skew any resonator fits to systematically underestimate quality factors and rotate the impedance mismatch angle. Figure <ref> shows how going from low powers to higher powers pushes the KID from deeply in the internal quality factor (Q_i) dominated regime (Q/Q_c < 0.5) through critical coupling (Q/Q_c = 0.5) to the Q_c-dominated regime (Q/Q_c > 0.5).
In addition to the tone-power sensitivity, the Al detectors deviate from the equilibrium Mattis-Bardeen evolution with bath temperature by first increasing in resonance frequency (f_0) and Q_i before turning around to a more standard continuous decrease in both parameters. Figure <ref> shows this resonator response to changing bath temperatures based on well-fit data, which is, by necessity, 15-20 dB below the optimal operating powers of the resonators. This "under-driving" of the resonators shows most significantly in the plots of the internal resonator loss, Q_i^-1 (also sometimes written as tanδ_i), which is much larger and much less consistent between KIDs than it would be under optimal tone powers.
§ DETECTOR NONLINEARITY
When describing resonator behavior, either linear or nonlinear, we are describing a single-pole Lorentzian that is parameterized by the center resonant frequency (f_0) and the total quality factor (Q). For the particular case of a capacitively-coupled resonator, the forward scattering parameter, S_21, takes the form
S_21 = 1 - Q/Q_c1/1 + 2jQf-f_0/f_0= 1 - Q/Q_c1/1 + 2jQx,
where Q_c is the coupling quality factor, f is the probe frequency, and x ≡f-f_0/f_0. In the complex, or Argand, plane this traces out a circle of diameter Q/Q_c that is centered at (1 - Q/2 Q_c, 0). Given the definitions of Q and S_21, we can write the resonator's internal energy, E_r, as
E_r = 2Q^2/Q_c1/1+4Q^2 x^2P_r/2π f_0,
where P_r is the readout tone power. Critically, the probe tone appears here through both the tone power (P_r) and the frequency (through x=f-f_0/f_0). In the case of a linear resonator, the parameters f_0 and Q are stationary and each point on the resonance circle can be mapped back to the same values. If instead there is some dependence of circuit parameters on the internal energy, then we can see nonlinear behavior where the underlying parameters are changing along with the tone power or frequency. This can be driven by a number of different underlying physical processes (see <ref>) and can give rise to a wide range of behaviors, many of which are described in . In general, however, we can refer to nonlinearities as either reactive (where f_0 is changing), dissipative (where Q is changing), or both. Reactive nonlinearities shift the position (often just referred to as the phase) on the resonance circle, while dissipative nonlinearities change the diameter of the circle and the circle-phase relationship to f.
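To make the linear baseline concrete, the sketch below evaluates the two expressions above for a fixed-parameter resonator. All numbers are illustrative placeholders, not the measured parameters of the 280 GHz devices.

```python
import numpy as np

# Illustrative placeholder values, not the measured 280 GHz detector parameters
f0 = 700e6              # resonant frequency [Hz]
Q, Qc = 15_000, 25_000  # total and coupling quality factors
P_r = 1e-12             # readout tone power [W]

f = np.linspace(f0 * (1 - 10 / Q), f0 * (1 + 10 / Q), 2001)
x = (f - f0) / f0

# Forward transmission of a linear, capacitively coupled resonator
S21 = 1 - (Q / Qc) / (1 + 2j * Q * x)

# Internal resonator energy versus probe frequency
E_r = (2 * Q**2 / Qc) / (1 + 4 * Q**2 * x**2) * P_r / (2 * np.pi * f0)

# On resonance the trace reaches 1 - Q/Qc and E_r peaks; the complex S21 points
# trace a circle of diameter Q/Qc centered at (1 - Q/(2*Qc), 0).
print(S21[np.argmin(np.abs(x))], E_r.max())
```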
§.§ Types of Nonlinearities
The most well-understood form of nonlinearity in KIDs is the nonlinear kinetic inductance<cit.>, which results from a higher-order dependence on current in the kinetic inductance. This is a purely reactive nonlinearity, and, when viewed on the resonance circle at higher powers, appears as a lopsided jump in phase with no change in the circle diameter. The asymmetry is caused by f_0 shifting monotonically to lower frequencies as the resonator energy increases. As f approaches f_0 from below (x<0), x^2 decreases and E_r rises, pulling f_0 down in frequency at an increasing rate until eventually we jump over the center frequency and x^2 begins to increase again. On the high frequency side of the resonance, where x>0, the resonator relaxes back into its higher frequency state as E_r decreases, following the probe tone and causing the phase to evolve more slowly than it would otherwise. Most critically, at high tone powers the nonlinear kinetic inductance causes the resonator state to "bifurcate" as the system discontinuously jumps between different possible resonator states, setting a practical upper limit on tone powers.
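The frequency pulling and bifurcation described above can be made quantitative with a commonly used parameterization of nonlinear kinetic inductance. This model is not spelled out in the text above, so treat the details as an assumption: the measured detuning y = Q(f-f_r)/f_r satisfies y = y_0 + a/(1+4y^2), where y_0 is the detuning from the zero-power resonance and a is a dimensionless nonlinearity parameter, with bifurcation setting in at a = 4*sqrt(3)/9 ≈ 0.77. The sketch below solves the resulting cubic and counts the probe detunings that admit multiple resonator states.

```python
import numpy as np

def detuning_solutions(y0, a):
    """Real roots y of 4*y**3 - 4*y0*y**2 + y - (y0 + a) = 0,
    i.e. solutions of y = y0 + a / (1 + 4*y**2)."""
    roots = np.roots([4.0, -4.0 * y0, 1.0, -(y0 + a)])
    return roots[np.abs(roots.imag) < 1e-7].real

a_crit = 4 * np.sqrt(3) / 9            # ~0.77, onset of bifurcation
for a in (0.3, a_crit, 1.5):           # below, at, and above the critical value
    n_multi = sum(len(detuning_solutions(y0, a)) > 1
                  for y0 in np.linspace(-4, 4, 801))
    print(f"a = {a:.2f}: {n_multi} probe detunings with multiple resonator states")
```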
Another source of nonlinearity with a well-defined impact is loss to two-level systems (TLS) <cit.> in the dielectric. At very low temperatures and tone powers, TLS fluctuations increase the overall loss, decreasing the Q. Crucially, this effect is reduced at higher temperatures and powers. The latter of these allows for nonlinear feedback. As the power stored in the TLS increases, the Q likewise increases such that the circle diameter reaches a maximum on resonance. This can cause the resonance circle to take on an oblong shape, appearing squashed on the sides. This effect is generally quite small by design, as the total loss is the sum of several contributions, chiefly the coupling loss (loss to the readout line), the loss to the quasiparticle system, and the TLS loss. Detectors are designed to operate in regimes dominated by coupling loss or quasiparticle loss.
Finally, quasiparticle absorption of microwave photons has been shown to lead to strongly nonlinear behavior <cit.>. By driving quasiparticles out of thermal equilibrium, sub-gap microwave photons push the system away from the expected Fermi-Dirac distribution, as well as explicitly altering the density of states. Both of these appear when calculating the AC conductivity from the Mattis-Bardeen equations, allowing for tone power to alter the conductivity in both reactive and dissipative manners. While calculating these effects is beyond the scope of this work, we briefly describe the two scenarios that can be observed, which correspond to an effective "heating" or "cooling" of the quasiparticle system. For more detailed discussion of this approach, refer to . It should be noted that, while these scenarios are observed in to occur at different temperature regimes, they are sufficiently complicated that either "heating" or "cooling" can occur at a wide range of temperatures. In the case of quasiparticle cooling, quasiparticles are excited by the probe tone to recombine more rapidly than they would otherwise, causing a reduction in both loss (increase in Q) and kinetic inductance (increase in f_0). The increase in Q causes the diameter of the circle to increase as you approach resonance, while the increase in f_0 causes the probe frequency to approach resonance more slowly from below and more rapidly from above, leading to an asymmetry. In the case of quasiparticle heating, the excess energy in the quasiparticle system instead suppresses recombination, causing an increase in both loss and inductance in a manner similar to above-gap pair-breaking radiation. Decreasing the Q in this case squashes the resonance circle in the opposite manner to above, meaning that the diameter is at a minimum when on resonance. Decreasing the resonance frequency causes an effect similar to the nonlinear kinetic inductance, resulting in an asymmetry in the resonance circle of the same manner.
Since the kinetic inductance nonlinearity is monotonic and well-understood, we can account for it in a relatively straightforward manner when fitting a resonator<cit.>. This is the case for Prime-Cam's TiN KIDs<cit.>. In cases where multiple observable nonlinearities arise from different, potentially opposing physical effects, this modeling becomes more difficult. For the Al KIDs these effects are severe enough to significantly alter the observed resonator profile and bias fits using either a standard linear model or a model incorporating nonlinear kinetic inductance. As such, it is useful to "unwrap" the resonance circle to show what nonlinearities most significantly impact the observed resonator shape.
§ UNWRAPPING THE RESONANCE CIRCLE
When we take an S_21 trace near resonance, the equation that we are effectively measuring as a function of frequency ω = ω_0 (1 + x) is <cit.>
S_21 = A(ω) e^jθ(ω)(1 - Q/Q_e1/1 + 2 j Q x),
where A(ω)e^jθ(ω) is a frequency-dependent complex normalization and Q_e is a generalized complex version of Q_c that allows for an additional rotation due to mismatches in the resonator and feedline impedance.
Beginning from a measurement of S_21, if we remove the factors that arise from the environment (i.e. the cables, amplifiers, etc.) and the impedance mismatch (the rotation angle of the external quality factor, Q_e), we can arrive back at the pure resonator expression. Shifting our anchor point to the origin for convenience, we can rewrite this expression (which we call ŝ_res) in terms of the two parameters that ought to be linear:
ŝ_res = 1 - S_21, res = Q/|Q_e|1/1 + 2jQω-ω_0/ω_0
= q_r(ω)/1+2jy(ω) .
Here, q_r = Q/|Q_e| is the diameter of the resonance circle (related to the dip depth) and y = Qω-ω_0/ω_0 = Q x is the distance from the center frequency as measured in line-widths. For a linear resonator, q_r will be constant and y will be linear in frequency with the slope set by the Q and the zero value set by ω_0 = 2π f_0. With the environment and impedance mismatch accounted for, we can convert a position in the complex plane to an implied q_r and y, thus, we can use these to fit for Q, f_0, and |Q_e|. From Equation <ref>, we can calculate
y(ω) = -1/2ℑ𝔪(ŝ_res)/ℜ𝔢(ŝ_res)
and
q_r(ω) = [1 + (ℑ𝔪(ŝ_res)/ℜ𝔢(ŝ_res))^2]ℜ𝔢(ŝ_res),
where ℜ𝔢(ŝ_res) and ℑ𝔪(ŝ_res) are the real and imaginary parts of ŝ_res, respectively. Using these expressions to identify trends in Q and f_0 does require a good estimation of and proper accounting for environmental effects. Additionally, the appearance of ℜ𝔢(ŝ_res) in the denominator of both Equation <ref> and Equation <ref> means that the impacts of noise on calculated parameters becomes much larger for smaller resonance circles or farther away from resonance.
One final improvement to separating out reactive and dissipative nonlinearities in these plots is to plot y_c = y/q_r = Q_cω-ω_0/ω_0 rather than y, since Q_c generally should not change with tone power. We can think of this as now measuring the distance in terms of coupling line widths rather than resonator line widths. This is particularly useful in the presence of a dissipative nonlinearity when the resonator is more strongly Q_i-limited than in the data presented here, such as at higher bath temperatures or optical loading.
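In practice, the unwrapping defined by the expressions above reduces to a few lines of array arithmetic once the environmental terms have been removed. The sketch below assumes a calibrated complex S_21 sweep (cable delay, loop gain, and the impedance-mismatch rotation already taken out); the variable and function names are illustrative, not from an existing analysis pipeline.

```python
import numpy as np

def unwrap_resonance(s21_cal):
    """Convert a calibrated S21 sweep into the unwrapped resonator parameters.

    s21_cal : complex array, S21 with cable delay, gain, and the impedance-
              mismatch rotation already removed.

    Returns
    -------
    y   : Q * (f - f0) / f0, detuning measured in resonator line widths
    q_r : Q / |Q_e|, the instantaneous resonance-circle diameter
    y_c : y / q_r, detuning measured in coupling line widths
    """
    s_res = 1.0 - s21_cal              # move the anchor point to the origin
    re, im = s_res.real, s_res.imag

    y = -0.5 * im / re                 # expression for y(omega)
    q_r = (1.0 + (im / re) ** 2) * re  # expression for q_r(omega)
    y_c = y / q_r                      # coupling-line-width detuning
    return y, q_r, y_c

# For a linear resonator, y is a straight line in probe frequency (zero crossing
# at f0, slope set by Q) and q_r is flat; curvature in y or structure in q_r
# flags reactive or dissipative nonlinearity. The division by Re(s_res)
# amplifies noise far from resonance, as noted above.
```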
§ NONLINEARITY MEASUREMENTS
Turning our attention to the data, in Figures <ref> and <ref> we can see the behavior of an Al KID under relevant loading conditions of ∼ 5 pW. Even before moving to the nonlinearity parameters, it is possible to see the distortions in the Argand plane that are signatures of both nonlinear kinetic inductance and quasiparticle “cooling." The resonance circle is expanding and pinching asymmetrically on the sides, while at the highest powers we see nonlinear kinetic inductance causing bifurcation.
Things become clearer in Figure <ref>, which shows q_r and y_c. In y_c it is clear that the coupling Q is not changing dramatically as the resonator is driven between two parallel states. At low powers, we observe a deformation similar to that seen from nonlinear kinetic inductance, except that this nonlinearity occurs in the opposite direction. At higher powers, we observe the nonlinear kinetic inductance begin to flatten the response and eventually drive the resonant frequency back down in the opposite direction. This is corroborated by the plot of q_r, which shows the resonator beginning in a strongly Q_i-dominated regime and being continually driven up into a strongly Q_c-dominated regime. We also see which way the resonance is being pushed by the asymmetry in q_r, where at lower powers there is a gradual increase in Q as we approach the center frequency, followed by a sharper drop-off after passing it, as the resonator snaps back to its low-power, low-frequency state. At high powers, however, the opposite occurs: the resonance is pulled down sharply and then follows the probe tone gradually, as is typical of the kinetic inductance nonlinearity. Lastly, a careful examination of these plots points towards probing the detectors on the high-frequency side of resonance, as the enhanced slope in the plots of y_c prior to bifurcation reveals how the quasiparticle nonlinearity can serve to amplify the signal from incident power.
§ CONCLUSION
We showed here a method of using the complex S_21 data from the full resonator trace to interpret and fit highly nonlinear resonator behavior, though this requires accurate correction of the environmental components (i.e. the cable delay, loop gain, and rotation from impedance mismatches) of the trace. We can use these plots with good constraints on the environment and Q_c values to immediately estimate f_0 and Q. Looking at Prime-Cam's Al KIDs as described here gives a qualitative picture of how the competing kinetic inductance and quasiparticle nonlinearities affect the line shape under realistic operating conditions from roughly 5–9 pW. At tone powers well below bifurcation due to nonlinear kinetic inductance, the resonator Q is significantly reduced and f_0 is shifted down. With increasing tone power (until near bifurcation), both Q and f_0 increase though the resonance circle becomes asymmetrically compressed on the sides off-resonance. By better understanding the behavior in this way, we can develop strategies for how to best account for it in parameter estimation and probe tone placement.
The CCAT project, FYST and Prime-Cam instrument have been supported by generous contributions from the Fred M. Young, Jr. Charitable Trust, Cornell University, and the Canada Foundation for Innovation and the Provinces of Ontario, Alberta, and British Columbia. The construction of the FYST telescope was supported by the Großgeräte-Programm of the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) under grant INST 216/733-1 FUGG, as well as funding from Universität zu Köln, Universität Bonn and the Max Planck Institut für Astrophysik, Garching. ZBH was supported by a NASA Space Technology Graduate Research Opportunity. MN acknowledges support from NSF grant AST-2117631. SW acknowledges support from the Cornell CURES fellowship.
|
http://arxiv.org/abs/2409.02614v1 | 20240904111141 | Evaluating the Effects of Digital Privacy Regulations on User Trust | [
"Mehmet Berk Cetin"
] | cs.CY | [
"cs.CY"
] |
Vrije Universiteit Amsterdam
Master Thesis
Evaluating the Effects of Digital Privacy Regulations on User Trust
Author: Mehmet Berk Cetin (2644886)
1st supervisor: Dr. Anna Bon
2nd reader: Dr. Hans Akkermans
September 9, 2024
“By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest.”
by Confucius
§ ABSTRACT
In today's digital society, issues related to digital privacy have become increasingly important. Issues such as data breaches result in misuse of data, financial loss, and cyberbullying, which leads to less user trust in digital services. This research investigates the impact of digital privacy laws on user trust by comparing the regulations in the Netherlands, Ghana, and Malaysia. The study employs a comparative case study method, involving interviews with digital privacy law experts, IT educators, and consumers from each country. The main findings reveal that while the General Data Protection Regulation (GDPR) in the Netherlands is strict, its practical impact is limited by enforcement challenges. In Ghana, the Data Protection Act is underutilized due to low public awareness and insufficient enforcement, leading to reliance on personal protective measures. In Malaysia, trust in digital services is largely dependent on the security practices of individual platforms rather than the Personal Data Protection Act. The study highlights the importance of public awareness, effective enforcement, and cultural considerations in shaping the effectiveness of digital privacy laws. Based on these insights, a recommendation framework is proposed to enhance digital privacy practices, also aiming to provide valuable guidance for policymakers, businesses, and citizens in navigating the challenges of digitalization.
§ INTRODUCTION
Imagine waking up one morning to discover that your personal data, consisting of your name, email, and even credit card information, has been compromised in a massive data breach. This unsettling scenario recently affected millions of users of the app MyFitnessPal, where a data breach exposed the personal information of over 150 million users, leading to widespread concerns about data security and privacy. Users' email addresses, usernames, and hashed passwords were among the compromised data, highlighting the risks associated with sharing personal information online without fully understanding how it will be protected <cit.>.
Despite such incidents, users of online digital services frequently feel obligated to share their personal data to utilize various services, yet they often disregard the terms and conditions that explain how their data will be used <cit.>, or they find these terms simply too complicated to understand <cit.>. This lack of clarity and level of complexity can result in personal data being sold or compromised in a data breach without the user's awareness <cit.>. Moreover, the online exposure of personal information can lead to its abuse <cit.>, financial damage <cit.>, and cyberbullying <cit.>. Such incidents demolish user trust <cit.> and negatively affect the digital economy <cit.>. Therefore, safeguarding digital privacy is crucial given the severe consequences data breaches have on consumers.
To tackle various privacy and security issues, countries and regions have implemented their own regulations. Examples include China's Personal Information Protection Law (PIPL) enacted in 2021 <cit.>, the European Union's General Data Protection Regulation (GDPR) introduced in 2018 <cit.>, Malaysia's Personal Data Protection Act (PDPA) from 2010 <cit.>, the California Consumer Privacy Act (CCPA) established in the United States <cit.>, and many more around the world. While there are many privacy and security laws worldwide, the GDPR is recognized as the strictest <cit.>. Companies handling data from EU citizens must comply with GDPR regulations or face fines of up to tens of millions of euros <cit.>. GDPR empowers consumers with greater control over their data, including rights to withdraw consent or request data deletion <cit.>.
Different privacy laws have distinct focuses. For example, GDPR emphasizes the nationality of the data subject and the location of the business, while PIPL focuses on where the data processing occurs <cit.>.
§.§ Background on digital privacy regulations
Digital privacy laws can affect business practices <cit.> and user behavior <cit.> when consuming digital services. Therefore, to have better digital services that are safer, more private, and hence more attractive to consumers, understanding the impact of privacy regulations on digital service consumption is critical in today's globalized digital landscape.
To explore this impact, we select three different countries, namely the Netherlands, Ghana, and Malaysia, representing Europe, Africa, and Asia, respectively. This diverse selection allows us to examine whether continental differences influence the interpretation and effectiveness of digital privacy laws. We further validate our choice of these countries in section <ref>.
Digital Privacy Regulations in the Netherlands -
The General Data Protection Regulation (GDPR) <cit.> is a comprehensive data protection law implemented by the European Union (EU) in May 2018, and is enforced in the Netherlands. Its primary aim is to give individuals more control over their personal data and harmonize data protection laws across the EU. For users, some key points of interest include the requirement for explicit consent before their data can be collected and processed, the right to access and transfer their data, and the right to request the deletion of their data when it is no longer necessary. Moreover, organizations must notify users of data breaches that pose a risk to their rights and freedoms within 72 hours. Non-compliance with GDPR can result in significant fines, up to 4% of annual global turnover or €20 million <cit.>, whichever is higher.
Digital Privacy Regulations in Ghana -
Ghana's digital privacy law, governed by the Data Protection Act, 2012 <cit.>, aims to protect individual privacy by ensuring that personal data processing aligns with fundamental privacy rights. The act mandates fair and lawful data processing, data quality, and security requirements, and grants users rights to access their data, correct inaccuracies, and object to data processing under specific conditions. Explicit consent is required before collecting and processing personal data, similar to the GDPR. Additionally, the act establishes the Data Protection Commission <cit.>, an independent body that is responsible for overseeing compliance and enforcing penalties for non-compliance with the law.
Digital Privacy Regulations in Malaysia -
In Malaysia, the Personal Data Protection Act (PDPA) <cit.> established in 2010 regulates the processing of personal data in commercial transactions. The PDPA aims to safeguard individual privacy and ensure data is managed responsibly. Key aspects for users include principles of data processing that ensure data integrity, security, and lawful processing. Users have the right to access, correct, and withdraw consent for the use of their data. Organizations must obtain user consent before data collection and notify users about the purpose of data processing. Additionally, the PDPA sets specific conditions for transferring personal data outside Malaysia, ensuring it is protected abroad. Not complying with these regulations can lead to fines from the government.
§.§ Comparing privacy regulations
All three regulations emphasize giving users control over their personal data, providing rights to access, correct, and delete their data, which enhances their control over personal information. Transparency is another common aspect, with regulations requiring organizations to be clear about data processing activities, enhancing trust among users. Robust security measures are required to protect user data from breaches and misuse. Moreover, all three regulations include mechanisms for users to seek compensation in case of data protection violations. Despite these similarities, there are differences between the regulations. GDPR applies broadly to any organization processing data of EU residents, regardless of where the organization is located. In contrast, Ghana’s Data Protection Act primarily focuses on data processed within Ghana, while Malaysia’s PDPA mostly targets commercial transactions with specific conditions for international data transfers. Enforcement is also different. GDPR is enforced by data protection authorities in each EU member state, with substantial fines for non-compliance. In Ghana, the Data Protection Commission <cit.> manages compliance and has the authority to enforce penalties. Malaysia’s enforcement is managed by the Department of Personal Data Protection <cit.>, which can also impose specific penalties for non-compliance.
§ LITERATURE REVIEW
§.§ Global variations in digital privacy laws
Digital privacy laws protect the digital society but focus on different aspects, as we saw in the previous section. The various digital privacy laws around the world differ considerably from one another.
For instance, GDPR is more focused on where the business is established, while PIPL is more focused on where the information processing happens <cit.>. The main aim of the DPP <cit.> is to empower citizens to prevent third-party agencies from using their data without consent <cit.>.
There are discrepancies between privacy policies in the US and the EU, and differences in the countries' values, social norms, and interests result in a variance in regulations. Movius et al. <cit.> analyze the example of passenger name records in the travel industry as a case study and reach the conclusion that US authorities are increasingly in favor of security, while European policy makers continue to emphasize personal freedoms. The author argues that, due to the exchange of extensive volumes of data between the US and the EU, it is imperative for the economy and the protection of citizens' privacy rights to address the contrasting approaches to information privacy standards.
There is further research comparing privacy regulations among different countries <cit.>.
§.§ Impact of GDPR on businesses and user behavior
Since its implementation, GDPR has changed business practices and user behavior <cit.>.
There is research <cit.> analyzing how well the GDPR aligns with data protection practices and its implications for user behavior.
Oijen et al. <cit.> examines how well GDPR helps people control their personal data. It finds that even though the GDPR includes rules like needing clear consent and allowing people to access, correct, move, and delete their data, people's behaviors often make these rules less effective. Problems like too much information leading to quick, uninformed consent, people choosing convenience over privacy, and feasible default settings that favor less privacy weaken GDPR's impact. The paper suggests that for the GDPR to work better, we need simpler privacy processes and more attention to how people actually behave.
Layton et al. <cit.> compares GDPR with best practices in data protection and examines whether the GDPR aligns with the European Union's research and best practices. GDPR aims to give users control of their data and facilitate business operations. There is a gap, however, between ENISA's <cit.> inputs for maximizing privacy and the GDPR's provisions. The GDPR focuses on specific regulations, institutions, and business practices but lacks discussion on improving user knowledge of privacy. The paper questions the assumption of digital literacy by GDPR and suggests that improving privacy, accountability, and trust through user behaviors could reduce costly compliance requirements.
§.§ Digital privacy practices in non-GDPR countries
There exists research on digital privacy practices in the non-GDPR countries, namely Rwanda <cit.> and Ghana <cit.>.
Mutimukwe et al. <cit.> investigates information privacy protection (IPP) practices in Rwandan e-government services using international privacy principles as benchmarks. Their case study, involving three organizations, revealed that none fully complied with these principles, indicating concerns regarding the effectiveness of existing IPP practices. The absence of dedicated privacy policies and anticipated national regulations led to confusion in the e-government services, while inadequate and misleading practices undermined user control and accountability. The study emphasizes the necessity for coordinated efforts among government entities to raise awareness and enforce existing privacy laws, thus improving information privacy protection and enhancing user trust.
Nsengimana et al. <cit.> discusses the impact of the Information and Communication Technology (ICT) revolution on personal privacy and its implications for the society, focusing on Rwanda's approach on protecting its citizens' privacy in the context of its digital transformation during the development of ICT in Africa, and its commitment to safeguarding personal privacy as a societal value.
There are certain limitations to privacy laws. Coleman et al. <cit.> examine digital colonialism as Western technology companies expand their presence in resource-rich, infrastructure-poor African countries, where data protection laws and regulations are not uniformly applied compared to Western standards. The paper analyzes the limitations of data protection laws, such as Kenya's 2018 data protection bill and the EU's General Data Protection Regulation (GDPR), in preventing digital colonialism.
§.§ Effects of digital privacy on user trust
Privacy regulations play a crucial role in shaping user trust and behavior in digital environments. Lancelot Miltgen and Smith <cit.> examines the relationship between information privacy regulations, perceived risks, trust, and user behavior. The study highlights how individuals' awareness of privacy regulations influences their perception of protection, which in turn builds trust in organizations and reduces concerns about privacy risks. Despite regulatory protections, users may still share personal information if they perceive significant benefits from doing so. The research suggests that understanding the balance between perceived risks and rewards is crucial for designing effective privacy regulations that foster trust and encourage responsible user behavior.
Privacy positively impacts both trust and ease of use in digital banking. Specifically, the more secure and protected users feel their personal information is, the more they trust the digital banking platform and find it easy to use <cit.>. Similarly, effective privacy management and compliance with regulations can significantly improve user trust in digital platforms <cit.>.
Aldboush et al. <cit.> focus on the fintech sector, analyzing how privacy regulations and ethical data practices affect customer trust. The study highlights the importance of corporate digital responsibility and compliance with data-protection laws to enhance trust. Transparent and responsible data handling practices are crucial for building and maintaining user trust in the digital finance industry. This aligns with findings by Kira <cit.>, who explores the impact of GDPR on user trust and organizational behavior. Kira's research indicates that strong privacy regulations like GDPR can enhance user trust by ensuring better protection of personal data and greater transparency from organizations.
Cao et al. <cit.> explore how enhanced data privacy protections, such as those provided by the GDPR, impact consumer trust. Transparent privacy policies and explicit consent mechanisms increase consumers' perceptions of control over their data, thereby enhancing trust in e-commerce platforms. Firms adhering to strict privacy regulations benefit from increased customer loyalty and trust. Similarly, Fox et al. <cit.> investigates the impact of GDPR-compliant privacy labels on user perceptions of privacy risk, control, and overall trust in e-commerce vendors. A GDPR label is a proposed standardized label designed to provide clear and concise information about a company's data protection and privacy practices. The label aims to improve transparency and enhance consumer trust by making privacy practices easily understandable. The research shows that such a label could significantly boost consumer confidence in how their personal data is handled.
In the banking sector, Lappeman et al. <cit.> examine the use of AI-driven chatbot services and how privacy concerns affect user trust and willingness to disclose personal information. The study concludes that robust privacy regulations and transparent data management practices are critical for building and maintaining user trust in digital services.
Overall, these studies collectively illustrate that privacy regulations are pivotal in enhancing user trust. By ensuring transparency, control, and security of personal data, these regulations help build a trustworthy digital environment, encouraging positive user behaviors and regulatory support.
§ PROBLEM & RESEARCH QUESTIONS
Despite the existence of privacy regulations, digital services often fail to implement effective privacy practices, leaving users feeling unsafe and unprotected.
There is a lack of research on the impact of digital privacy regulations and practices on user trust with the combination of comparing different countries. This study aims to address this gap by examining the impact of digital privacy regulations on user trust in the Netherlands, Ghana, and Malaysia. We chose these countries because we wanted to compare the distinct privacy regulations from countries in three different continents. Our contacts, resources and time-frame provided for this thesis led us to pick these three countries.
Because the interviews come from three different countries, this analysis provides valuable insights into the global dynamics of privacy regulation and its implications for user trust across different continents.
We want to compare and understand how users perceive digital privacy regulations and what factors influence their trust in digital services. By exploring user perspectives in these three countries, this research can shed light on how digital privacy regulations affect users' sense of safety and privacy when engaging with digital services.
Therefore, the following research question (RQ) arises:
"How does the digital privacy regulations in the Netherlands, Ghana, and Malaysia impact user trust when consuming digital services?"
From the RQ above, we derive the following sub-research questions (SRQ):
SRQ1: How does the digital privacy regulations in the Netherlands, Ghana and Malaysia impact photo sharing in social media services?
SRQ2: How does the digital privacy regulations in the Netherlands, Ghana and Malaysia impact users' trust in businesses?
SRQ3: How does digital privacy regulations in the Netherlands, Ghana, and Malaysia impact the users' trust in e-government services?
§ METHODOLOGY
We conduct an exploratory and comparative multiple-case study to investigate the impact of digital privacy laws on user trust in digital services in the Netherlands, Ghana, and Malaysia.
Our study examines how these regulations influence user trust in photo sharing, businesses, and e-government services. The comparative case study method is well-suited for this research because it allows for an in-depth understanding of how different regulatory environments impact user trust in digital services across diverse cultural and legal landscapes. By examining multiple cases, we can identify patterns and variations that a single case study might overlook. This approach provides a richer understanding of the subject matter, enabling us to draw more generalizable conclusions about the effectiveness of digital privacy regulations.
§.§ Research design
We interview two individuals each from the Netherlands, Ghana, and Malaysia to understand their perspectives on digital privacy. Our interviewees include digital privacy law experts, IT educators, and consumers of digital services. The qualitative data collected is analyzed within each specific context to grasp the dynamics of digital privacy regulations in different countries. With the comparative case study method, we focus on people who either consume digital services or possess extensive knowledge of digital privacy practices, which allows us to explore the similarities and differences between cases and later propose a recommendation framework to improve the effectiveness of privacy practices.
§.§ Data collection
To answer the research question, we conduct semi-structured interviews because it is more suitable for qualitative studies <cit.> and we wanted to have some room for exploring different topics related to digital privacy during the interviews.
We take two individuals from each country, totaling six participants. Some interviews are conducted in person, while others are done online. Interviewees are selected based on their expertise and our contacts. All data collection has been done according to ethical standards. Interviewee personal data are kept confidential at their request.
A detailed list of interviewees and information about interviews is provided in Table <ref>. More information about interview content can be found here <cit.>. The interviews transcripts and recordings are kept private in a repository and available at request.
§.§ Data analysis
Data is collected using an audio recorder and then manually transcribed to create a document for each interview. Each case is analyzed separately to identify patterns and understand user perspectives on digital privacy. We utilize the phenomenological analysis method <cit.>, which focuses on uncovering the essence of users' experiences with digital privacy regulations. This approach is well-suited for prioritizing participants' perspectives, providing rich insights into how users perceive and interact with these regulations. The phenomenological approach is conducted using an exploratory multi-case study as follows:
Data collection - We conduct semi-structured interviews to gather rich data with detailed perspectives of participants' lived experiences in the area of digital privacy.
Phenomenological bracketing -
Before analyzing the data, we engage in phenomenological bracketing. This involves setting aside preconceptions, biases, and assumptions about the phenomenon of digital privacy to approach the data with openness and attentiveness to understand the participants' experiences as they describe them.
Data analysis -
We analyze each interview separately, focusing on identifying common themes and patterns related to participants' lived experiences of digital privacy. We seek to understand the underlying meanings of user perspectives and perceptions of digital privacy. After analyzing our interviews, we arrive at phenomenological aspects that are in line with our research questions, namely the general perspective, photo sharing, businesses, and e-government services.
Cross-case analysis -
We compare the analyses of the interviews with one another, looking for similarities and differences in participants' lived experiences within different contexts in the realm of digital privacy.
Interpretation and reporting -
We interpret the findings, providing insights into the deeper meaning and significance of participants' lived experiences within the context of our research question. Lastly, we propose a recommendation framework consisting of sub-frameworks for the digital privacy regulations and practices.
§ INTERVIEW ANALYSIS
In this section, we explore the perspectives of interviewees from the Netherlands, Ghana, and Malaysia, addressing each research question for each country. By presenting the observations in bullet points, we aim to provide a clear and precise overview of user viewpoints. It is important to note that the claims and opinions expressed are exclusively those of the interviewees, capturing their individual experiences and insights regarding digital privacy within their respective regions.
§.§ Netherlands
We analyze the experiences of the interviewees from the Netherlands. This section is longer than those for the other countries because the interviews simply lasted longer and yielded more data related to GDPR.
Privacy policies -
The interviewees from the Netherlands generally tend to trust the privacy policies or terms and conditions of commercial digital services (e-commerce) at first glance. A privacy policy typically states that the service values its users' privacy, so users tend to trust it. On actually reading the policy, however, it turns out to be quite broad and vague, mentioning things like improving your user experience or sharing your data with selected business partners of the digital service. This is not reassuring at all for users of the service.
Moreover, users tend to trust online digital services because their policies generally state that they value their users' privacy. Hence, users trust the services without further reading the privacy policies. Nonetheless, the privacy policies of e-commerce services tend to be vague.
Users are not given a choice when using certain digital services. For instance, a user might need to use a service their school or workplace has adopted and therefore has to agree to that service's privacy policies. Hence, users agree to privacy policies without genuinely consenting to them, because they are obliged to use the digital services for whatever reason.
Nevertheless, one of the main goals of GDPR is to give control of user data to the users themselves <cit.>. If the faculty or the affiliated company does not provide an option to the user regarding the usage of online digital services, it implicitly restricts the user's freedom of choice.
Impact of digital privacy regulations -
GDPR is regarded as the strictest and best-implemented privacy regulation globally, but its effectiveness in making the average citizen feel safer or more protected is limited. The primary privacy issues, such as tracking and digital advertising, are not fully addressed by GDPR. Despite these regulations, enforcement of GDPR is incomplete due to understaffed agencies, which allows major privacy violators to bypass compliance while small businesses and organizations struggle with the administrative burden and fear of violating the law because of huge fines <cit.>. A Dutch interviewee highlights the practical challenges of GDPR with an anecdote: a simple request for a neighbor's phone number was denied due to privacy concerns, illustrating the everyday burden of GDPR compliance.
Nonetheless, GDPR has made a somewhat positive impact, such as compelling multinational companies to store EU data within Europe. The average citizen, however, might not fully exercise their rights under GDPR, and the regulation is perceived more as a starting point than a comprehensive solution.
GDPR is a good starting point for privacy regulation and has had some positive impact. The average citizen, however, does not feel safer or more secure because of GDPR.

Privacy policies of commercial digital services do not give users enough confidence that their data is being protected properly.

Users give involuntary consent to digital services they are required to use.
Balance between privacy and economy -
The purpose of GDPR is twofold. First, the European Union (EU) wants to protect the data protection interests of all data subjects. Second, it does not want to limit the exchange of personal data within the European Union. Hence, the regulation both protects the data protection rights of citizens and ensures that personal data flows freely within the European Union for governmental and commercial institutions alike. Article 1, subsection 3 of the GDPR <cit.>, however, is rarely discussed.
The European Union is based on the free flow of people, capital, and labor. Data is the new oil, and the EU wants to capitalize on it for Big Data applications, training AI, and other purposes. The EU does not want data protection to get in the way of economic growth; its aim is to balance the two. Moreover, the flow of personal data within the EU is well protected by GDPR.
GDPR tries to balance data protection and economic growth.
There is a complex interaction between GDPR and the commercial interests of social media platforms. While social media companies benefit from users sharing extensive content to increase engagement and profitability, they simultaneously face regulatory requirements to protect user privacy.
The strategy employed by social media platforms to navigate this tension is to offload the responsibility for data privacy onto the users. Specifically, these companies transfer the responsibility for obtaining consent for shared content to the users themselves through their terms and conditions. This approach allows platforms to maintain high levels of user-generated content, which is essential to their business model, while simultaneously complying with GDPR.
Despite GDPR's intentions to enhance user privacy and control over personal data, the implementation and enforcement of these regulations in the context of social media are complicated by the platforms' underlying business models. This outsourcing of responsibility to users and the broad licensing terms for user content indicate that the influence of GDPR on social media practices may be limited, prompting further investigation into the efficiency of current regulatory frameworks in terms of truly safeguarding user privacy.
P5: Some social media companies offload privacy responsibilities to their users, freeing themselves from certain data privacy liabilities.
Photo-sharing -
It is our freedom-of-expression right to share what happens in our lives, and if that means taking a photo on the public road, then so be it. As long as the person in the photo is relatively anonymous and their presence in the photo is incidental, it may not amount to much of a privacy invasion. Tagging a person in the photo, however, makes them identifiable. The photo then contains another person's personal data, invading their privacy. Untagged photos are also personal data, but it is harder to establish the untagged person's identity unless one uses facial recognition software. Facial recognition software challenges the idea of anonymity in public places. There is a blurry relationship between data protection and freedom of expression.
The photographer's rights as an author can be limited by the reasonable interests of the person in the photo.
In contrast, the situation is different for governments that require citizens to upload a photo. The government has a legal basis for requesting the photo; citizens can only hope that it does not sell their photos or private data for other privacy-invading purposes.
P6: Freedom of expression can be limited by the content of the photo.
Businesses -
The implementation of GDPR is positive and has led to an improvement in transparency, with many businesses now disclosing their data processing practices on their websites. Users, however, find it challenging to verify the accuracy of these practices and privacy policies because of transparency and complexity issues. Compliance with privacy policies is only tested during incidents such as data breaches or company investigations, which sometimes reveal false claims. Moreover, despite the necessity of using local e-commerce services, there is a lack of trust due to incidents like platform hacks that lead to anonymous calls. Small companies, such as some local e-commerce providers, face challenges in securing data due to limited resources and require support mechanisms to aid their compliance with digital privacy regulations.
Businesses must have a legal basis, typically consent, to share data, specifying the purposes beforehand, leading many companies to hire consultants to ensure compliance. This has created a lucrative market for consultancy services. GDPR, which replaced the Data Protection Directive <cit.>, introduced stricter enforcement and higher fines, causing concern among companies despite the similarity in rules to the previous directive. Large corporations can afford legal experts to comply with or bypass GDPR, while small businesses often mimic competitors' privacy policies or risk non-compliance due to limited resources. Although large tech companies often get fined, they remain powerful and influential, which indicates that legal measures alone might be insufficient to control their impact on privacy. Enforcement of GDPR in the Netherlands is insufficient, with understaffing in government agencies responsible for tracking compliance, leaving many issues unresolved. While GDPR aims to protect personal data, its effectiveness is questioned due to these enforcement challenges.
P7: GDPR improved transparency in businesses regarding data privacy. There is a lack of trust in local e-commerce services due to hacking incidents.
P8: Compliance with GDPR is a burden for small businesses. Large companies can use their resources to bypass GDPR.
P9: The government is understaffed to track GDPR compliance.
E-government services -
The impact of GDPR on privacy practices in e-government services is different from its impact on commercial services because governments can often justify data processing on the grounds of public interest. Governments process and share huge amounts of personal data among various agencies, sometimes leading to profiling and biased practices. For example, higher police surveillance in certain neighborhoods can create self-fulfilling prophecies, disproportionately targeting specific ethnic groups like Moroccan youths. Unlike commercial digital services, GDPR's "right to be forgotten" does not apply to government data needed for public services.
Moreover, GDPR has increased the Dutch government's focus on privacy in its e-government services. A significant issue remains, however: the reliance on US-based cloud providers like Azure, which can potentially provide data to the US government if requested. This poses potential privacy risks for users. While GDPR has had a positive impact on Dutch e-government privacy practices, complete trust in these services is challenging due to unavoidable data sharing among government departments for various purposes, which GDPR cannot fully prevent.
Lastly, GDPR could be improved by holding a public debate on its practical implications and societal values, making digital privacy more understandable for citizens and potentially leading to legal improvements. Additionally, the negative effects of data harvesting and addiction to social media, though not strictly privacy issues, need legal attention as they are related to the broader impact of data harvesting.
P10: GDPR cannot prevent the consequences of relying on US-based cloud providers.
P11: GDPR's "right to be forgotten" does not apply to the Dutch government.
§.§ Ghana
In this section we analyze the experiences of the two interviewees from Ghana,
working as PhD students.
As mentioned before, the privacy law in Ghana is the data protection act established in 2012 by the National Information Technology Agency <cit.>.
Digital privacy regulation -
The data protection act prevents handling of private information without user consent. For instance, just like GDPR, the data protection act also prohibits people from giving out personal phone numbers without consent, which could lead to negative consequences.
Nonetheless, the majority of society is not aware of and does not adhere to the data protection act. Most of the time the law is not enforced because the bodies within the Ghanaian government responsible for ensuring the law is practiced are understaffed.
Hence, people only become aware of the existence of the act if something bad, like being hacked, happens to them, or when they are fined or punished under it and then learn about the law so as not to be penalized again. In other words, people only become aware that digital privacy regulation exists in Ghana when the law is applied to them.
Hence, the data protection act is more of a silent law that the government and corporate organizations are aware of and utilize.
The widespread lack of awareness regarding digital privacy regulations in Ghana primarily comes from inadequate education concerning these new regulations. While numerous technological laws have been enacted in response to advancements in technology, they often remain enclosed within legal texts without being effectively communicated to the public. As a result, many individuals are unaware of these new regulations. This lack of knowledge is largely due to the absence of comprehensive educational initiatives aimed at explaining the laws in a manner that is accessible and understandable to the general public.
Currently, awareness of these digital privacy regulations is mostly limited to legal professionals, law students, and information technology professionals. Furthermore, until recently the data protection act was not as prominent in Ghana as comparable regulation is in Europe. People were thus more concerned with consuming or using the platforms than with understanding what digital privacy protection is all about.
The Data Protection Act has minimal impact on individuals' behavior when using digital services, especially for people who are unaware of the law. Those with a background in computer science, who are already aware of potential risks regardless of the law, are cautious about what information they share online. Despite the existence of data protection laws in Ghana, their implementation and enforcement are often insufficient, leading individuals to rely more on their own self-protective measures rather than the data protection act. While the laws are recognized as important for governing the use of others' data, people in Ghana do not always remember specific legal details because it's quite difficult and cumbersome to read long pages of law documents. Moreover, the sense of security in using digital services has remained unchanged before and after the enactment of the Data Protection Act, indicating that the law did not significantly alter perceptions of data privacy.
P12: Most people in Ghana only become aware of digital privacy regulations after a violation impacts them directly.
P13: The Data Protection Act has minimal impact on people's behavior in Ghana, as insufficient enforcement leads privacy-aware people to protect their digital privacy on their own.
Photo sharing -
The Data Protection Act affects photo sharing on social media in various ways. Bloggers and careful users often add disclaimers when sharing photos. Many are also cautious because of internet fraud and not knowing much about digital safety.
They're more comfortable sharing photos on platforms like WhatsApp where they can control who sees them. Nevertheless, most people don't understand the risks of sharing personal information online, and are not aware of the Data Protection Act that's supposed to protect them.
People are more open with family and friends but more careful with others.
For instance, friends usually don't mind being tagged in pictures, so permission isn't often sought. Sharing photos of strangers without their consent, however, can cause problems, as people value their privacy and might confront you. Moreover, the law allows people to sue if their privacy is violated, but they must actively pursue this. The reason for taking and sharing the photo is also important. If it's done with good intent, it’s usually acceptable, but secret photos can lead to privacy issues that might not be addressed.
Moreover, people don't always realize what they share online can reach a wider audience. This misunderstanding comes from how social media used to be more private, but now it's more public, and many people in Ghana haven't caught up with this change.
P14: Awareness of digital privacy rights affects users' behaviour on social media services.
P14.2: People are not fully aware of how far content can spread on social media when they share something.
Businesses -
People in the IT and business sector are probably more aware of the laws. The data protection act is probably enforced more on companies and businesses because they handle vast amounts of data.
Businesses are aware of the law and are careful about it.
Data sharing between companies sometimes happens. For example, telecommunication companies share phone numbers with telemarketing companies.
Nonetheless, there's a significant problem with illegal data mining in Ghana, where companies obtain personal information without consent and sell it to others. This results in individuals receiving messages from unknown numbers and raises concerns about privacy violations. While the Data Protection Act should protect against such practices and impose fines on violators, government enforcement is lacking because of staffing and infrastructure limitations. Without sufficient resources to monitor and fine companies for breaching digital privacy laws, protecting the privacy rights of the society can be an issue.
Moreover, society hopes that most companies adhere to the digital privacy laws. Most of the IT companies in Ghana are startups. Most of them are cautious regarding digital privacy because they want to succeed and do not want to take any action that is against the law. Those who may bypass the law tend to be huge companies. Furthermore, most Ghanaians would trust their local companies more than the international ones. When it comes to digital privacy, most of the concern is with international companies rather than local companies. This might be because people have used local business services for a long time and nothing has gone wrong.
Local businesses are often trusted more than international ones, and there's a belief that the government may punish local businesses for privacy violations but may be less strict with multinational corporations due to tax revenue considerations. Companies with limited financial resources often hire contract workers to ensure compliance with privacy regulations.
P15: The government is understaffed to punish companies violating digital privacy regulations.
P16: The track record of local companies leads people to trust them more than international companies.
E-government services -
Data protection is the top policy practiced by the government when implementing e-government services.
There is a significant level of trust in e-services, particularly in e-government platforms handling personal and financial services, perhaps because the government adheres to digital privacy practices and there have been no notable incidents of data breaches. Organizations with expertise in data privacy, such as the National Information Technology Agency (NITA) and the National Communication Agency (NCA), play a crucial role in fostering this trust. NITA, which consists of a mix of IT industry experts and government officials, contributes to a balanced approach to data privacy. The Data Protection Act is an example of NITA's work <cit.>. This balance between industry and government personnel leads to greater confidence in e-government services.
The trust in the government's digital privacy efforts, despite suspicions in other areas, can be attributed to the IT revolution. The government's collaboration with industry experts to develop IT infrastructure has led to a different dynamic in the IT field compared to other government sectors. The involvement of trusted IT professionals within the government has resulted in a distinctive approach to IT policies in several African countries, including Ghana. The responsiveness of organizations like the NCA to public concerns, as demonstrated by their reversal of the decision regarding Starlink <cit.>, illustrates the strong influence and active participation of IT professionals in shaping governmental IT policies.
P17: Trust in e-government services comes from the balance between industry experts and government officials.
§.§ Malaysia
In this section we analyze the experiences of the two interviewees from Malaysia, who are academic faculty members at a Malaysian university.
As mentioned before, the privacy law in Malaysia is the personal data protection act established in 2010 by the Malaysian government <cit.>.
Digital privacy regulation -
In Malaysia, the personal data protection act exists but is not effectively enforced, with no significant penalties for breaches. This has led to skepticism about its impact on privacy and security when using digital services. The law, established in 2010, is currently under refinement, potentially due to its age and the evolving digital landscape.
In practice, the protection of personal information largely depends on the platform's reputation and security measures. A popular e-commerce platform, for instance, is trusted because of its robust security features, such as two-factor authentication (2FA), and its lack of security incidents.
This 2FA security measure is also employed for other applications, such as email logins and student accounts, providing a broader sense of safety.
Hence, trust in digital services is based more on the brand's security practices, especially the 2FA security measure, than on the digital privacy laws in Malaysia.
P18: The trust in digital services is based on the digital brand's security practices and lack of security incidents, rather than the laws.
Photo-sharing -
The impact of privacy practices on photo-sharing activities on online digital platforms is insignificant. People often upload photos without seeking permission and only remove them if someone complains. There is little awareness of or concern for the personal data protection act (PDPA), and no fines or punishments are enforced for sharing unwanted photos online. The primary response to issues like this is to remove the photo after complaints. Respect for privacy in photo-sharing is driven more by individual ethics than by law. For example, in Malaysia, the government prohibits teachers from sharing photos of children under 13 on social media and prohibits children of this age group from having social media accounts. Due to ongoing refinements of the law, however, enforcement is lacking, and compliance with the law varies among individuals. Teachers often remind parents of this rule.
Photo-sharing activities on social media are no different from those on other online digital platforms. Respect for privacy on social media is minimal, with most users uploading photos without considering consequences or privacy breaches. Understanding of digital privacy is limited. While Malaysians are tech-savvy and proficient in using digital technology, they lack awareness of the rules governing its use. Those who are aware of privacy regulations tend to follow them, often seeking consent before sharing someone else's private information. The lack of adherence to rules is attributed to a lack of awareness rather than a deliberate choice to ignore them. Government efforts to increase awareness through PDPA campaigns have not been very effective, as many find the rules too lengthy and complex to read.
On the other hand, despite digital privacy being understood as a human right, it is less respected than physical privacy. Digital privacy education is minimal, with limited theoretical education at the university level. There is a call for the government to enhance public awareness and education on digital privacy to simplify the understanding of complex laws.
P19: Despite digital literacy, Malaysians often disregard privacy practices in photo-sharing on online digital platforms due to a lack of awareness and ineffective enforcement of the personal data protection act.
Businesses -
In the context of data sharing among businesses, business-to-business (B2B) and business-to-consumer (B2C) transactions are generally perceived as secure, with minimal concerns about data leakage. Trust in these transactions is primarily based on the brand reputation and security practices of the platforms used, such as the implementation of two-factor authentication. Local and international companies with strong branding are trusted similarly, as seen with the e-commerce platform Shopee <cit.>, which, despite being based in Singapore, is highly trusted by Malaysians. Trust in digital services is based on brand rather than on the personal data protection act. Instances of data breaches or privacy violations in Malaysia are rare and often unproven. Nonetheless, customers of Cimb <cit.> claimed that there had been a data breach, yet the company denied such an incident <cit.>.
There is, however, a cautious approach towards sharing financial information with anyone and sharing sensitive information with international companies due to potential data theft concerns.
P20: Trust in data sharing among businesses in Malaysia is primarily based on brand reputation and security practices rather than the personal data protection act.
E-government services -
Government services utilize privacy practices that are in line with the personal data protection act.
For instance, the government has introduced a digital ID that is utilized in Malaysia's e-government services. This is a step towards more security and privacy when using e-government services, although concerns about data safety persist among some users. The digital ID, which facilitates Single-Sign-On (SSO) for accessing government services, is part of a broader transition from physical IDs.
There is trust that the government will protect the data of its citizens, and there have been no reported incidents undermining trust in government data protection thus far. The relatively slow adoption of digital services is attributed to inefficiencies in service delivery when using the digital ID, so many prefer in-person interactions for governmental matters. Another aspect of trust in the government is that the Malaysian government is in the process of refining the personal data protection act, reflecting its growing awareness of the need for robust data protection in the context of advancing AI technologies and the shift towards a digital economy.
P21: Malaysians trust the government with their digital privacy. Yet, the government's digitalization efforts in certain areas still need improvement.
§.§ Comparison between countries
This comparative analysis highlights how the effectiveness and perception of digital privacy regulations vary significantly across the Netherlands, Ghana, and Malaysia, influenced by factors such as enforcement, public awareness, and the robustness of individual platform privacy measures. A summary of the impact of privacy regulations is presented in table <ref>.
Digital privacy regulations -
General trust in and awareness of digital privacy policies differ per country. In the Netherlands, there is initial trust in privacy policies. Upon closer inspection, however, this trust diminishes due to the vagueness of the privacy policies.
Ghana has low general awareness of the Data Protection Act, and, much like in Malaysia, people only become aware of the act after an incident.
Similarly, people in Malaysia are not aware that their data privacy is protected by regulations. There is also skepticism due to a lack of enforcement, and trust is based more on platform reputation than on digital privacy laws.
The privacy acts in Ghana and Malaysia are not known by their citizens the way Dutch citizens know the GDPR, so awareness of GDPR is higher than awareness of the other two privacy acts.
The effectiveness and implementation of each law differ per country. In the Netherlands, GDPR is seen as strict but limited in addressing primary privacy concerns. Just like in Ghana and Malaysia, there are enforcement issues with the privacy regulations due to understaffing in the government. Nonetheless, the fines under GDPR are much higher than in Ghana and Malaysia, making it more compelling to comply with.
The practical impact of each privacy regulation on behavior differs per country. Complying with GDPR in the Netherlands causes practical challenges in daily interactions, as people are afraid of being fined.
In Ghana and Malaysia there is minimal change in behavior. People rely on self-protection due to lack of awareness and enforcement of the law. So trust is mostly based on individual platform security practices.
Photo sharing -
In the Netherlands, there is concern for the balance between freedom of expression and privacy in photo-sharing activities, with emphasis on anonymity and consent. As mentioned before, GDPR is known but faces practical enforcement challenges, especially with tagging and facial recognition issues.
In both Ghana and Malaysia, photo-sharing practices are cautious among users who know the law. Nonetheless, the regulations are not well known among the general public, and awareness typically arises only after an incident or fine.
Hence, photo-sharing practices show minimal concern for privacy laws, with actions driven by individual ethics rather than legal mandates.
Regulations lack effective enforcement, and government awareness campaigns on digital privacy are insufficient, leading to widespread non-compliance due to the complexity of the law and a lack of understanding.
Business -
Transparency and trust have improved in the Netherlands with GDPR, but there is still low trust in local e-commerce services due to data breaches. Privacy can be seen as a luxury, with wealthier people able to afford more secure services, such as paying for certain software services or buying more expensive hardware that does not track the user. High GDPR compliance costs have led to a market for consultancy services. Small businesses struggle with compliance, while large corporations manage but still face fines, and some actually get away with wrongdoing. Enforcement of the digital privacy regulations is lacking due to understaffing in the government.
In Ghana, awareness of digital privacy among the general public is insufficient, with businesses more aware of the law than individuals. Despite the law, there are still trust issues with businesses due to illegal data mining and a lack of enforcement of the digital privacy law. Most local businesses and startups are careful about privacy out of a desire to succeed, adhering to the law also in order not to get fined, while larger companies might bypass the law. Hence, local businesses are trusted more than international ones.
Unfortunately, just as in the other two countries, enforcement of the digital privacy regulations is lacking due to understaffing in the government.
Moreover, in Malaysia, the trust in digital services is mostly based on brand reputation rather than legal compliance. There is not much concern regarding data breaches due to past reputations of businesses.
Compliance in digital privacy practices is mostly driven by individual ethics rather than the local digital privacy law.
As in the Netherlands and Ghana, enforcement of the digital privacy regulations is lacking due to understaffing in the government.
E-government services -
The Dutch government justifies processing of personal data on public interest grounds, leading to extensive data sharing among governmental departments and potential profiling and bias.
Public debate and awareness are needed to improve trust and understanding of GDPR's implications on e-government services.
GDPR has increased focus on privacy in the government, but challenges like reliance on US-based cloud providers persist.
In Ghana, e-government services comply with the local digital privacy law, and no notable data breaches have undermined trust.
IT experts are involved in policy-making for digitalization practices, which leads to more trust from the society. Thus, organizations like NITA and NCA play crucial roles in data privacy, balancing IT industry and government expertise.
E-government services in Malaysia comply with the local digital privacy law, with measures like digital IDs for security.
There is trust in government data protection. Nonetheless, slow adoption of digital services does exist due to inefficiencies in governmental processes. The government is refining the local digital privacy law, reflecting the need for more robust digital protection measures in the digital economy.
§ RECOMMENDATION FRAMEWORK
Based on our interview analysis and comparison between countries, we propose a recommendation framework that could be applied to the Netherlands, Ghana, and Malaysia to improve digital privacy practices. We construct our recommendation framework on principles for digital development <cit.> and academic research <cit.> to ensure it is robust, evidence-based, and aligned with best practices in the field.
By integrating these principles and research, our recommendation framework benefits from a well-rounded foundation that addresses the technical, ethical, and practical dimensions of digital development and data protection. This approach ensures that our framework is not only theoretically solid but also practical and applicable in real-world settings.
We focus on the following digital development principles: create open and transparent practices, establish people-first data practices, and use evidence to improve outcomes.
Open and transparent practices means building and maintaining trust in digital ecosystems: people have confidence in digital ecosystems that are established through open and transparent practices. These practices include, but are not limited to, the use of agile methodologies, open data, and open-source software. We pick this principle because many privacy policies and regulations are too complex to comprehend, leading users to consent to policies they do not understand. Failing to practice this principle can lead to perspectives such as P-<ref> and P-<ref>.
Establish people-first data practices means prioritizing people's rights and needs when handling their data, ensuring that value is returned to the data subjects. This includes obtaining informed consent, adhering to data standards, and enabling users to control and benefit from their data. Violating these principles can lead to harm, such as data breaches or discrimination. Failing to practice this principle can lead to perspectives such as P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, and P-<ref>.
Use evidence to improve outcomes means that impact depends on continuously gathering, analyzing, and utilizing feedback to understand the outcomes of digital services for people, using both technological and analogue methods. This holistic approach ensures that digital policies and solutions are continuously improved based on meaningful, people-centered metrics. Without this, initiatives like GDPR or the personal data protection act may achieve efficiency but fail to recognize or enhance their real impact on people and communities. Failing to practice this principle can lead to perspectives such as P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, P-<ref>, and P-<ref>.
Moreover, we come up with five sub-frameworks and visualize the general outlook of our recommendation framework in figure <ref>. Also, in table <ref> we match the user perspectives with the sub-frameworks that could provide valuable improvements to the digital privacy practices per country.
§.§ Transparency and communication framework
This framework focuses on enhancing user trust through clear communication and transparency about data handling practices. The main references used in forming this framework are Schaar et al. <cit.>, Binns et al. <cit.>, and the open and transparent practices principle from the digital development framework. Schaar discusses the concept of Privacy by Design, emphasizing the importance of integrating privacy considerations into the design and architecture of IT systems and business practices from the outset. Binns explores user perceptions of algorithmic decision-making, emphasizing the need for transparency and clear communication to build trust and ensure fairness in data handling practices.
The components of this framework are:
* Clear privacy policies: Ensure that privacy policies are written in clear, non-technical language that users can easily understand.
* Regular updates: Provide regular updates about any changes in privacy policies or data handling practices.
* User education: Implement educational programs to increase users' understanding of their privacy rights and the protections offered by regulations.
§.§ User control and consent framework
The user control and consent framework focuses on empowering users by giving them control over their data and ensuring that users know what they are giving consent for. The main references used in forming this framework are Cavoukian et al. <cit.>, Acquisti et al. <cit.>, and the establish people-first data practices principle from the digital development framework. Cavoukian discusses the 7 foundational principles of privacy by design, advocating for user-centric controls and transparent data practices to empower users. Acquisti looks into the interaction between privacy, user behavior, and information systems, highlighting the importance of user control and informed consent in the digital age.
The components for this framework are:
* Granular consent: Allow users to provide consent for specific data processing activities rather than a blanket consent for all activities.
* Easy opt-out options: Ensure users can easily opt out of data processing activities they do not agree with.
* Data portability: Provide users with the ability to easily transfer their data to other service providers.
§.§ Accountability, security and governance Framework
This framework is about building trust by demonstrating accountability and robust governance in data protection practices. It is based on the research of Ahmad et al. <cit.> and Martin et al. <cit.>. Ahmad explores various information security strategies employed by organizations, advocating for a comprehensive, multi-strategy approach. It emphasizes the integration of diverse security measures such as prevention, detection, and response to effectively protect information systems. The study highlights the need for organizations to adopt a comprehensive security framework that includes appointing security officers, conducting regular audits, and developing robust incident response plans. The findings indicate that a multi-level security strategy is essential for mitigating risks and ensuring the protection of organizational data.
Martin explores the critical role of data privacy in marketing, emphasizing its impact on consumer trust and business practices. He examines various theoretical perspectives and empirical findings on data privacy, addressing the psychological, societal, and economic dimensions. The authors argue that effective privacy management can enhance consumer trust and loyalty, proposing a robust governance framework for marketers to manage privacy concerns responsibly and ethically. In short the study highlights the necessity for transparency, accountability, and user-centric privacy controls to build and maintain trust.
Components for this framework are:
* Data protection officers (DPOs): Appoint DPOs to oversee compliance with privacy regulations and handle user concerns.
* Audit and compliance checks: Conduct regular audits to ensure compliance with privacy regulations and best practices.
* Incident response plan: Develop and communicate a clear incident response plan for data breaches.
§.§ Technological safeguards framework
The Technological Safeguards Framework is about enhancing user trust by implementing strong technological safeguards to protect personal data. This framework is based on guidance from NIST (the National Institute of Standards and Technology) <cit.>, which provides guidelines for managing privacy risks through a structured approach, offering strategies for implementing strong privacy safeguards like encryption and access controls.
The components for this framework are:
* Encryption: Use strong encryption methods to protect data at rest and in transit.
* Access controls: Implement strict access controls to ensure that only authorized people can access sensitive data.
* Anonymization and pseudonymization: Use techniques to anonymize or pseudonymize data to protect user identities.
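To make the pseudonymization component concrete, the sketch below shows one minimal way a service could replace a direct identifier with a keyed hash so that records remain linkable internally without exposing the raw identifier. This is purely an illustrative sketch and not a prescription from the referenced frameworks; the secret key, identifiers, and record layout are hypothetical, and a production system would rather rely on a vetted keyed-hash (HMAC) or tokenization service with proper key management:

    using SHA

    const SECRET = "rotate-me-regularly"     # hypothetical key known only to the data controller
    # keyed hash of an identifier; the raw identifier is never stored downstream
    pseudonymize(id) = bytes2hex(sha256(string(SECRET, ":", id)))

    records = [("[email protected]", "order 1042"), ("[email protected]", "order 1043")]
    pseudonymized = [(pseudonymize(email), item) for (email, item) in records]

The pseudonym can then be processed in place of the identifier, and discarding the key later makes re-identification by a processor substantially harder.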
§.§ Stakeholder engagement framework
This framework is about fostering trust through active engagement with stakeholders, including users, regulatory bodies such as government agencies, and industry groups. The main reference for this framework is the use evidence to improve outcomes principle from the digital development framework.
The main components of this framework are:
* Stakeholder forums: Organize regular forums and gatherings to engage with users and gather feedback on digital privacy practices.
* Collaborative policy development: Involve users and other stakeholders in the development and refinement of digital privacy policies.
* Industry collaboration: Work with industry experts to develop, adjust, and adopt best practices in digital privacy protection.
§ CONCLUSION & FURTHER DISCUSSION
This study delves into the impact of digital privacy regulations on user trust across three distinct regions: the Netherlands, Ghana, and Malaysia. The analysis shows how awareness, enforcement, and user behavior are influenced by cultural, economic, and regulatory factors.
In the Netherlands, the General Data Protection Regulation (GDPR) is widely recognized and has enforced significant changes in business practices and user behavior. The practical impact of GDPR, however, depends on several factors. While it has improved transparency and given users more control over their personal data, challenges remain. Users often find privacy policies vague and complex, leading to a low level of compliance. Furthermore, enforcement issues, primarily due to understaffed regulatory bodies, limit the regulation's effectiveness. This means that while GDPR sets a high standard, its real-world impact can sometimes fall short of its intentions.
Ghana presents a contrasting scenario where the Data Protection Act is not as widely known or enforced as GDPR. Public awareness is significantly lower, and many citizens only become aware of the regulations after experiencing a privacy violation. The lack of enforcement and public education leads to a reliance on personal protective measures. The involvement of IT industry experts in government policy-making, nonetheless, leads to a certain amount of trust in e-government services, which is a positive outcome of the collaborative approach in Ghana's government.
Malaysia's experience with the Personal Data Protection Act (PDPA) reveals a lack of public awareness and effective enforcement similar to Ghana's. Trust in digital services in Malaysia is more closely tied to the reputation and perceived security of individual platforms than to the legal framework. The ongoing refinement of the PDPA indicates a growing recognition of the need to improve digital privacy protections in response to evolving digital systems.
This comparative analysis highlights several critical insights. Firstly, awareness and education are imperative. Higher levels of public understanding of privacy regulations, as seen in the Netherlands, correlate with greater trust and more informed user behavior. In contrast, the low awareness in Ghana and Malaysia diminishes the effectiveness of these laws. Secondly, enforcement is an important aspect of effective privacy regulation. The understaffed regulatory bodies in all three regions are an obstacle to the successful execution of privacy laws. Lastly, cultural and economic contexts play a crucial role. The localized adaptation of privacy regulations and the balance between governmental and industry expertise, particularly evident in Ghana, illustrate the importance of adapting privacy strategies to specific regional dynamics.
§.§ Contributions
This study contributes to the broader understanding of digital privacy regulations and their impact on user trust in several ways. The comparative analysis in section 5.4 provides a robust method for evaluating the effectiveness of privacy laws across different countries and emphasizes the importance of considering local contexts when assessing regulatory impact. We also propose a recommendation framework, where each sub-framework could be used to improve a different aspect of digital privacy, aiming to improve the overall effectiveness of digital privacy regulations and practices.
§.§ Further discussion
Cross-Cultural implications of privacy regulations -
One area for further discussion is the cross-cultural implications of privacy regulations.
Cultural differences play a crucial role in how privacy laws are perceived and implemented. For example, the Netherlands, with its robust legal infrastructure and high public awareness, is different than Ghana and Malaysia, where cultural norms and lower awareness influence the effectiveness of privacy laws. Future research could look into how cultural values shape attitudes towards privacy and compliance, exploring whether culturally adapted privacy regulations could enhance effectiveness and user trust in different regions.
Technological advancements and privacy -
Technological advancements, such as artificial intelligence (AI) and machine learning, present both opportunities and challenges for digital privacy. While these technologies can improve data security and privacy management, they also create new risks and ethical dilemmas. For instance, the use of facial recognition technology raises significant privacy concerns. Further research could examine how emerging technologies intersect with privacy regulations, and how laws can evolve to address new challenges while leveraging technological benefits to enhance privacy protection.
Effectiveness of enforcement mechanisms -
The effectiveness of enforcement mechanisms is another critical area for further discussion. This study found that enforcement is a significant challenge across all regions examined, primarily due to understaffed regulatory bodies. Investigating alternative enforcement strategies could provide insights into more efficient and effective ways to enforce privacy regulations. Additionally, comparative studies on the enforcement models of different countries could identify best practices and innovative approaches to ensure compliance.
|
http://arxiv.org/abs/2409.02362v1 | 20240904011819 | Bundled matrix product states represent low-energy excitations faithfully | [
"Thomas E. Baker",
"Negar Seif"
] | quant-ph | [
"quant-ph"
] |
[Please direct correspondence to: [email protected]]
Department of Physics & Astronomy, University of Victoria, Victoria, British Columbia V8P 5C2, Canada
Department of Chemistry, University of Victoria, Victoria, British Columbia V8P 5C2, Canada
Centre for Advanced Materials & Related Technologies, University of Victoria, Victoria, British Columbia V8P 5C2, Canada
Department of Physics & Astronomy, University of Victoria, Victoria, British Columbia V8P 5C2, Canada

§ ABSTRACT
We consider a set of density matrices, all of which are written in the same orbital basis, where the orbital basis size is less than the total Hilbert space size. We ask how each density matrix is related to each of the others by establishing a norm between density matrices based on the truncation error in a partial trace over a small set of orbitals. We find that states with large energy differences must have large differences in their density matrices. Small energy differences divide into two groups: one where the two density matrices have small differences and another where they are very different, as in the case of symmetry. We extend these ideas to a bundle of matrix product states and show that the bond dimension of the wavefunction ansatz for two states with a large energy difference is larger. Meanwhile, states separated by small energy differences can have nearly the same bond dimensions when the states are similar.
Bundled matrix product states represent low-energy excitations faithfully
Thomas E. Baker
Negar Seif
September 9, 2024
=========================================================================
§ INTRODUCTION
Density matrices represent one of the core objects in quantum mechanics. They store a wealth of information about the system and can be useful for solving problems. It is well established that the trace of the density matrix multiplied onto any operator gives the expectation value of the operator for a given state that the density matrix represents <cit.>.
When diagonalized with an eigenvalue decomposition, density matrices are decomposed into a diagonal matrix that contains the orbital occupations of the natural orbitals. The natural orbitals themselves are the eigenvectors of the density matrix. It was originally pointed out by Löwdin <cit.> that natural orbitals were a rapidly converging basis set ( i.e. the lowest eigenvalue converges faster than other choices of a basis with increasing numbers of natural orbitals). Because the solution of natural orbitals requires a ground state wavefunction, it is often computationally costly to obtain them before a computation. So, another basis set is often used.
However, describing the density matrix with a number of states equal to the total Hilbert space size is computationally costly. Reducing the number of degrees of freedom while maintaining accuracy on the result is the main challenge of computational chemistry and solutions of quantum problems on the classical computer in general. This is the foundational idea behind renormalization.
What is considered less often is how well the natural orbitals of one density matrix describe, or fail to describe, another density matrix. Our goal in this paper is to determine how well density matrices, when summed over only some of the basis functions in a given basis, can describe a system. We consider here the idea of bundling together different density matrices. We further consider how accurate those density matrices are if a common basis is used to write both density matrices in full. For example, if the set of m orbitals that have the highest occupation for one density matrix is used to express another density matrix, how accurate can the second density matrix be, and what is the best way to characterize it?
The fundamental question being asked here is how best one can relate what we will call a bundled set of density matrices, defined as follows.
A bundle of density matrices is defined as a set of density matrices that are all written in the same basis.
This is not the same as an ensemble of states contained in the density matrix.
The fundamental quantity that we want to investigate is whether a notion of closeness ( i.e., a norm) can be defined for the independent density matrices. The result used will be to establish a relationship between the truncation error and the metric distance between two density matrices. This will also be related to the energy difference between two states. The argumentation applies to any local Hamiltonian, which is reasonable for physical systems. We then extend the outcomes of those answers to matrix product states (MPS) to understand how the bond dimension of a bundle of MPSs will behave. This will explain why the bond dimension of the bundled MPS was not explosively large when an algorithm was formulated to solve for excitations in a quantum model <cit.>.
The analysis tools used to formulate the truncation of the bundled MPS apply in principle to bundles of any type of density matrices so long as the eigenstates satisfy the area law of entanglement. However, we choose to focus on bundles of eigenstates because they apply most readily in entanglement renormalization algorithms. In the following, we use theorems only when they are most relevant to the main thesis statement of the paper. We use definitions throughout to clearly define core concepts and keywords. Corollaries are used to communicate small extensions of the core theorems. Lemmas are used when heavy reliance on results outside of the paper are required to prove the statement and also when those statements are required for proofs later on.
§ BACKGROUND ON DENSITY MATRICES
The class of problems that we wish to solve is based on the definition of a Hamiltonian that is a self-adjoint operator <cit.>, H, composed of complex coefficients (H∈ℂ^M× M for an M sized Hilbert space). The eigensolutions, ψ, of this operator satisfy the relationship Hψ=Eψ for eigenvalues (energies) E∈ℝ and eigenvectors (wavefunctions) ψ∈ℒ^2 (square integrable) that contain complex numbers <cit.>. The Hamiltonian can be represented by a number of site indices i,j,k,ℓ,…. For many-body Hamiltonians, there is a quartic term (4-indices required) that appears to account for the electron-electron interaction, although the results we derive here will apply for any interaction. We choose to start from many-body Hamiltonians since this will recover a wide class of non-relativistic phenomena that we are interested in.
There is no consideration for divergences ( i.e., points where the evaluation of a quantity is infinite) in any terms as this analysis is solely concerned with models implemented to a finite numerical precision. Thus, all singularities in any interactions are regularized by finite difference approximations.
§.§ Density matrices
A density matrix can have several connotations. We explicitly define several that are useful here. The type of density matrix that will be used here is the one-body reduced density matrix, although we describe the more general case in many places.
For a given Hamiltonian H, the full density matrix of a system is defined by
ρ=∑_kη_k|ψ_k⟩⟨ψ_k|
for the kth excitation of the system and some occupation used throughout as 0≤η_k≤1. When projected onto a real space lattice (or other basis) through a resolution of the identity, the density matrix then assumes the form,
ρ= ∑_ ijk𝒜ℬη_k|i𝒜⟩⟨ i𝒜|ψ_k⟩⟨ψ_k|jℬ⟩⟨ jℬ|
= ∑_ij𝒜ℬρ_ij𝒜ℬ|i𝒜⟩⟨ jℬ|
after defining ρ_ij𝒜ℬ=∑_kη_k⟨ i𝒜|ψ_k⟩⟨ψ_k|jℬ⟩. When 𝒜 and ℬ are null sets, i.e., when either variable contains no indices, the density matrix is represented as a one-body reduced density matrix (only requiring the indices i and j),
ρ=∑_ijρ_ij|i⟩⟨ j|.
Had more indices been kept, then a higher order density matrix would be represented ( i.e., i,j,k,ℓ for the two-body reduced density matrix). In second quantization, one can simply compute ρ_ij=⟨ c^†_iσ c_jσ⟩ (see Apx. <ref>) in a fermion model with spin σ. The density matrix defined in Def. <ref> is generally representing a mixed density matrix.
There are a few types of density matrices that form special types of the above definition.
In the special case where ρ^2=ρ, or Tr(ρ^2)=1, the density matrix is called a pure density matrix. Pure states can be represented as ρ=|ψ⟩⟨ψ| for some state ψ.
In quantum chemistry, there is a different normalization convention. The trace of the density matrix is not always one, but instead is the number of particles, N_e, with a particular spin. We find it occasionally useful to refer to an ensemble density matrix that we define as the following.
Any density matrix whose trace is not one (η_k can assume any positive value).
Note that when the ensemble normalization is used, one can have Tr(ρ)≠1 and ρ^2=ρ at the same time. For example, three electrons of the same spin in a pure state can be represented in an ensemble density matrix and have Tr(ρ)=3.
In many contexts, the mixed density matrix and the ensemble density matrix are used synonymously. A density matrix in the mixed representation can be thought of as a linear superposition of other states, so the concepts are the same.
We define explicitly here the ensemble density matrix not only to distinguish between normalization factors in quantum information and quantum chemistry, but we also take this opportunity to highlight that the bundled density matrix from Def. <ref> is not an ensemble density matrix. Specifically, the bundle of density matrices can contain ensemble density matrices, although the density matrices in the bundle can be of any type.
The diagonalization of a density matrix yields a set of eigenvectors known as the natural orbitals. For the one-body reduced density matrix, these orbitals represent the density matrix as
ρ=∑_kε_k|Φ_k⟩⟨Φ_k|
with an eigenvalue ε_k sometimes called an occupational weight.
At first glance, Eq. (<ref>) and Eq. (<ref>) appear to be identical. However, this is not the case. The point of the definition of the natural orbitals is that they are expressed in a single-particle basis (only one coordinate 𝐫 as used more extensively in Apx. <ref>). Meanwhile, the eigenvectors used in Eq. (<ref>) have one coordinate 𝐫∈ℝ^3 for each electron in the system.
We will note that certain renormalization schemes can generate a more efficient basis set than natural orbitals <cit.>. However, reducing the problem down to few enough orbitals that a polynomial time solver could be used would imply that the determination of that transformation is not discoverable in polynomial time since the general problem is known to be hard <cit.>. Thus, no universally efficient procedure should be expected.
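To make the preceding definitions concrete, the following minimal sketch (illustrative only; the matrix is a random stand-in for a one-body reduced density matrix rather than one taken from an actual calculation) extracts the natural orbitals and their occupational weights by diagonalization:

    using LinearAlgebra

    Ms  = 8
    A   = randn(ComplexF64, Ms, Ms)
    rho = A * A'                         # Hermitian, positive semi-definite stand-in for rho_ij
    rho = rho / tr(rho)                  # normalize so tr(rho) = 1

    F   = eigen(Hermitian(rho))          # diagonalize the one-body reduced density matrix
    ord = sortperm(F.values, rev=true)   # order by decreasing occupational weight
    occ = F.values[ord]                  # occupational weights eps_k
    Phi = F.vectors[:, ord]              # columns are the natural orbitals Phi_k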
For a given Hamiltonian operator H, the expectation value E (energy) is
E=Tr(ρ H)≡∑_k⟨ k|ρ H|k⟩.
Replacing H by any other operator W gives the expectation value ⟨ W⟩. The index k is taken over any set of orbitals that is orthogonal and complete.
Throughout, we will only consider orthogonal basis states, which applies equally to k above, and the final result requires a local operator.
Typically what is done in practical computations is to take a truncated trace over the natural orbitals, ordered from the highest to the lowest weight. The resulting answer converges very quickly, which can be seen from using these functions in practice <cit.>.
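A minimal numerical sketch of this procedure is given below (again with random stand-ins for ρ and for a local operator, so the numbers themselves are only illustrative): the expectation value is evaluated once as the full trace and once as a truncated trace over the m most occupied natural orbitals, and the latter approaches the former as m grows:

    using LinearAlgebra

    Ms  = 8
    H   = Symmetric(randn(Ms, Ms))               # stand-in for a local operator in an orthogonal basis
    A   = randn(Ms, Ms); rho = A * A'; rho = rho / tr(rho)

    E_full = tr(rho * H)                         # E = Tr(rho H)

    F   = eigen(Symmetric(rho))
    ord = sortperm(F.values, rev=true)
    Phi = F.vectors[:, ord]                      # natural orbitals, most occupied first
    m   = 4                                      # keep only the m most occupied natural orbitals
    E_m = sum(Phi[:, k]' * rho * H * Phi[:, k] for k in 1:m)   # truncated-trace estimate of E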
For completeness, we define an excitation in the system using the above concepts.
Given a set of excitations spanning an interval of energy, the next excitation can be defined as follows. To find an excitation at eigenenergy E̅, take the set of all density matrices composed of eigenstates with E<E̅. Then find the states that are orthogonal to those states. The minimum energy will be the excitation up to degeneracy.
Two different excitations will have different density matrices.
§.§ Truncated density matrices
So far, we have discussed density matrices where the basis states used to describe the density matrix span the entire Hilbert space. Let us now define a truncated density matrix where small occupations are set to zero.
Returning to Def. (<ref>), the definition can be modified to define a truncated density matrix if the sum over k in Eq. (<ref>) is restricted to a value m less than the Hilbert space size, M. We denote the truncated trace as
Tr_m^(γ)(ρ^(α))=∑_i=1^m⟨Φ_i^(γ)|ρ^(α)|Φ_i^(γ)⟩
for orbitals from a set γ on the α excitation.
We use the term `truncated trace' because `partial trace' is usually associated with tracing over lattice sites and producing the partial density matrix.
There is an immediate consequence that the density matrix is now truncated, leading naturally to the definition of the truncation error.
The density matrix may be truncated to dimension m, known as the bond dimension. The difference from the true value of the trace of ρ is known as the truncation error, δ, which provides an estimate of the precision of the resulting expectation values from Def. <ref>. The full definition is then
Tr(ρ^(α))-Tr_m^(α)(ρ^(α))=∑_i=1^Mρ_ii^(α)-∑_i=1^mρ_ii^(α)≡δ^(α)_m;γ
where α denotes a state that was used to construct ρ and γ is the basis over which the truncated trace was evaluated.
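In the natural-orbital basis of the same state, the truncated trace is just the partial sum of the largest occupational weights, so the truncation error is the total weight that is discarded. A minimal sketch with made-up occupational weights (illustrative only):

    occ = sort(rand(8), rev=true); occ = occ / sum(occ)   # stand-in occupational weights, summing to 1
    m = 4                                                  # bond dimension
    delta = sum(occ) - sum(occ[1:m])                       # truncation error, i.e. the discarded weight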
§ RELATIONSHIP BETWEEN TWO DENSITY MATRICES
The energy difference between excitations can be defined using the expectation values as the following.
For a given Hamiltonian H, the energy difference between two states α and β is
Δ E_αβ = Tr((ρ^(α)-ρ^(β))H)
where Δ E_αβ=E_α-E_β.
Two excitations α and β each have natural orbitals. These do not need to be the same set, nor orthogonal to each other. However, we restrict ourselves to orthogonal sets of natural orbitals so that they are related by a unitary transformation in every case.
The trace of any density matrix is invariant to the basis over which the trace is performed,
Tr(ρ)=Tr(Uρ U^†)
this is often known as the cyclic property of the trace. However, a unitary cannot be assumed at the start, and we will see that the traces of the two density matrices must match to ensure this is true.
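The basis independence of the trace is simple to verify numerically; in the sketch below (a random stand-in density matrix and a random unitary, for illustration only) the trace is unchanged by the rotation:

    using LinearAlgebra

    Ms  = 8
    A   = randn(ComplexF64, Ms, Ms); rho = A * A'; rho = rho / tr(rho)
    U   = Matrix(qr(randn(ComplexF64, Ms, Ms)).Q)     # a random unitary
    abs(tr(rho) - tr(U * rho * U'))                   # ~0: the trace is invariant under the rotation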
Natural orbitals of two different density matrices (assumed to be written in the same orbital basis with different occupational weights) are related by a unitary transformation if the trace is the same.
Two density matrices for two states α and β satisfy
Tr(ρ^(α))-Tr(ρ^(β))=Δ N_e
and by the cyclic property of the trace from Eq. (<ref>), we can transform ρ^(α)→ Uρ^(α)U^† without changing the trace. If the number of particles between the two density matrices is the same, then this implies that the natural orbitals of the two states are related by a unitary matrix. For two states with different particle number, we would need to use an orthogonal transformation U^†→ O^-1, but the change in magnitude of the density matrix under the orthogonal transformation must be accounted for, since we always consider natural orbitals with a normalized amplitude. Thus, one can enforce a normalization to again show that the natural orbitals are related by a unitary transformation.
So long as the natural orbital sets for α and β are both complete in the space spanned by two density matrices (including unoccupied natural orbitals), then this proof holds. We restrict our consideration to density matrices with the same total trace for simplicity, but all results can be extended to the case where an orthogonal transformation is required.
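A small numerical sketch of this statement, using two made-up mixed density matrices with the same (unit) trace, could look as follows; it verifies both the cyclic property and that the natural-orbital bases are connected by a unitary.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8

def random_mixed_dm():
    # Made-up rank-4 density matrix with unit trace.
    rho = np.zeros((M, M), dtype=complex)
    for w in rng.dirichlet(np.ones(4)):
        v = rng.normal(size=M) + 1j * rng.normal(size=M)
        v /= np.linalg.norm(v)
        rho += w * np.outer(v, v.conj())
    return rho

rho_a, rho_b = random_mixed_dm(), random_mixed_dm()      # same trace (= 1)
_, orbs_a = np.linalg.eigh(rho_a)
_, orbs_b = np.linalg.eigh(rho_b)

# Unitary mapping the natural orbitals of alpha onto those of beta.
U = orbs_b @ orbs_a.conj().T
print("U is unitary          :", np.allclose(U @ U.conj().T, np.eye(M)))
print("cyclic property holds :",
      np.isclose(np.trace(U @ rho_a @ U.conj().T), np.trace(rho_a)))
```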
§.§ Local systems
The notion of locality can also apply to the eigensolutions and general operators <cit.>. In fact, is a central idea in quantum physics for all types of physically relevant ground-states.
§.§.§ Local correlations
The area law is a statement of correlations for extremal eigenvalues of the full spectrum. The two conditions for correlation functions (operator O) depending on whether there is a gap in the eigenvalue spectrum (gapped) or not (gapless) <cit.>:
⟨ i|O|j⟩ ∼ exp(-|i-j|/ξ)   [gapped]
⟨ i|O|j⟩ ∼ 1/|i-j|^γ   [gapless]
for two arbitrary, real exponents ξ and γ. This condition holds up to a sign of the function. Without loss of generality, the same definition also holds for more than two sites in the following.
For a proof of the correlation dichotomy, in some contexts known as Kohn's near-sightedness principle, we refer the interested reader to Refs. hastings2004locality,kohn1996density,prodan2005nearsightedness.
The natural orbitals are local, defined as a non-zero on a compact subset of 𝐫∈ℝ^3.
The natural orbitals are derived from ⟨ĉ^†_iσĉ_jσ⟩ for each element of the density matrix. Upon diagonalization, each natural orbital is a linear combination of these elements. Since a linear combination of local correlation functions is itself local, the natural orbitals are also local.
§.§.§ Local Hamiltonians
The specific property of the system that we want to study here is locality of the Hamiltonian ( i.e., interactions of finite extent). The basic assumption, which at a coarse level constrains the long-range behavior of the Hamiltonian, is contained in the following definition.
A local Hamiltonian satisfies the following two properties in the thermodynamic limit.
lim_|i𝒜-jℬ|→∞ ⟨ i𝒜|H|jℬ⟩→0
where i𝒜 denotes a site i within a set of coordinates (cluster) 𝒜.
If the interactions are local, then in one limit
lim_|i𝒜-jℬ|→0 ⟨ i𝒜|H|jℬ⟩→ C_i𝒜
where C_i𝒜∈ℂ is finite and real. This definition holds whether the sites i𝒜 and jℬ represent single sites or clusters of sites, but it represents the ultra local limit where the Hamiltonian appears truly local.
§.§ Relationship between natural orbital states
It is useful to explore the relationship between the natural orbitals of two different states and how the unitary that connects them can appear. There are two broad categories that the unitary can take and it is worth explicitly defining each. After defining the two cases, we remark on some physical cases where these can be found.
A unitary matrix U relating the natural orbitals of two excitations is nearly the identity except for a sub-block over m states if the two states α and β both have small truncation error in the m most important orbitals to ρ^(α).
Denote the occupation values of the rotated density matrix U ρ^(β)U^† as (Tr^(γ)_m denotes a truncated trace in a basis γ for m of the most relevant orbitals)
Tr^(α)_m(Uρ^(β) U^†)=∑_k=1^m ε̄_kk^(β)
and the unrotated value would be the same but without the bar applied. The bar denotes the truncated trace in the space of the most relevant orbitals for α. In general, this does not have to be the same evaluation as over the most important m orbitals for β.
The following statement is equivalent to having a small truncation error in the most relevant orbitals for a state α:
Tr(ρ^(β))-Tr^(α)_m(ρ^(β))=δ^(β)_m;α!≅δ^(α)_m;α≪Tr(ρ^(α))
where it is clear that the truncation error is small. The trace is taken over the largest 1<m≪ M orbitals relevant for the most important natural orbitals for α, {Φ^(α)}_m. If we have δ^(β)_m;α≅δ^(α)_m;α, then the most important states for α give a small truncation error for β.
In order for the condition in Eq. (<ref>) to hold, the following must also be true
∑_k=1^mε_k^(α)≈∑_k=1^m ε̄_kk^(β)=∑_k,k'=1^m∑_ℓ=1^Mε_ℓ^(β)u_ℓ ku_ℓ k'^*δ_kk'
following Lemma <ref> for an element of the unitary u_kℓ. We use the index k for the basis of natural orbitals for α. The indices ℓ and ℓ' for the natural orbitals for β.
There is a relationship between the unrotated coefficients and the β state and those for the truncation in the states relevant for α (indexed by k),
∑_k,k'=1^m∑_ℓ=1^Mε_ℓ^(β)u_ℓ ku_ℓ k' ^*δ_kk'≤∑_ℓ=1^mε_ℓ^(β)
where the sum over ℓ in the diagonal elements of ρ^(β) is taken over the most relevant m natural orbitals for the β state. The equality holds when the truncation error is small for both α and β: if only the relevant orbitals for α are allowed, then those same orbitals must be relevant for β. Thus, the unitary appears strongly diagonal except for an m× m block for states of low truncation error.
States where the unitary is close to the identity in the irrelevant (M-m)×(M-m) block of U will be called similar excitations.
Two excitations have a similar set of m natural orbitals if their density matrices have the relationship that a unitary U transforming one natural orbital set into another has approximately the decomposition U=W⊕ P for a unitary W of size m× m and an second unitary matrix P of size (M-m)×(M-m).
This definition is not meant to be exhaustive or tight for all possible similar states, but it is sufficient for the states here.
The opposite would be dissimilar excitations.
Two excitations have a similar set of m natural orbitals but are not approximately of the form U=W⊕ P as defined from similar states.
This type of state is what would be encountered in the case of a symmetry protected state. Alternatively, two very widely separated centers of a potential v(x) in an eigenvalue problem would also be dissimilar.
§.§ Truncation errors as a metric distance
The set of truncated density matrices should also be discussed in the context of a normed vector space, but the metric must be defined appropriately. In essence, we ask: how closely related are two truncated density matrices?
The relative truncation error between two states α and β will be defined as
r^(γ)_m(α,β)=|δ^(α)_m;γ-δ^(β)_m;γ|
where it is very important to note that both truncation errors were evaluated in the same m-sized basis generated from excitation γ (either α or β or some other state).
In order for the truncated set of natural orbitals from one excitation to be mapped onto a vector space with respect to the natural orbitals of another excitation, there must be a definition of the metric distance. We can show that the relative truncation is a suitable metric.
The relative truncation error can be used as a practical metric distance between states in almost all cases of practical interest.
The absolute value of the truncation error, Eq. (<ref>), satisfies all necessary properties of the metric distance in almost every case of relevance. The metric r^(α)_m for some truncation error δ^(α) satisfies
* r^(γ)_m(α,α)=0
* r^(γ)_m(α,β)≥0
* r^(γ)_m(α,β)=r^(γ)_m(β,α)
* r^(γ)_m(α,β)≤ r^(γ)_m(α,ζ)+r^(γ)_m(ζ,β) [Triangle ineq.]
which are the properties of a metric <cit.>. The triangle inequality is satisfied in the same way that the ℒ^1-norm is constructed normally. Points 1 and 2 follow by definition. However, it is technically possible that r^(γ)_m(α,β) can be equal to zero, as is especially evident when m=M, since the truncation errors are then both zero. Yet, it is generally the case that Point 2 holds for general states even when evaluating the truncation error out to numerical precision for 1≪ m≪ M. This is why we call it a practical metric instead of simply a metric. The symmetry condition (3) is satisfied if the orbitals used to evaluate the truncation error are the same for both states. The triangle inequality (4) can be readily verified.
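As an illustration, the following sketch evaluates the relative truncation error for three made-up density matrices in a common basis γ (here chosen as the natural orbitals of the first state) and checks the symmetry and triangle-inequality properties numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
M, m = 12, 4

def random_dm():
    rho = np.zeros((M, M), dtype=complex)
    for w in rng.dirichlet(np.ones(6)):
        v = rng.normal(size=M) + 1j * rng.normal(size=M)
        v /= np.linalg.norm(v)
        rho += w * np.outer(v, v.conj())
    return rho

states = {"alpha": random_dm(), "beta": random_dm(), "zeta": random_dm()}

# Common basis gamma: natural orbitals of alpha, most relevant first.
occ, orbs = np.linalg.eigh(states["alpha"])
gamma = orbs[:, ::-1]

def trunc_error(rho):
    """delta_{m;gamma}: trace weight lost when keeping only m orbitals of gamma."""
    diag = np.real(np.einsum("ik,ij,jk->k", gamma.conj(), rho, gamma))
    return np.trace(rho).real - diag[:m].sum()

def r(x, y):
    return abs(trunc_error(states[x]) - trunc_error(states[y]))

print("symmetry :", np.isclose(r("alpha", "beta"), r("beta", "alpha")))
print("triangle :", r("alpha", "beta") <= r("alpha", "zeta") + r("zeta", "beta") + 1e-12)
```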
The main takeaway from this identification of a metric is that the amount to which the two states differ (with respect to a common set of states γ) from each other is communicated through the truncation error. When the description of a given state is accurate to δ^(α), then the states that will be most efficiently bundled are those also with low truncation error δ^(β).
The practical effect of this is that the truncation error is not only meaning the amount of information lost in a truncation to m orbitals, but the difference between truncation errors is also communicating the distance between the excitations, a highly remarkable feature especially in the context of entanglement renormalization which normally only assigns the truncation error as a means of uncertainty. The metric here implies it can also determine how well an excitation is described in a given basis.
§.§ Energy differences as a metric distance
With regards to the identification of the metric for the truncation error, it is well-known that the truncation error implies an uncertainty in the energy, so it is often used as an error measure for entanglement renormalization methods <cit.>. Because of this relationship between the truncation error and uncertainty on expectation values, it is natural to ask if another quantity can also serve as a metric distance, a quantity that is more physical.
The most readily available quantity in many cases is the energy difference. We seek to establish if the differences in energies can be proven to be a metric as well.
Two density matrices (for states α and β) truncated to order m in some basis have a metric distance given by the absolute value of the energy difference, |Δ E^(m)_αβ;γ| with Δ E^(m)_αβ;γ=E^(m)_α;γ-E^(m)_β;γ, for local Hamiltonians expressed in local basis sets if not both of E_α and E_β are zero.
Consider the energy difference in a given basis written as
Δ E_αβ;γ^(m)=.Tr_m^(γ)((ρ^(α)-ρ^(β))H)|_γ=α
where we select the γ=α basis here for clarity but can select any basis without loss of generality. An equivalent way to express this is to take the trace over all real-space positions but only expand the density matrices out to order m in the most relevant orbitals for α. The resulting energy difference in the most relevant natural orbitals for α is then
Δ E_αβ;α^(m)= ∑_i𝒜⟨ i𝒜|∑_k=1^m(ε^(α)_k|Φ^(α)_k⟩⟨Φ^(α)_k|
-∑_ℓ=1^M∑_k'=1^mε^(β)_ℓ u_ℓ ku_ℓ k'^*|Φ^(α)_k⟩⟨Φ^(α)_k'|)H|i𝒜⟩
where ρ^(β)=∑_ℓε_ℓ^(β)|Φ_ℓ^(β)⟩⟨Φ_ℓ^(β)| with ℓ indexing the complete space and with the unitary defined earlier in Eq. (<ref>).
Let the definition
ε̄_kk'^(β)=∑_ℓ=1^Mε^(β)_ℓ u_ℓ ku_ℓ k'^*
hold in the following. A complete set of states can be inserted before H to give
H|i𝒜⟩→∑_jℬ|jℬ⟩⟨ jℬ| H|i𝒜⟩
where the term ⟨ i𝒜|H|jℬ⟩ can be thought of as the Hamiltonian in real space. For local Hamiltonians, this term goes to a delta function for large differences between the sites i,j,𝒜,ℬ denoted as h_ij𝒜ℬ.
In the local limit where (h_ij𝒜ℬ→ C_i𝒜δ_i𝒜,jℬ), Eq. (<ref>) becomes
Δ E_αβ^(m)= ∑_i𝒜⟨i𝒜|∑_k=1^m(ε^(α)_k|Φ^(α)_k⟩⟨Φ^(α)_k|
-∑_k'=1^m ε̄^(β)_kk'|Φ^(α)_k⟩⟨Φ^(α)_k'|)|i𝒜⟩ C_i𝒜
At this point, the density matrix for β projected into the orbitals for α is not diagonal. If we impose that the orbital basis is local, as in Lemma <ref> for natural orbitals, then this implies that, as the difference between k and k' becomes large, the overlap between the functions must go to zero. Recall that we only consider orthogonal basis sets. This means that ∑_ℓ u_ℓ ku_ℓ k'^*=δ_kk' in the ultra-local limit. Consequently, the expression reduces to
Δ E_αβ^(m)= ∑_i𝒜∑_k=1^mC_i𝒜(ε^(α)_k-ε̄^(β)_k)|⟨Φ^(α)_k|i𝒜⟩|^2.
where ε̄_k^(β) are the diagonal coefficients of ρ^(β) in the basis of the most relevant orbitals for α.
With an absolute value sign, |Δ E^(m)_αβ;γ|, one can verify that the axioms of a metric are practically satisfied for 1≪ m≪ M, as presented in Thm. <ref>.
There can be an ambiguity in the definition of the metric if the states are zero but this is not typical of eigenvalue problems relevant here. So, we exclude this case.
Theorem <ref> made heavy use of the ultra-local limit where the Hamiltonian is effectively diagonal. This and several other features are discussed here as useful to derive the core statements, but understanding how things would change if the Hamiltonian is longer-range is worth performing.
If the Hamiltonian contains longer range terms, then terms like ⟨ i𝒜|H|jℬ⟩ to some relative distance between i𝒜 and jℬ can be incorporated.
Many common basis functions decay exponentially. Notably, the Gaussian basis set decays exponentially away from the origin of the Gaussian. Thus, in the regime where matrix product states are best applied (local, gapped Hamiltonians), the local approximation is a very good one for the excitations.
The ultra-local limit is not the most general form that the unitary can take here for arbitrary problems, but it is the most applicable one for the tensor network case we find below, where there is merely a permutation of the elements in the same basis. All of the excitations will be written in the same basis of entangled states, and the density matrix eigenvalues of each are in an m× m basis ( i.e., effectively all of the occupational weights can be discovered in g same-size matrices which are all diagonal and in the same basis). If one prefers, then {Φ_k^(α)} and {Φ_k^(β)} belong to a common set γ and an m× m tensor with occupational weights can be identified for both states in that same basis.
§.§ Large energy differences
We can now make a general statement about large energy differences.
Large energy differences imply larger differences between density matrices.
Note that
|Δ E_αβ;γ^(m)|≤|Tr_m(|ρ^(α)-ρ^(β)|H)|≤Tr(|ρ^(α)-ρ^(β)|H)
where |ρ^(α)-ρ^(β)| is the element-wise absolute value implemented in the following way
|ρ^(α)-ρ^(β)|=∑_k,ℓ=1^m|ρ̃_kℓ^(α)-ρ̃_kℓ^(β)||Φ^(γ)_k⟩⟨Φ^(γ)_ℓ|
where tildes denote that the density matrices are rotated into the γ basis. Effectively, one simply replaces round braces around the occupation values in Eq. (<ref>) by an absolute value. By comparison with Eq. (<ref>), then Eq. (<ref>) is satisfied. Further, recall that coefficients C_i𝒜 in Eq. (<ref>) set an effective energy scale. The maximum of which, C_max., can act as a normalization to give
|Δ E_αβ;γ^(m)|/C_max.≤|Tr_m(|ρ^(α)-ρ^(β)|H)|/C_max.≤|ρ^(α)-ρ^(β)|_F
where |ρ^(α)-ρ^(β)|_F is the Frobenius norm; the normalization by C_max. is a bound since the coefficients of H can be either positive or negative but are at most C_max. in magnitude. This is true by inspection of Eq. (<ref>) and |ρ^(α)-ρ^(β)|_F=∑_k,ℓ=1^m|ρ_kℓ^(α)-ρ_kℓ^(β)|. The sum over k and ℓ in the Frobenius norm can be truncated to m or kept in the full basis.
Summarily, when |Δ E_αβ;γ^(m)| is large, then the density matrices must be different as conveyed by the Frobenius norm. So, a large energy difference implies a large difference between the states α and β.
The result is valid in any basis if the operator is local. The use of a finite number of states m for the orbital basis makes this result useful in a truncated space.
So, density matrices with large energy differences necessarily have large differences. The opposite is not so clearly defined. If the energies are low, they can either be a similar state, in which the most relevant basis for one excitation is relevant for another state. Alternatively, if the state is dissimilar, then the basis relevant for both excitations is very different, as discussed earlier.
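A small numerical check of this bound in the ultra-local (here fully diagonal) limit, with made-up density matrices and Hamiltonian coefficients, could look as follows.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 10

def random_dm():
    rho = np.zeros((M, M), dtype=complex)
    for w in rng.dirichlet(np.ones(5)):
        v = rng.normal(size=M) + 1j * rng.normal(size=M)
        v /= np.linalg.norm(v)
        rho += w * np.outer(v, v.conj())
    return rho

rho_a, rho_b = random_dm(), random_dm()

# Ultra-local (fully diagonal) Hamiltonian with coefficients C_i.
C = rng.uniform(-1.0, 1.0, size=M)
H = np.diag(C)

dE = np.trace((rho_a - rho_b) @ H).real
bound = np.abs(C).max() * np.abs(rho_a - rho_b).sum()    # C_max times element-wise norm
print(f"|dE| = {abs(dE):.4f}  <=  bound = {bound:.4f}")
```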
There are several useful consequences of the previous derivation of the main result. We will discuss them here before moving towards the solution with partial density matrices.
The results of Thm. <ref> generalize to all nearby (Δ E_αβ≈0) states with low truncation error.
Reconsider the form of Eq. (<ref>) but now for a chain of energy differences between several states of a single symmetry sector. The first energy difference satisfies Eq. (<ref>) as does the next set of two excitations. Therefore, as a transitive property, all wavefunctions with small energy differences share a high degree of overlap in their truncated density matrices.
Taken together, the results of this section show that partial density matrices with low energy can be bundled efficiently ( i.e., a basis m can give a small truncation error for both states) together. Each excitation added to the bundle comes at either no additional cost in terms of the number of natural orbitals required for a small truncation error if the states are similar. For small energy differences, there is a possibility that the small energy states cost no more than making the lowest energy state. When bundling two states in the bulk of the eigenvalue spectrum together, one must only pay the cost of one of the states, the second comes at a small cost if similar.
When adding a dissimilar state to the bundle, a sufficient number of natural orbitals must be added to the orbital basis to give a low truncation error for that dissimilar state. Once the new dissimilar state is added to the bundle, a new set of nearby similar states can be bundled at small cost.
§.§ Application to quantum chemistry
The primary focus of this paper is on locally entangled systems. This is not the general case in quantum chemistry where models have longer range interaction. In this case, the entanglement of the states is known to be much larger than for the local models as can be seen from direct computation. In particular, we note that the extension of models to two dimensions can cause an exponential increase in the entanglement with the width of the system <cit.>.
This would cast doubt on whether the ultra-local limit will apply in a quantum chemistry system. It turns out that by use of the singular value decomposition (SVD), the basis states of the model can be written such that the basis functions between two states are identical for two different systems. For example, writing two excitations and then cutting the system with the SVD at the same bond in each system will give a form like Ψ=UDV^† which has grouped basis functions to the left of the cut (contained in U) and the other basis functions for the right of the cut (contained in V^†). Thus, the same basis functions can be assigned for both U and V^† for both states. Between the two states, the basis functions may be different in practice, but we can always find a unitary transformation that makes the states match between the U (V^†) for one state and the other.
However, the two D matrices are still diagonal. This is exactly the ultra-local limit as we applied it to the density matrix (but not the Hamiltonian from Def. <ref>) because the only difference is that the two states have different occupation numbers inside of D. Thus, only a permutation is allowed from the basis functions to rearrange the occupation numbers of the first state to the second. Thus, the ultra-local limit, in the construction of the SVD, is valid even between two different states. We merely motivated the limit by applying an understanding of the real-space orbitals previously.
Thus, quantum chemistry can fit into this hierarchy established here. The computational bottleneck appears in retaining enough states in the SVD in the quantum chemistry system. Because entanglement is larger, there must be more states retained, thus the method is less efficient.
§ DENSITY MATRIX RENORMALIZATION
Up to now, all considerations have been for the full wavefunction and full density matrix and n-body reduced density matrices. The statement for the full wavefunction is not completely useful for the solution in a tensor network decomposition where partial density matrices on either the left or right partition of a given system (partitioned in the sense of partitioning a graph) is the relevant quantity for the eventual solution.
A system partitioned into two sets can identify basis functions for each set. The basis functions may be used to project the full density matrix into a reduced site representation. The density matrix on one half of the system is equivalent to the full density matrix but traced out on the other half. No matter how the system is partitioned, the entanglement expressed by one partial density matrix must be equal to the complementary density matrix's entanglement.
We do not rule out that a null set can be used for one of the two partitions, but this would imply that the partial density matrix is equivalent to the full density matrix. The sites do not need to be contiguous, but they are chosen to be such here.
§.§ Matrix product states
An entanglement renormalization algorithm explicitly calculates the components of a density matrix partitioned between two parts of a system with the use of a singular value decomposition (see Ref. bakerCJP21,*baker2019m,dmrjulia1 for an explicit derivation). The following definitions cover the matrix product state and make use of the above theorems.
A wavefunction with degrees of freedom σ_i on each site i is written as
|ψ⟩=∑_{σ_i}c_σ_1σ_2σ_3σ_4…|σ_1σ_2σ_3σ_4…⟩
for some probability amplitudes c_σ_1σ_2σ_3σ_4…∈ℂ. By performing a series of reshapes and SVDs <cit.>, Eq. (<ref>) can be decomposed into a series of tensors <cit.>
|ψ⟩=∑_{σ_i}, {a_i}
A^σ_1_a_1A^σ_2_a_1a_2… D^σ_i_a_i-1a_i…
B^σ_N_a_N-1|σ_1…σ_N⟩
for a number of sites, N. Raised and lowered indices are for ease of viewing as there is no notion of covariant or contravariant indices <cit.>. Raised and lowered indices are used here to signify the partitioning of the lattice via a reshaping operation <cit.>. The index introduced by the SVD (a_i) is known as a link index and its dimension is called a bond dimension. The orthogonality center D contains the weight of the basis sets and can be gauged to any site or bond <cit.>. All tensors to the left of the center of orthogonality (A) are left-normalized. Similarly, all tensors to the right of the orthogonality center (B) are said to be right-normalized. Contraction of any left- or right-normalized matrix with itself leaving only two indices corresponding to a_i uncontracted yields an identity matrix.
Eq. (<ref>) can be represented with Penrose's graphical notation as in Fig. <ref><cit.>. Vertical lines on the MPS correspond to the σ indices and a are the horizontal indices.
Because of how the density matrices are truncated in the MPS through the D matrix retaining the largest values, we can remark that the basis states that are kept represent the most entangled basis functions between the two partitions. Thus, the use of natural orbitals in the previous derivations and theorems now becomes a set of the most entangled basis functions between the two partitions.
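A minimal sketch of the reshape-and-SVD construction of Eq. (<ref>) for a small, randomly chosen state (purely illustrative; real computations use an entanglement renormalization algorithm rather than the full state vector) is given below.

```python
import numpy as np

rng = np.random.default_rng(4)

# Decompose a random state of N spin-1/2 sites into an (exact, untruncated) MPS
# by repeated reshapes and singular value decompositions.
N, d = 6, 2
psi = rng.normal(size=d**N) + 1j * rng.normal(size=d**N)
psi /= np.linalg.norm(psi)

tensors, rest, bond = [], psi.reshape(1, -1), 1
for site in range(N - 1):
    mat = rest.reshape(bond * d, -1)            # group (a_{i-1}, sigma_i) against the rest
    U, S, Vh = np.linalg.svd(mat, full_matrices=False)
    tensors.append(U.reshape(bond, d, -1))      # left-normalized A-tensor
    rest, bond = np.diag(S) @ Vh, S.size        # weights move toward the right
tensors.append(rest.reshape(bond, d, 1))        # last tensor carries the remaining weight

# Recontract to verify the decomposition is exact.
out = tensors[0]
for A in tensors[1:]:
    out = np.tensordot(out, A, axes=([-1], [0]))
print("max bond dimension   :", max(t.shape[0] for t in tensors))
print("reconstruction exact :", np.allclose(out.reshape(-1), psi))
```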
§.§ Matrix product states represent the ground-state faithfully
In a seminal work in Ref. verstraete2006matrix, it was demonstrated that the MPS can represent the true ground state with a low error. The rigorous result is established with a bound on the Renyi entropy in relation to the truncation error of the MPS. The argument demonstrates that locally entangled models are efficiently described by the MPS ansatz and there is a generalization to the MPS in higher dimensions and connections to the area law of entanglement.
The techniques used in this paper veer closer to those used in quantum chemistry, making use of the MPS's relationship to the full density matrix. We recast the results of Ref. verstraete2006matrix into the tools used here in case it is useful.
Consider a wavefunction of N sites. Reshaping the N degrees of freedom into two groups, a left group and a right group allows us to take a singular value decomposition of the form (we drop the dagger from V when writing it in terms of tensor components for clarity)
ψ_σ_1…σ_N=U^σ_1…σ_j_a_j-1D_a_j-1a_jV_a_j^σ_j+1…σ_N
where σ_i represents the degrees of freedom locally on each site and the wavefunction was decomposed on the jth bond. Raised and lowered indices do not mean anything and are only used for clarity.
The decomposition according to the SVD gives the elements necessary to construct the partial density matrix for the left
ρ_L=UD^2U^†
and right
ρ_R=VD^2V^†
of the system <cit.>. The occupations of the natural orbitals are contained in the D^2 matrices and are known to decay rapidly. Note that if we had the expectation value of the Hamiltonian, H,
E=Tr(ρ H)=∑_iλ_ih_ii
where the basis of the natural orbitals was used to make the density matrix diagonal. The orbital occupations of the density matrix are ρ_ii=λ_i, ordered as 1≥λ_1≥λ_2≥λ_3≥λ_4…≥λ_N≥0, and assumed to decay rapidly, an assumption we use throughout.
Because the occupation values of the density matrix decay rapidly, the entanglement, S=-Tr(ρlnρ) is well approximated. Thus, if the entanglement is low, then the truncation error is also low <cit.>.
This argument we use here leaves out the extensions to the area law of entanglement, but we find these methods useful for the extension to the bundled MPS case.
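The following sketch computes the occupations of ρ_L from a bipartition SVD, together with the entanglement entropy and the truncation error at a chosen bond dimension. A random state is used purely for illustration; it is far from a low-entangled ground state, for which the occupations would decay much faster.

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, m = 8, 2, 8

psi = rng.normal(size=d**N)
psi /= np.linalg.norm(psi)

# Cut the chain in half: psi = U D V^dagger, so rho_L = U D^2 U^dagger.
U, D, Vh = np.linalg.svd(psi.reshape(d**(N // 2), -1), full_matrices=False)
lam = D**2                                    # occupations lambda_i, summing to 1

entropy = -np.sum(lam * np.log(lam))          # S = -Tr(rho ln rho)
trunc_err = lam[m:].sum()                     # weight discarded at bond dimension m
print(f"S = {entropy:.4f},  truncation error for m = {m}: {trunc_err:.3e}")
```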
§.§ Bundled matrix product states
The bundled MPS represents several density matrices considered together but independently. We note a key distinction: when writing a wavefunction for two excitations, Ψ=(ψ_1+ψ_2)/√(2), the states individually would form pure density matrices, but the combination is mixed. The density matrices for each state in the bundle would still be pure. That the wavefunctions are considered independently from each other is the key difference between the bundled and ensemble density matrices. The most common form will be a set of mixed density matrices (with or without normalization of the trace to a value of 1).
An ensemble of excitations can be added to the MPS with the addition of another index ξ on the orthogonality center.
|Ψ⟩=∑_{σ_i},
{a_i}, ξ
A^σ_1_a_1A^σ_2_a_1a_2… D^σ_iξ_a_i-1a_i…
B^σ_N_a_N-1|σ_1…σ_N⟩|ξ⟩
where ψ_ξ represents g∈ℕ excitations indexed by ξ. This is also referred to as a bundled MPS.
§.§ Construction of the bundled matrix product state
An explicit construction of the exact bundled MPS was demonstrated in Ref. baker2024direct from a group of MPSs. Note that the construction in Ref. baker2024direct over-determines the form of the left- and right-normalized matrices. The ability to compress the bundled MPS is the main point illuminated by the results in this paper ( i.e., many states are similar enough to reduce the size of the left- and right-normalized tensors).
An alternative derivation to Ref. baker2024direct can be found from the traditional form of a set of excitations as a d^N× g matrix for N sites, size of the physical index d, and number of excitations g, giving a complex coefficient, c_σ_1σ_2…σ_N^ξ where σ_i is defined as in the main text to index the physical degrees of freedom on a site i and ξ indexes the excitations.
The task to separate the indices follows the same procedure as the derivation of the MPS from a single wavefunction <cit.>. The first step is to reshape the matrix to isolate σ_1 from the rest of the indices and perform a singular value decomposition.
|Ψ⟩=∑_σ_1…σ_N
a_1ξA^σ_1_a_1c^ξσ_2…σ_N_a_1|σ_1…σ_N⟩|ξ⟩
Continuing with more SVDs and reshapes, we discover more terms in the bundled MPS until recovering Eq. (<ref>).
The importance of this exercise is to discover the size of the bond dimension at the ends of the bundled MPS. Recall that for the single MPS that the maximum size of the bond dimension in the exact case (without truncation) scales as
m_max=min(∏_x=1^id_x,∏_x=i+1^N_sd_x)
for bond i and physical index of size d_x on bond x. This is because the SVD is chosen with a convention that the size of the D matrix is the minimum of the two dimensions of the input matrix <cit.>.
A similar expression can be derived for the bundled MPS. The only difference is that when gauged to a site j, the excitation index can be thought of as multiplying the bond dimension for the purposes of this size counting argument. Thus, redefining Eq. (<ref>) with d_j→ g· d_j will be sufficient.
The main point of this exercise is to show that when the orthogonality center is gauged to the first or last site in the system, one half of the system has the same size bond dimensions as the single MPS. The other half of the system has a larger bond dimension dependent on g.
This procedure is infeasible beyond small systems since the exact wavefunctions cannot be determined efficiently by exact diagonalization, owing to the exponential growth of the Hamiltonian operator with the number of sites <cit.>.
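The bond-dimension counting argument above can be sketched directly; the routine below evaluates Eq. (<ref>) with the replacement d_j→ g· d_j at the site carrying the excitation index (illustrative counting only).

```python
import numpy as np

def max_bond_dims(N, d, g=1, center=0):
    """Exact (untruncated) bond dimensions of a (bundled) MPS of N sites with
    physical dimension d; the excitation index of size g is counted as if it
    multiplied the physical dimension of the orthogonality-center site."""
    dims = [d] * N
    dims[center] *= g
    return [min(int(np.prod(dims[:i + 1])), int(np.prod(dims[i + 1:])))
            for i in range(N - 1)]

print("single MPS            :", max_bond_dims(6, 2))
print("bundle g=4, center 0  :", max_bond_dims(6, 2, g=4, center=0))
print("bundle g=4, center N-1:", max_bond_dims(6, 2, g=4, center=5))
```

As the last two lines show, one half of the chain keeps the single-MPS bond dimensions while the other half grows with g, in line with the discussion above.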
§.§ Overlap matrices of system partitions
The overlap matrices representing the left and right partitions of the system can be constructed from the left- and right-normalized tensors of the MPS.
The wavefunction ψ may be decomposed with an SVD as (with the matrix D given on a bond, not a site)
|ψ⟩=U^σ_1σ_2…σ_i_a_i-1D_a_i-1a_iV^σ_i+1…σ_N_a_i|σ_1…σ_N⟩
The contraction of the wavefunction onto another state ϕ is given by
⟨ϕ|ψ⟩=∑_a_i-1,a_i,a'_i-1,a'_iρ^(L)_a_i-1a_i-1'D_a_i-1a_iD̃_a_i-1'a_i'ρ^(R)_a_ia_i'
and with
ρ^(L)_a_i-1a_i-1'=∑_{σ_i}Ũ^σ_1σ_2…σ_i_a_i-1'U^σ_1σ_2…σ_i_a_i-1
ρ^(R)_a_ia_i'=∑_{σ_i}V^σ_i+1…σ_N_a_iṼ^σ_i+1…σ_N_a_i'
with both of those terms represented in Fig. <ref>. The matrices D can be truncated to m≤ M, just as the density matrix was in a truncated SVD. This will define a truncated partial density matrix.
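A small sketch of Eq. (<ref>), evaluating the inner product of two random states through the left and right overlap matrices of a bipartition (a toy stand-in for the MPS contraction), is given below.

```python
import numpy as np

rng = np.random.default_rng(6)
N, d, cut = 6, 2, 3

psi = rng.normal(size=d**N); psi /= np.linalg.norm(psi)
phi = rng.normal(size=d**N); phi /= np.linalg.norm(phi)

U1, D1, Vh1 = np.linalg.svd(psi.reshape(d**cut, -1), full_matrices=False)
U2, D2, Vh2 = np.linalg.svd(phi.reshape(d**cut, -1), full_matrices=False)

rho_L = U1.T @ U2.conj()            # rho^(L)_{a a'} = sum_sigma U~*(sigma,a') U(sigma,a)
rho_R = Vh1 @ Vh2.conj().T          # rho^(R)_{a a'} = sum_sigma V(a,sigma) V~*(a',sigma)

overlap = np.einsum("a,b,ab,ab->", D1, D2, rho_L, rho_R)
print("via overlap matrices :", overlap)
print("direct inner product :", phi.conj() @ psi)
```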
The following minor statement is presented as a lemma because it directly follows from the additional definition of the partial density matrix and Thm. <ref>.
A set of relevant singular vectors for a partial density matrix of an excitation can differ at most from the most relevant singular vectors of another excitation related to their energy difference similar to Thm. <ref>'s statement for natural orbitals.
The partial density matrices are presented graphically in Fig. <ref> between the excitations. Note that immediately, one can apply the results from Thm. <ref> onto the partial density matrices. For those states on the left of the orthogonality center, the density matrix is written in the states for the left and right basis functions
ρ=
U D^2 U^† (left)
V D^2 V^† (right)
and the indices on each tensor are skipped here for the sake of brevity.
In order to use Eq. (<ref>) with the density matrix, the lattice sites on the whole system must be partitioned into a left (L) and right (R) group ( i.e., either (i𝒜)_L or (i𝒜)_R). The sum can then be separated into each group, and the density matrix must be represented in a basis natural to each group ( i.e., with the basis functions for the left or right as in Def. <ref>).
Δ E^(m)_αβ;γ =∑_j∈{(i𝒜)_L}C_j⟨ j|U_γ(D_α;γ^2- D_β;γ^2 ) U_γ^†|j⟩
=∑_j∈{(i𝒜)_R}C_j⟨ j|V_γ(D_α;γ^2 - D_β;γ^2)V_γ^†|j⟩
where the subscript denotes which excitation the component of the SVD belongs to for some chosen basis γ of which m states were kept. Just as in Eq. (<ref>), the Hamiltonian must be local for this expression to be valid. In the tensor network formalism, this corresponds to the MPO containing only local terms. This is because the states k from Eq. (<ref>) can also index the basis vectors of the SVD used to compose U, D, and V. This proof applies to any bond in the MPS, so there is no issue of how the MPS is gauged.
Note that the D matrices for either the α or β excitation are guaranteed to be diagonal by construction and are written in the same basis of entangled states. This is exactly the ultra-local limit from Def. <ref> that was used in Thm. <ref>. In this case, the orbitals {Φ^(α)_k} and {Φ_k^(β)} are the same basis. The degree of freedom that allows one to write a higher excitation is the changing of the magnitude of the occupancies ε_k for either state. Thus, the general prescription defined for the general case of natural orbitals in the quantum chemistry context is precisely the relevant case for the tensor network.
For smaller energy differences, the most relevant singular vectors will be more common when the states are similar. This means that the primary results from Thm. <ref> can be equally applied to the singular vectors here implying a maximum bound on the number of differences in D. The singular vectors are also orthogonal in this case, so this is different from Thm. <ref> in that the overlap of the singular vectors is not needed.
The reduction here applies equally well for pure or mixed states, although in a given computation the states computed are a mixed state representation of a given eigenvalue. Combined with the previous results, the main statement here applies to all local Hamiltonians.
We are then ready to state a central theorem to the paper.
A bundle of states in a tensor network containing an energy interval [E_α,E_β] (Δ E_αβ≈0) and with δ^(α)≪1 and δ^(β)≪1 will not require a larger bond dimension m to add a similar state δ^(γ)≪1 with Δ E_αγ≈0. It will cost at most m' more states (the number required to describe γ with δ^(γ) of some magnitude) to add a dissimilar state of any energy difference.
This immediately follows from the results in Sec. <ref> since the energy of the new state on the interval will have a small energy difference with the other states on the compact interval. The summary statement is that the bundled MPS is not necessarily more expensive to solve for if a suitable initial state is solved and then low energy excitations are found.
§.§ Discussion of states with different symmetries
How many excitations are similar is highly model dependent. While it is true that the next excitation does not need to be similar, it is true that the set of all states on a compact interval contains similar states in most models. So, there should be an expected reduction in the size of the bond dimension.
Consider two classical MPSs (m=1), one with a state with 1 fermion and another with 10 fermions. The overlap between these two states will definitely be zero because of the quantum number symmetries. As the bond dimension of the MPS is grown (as in a DMRG computation) and each tensor contains more blocks each with different symmetries, it is not guaranteed that the symmetry sectors will have much overlap upon contraction of the full network. From this argument, it can be reasoned, based on symmetries alone, that the singular values will not have much overlap between the two excitations. This is even true for a state with 1 fermion and another with 2 fermions with all fermions on the first site and if the net fluxes of the quantum number symmetries are assigned on the last site.
Conservatively, we can state that the addition of a new state into the bundled MPS whose symmetry sector is not already represented will require another m states to be added onto the link index. So, the scaling appears as O(mN_sym.); however, this is a generous upper bound because of similarities between some symmetry sectors.
Take for example an ensemble containing one excitation with a single spin-up fermion and another excitation with a single spin-down fermion for a spin-symmetric Hamiltonian ( i.e., no magnetic field or other effect applied). These two symmetry sectors in the Hamiltonian are identical blocks by definition. The wavefunctions, however, while not having the same symmetry, have many Hamiltonian elements in common, and the singular vectors of one are the conjugates of those of the other.
Another example would be for a chain of entirely up spins and another state of entirely down spins. The quantum number symmetries on each link index show that the spaces in which the tensors exist is completely orthogonal to each other in general.
§ EXAMPLES
To illustrate some of the concepts presented in the previous sections, we focus on three models and determine some relevant quantities. All models are solved with the DMRjulia library.
§.§ Transverse field Ising model
We consider the case of the transverse field Ising model, defined as
H=∑_iσ_i^z·σ_i+1^z+h_xσ^x_i
where
σ^x=([ 0 1; 1 0 ]) and σ^z=([ 1 0; 0 -1 ])
with subscripts indicating the matrix belongs to a Pauli string of Kronecker products O_i=I⊗ I⊗…⊗ O⊗…⊗ I with O in the ith position. The model exhibits a phase transition at h_x=1, which creates a gapless spectrum <cit.>. We note that we focus on the properties of the bundled MPS in this paper and not the algorithm used to solve it, which in the most general case should be considered undecidable (in the sense of the halting problem) when finding the spectral gap of the Hamiltonian <cit.>, meaning that finding the right method to solve the bundled MPS is an open challenge for arbitrary systems (including three-dimensional models).
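For reference, the Hamiltonian of Eq. (<ref>) can be built explicitly by Kronecker products for small chains; the sketch below is an exact-diagonalization stand-in and not the DMRjulia solver used for the results reported here.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def pauli_string(op, site, N):
    """Kronecker product I x ... x op x ... x I with op at position `site`."""
    ops = [I2] * N
    ops[site] = op
    return reduce(np.kron, ops)

def tfi_hamiltonian(N, hx):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H += pauli_string(sz, i, N) @ pauli_string(sz, i + 1, N)
    for i in range(N):
        H += hx * pauli_string(sx, i, N)
    return H

E = np.linalg.eigvalsh(tfi_hamiltonian(8, hx=1.0))
print("lowest excitation gaps:", np.round(E[1:5] - E[0], 4))
```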
One way to visualize the relevant states in the system for different energies is to form the overlap matrix, Γ from Fig. <ref> and multiply the D matrices contracted onto the link index, giving
Γ_ij = Γ_a_i'a_i=∑_a_i-1,a_i,a'_i-1,a'_iρ^(L)_a_i-1a_i-1'D_a_i-1a_iD̃_a_i-1'a_i'
This ensures that not only will we be able to see the overlap between two of the basis states used in two different bundles, but we will also see how relevant each is for the overall answer. All plots are shown with the logarithm of the absolute value of each element of the overlap matrix Γ. Matrices are truncated so that no singular values of zero are represented.
Brighter colours indicate that the matrix element is important and should not be truncated. Darker colours indicate that the matrix element is small. Truncating an entire row or column is possible in the normal evaluation of the reduced bond dimension of a system. Thus, we look in the figures for a row or column where (nearly) all of the elements are a darker colour. This means that the row or column can be removed without losing precision in the bundled MPS.
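A simplified sketch of how such a weighted overlap matrix and its log-scale display can be formed is shown below; two random states stand in for the two bundles, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, d, cut = 8, 2, 4

psi = rng.normal(size=d**N); psi /= np.linalg.norm(psi)
phi = rng.normal(size=d**N); phi /= np.linalg.norm(phi)

U1, D1, _ = np.linalg.svd(psi.reshape(d**cut, -1), full_matrices=False)
U2, D2, _ = np.linalg.svd(phi.reshape(d**cut, -1), full_matrices=False)

rho_L = U1.T @ U2.conj()                       # left overlap matrix
Gamma = (D1[:, None] * rho_L) * D2[None, :]    # weight rows/columns by both spectra
log_abs = np.log10(np.abs(Gamma) + 1e-16)      # what the figures display

# A row whose largest |Gamma| entry is small could be truncated without much loss.
row_weight = np.abs(Gamma).max(axis=1)
print("least relevant row:", row_weight.argmin(), "with weight", row_weight.min())
```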
The results in Fig. <ref> tell the same story as the theorems presented previously. The larger the energy difference between states, the more states in this measure have a higher weight (lighter color).
The general trend is that the larger the energy difference, the fewer rows and columns that can be truncated.
Results for the critical model in Fig. <ref> are largely the same but the bond dimension is larger because of the gapless eigenspectrum of the model.
§.§ Heisenberg model
We can perform the same analysis with an XXZ model (Δ=1) of the form
H=∑_i𝐒_i·𝐒_i+1
where 𝐒=⟨ S^x,S^y,S^z⟩=1/2⟨σ^x,σ^y,σ^z⟩ and the commutator [σ^x,σ^z]=-2iσ^y<cit.>.
Figure <ref> shows the weighted overlap matrix. This time, groups of 10 eigenstates are shown. The Heisenberg model has a well-known symmetry structure that creates a blockier appearance for the graph. However, the main message of the analysis remains the same: larger energy differences have more high-weight states. Eigenstates that are close in energy have many basis states in common when bundled together, and thus the bond dimension can be truncated.
§ CONCLUSION
We introduced the bundled density matrix, a set of density matrices that are independent but written in a common basis. We showed that the difference in the truncation errors constitute a practical metric to determine a notion of distance between the density matrices. It was shown that for local system, the energy difference in that basis was also a practical metric to describe a notion of distance between density matrices in the bundle.
The larger the energy difference between the density matrices, the larger the difference in the density matrices are. One might expect that as one bundles excitations that are further into the bulk that the volume law entanglement would dominate and therefore drive the bond dimension higher. What these results suggest, in effect, is that this is only guaranteed for large energy differences. For small energy differences, the density matrices can either be similar, where the m most relevant states constitute a good basis for low-energy excitations. Or, the states can be dissimilar, such as in the case of a different symmetry sector, where the truncation error is large.
We extended these ideas to the bundled matrix product state where the results demonstrate that the bond dimension is lower for similar states. Bundled matrix product states therefore may support low energy excitations in the same symmetry sector without a costly increase in the bond dimension. To add a dissimilar state, one must pay the cost of the matrix product state representation of that state. Large energy differences in bundled matrix product states have effectively no degrees of freedom overlapping and are not very different from two separate MPSs.
Definitions for similar states and the exact applicability of the local limit used in this paper can certainly be expanded and modified, and we encourage the community to work with the concept of the bundled density matrix in other contexts including quantum information.
§ ACKNOWLEDGEMENTS
This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program.
The Chair position in the area of Quantum Computing for Modelling of Molecules and Materials is hosted by the Departments of Physics & Astronomy and of Chemistry at the University of Victoria.
N.S. acknowledges the NSERC CREATE in Quantum Computing Program (Grant Number 543245).
This work is supported by a start-up grant from the Faculty of Science at the University of Victoria.
This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grants RGPIN-2023-05510 and DGECR-2023-00026.
§ TWO EQUIVALENT FORMS TO DETERMINE THE DENSITY MATRIX
The definition of the one-body reduced density matrix is
ρ(𝐱,𝐱')= ∫…∫ψ^*(𝐱,𝐫_2,…,𝐫_N_e)
×ψ(𝐱',𝐫_2,…,𝐫_N_e)d𝐫_2… d𝐫_N_e
which works well for applications in quantum chemistry. In the graphical notation used in the main section of this paper, the vertical physical indices would have to be kept track of as separate but all other physical indices would be contracted.
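A discretized sketch of this contraction for a small, distinguishable-particle toy wavefunction (no antisymmetrization, made-up grid sizes) is given below.

```python
import numpy as np

rng = np.random.default_rng(8)
L, Ne = 6, 3                      # grid points per coordinate, number of particles

psi = rng.normal(size=(L,) * Ne)
psi /= np.linalg.norm(psi)

# rho(x, x') = sum over the remaining coordinates of psi*(x, r2, ...) psi(x', r2, ...)
rho = np.tensordot(psi.conj(), psi,
                   axes=(list(range(1, Ne)), list(range(1, Ne))))
print("shape:", rho.shape, " trace:", rho.trace())   # trace = 1 for a normalized state
```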
Using this form for a tensor network immediately incurs a large computational cost. The tensor network form would require that we isolate physical indices and perform a contraction that winds up being exponential for large |i-j|. Neither of these is desirable for the tensor network because it was specifically formulated not to incur the exponential cost of the full quantum problem.
Another definition of the density matrix with elements
ρ_ij=⟨ψ| c^†_iσ c_jσ|ψ⟩
for fermions <cit.> is often cited. This form is equivalent to the above form as we now demonstrate.
Consider the definition of the wavefunction as a superposition in some position. We denote the local degrees of freedom on each cite i by σ_i. The general decomposition of the wavefunction becomes
|ψ⟩=∑_σ_1…σ_Nw_σ_1…σ_N|σ_1…σ_N⟩
for N sites. The application of the operator c_j gives
c_j|ψ⟩=∑_σ_1…σ_Nw_σ_1…σ_N|σ_1…σ̅_j…σ_N⟩
where σ̅_j is the modified value of σ_j upon evaluation of the operator. In the case of a spin-half system, a value of ↑ would go to ↓.
A similar form can be derived for c^†_i on the dual vectors
⟨ψ| c^†_i=∑_σ_1…σ_N⟨σ_1…σ̅_i…σ_N|w_σ_1…σ_N^*
Neither this sum nor the dual expression in Eq. (<ref>) has the same number of terms as in Eq. (<ref>) because the annihilation operator creates some zero terms from the original expression. The final expectation value becomes
⟨ψ| c^†_i c_j|ψ⟩=∑_σ_iσ_j⟨σ̅_iσ_j|W_σ_iσ_j|σ_iσ̅_j⟩.
where
W_σ_iσ_j=∑_{σ_1…σ_N}\{σ_iσ_j}w^*_σ_1…σ_Nw_σ_1…σ_N
where the summation is over all variables but not σ_i or σ_j.
Returning now to Eq. (<ref>), it can be seen that the replacement of the 𝐱 and 𝐱' terms would be represented with the ket form as
ρ(𝐱,𝐱') ⇒ ρ(σ_𝐱,σ_𝐱') discretize⟶ ρ(σ_i,σ_j)
= ∑_{σ_1…σ_N}\{σ_iσ_j}⟨σ_1…σ_N|W_σ_iσ_j|σ_1…σ_N⟩
and the primed index on i or j is a reinterpretation of the continuous, real-space position variable 𝐱 into the lattice variable for some site on the discrete lattice space. Eq. (<ref>) then reduces to Eq. (<ref>). Thus, the two forms of the density matrix are the same. |
http://arxiv.org/abs/2409.03520v1 | 20240905133304 | Speaker and Style Disentanglement of Speech Based on Contrastive Predictive Coding Supported Factorized Variational Autoencoder | [
"Yuying Xie",
"Michael Kuhlmann",
"Frederik Rautenberg",
"Zheng-Hua Tan",
"Reinhold Haeb-Umbach"
] | eess.AS | [
"eess.AS",
"eess.SP"
] |
Speaker and Style Disentanglement of Speech Based on Contrastive Predictive Coding Supported Factorized Variational Autoencoder
Yuying Xie1, Michael Kuhlmann2, Frederik Rautenberg2 , Zheng-Hua Tan1, Reinhold Haeb-Umbach2
1. Department of Electronic Systems, Aalborg University, Denmark
2. Department of Communications Engineering, Paderborn University, Germany
===========================================================================================================================================================================================================================================================
§ ABSTRACT
Speech signals encompass various information across multiple levels including content, speaker, and style.
Disentanglement of these information, although challenging, is important for applications such as voice conversion.
The contrastive predictive coding supported factorized variational autoencoder achieves unsupervised disentanglement of a speech signal into speaker and content embeddings by assuming speaker info to be temporally more stable than content-induced variations.
However, this assumption may introduce other temporally stable information into the speaker embeddings, like environment or emotion, which we call style.
In this work, we propose a method to further disentangle non-content features into distinct speaker and style features, notably by leveraging readily accessible and well-defined speaker labels without the necessity for style labels.
Experimental results validate the proposed method's effectiveness in extracting disentangled features, thereby facilitating speaker, style, or combined speaker-style conversion.
disentangled representation learning, voice conversion
§ INTRODUCTION
Disentangled representation learning aims to extract features which can represent different attributes in the observed data.
In speech signal processing, disentangled representation learning offers solutions extensively <cit.>, including voice conversion.
One essential property of speech is that the speaker's traits should remain consistent within a single utterance, while the content rapidly varies over time. This characteristic then can serve as a prior for developing an end-to-end voice conversion model to extract disentangled content and speaker features separately, as voice conversion aims to change the speaker trait and preserve the content of an utterance.
Ebbers et al. <cit.> assume that the utterance-level information at the current frame is similar to that from the frames 1s before and after in the same utterance, and different between utterances in a batch.
This assumption is then introduced as inductive bias in the fvae to disentangle speaker from content information via a completely unsupervised fashion, requiring neither speaker labels nor a transcription.
Specifically, during training, cpc loss <cit.> is applied on the utterance-level embeddings, by regarding feature of current frame as anchor, features from the same utterance as positive pairs and from other utterance in the batch as negative pairs.
Adversarial training is working on the content embedding for independence of extracted utterance-level and content features.
The extracted utterance-level feature in fvae are used to represent speaker identity only.
Even though utterance-level features are widely used for speaker identity representation <cit.>, they can also contain other temporally stable information.
For instance, channel, emotion, or environment (e.g., room impulse response) information should be as stable as speaker properties within one utterance.
Results in the works <cit.> about speaker embedding extraction also prove this point.
For instance, Xia et al. <cit.> utilize momentum contrastive (MoCo) <cit.> learning on speaker embedding extraction.
Results on the Voxceleb test set show that extracted features cluster not only on speaker identity, but also on session id.
Besides, Cho et al. <cit.> apply DIstillation with NO labels (DINO) <cit.> to extract utterance-level embeddings, and results show that the extracted embeddings can not only represent speakers, but also emotions.
Inspired by the fact that speaker traits are not the only information that is stable at the utterance level, this work
further disentangles the utterance-level features from fvae <cit.> into speaker embedding and other temporally stable embeddings.
We call the other temporally stable embeddings as `style' embeddings hereafter.
The contribution of this work is as follows:
First, we posit that the speech signal is generated from three factors: speaker identity, style and content.
Second, this work extracts features representing these three factors in a hierarchical disentanglement manner.
The proposed method only needs speaker labels, which are cheap and easy to access.
Third, the proposed method is applicable to different definitions of style embeddings.
In this work, acoustic environment and emotion have been chosen as different cases.
For the environment case, the performance of the proposed method is tested on both synthetic datasets and real unseen challenging datasets.
For emotion, we evaluate the performance on a cross-language dataset.
Finally, we also investigated the application of the proposed method in style and speaker conversion using real-recorded datasets, both in clean and reverberant conditions. Demos showcasing these applications are provided.[https://yuxi6842.github.io/speaker_style_disen.github.io/]
§ RELATED WORK
For fine-grained style control in speech synthesis and conversion, style disentanglement is applied to obtain more expressive embeddings. Li et al. <cit.> propose a Tacotron2-based framework for cross-speaker and cross-emotion speech synthesis.
Disentanglement is applied to get speaker-irrelevant and emotion-discriminative embeddings for emotion control.
Although the model proposed in <cit.> can achieve cross-speaker emotion synthesis, only three speakers are contained in total.
Du et al. <cit.> uses disentanglement to obtain speaker, content and emotion embeddings simultaneously for expressive voice conversion.
While the model in <cit.> uses mutual information loss for disentanglement, the evaluation is not sufficiently comprehensive, especially lacking in disentanglement analysis.
Besides, the extracted speaker features perform speaker verification poorly in <cit.>.
Except for emotion, environment information like noise and reverberation has also been chosen as another controllable factor.
Omran et al. <cit.> shows an information bottleneck (IB) based way to split speech and noise or reverberation. Masking is used on specific dimensions in the latent variable space, while reconstruction loss controls the particular information flow.
However, similar to other IB based methods, the performance depends heavily on bottleneck design in <cit.>.
Another work in <cit.> uses disentanglement to realize environment conversion, in which only four seen environments are considered in their work.
§ PROPOSED METHOD
§.§ Theoretical Explanation
Disentanglement aims to find codes to represent underlying causal factors <cit.>.
The original fvae actually assumes the log-mel feature 𝐗=[𝐱_1, …, 𝐱_T] is generated from independent latent variables 𝐒 and 𝐙 via a random generative process:
𝐗=g_1(𝐒,𝐙),
in which 𝐒 and 𝐙 represent the utterance-level characteristic and content, respectively.
In this work, we further assume latent variable 𝐒 is generated from a random process with
𝐒=g_2(𝐒^spk, 𝐒^sty),
in which 𝐒^spk and 𝐒^sty are independent latent variables to denote speaker identity and style in one utterance.
Thus eq (<ref>) can also be written as:
𝐗=g_1(𝐒,𝐙)=g_1(g_2(𝐒^spk, 𝐒^sty),𝐙),
and can be further simplified as:
𝐗=g(𝐒^spk, 𝐒^sty,𝐙).
Generation process of the proposed method illustrated in fig <ref> is learned as eq. (<ref>).
Meanwhile, inference process in the proposed method is based on eq (<ref>) following a hierarchical structure as in fig <ref>.
The first step, inherited from fvae, learns to decompose observation 𝐗 into latent variables 𝐒 and 𝐙 according to the data inherent properties.
The second step further disentangles 𝐒 into 𝐒^sty and 𝐒^spk by introducing low-cost speaker labels in training.
Adversarial learning is applied in both steps to achieve independence.
As a self-supervised learning method has been used in the first disentanglement step, the proposed hierarchical structure reduces the labeling cost of disentangling multiple factors.
§.§ Structure
The structure of the proposed method is shown in <ref>.
Compared with FVAE, the new component, the proposed disentanglement module is highlighted in blue in <ref>.
The proposed module contains a speaker encoder, a speaker classifier, a style encoder and an adversarial speaker classifier.
Additionally, like FVAE, both vtlp <cit.> and in <cit.> are used to partially remove speaker information at the content encoder's input.
The utterance-level encoder, the content encoder, the adversarial CPC classifier, and the decoder are same with <cit.>.
The first disentanglement step, inherited from FVAE, is applied on 𝐗 to get utterance-level feature 𝐒=[𝐬_1, …, 𝐬_T] and content feature 𝐙=[𝐳_1, …, 𝐳_N].
The contrastive learning method is firstly applied to 𝐒, to ensure that the extracted utterance-level feature is temporally stable in one utterance.
Regarding embedding 𝐬_t as an anchor, 𝐬_t+τ from the same utterance as the positive pair, and 𝐬̃_t+τ from other utterances in the training batch ℬ as the negative pair, cpc loss is then calculated on 𝐒 as:
L_cpc = -1/(T-τ) ∑_t=1^T-τ log[ exp(𝐬_t+τ^T·𝐬_t) / ∑_ℬexp(𝐬̃_t+τ^T·𝐬_t) ]
in which T is the frame length of input sequence 𝐗, and τ=80 (corresponding to 1s) is the time lag.
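A minimal sketch of this loss (assuming the standard InfoNCE form, with illustrative batch, length and embedding sizes) could look as follows.

```python
import numpy as np

def cpc_loss(S, tau=80):
    """CPC/InfoNCE-style loss over a batch of utterance-level embedding sequences
    S of shape (B, T, D): frame t is the anchor, frame t+tau of the same utterance
    is the positive, the other utterances in the batch provide the negatives."""
    B, T, D = S.shape
    loss = 0.0
    for t in range(T - tau):
        anchors = S[:, t, :]                     # (B, D)
        futures = S[:, t + tau, :]               # (B, D)
        logits = futures @ anchors.T             # logits[b', b] = s_{t+tau}^(b') . s_t^(b)
        log_probs = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
        loss -= np.diag(log_probs).mean()        # positive pair sits on the diagonal
    return loss / (T - tau)

S = np.random.default_rng(0).normal(size=(4, 200, 16)) * 0.1
print("L_cpc =", cpc_loss(S))
```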
Variational autoencoder (VAE) <cit.> is applied on modeling content embedding 𝐙 as an information bottleneck approach for improving disentanglement.
The length of extracted 𝐙 is N. N≤ T as there may be downsampling in content encoder.
Each vector 𝐳_n in 𝐙 is regarded as a stochastic variable with prior p(𝐳_n)=𝒩(0, 𝐈) and posterior q(𝐳_n)=𝒩(μ_n, diag(σ_n^2)). Parameters of q(𝐳_n) are generated from the content encoder and optimized with KL divergence:
L_kld = 1/N∑_n=1^N KL(q(𝐳_n) ‖ p(𝐳_n)).
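Since both q(𝐳_n) and p(𝐳_n) are Gaussian, this term has the well-known closed form sketched below (shapes are illustrative assumptions).

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the N
    content frames; mu and log_var have shape (N, D)."""
    per_frame = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return per_frame.mean()

mu = np.zeros((10, 32)); log_var = np.zeros((10, 32))
print(kl_to_standard_normal(mu, log_var))   # 0.0 when the posterior equals the prior
```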
Besides, an adversarial CPC classifier is applied on 𝐙 for independence of 𝐒, cf. <cit.>.
The second disentanglement is then operated on the frame-wise utterance-level features 𝐒 via the proposed new module.
Speaker and style encoders have same structure built on one-dimensional CNN layers.
Denote
𝐒^spk=[𝐬^spk_1, …, 𝐬^spk_T] and
𝐒^sty=[𝐬^sty_1, …, 𝐬^sty_T] as the outputs of the speaker encoder and the style encoder, respectively.
Speaker classification then works frame-wise on 𝐬^spk_t to encourage extracting speaker information alone.
Meanwhile, frame-wise adversarial speaker classification is applied on 𝐬^sty_t to ensure that the extracted style embedding is less dependent on the speaker trait.
A gradient reversal layer <cit.> is applied between the output layer of the style encoder and the adversarial speaker classifier.
Cross-entropy loss is used here for classification on both 𝐬^spk_t and 𝐬^sty_t.
To get aggregated information over time and discard unnecessary details, the vectors 𝐬^spk and 𝐬^sty are obtained by applying global average pooling (GAP) on 𝐒^spk and 𝐒^sty.
The decoder takes as input a matrix where 𝐬^spk and 𝐬^sty are repeated N times along the time axis and then concatenated with the content embedding 𝐙.
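A sketch of how this decoder input could be assembled is given below; all dimensions are assumptions chosen for illustration only.

```python
import numpy as np

T, N, D_spk, D_sty, D_z = 320, 80, 128, 128, 64
S_spk = np.random.randn(D_spk, T)      # frame-wise speaker embeddings
S_sty = np.random.randn(D_sty, T)      # frame-wise style embeddings
Z = np.random.randn(D_z, N)            # content embeddings (possibly downsampled)

s_spk = S_spk.mean(axis=1, keepdims=True)   # global average pooling -> (D_spk, 1)
s_sty = S_sty.mean(axis=1, keepdims=True)

decoder_in = np.concatenate([np.repeat(s_spk, N, axis=1),
                             np.repeat(s_sty, N, axis=1),
                             Z], axis=0)     # (D_spk + D_sty + D_z, N)
print(decoder_in.shape)
```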
To get sharper, less-oversmoothed output, reconstruction loss here is XSigmoid function <cit.>:
L_rec = 1/T∑_t‖|𝐱̂_t - 𝐱_t| (2 σ(𝐱̂_t - 𝐱_t) - 1)‖_1 ,
in which σ(·) is the sigmoid function.
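A minimal sketch of this reconstruction loss, with made-up feature shapes, is shown below.

```python
import numpy as np

def xsigmoid_loss(x_hat, x):
    """XSigmoid reconstruction loss, averaged over the T frames of (T, F) features."""
    d = x_hat - x
    per_frame = np.sum(np.abs(d * (2.0 / (1.0 + np.exp(-d)) - 1.0)), axis=-1)
    return per_frame.mean()

x = np.random.default_rng(1).normal(size=(100, 80))
print(xsigmoid_loss(x + 0.1, x))
```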
Denote by θ_enc^utt, θ_enc^spk, θ_enc^sty, and θ_enc^cont the parameter sets of the utterance encoder, speaker encoder, style encoder and content encoder,
and the union of all encoders' parameter sets as
θ_enc=θ_enc^utt∪θ_enc^spk∪θ_enc^sty∪θ_enc^cont.
The parameter sets of the speaker classifier and the adversarial speaker classifier are θ_clf^spk and θ_adv_clf^spk, while the parameter set of the decoder is θ_dec.
The total loss function is then written as:
L(θ_enc,
θ_dec,
θ^spk_clf, θ^spk_adv_clf)
= L_rec(g_𝐗̂(𝐗; θ_enc,θ_dec),𝐗)
+ λ_s L_cpc(g_𝐒(𝐗; θ_enc^utt))
+ β L_kld(g_𝐙(𝐗;θ_enc^cont))
- λ_z L_cpc (R(g_𝐙(𝐗; θ_enc^cont)))
+ L_CE(g_𝐏^spk(g_𝐒^spk(𝐗;θ_enc^utt,θ_enc^spk);θ_clf^spk),𝐘)
- L_CE(g_𝐏^sty(R(g_𝐒^sty(𝐗; θ_enc^utt,θ_enc^sty));θ_adv_clf^spk),𝐘),
in which 𝐘 is the true speaker label one-hot matrix, 𝐏^spk and 𝐏^sty are speaker label predictions from speaker classifier and adversarial speaker classifier.
Mapping g_𝐲(𝐱;θ) denotes that 𝐲 = g_𝐲(𝐱;θ), where 𝐱 is the input and θ is the parameter set in the corresponding neural networks.
For a better illustration of forward and backward behavior,
gradient reversal layer is represented via a 'pseudo-function' R(·) as <cit.>:
R(𝐱) = 𝐱, dR(𝐱)/d𝐱 = - 𝐈,
where 𝐈 is an identity matrix.
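A minimal PyTorch sketch of such a gradient reversal pseudo-function (identity in the forward pass, negated gradient in the backward pass) could look as follows.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward, negated gradient backward; placed between the style
    encoder output and the adversarial speaker classifier."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

s_sty = torch.randn(8, 128, requires_grad=True)
GradReverse.apply(s_sty).sum().backward()
print(s_sty.grad[0, :3])   # gradients are negated (all -1 here)
```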
To balance the terms, λ_s, λ_z and β are coefficients in the total loss function.
§ EXPERIMENTS
§.§ Datasets
For testing the separation of speaker from environment information, the model is trained and tested with artificially reverberated data.
Besides, an unseen real-recorded dataset is chosen for testing only.
For the case of separating speaker from emotion information, a cross-language emotion dataset is used for training and testing.
LibriSpeech:
The clean speech dataset of this work is based on LibriSpeech <cit.>.
The subsets train-clean-360 (921 speakers), train-clean-100 (251 speakers) and test-clean (40 speakers) are used and convolved with real-recorded RIRs.
LibriSpeech + AIR:
The AIR dataset <cit.> contains 107 real RIR recordings and is used for data augmentation of the training set.
Each utterance from the LibriSpeech training set is convolved with 4 different randomly selected RIRs from AIR.
Thus, the training dataset in the environment case is 4 times larger than the LibriSpeech training set.
VOiCES:
To evaluate the performance on speaker verification,
VOiCES <cit.> is chosen in this work for testing only.
This dataset is recorded in four different rooms, with differently-located microphones and various directions of arrival between microphones and speakers.
In this work, only the noise-free speech subset of VOiCES_devkit is used for evaluation in a real, challenging acoustic environment.
Emotion dataset: The emotional speech dataset (ESD) <cit.> contains studio-quality recordings of 10 English and 10 Mandarin speakers in neutral style and four acted emotions (happy, angry, sad, and surprise).
Each speaker utters the same 350 utterances in all five styles, totalling 35,000 utterances.
The subset which contains all 10 English speakers and 4 styles (neutral, happy, angry, and sad) is used for training, validation and evaluation (seen condition) firstly, with split of 85%/10%/5%, respectively.
Besides, the remaining style, surprise, is used for evaluation as an unseen emotion. Moreover, the Mandarin subset is used for evaluation when both speaker and language are unseen.
§.§ Implementation Details
Following <cit.>, each update of the encoders, speaker classifier and decoder is followed by three exclusive updates of the adversarial CPC and adversarial speaker classifier.
We set β=0.01 and λ_s=λ_z=1.
We use the Adam optimizer and a learning rate of 5×10^-4. Both FVAE and the proposed method use eq. (<ref>) as the reconstruction loss for a fair comparison.
Environment
Training is based on the 'LibriSpeech + AIR' set with eq (<ref>) as loss function.
The utterance-level encoder, content encoder and decoder in this work have the same structure as in <cit.>.
The speaker encoder and style encoder both contain 3 one-dimensional CNN layers with stride equal to 1 and kernel size equal to 5 for all layers. The extracted speaker embedding and style embedding have both a dimension of 128. The speaker classifier contains one fully-connected (FC) layer, while the adversarial speaker classifier contains three FC layers with 128 hidden units each. The other settings are the same as in <cit.>.
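As an illustration, the two branch encoders and the classifiers could be instantiated as below; the input feature dimension, the ReLU activations and the number of speaker classes are assumptions not fixed by the text.

```python
# Sketch of the speaker/style branch: 3 Conv1d layers (kernel 5, stride 1),
# 128-dimensional embeddings, a 1-layer FC speaker classifier and a 3-layer
# adversarial speaker classifier with 128 hidden units per layer.
import torch.nn as nn

class BranchEncoder(nn.Module):
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(3):
            layers += [nn.Conv1d(dim, emb_dim, kernel_size=5, stride=1, padding=2),
                       nn.ReLU()]
            dim = emb_dim
        self.net = nn.Sequential(*layers)

    def forward(self, s):              # s: (B, in_dim, T) utterance-level features
        return self.net(s)             # (B, emb_dim, T); GAP over T gives the embedding

def make_classifiers(num_speakers: int, emb_dim: int = 128):
    speaker_clf = nn.Linear(emb_dim, num_speakers)      # one FC layer
    adv_speaker_clf = nn.Sequential(                     # three FC layers
        nn.Linear(emb_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, num_speakers))
    return speaker_clf, adv_speaker_clf
```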
Emotion
Training on ESD is performed for 100,000 iterations.
The networks are configured in the same way as in the environment case, except for the input of the content encoder: inspired by HuBERT <cit.>, features from a CNN-based waveform extractor are used to replace the mel spectrogram.
The whole model is firstly pre-trained on LibriTTS <cit.>.
Then the content encoder is fixed, and the other modules of the whole model are fine-tuned on ESD.
The loss functions for pretraining and fine-tuning are the same, i.e. eq (<ref>).
§.§ Evaluation under Environment Case
We firstly use t-SNE plots to visualize the extracted style and speaker embeddings.
Results are shown in fig <ref>.
The example shown is taken from the LibriSpeech test-clean set, convolved with 4 different RIRs from the AIR dataset.
These figures show that the style and speaker features form well-defined clusters according to their style and speaker labels, respectively, while speakers are spread across style clusters, and RIR-labels across speaker clusters.
This demonstrates that the proposed second disentanglement can separate speaker and environment information well.
Moreover, we test the performance of the extracted embeddings via speaker verification on the LibriSpeech test-clean set and the VOiCES dev-kit dataset.
Evaluation is based on equal error rate (EER) according to cosine similarity.
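A generic way to compute the EER from cosine-similarity trial scores is sketched below (this is an illustrative routine, not the evaluation script actually used).

```python
# Equal error rate from trial scores; labels: 1 = same speaker, 0 = different.
import numpy as np

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
    order = np.argsort(-scores)                # sweep the threshold from high to low
    labels = labels[order].astype(float)
    tp = np.cumsum(labels)
    fp = np.cumsum(1.0 - labels)
    fnr = 1.0 - tp / labels.sum()              # miss rate
    fpr = fp / (1.0 - labels).sum()            # false alarm rate
    idx = np.argmin(np.abs(fnr - fpr))         # point where the two rates cross
    return 0.5 * (fnr[idx] + fpr[idx])
```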
Results are shown in <ref>.
When dealing with the LibriSpeech test set, we found that there is a slight change of recording environment between some chapters in the original dataset.
Thus, EER is calculated separately for the case when same speaker pairs are selected within the same chapter (WC) and when selected across chapters (AC), with the latter giving larger EERs.
Compared with the original FVAE, the proposed speaker embeddings after the second disentanglement step improve the EER on the clean LibriSpeech dataset, cf. <ref>.
Even though the proposed speaker embeddings reduce the gap between WC and AC, the remaining gap may be explained by the fact that, compared to the artificially reverberated data, the environmental changes within LibriSpeech are almost negligible.
Besides, the proposed method shows effective disentanglement on the VOiCES dev-kit dataset: the speaker embedding from the proposed second disentanglement shows a large improvement, with an EER of 12.5% compared with 20.6% before the second disentanglement.
Even though VOiCES is unseen during training, this result illustrates that the second disentanglement helps to extract more robust speaker embeddings in challenging realistic conditions.
§.§ Evaluation under Emotion Case
Both speaker verification and speech emotion recognition results are shown in <ref>.
To check how the extracted features are influenced by emotion, we calculate the EER in the speaker verification experiment under two different scenarios, namely when positive speaker utterance pairs have the same emotion (within-emotion, WE) or not (across-emotion, AE).
For emotion recognition, we train a 3-layer FC classifier on the embeddings using the English speaker subset with all 5 emotional states.
In the speaker verification experiments, when speaker and language are seen during training, the speaker embedding performs better than the utterance-level features and the style embedding.
Meanwhile, the evaluation on the unseen-speaker, unseen-language (Mandarin) subset reveals that the speaker embedding is stable and less prone to the impact of emotional factors in speech. This is especially evident when the positive pairs in speaker verification involve varying emotions.
For speech emotion recognition, the style embedding shows slightly better performance when language and speaker are seen during training, and performance comparable to the utterance-level features on the Mandarin subset.
§ CONCLUSION
Inspired by existing results showing that the utterance-level feature contains not only speaker identity but also other style attributes, this work proposed a hierarchical disentanglement method based on FVAE.
In particular, the utterance-level embedding from FVAE is further decomposed into speaker and style features.
Training of this disentanglement framework needs speaker labels only, which are easy to obtain.
We evaluated the proposed method under two cases of style: environment and emotion.
For the environment case, the extracted style embeddings form clusters according to RIR labels.
The extracted speaker embeddings perform well for speaker verification, when tested on a clean speech dataset and a real-recorded reverberation dataset, indicating the effectiveness of the proposed method.
The findings from the emotion dataset demonstrate that the extracted speaker embeddings become more distinctive in terms of speaker identification and less susceptible to emotional variations.
We hope this work may contribute to future work including multi-factor voice conversion, and conversion under challenging environments.
|
http://arxiv.org/abs/2409.02605v1 | 20240904104013 | Inverse problems for quantum graph associated with square and hexagonal lattices | [
"K. Ando",
"E. Blåsten",
"P. Exner",
"H. Isozaki",
"E. Korotyaev",
"M. Lassas",
"J. Lu",
"H. Morioka"
] | math-ph | [
"math-ph",
"math.MP"
] |
A Software Visualization Approach for
Multiple Visual Output Devices
Malte Hansen
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
Heiko Bielfeldt
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
Armin Bernstetter
GEOMAR Helmholtz Centre for Ocean Research Kiel
Kiel, Germany
[email protected]
Tom Kwasnitschka
GEOMAR Helmholtz Centre for Ocean Research Kiel
Kiel, Germany
[email protected]
Wilhelm Hasselbring
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
September 9, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We solve inverse problems from the D-N map for the quantum graph on a finite domain in a square lattice and that on a hexagonal lattice, as well as inverse scattering problems from the S-matrix for a locally perturbed square lattice and a hexagonal lattice.
§ INTRODUCTION
§.§ Gel'fand problem
In the International Congress of Mathematicians at Amsterdam in 1954, I.M. Gel'fand raised the following problem (in an extended form): Let (M,g) be a compact Riemannian manifold with boundary. Let λ_0 < λ_1 ≤⋯ be the eigenvalues and φ_0(x), φ_1(x), … the associated orthonormal eigenvectors for the operator H = - c(x)^2Δ_g + V(x) on M with the Dirichlet boundary condition on ∂ M.
Then, from the knowledge of the boundary spectral data (BSD) {(λ_n,∂/∂νφ_n|_∂ M) ; n = 0, 1, 2, …}, where ν is the unit normal at the boundary, can one determine the manifold M and the operator H?
For the case of a Riemannian manifold, this problem was solved by Belishev-Kurylev <cit.> using the boundary control method developed by Belishev <cit.>. The solution is unique up to a diffeomorphism leaving ∂ M invariant, and this is the only obstruction for this problem.
§.§ Quantum graph Hamiltonian and discrete operator
We are interested in the analogue of Gel'fand's problem on graphs. There are two models in which one can do that. The first one concerns graphs Γ = {𝒱,ℰ} understood as a vertex set 𝒱 and an edge set ℰ determined by an adjacency matrix; we are then interested in the discrete operator defined on the vertex set,
Ĥû(x) = 1/μ_x∑_y ∼ x g_xy(û(y) - û(x)) + q(x)û(x), x ∈𝒱,
where y ∼ x means that x and y are the endpoints of a same edge e = e_xy∈ℰ, μ : 𝒱→ℝ_+ = (0,∞) is a weight on 𝒱, g : ℰ→ℝ_+ is a weight on ℰ and q : 𝒱→ℝ is a scalar potential. The other one concerns the so-called quantum graphs <cit.> in which the edges are identified with line segments and the Hamiltonian is a collection of one-dimensional Schrödinger operators defined on them,
H_ℰ = {h_e = - d^2/dz^2 + V_e(z), z ∈ [0,ℓ_e] }_e ∈ℰ,
with a real-valued potential V_e(z). To make such an operator self-adjoint, one has to match the functions properly at the vertices; we choose the simplest possibility, assuming the δ-coupling condition, see (<ref>) below.
It was proved in <cit.> that in the case of the discrete graph, the graph structure and the coefficients of (<ref>) are determined by BSD under the Two-Point Condition introduced there. We emphasize here that the inverse problem for the discrete graph cannot be solved without such conditions, as documented by a counterexample given in <cit.>. Recall that the knowledge of BSD is equivalent to the knowledge of the associated D-N map for all energies. The indicated result has various applications, in particular, those concerning the following three issues:
* Inverse boundary value problems for random walks.
* Inverse scattering problems for locally perturbed discrete periodic graphs.
* Inverse boundary value problems and inverse scattering problems for quantum (metric) graph.
The problem (1) has been discussed in <cit.>, where it was shown that the graph structure and the transition matrix of the random walk can be uniquely recovered from the distribution of the first passing time on the boundary, or from the observation on the boundary of one realization of the random walk.
The problem (2) is considered in <cit.>.
For a locally perturbed discrete periodic graph, the structure of the perturbed subgraph, along with the edge weight g and the potential q, can be recovered from the scattering matrix at all energies, provided that the Two-Point Condition is preserved under the perturbations.
For a fixed energy λ, the S-matrix S(λ) of the whole system and the D-N map Λ(λ) determine each other, and thus the problem is reduced to that of the bounded domain, to which we can apply the result of <cit.>. In particular, if two locally perturbed periodic lattices have the same S-matrix for all energies, we can conclude:
(i) If μ = μ', then g = g' and q = q'. (ii) If q = q', then μ = μ' and g = g'. (iii) In particular, if μ(v) = deg (v), μ'(v') = deg (v'), then g = g' and q=q'.
The well known duality between the discrete and metric graphs <cit.> allows to determine the spectrum of an equilateral `continuous' graph from that of its discrete counterpart and vice versa, and therefore the discrete Gel'fand problem is expected to play a role in inverse problems for quantum graphs. In <cit.>, we have considered such equilateral graphs, i.e. those
in which all the edges have the same length and the one-dimensional Hamiltonian has the same potential on them.
Letting C_v be the δ-coupling constant (see (<ref>) below), and d_v the degree of v ∈𝒱, it was assumed that C_v/d_v is a constant independent of v ∈𝒱. We then proved that if two such quantum graphs 𝔾_Γ and 𝔾_Γ' have the same D-N map (or the same S-matrix) for all energies, then there exists a bijection 𝔾_Γ→𝔾_Γ' preserving the edge relation. Moreover, we have d_v = d_v' and C_v = C_v' ∀ v ∈𝒱.
Therefore, the S-matrix of an equilateral quantum graph determines its graph structure.
In this paper, we continue the investigation of <cit.>, and consider the problem of determining local perturbation of C_v and V_e(z) for lattices such that ℓ_e = 1 holds for all e ∈ℰ and the degree d_v is the same for all v ∈𝒱.
Our method is different from that of <cit.> relying strongly on the explicit form of the lattice. Therefore, although we are convinced that the result will be true for a larger class of lattices, we restrict ourselves to the proof for the square and hexagonal situations.
§.§ Edge and vertex Schrödinger operators in the quantum graph
Let us recall basic facts about quantum graphs. Let Γ = {𝒱, ℰ} be a quantum (or metric) graph with a vertex set 𝒱 and edge set ℰ. We assume that each edge e has unit length and identify it with the interval [0,1]. Consider a quantum-graph Schrödinger operator, called the edge Schrödinger operator in this paper,
H_ℰ = {h_e = - d^2/dz^2 + V_e(z), z ∈ [0,1] }_e ∈ℰ,
with a real-valued potential satisfying the symmetry condition
V_e(z) ∈ L^2((0,1)), V_e(z) = V_e(1 -z), ∀ e ∈ℰ.
The generalized Kirchhoff condition, or the δ-coupling condition, is imposed[Here, v ∈ e means that v is an end point of an edge e and the derivative is conventionally taken in the outward direction.]:
∑_v ∈ eu'_e(v) = C_vu_e(v), ∀ v ∈𝒱^o := 𝒱∖∂𝒱,
C_v being a real constant. Here, ∂𝒱 is the boundary of 𝒱, which will be chosen suitably later. Let ϕ_e(z,λ) be the solution of
{ (- d^2/dz^2 + V_e(z))ϕ_e(z,λ) = λϕ_e(z,λ),
z ∈ [0,1],
ϕ_e(0,λ) = 0, ϕ'_e(0,λ) = 1, where ' = d/dz.
.
We put
ϕ_e0(z,λ) = ϕ_e(z,λ), ϕ_e1(z,λ) = ϕ_e(1 - z,λ).
Each edge e is parametrized as e(z), 0 ≤ z ≤ 1. Letting e(0) = v, e(1) = w, any solution u = {u_e(z,λ)}_e ∈ℰ of the equation (h_e - λ)u = 0 can be written as
u_e(z,λ) = u(v,λ)ϕ_e0(z,λ)/ϕ_e0(1,λ) + u(w,λ)ϕ_e1(z,λ)/ϕ_e1(1,λ).
The edge Schrödinger operator H_ℰ is related to the vertex Schrödinger operator in the following way: we define the operators Δ_𝒱,λ and Q_𝒱,λ on 𝒱 by
(Δ_𝒱,λu)(v) = 1/d_v∑_w ∼ v1/ϕ_e0(w,λ)u(w,λ),
Q_𝒱,λ = 1/d_v∑_v ∈ eϕ'_e0(1,λ)/ϕ_e0(1,λ) + C_v/d_v,
where d_v is the degree of v ∈𝒱. Then, the δ-coupling condition (<ref>) is rewritten in the form of vertex Schrödinger equation
(- Δ_𝒱,λ + Q_𝒱,λ)u(v) = 0, ∀ v ∈𝒱^0.
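Numerically, the edge quantities ϕ_e0(1,λ) and ϕ'_e0(1,λ) entering Δ_𝒱,λ and Q_𝒱,λ can be evaluated by integrating the initial value problem (<ref>); the SciPy-based sketch below assumes the edge potential V_e is available as a callable on [0,1].

```python
# Evaluate phi_e(1, lam) and phi_e'(1, lam) from phi'' = (V - lam) phi,
# phi(0) = 0, phi'(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp

def edge_phi(V, lam: float):
    def rhs(z, y):
        phi, dphi = y
        return [dphi, (V(z) - lam) * phi]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

# Check: a free edge (V = 0) gives phi(1, lam) = sin(sqrt(lam)) / sqrt(lam).
phi1, dphi1 = edge_phi(lambda z: 0.0, lam=2.0)
```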
In our previous work <cit.>, we studied spectral properties of quantum graphs on a class of locally (i.e. on a bounded part) perturbed periodic lattices including the square and hexagonal ones which are the object of interest in this paper. We have found, in particular,
that the N-D map (and the D-N map as well) for the vertex Schrödinger operator on the interior domain and the S-matrix for the edge Schrödinger operator on the whole system determine each other, cf. Corollary 6.16 in <cit.>. Therefore, one can reduce the inverse boundary value problem for these lattices to that on a domain of the shape to be described below, namely the rectangular domain for the square lattice and the hexagonal parallelogram for the hexagonal lattice.
§.§ Inverse boundary value problem for square and hexagonal lattices
Let Γ_0 = {𝒱_0, ℰ_0} be the square or the hexagonal lattice in ℝ^2 with vertex set 𝒱_0 and edge set ℰ_0. Assume that we are given a bounded domain Ω in ℝ^2 and a subgraph Γ = {𝒱, ℰ}, where ℰ = ℰ_0 ∩Ω, 𝒱 = 𝒱_0 ∩Ω. We define ∂ℰ to be the set of v ∈𝒱 such that v is an end point of some edge in ℰ, and deg_Γ(v) = 1, where deg_Γ(v) is the degree of v in the graph Γ. We further assume that Γ has the following properties[These assumptions were used in <cit.>; we state them again to make the paper self-contained.]:
(i) ∂ℰ = ∂𝒱.
(ii) d_v = 1 for v ∈∂𝒱.
(iii) The unique continuation property holds for the vertex Schrödinger operator in the exterior domain 𝒱_ext := 𝒱_0 ∖𝒱 in the following sense: if u satisfies the equation (- Δ_𝒱,λ + Q_𝒱,λ)u = 0 in 𝒱_ext and u = 0 in {v ∈𝒱_ext ; |v| > R} for some R > 0, then u = 0 on 𝒱_ext.
The D-N map for the edge Schrödinger operator is defined by
Λ_ℰ(λ) : f→u_e'(v),
e(0) = v ∈∂𝒱,
where u_e is the solution to the equation
{ (h_e - λ)u_e = 0, on ℰ,
u_e = f, on ∂ℰ = ∂𝒱,
δ-coupling condition.
.
The D-N map for the vertex Schrödinger operator is defined by
Λ_𝒱(λ) : f(v) →1/ϕ_e0(1,λ)u(w), v = e(0) ∈∂𝒱, w = e(1) ∈𝒱^o.
where u is the solution to the equation
{ (- Δ_𝒱,λ + Q_𝒱,λ)u(v) = 0, v ∈𝒱^0,
u(v) = f(v), v ∈∂𝒱.
.
Then, by Lemma 3.1 in <cit.>, Λ_ℰ(λ) and Λ_𝒱(λ) determine each other under the assumptions (i), (ii), (iii).
Our first main result is as follows:
Let Ω be a bounded domain in the 2-dimensional square or hexagonal lattice having the properties (i), (ii), (iii), and consider the edge Schrödinger operator H_ℰ = {- d^2/dz^2 + V_e(z)}_e ∈ℰ assuming the δ-coupling condition and the Dirichlet boundary condition on ∂ℰ. Then one can uniquely reconstruct V_e(z) and C_v for all e ∈ℰ and v ∈𝒱
from the D-N map of H_ℰ- λ for all values of the energy λ, provided we know V_e(z) for all the edges e adjacent to the boundary of 𝒱.
§.§ Inverse scattering problem
In <cit.>, we have discussed the spectral and scattering theory for Schrödinger operators on a class of locally perturbed periodic quantum graphs. Here we will use these results for perturbations of square and hexagonal lattices quantum graphs preserving the lattice structure. Assume that the length of each edge is one. The assumptions on V_e(z) and C_v are as follows:
(iv) There exists a constant C_0 ∈ℝ such that C_v = C_0 except for a finite number of vertices v ∈𝒱.
(v) There exists V_0 ∈ L^2((0,1)) such that V_e(z) = V_0(z) except for a finite number of edges e ∈ℰ.
One can then define the S-matrix S(λ) for the Hamiltonian of the quantum graph built on Γ. Consider a bounded domain Ω in ℰ
which contains all the indicated perturbations, in particular, assume that V_e(z) = V_0(z) holds on any edge e adjacent to ∂𝒱.
In Corollary 6.16 in <cit.>, we have proven that the S-matrix S(λ) and the D-N map Λ_ℰ(λ) for 𝒱 determine each other.
Applying Theorem <ref>, we obtain our second main result:
Consider the Schrödinger operator H_ℰ = {- d^2/dz^2 + V_e(z)}_e ∈ℰ on the 2-dimensional square or hexagonal lattice Γ satisfying the conditions (i)-(v). Then one can uniquely reconstruct V_e(z) and C_v for all e ∈ℰ and v ∈𝒱 from the knowledge of the S-matrix S(λ) of H_ℰ for all energies λ.
Let us remark that by assumption we know V_0(z) and C_0 a priori.
§.§ Related works
The study of spectral and scattering theory on quantum graphs is now making a rapid progress. Although the topics of this paper are restricted to inverse problems on quantum graphs for square and hexagonal lattices,
there are plenty of articles devoted to this subject. Our previous work <cit.>, on which the present paper is based, deals with the forward problem and some types of inverse problems. We add here a brief look at the related works.
A general survey of discrete graph and quantum graph properties can be found in the monographs <cit.>, see also the paper <cit.>.
For dicussion of the δ-coupling and related topics we refer to <cit.>. The relation between edge Schrödinger operators and vertex Schrödinger operators was studied in <cit.>. Spectral properties of quantum graphs are discussed in <cit.>. The wave operators for discrete Schrödinger operators are investigated in <cit.>. Various inverse problems for the quantum graphs are studied in <cit.>, see also <cit.>. Finally, for earlier results on the inverse scattering for discrete Schrödinger operators on locally perturbed periodic graphs see <cit.>.
§.§ Acknowledgement
The authors express their gratitude for the funding obtained. P.E. was supported by the EU under the Marie Skłodowska-Curie Grant No 873071. H.I. was supported by Grant-in-Aid for Scientific Research (C) 20K03667 and (C) 24K06768 Japan Society for the Promotion of Science. H.M. was supported by Grant-in-aid for young scientists 20K14327 Japan Society for the Promotion of Science. The work of E.B. was supported by the Research Council of Finland
through the Flagship of Advanced Mathematics for Sensing, Imaging and
Modelling (decision number 359183).
§ SQUARE LATTICE
As we have proven in Theorem 5.7 of <cit.>, the S-matrix and the D-N map determine each other, if the unique continuation theorem holds in the exterior domain. Then, we can change the domain Ω as long as the conditions (i), (ii), (iii) hold. Therefore, to prove Theorem <ref>, we have only to consider the case in which Ω is a rectangular domain as below.
Given a square lattice Γ_0 = {𝒱_0, ℰ_0} in ℝ^2, let Ω be its rectangular domain as sketched in Figure <ref>, and Γ = {𝒱, ℰ}, where 𝒱 = 𝒱_0∩Ω, ℰ = ℰ_0 ∩Ω. The black dots there denote the boundary points satisfying d_v = 1 for v ∈∂𝒱, while d_v = 4 for v ∈𝒱^o. The boundary ∂𝒱 consists of four parts (∂𝒱)_T, (∂𝒱)_B, (∂𝒱)_L, (∂𝒱)_R, where the top (∂𝒱)_T and the left side (∂𝒱)_L are given by
(∂𝒱)_T = {a_1, a_2, …,a_m}, (∂𝒱)_L = {b_1, b_2,…, b_n},
and the bottom (∂𝒱)_B and the right side (∂𝒱)_R are defined similarly.
Let - Δ_𝒱, λ + Q_𝒱,λ be the vertex Hamiltonian introduced in Subsection <ref>.
By a cross with center v_0 we mean the graph shown in Figure <ref>.
Denoting by e_i the edge with the endpoints v_0 and v_i, we can rewrite the equation (- Δ_𝒱,λ + Q_𝒱,λ)u = 0 as
∑_i=1^41/ϕ_e_i(1,λ)u(v_i) =
(∑_i=1^4ϕ'_e_i(1,λ)/ϕ_e_i(1,λ) + C_v_0)u(v_0).
The key to the inverse procedure is the following partial data problem <cit.>. Denoting 𝒱^o = 𝒱∖∂𝒱, we define the Neumann derivative[Note that this definition of the Neumann derivative differs from that in <cit.>; here we adopt the one employed in <cit.>.] on the boundary for the vertex Hamiltonian by
(∂_νû)(v) = - 1/d_v∑_w ∼ v, w ∈𝒱^o1/ϕ_e0(w,λ)u(w), v ∈∂𝒱,
where ϕ_e0(w,λ) is given in (<ref>).
(1) Given partial Dirichlet data f on ∂𝒱∖(∂𝒱)_R, and partial Neumann data g on (∂𝒱)_L, there is a unique solution u on 𝒱 to the equation
{ (- Δ_𝒱,λ + Q_𝒱,λ)u = 0 in 𝒱^o,
u =f on ∂𝒱∖(∂𝒱)_R,
∂_νu = g on (∂𝒱)_L.
.
(2) Given the D-N map Λ_𝒱(λ), partial Dirichlet data f_2 on ∂𝒱∖(∂𝒱)_R and partial Neumann data g on (∂𝒱)_L, there exists a unique f on ∂𝒱 such that f = f_2 on ∂𝒱∖(∂𝒱)_R and Λ_𝒱(λ)f = g on (∂𝒱)_L. Moreover, f is uniquely determined by the D-N map.
Let A_k be the line with the slope -1 passing through a_k as sketched in Figure <ref>. Denote the vertices on A_k ∩𝒱 by
a_k = a_k,0, a_k,1, … , a_k^∗,
successively. Then, A_k ∩∂𝒱 = {a_k, a_k^∗}.
(1) There exists a unique solution u to the equation
(- Δ_𝒱,λ + Q_𝒱,λ)u = 0 in 𝒱^o,
with the partial Dirichlet data f on ∂𝒱∖(∂𝒱)_R such that
{ f(a_k) = 1,
f(v) = 0 for v ∈∂𝒱∖((∂𝒱)_R ∪{a_k}),
.
and the partial Neumann data g = 0 on (∂𝒱)_L.
(2) This solution satisfies
u(v) = 0 at all vertices strictly below the line A_k.
(3) Moreover, the values of f on (∂𝒱)_R and at a_k^∗ are uniquely determined by the D-N map.
The argument is the same as in Lemma 6.2 of <cit.>, which is in turn based on Lemma 6.1 of the same paper. The assertion (3) follows from
claims (2), (3) of the indicated Lemma 6.1.
We determine V_e(z) and C_v inductively by sliding down the line A_k. Assuming that we know V_e(z) and C_v for all e and v above A_k, we determine V_e(z) between A_k and A_k-1, C_v for v ∈ A_k. Since the D-N map is given, by the induction hypothesis,
we can then compute u(v,λ) in Lemma <ref> for all v above A_k, including this line as well, as a meromorphic function of λ by using the equation (<ref>)[More detailed explanation is given in the proof of Lemma <ref> below.].
Let us compute the values of u(v,λ) more carefully. Observing Figure <ref>,
we use the equation (<ref>) for the cross with center r_1. As u = 0 at r_1, r_2, s_1, s_2, we have
1/ϕ_e_0(1,λ)u(p_0,λ) +
1/ϕ_e_1(1,λ)u(p_1,λ) = 0.
Observing next the cross with center p_1, we obtain
1/ϕ_e_3(1,λ)u(p_3,λ) +
1/ϕ_e_4(1,λ)u(p_4,λ) =
(
∑_i=1^4ϕ'_e_i(1,λ)/ϕ_e_i(1,λ) + C_p_1)u(p_1,λ).
(1) Assume that we know u(v,λ) for v = p_0, p_1, and V_e(z) for e = e_0. Then, we can determine V_e(z) for e = e_1.
(2) Assume that we know u(v,λ) for v = p_3, p_4, p_1. Assume also that we know V_e(z) for e = e_3, e_4, e_1.
Then, we can determine V_e(z) for e = e_2, and C_v for v = p_1.
As
ϕ_e_1(1,λ) = - u(p_1,λ)/u(p_0,λ)ϕ_e_0(1,λ), one can compute the zeros of ϕ_e_1(1,λ) from the given data. This means that the Dirichlet eigenvalues referring to the potential V_e_1(z) are determined by these data. Indeed, by Borg's theorem (see e.g. <cit.>, p. 55 and <cit.>, p. 27), a symmetric potential is determined by its Dirichlet eigenvalues. This allows one to construct V_e_1(z) uniquely, thus proving the first claim.
Rewrite next relation (<ref>) as
ϕ'_e_2(1,λ)/ϕ_e_2(1,λ) + C_p_1 = 1/ϕ_e_3(1,λ)u(p_3,λ)/u(p_1,λ) + 1/ϕ_e_4(1,λ)u(p_4,λ)/u(p_1,λ)
- ∑_i=1,3,4ϕ'_e_i(1,λ)/ϕ_e_i(1,λ).
Both sides are meromorphic with respect to λ∈ C and the singular points of the right-hand side are determined by the given data only. Noting that ϕ'_e(1,λ) ≠ 0 if ϕ_e(1,λ) = 0 (recall that ϕ_e(1,λ) = ϕ'_e(1,λ) = 0 implies ϕ_e(z,λ) = 0 for all z), we see that the zeros of ϕ_e_2(1,λ) are determined exclusively by the given data and that the value of C_p_1 plays no role in fixing the zeros of ϕ_e_2(1,λ). Since these zeros are the Dirichlet eigenvalues referring to the potential V_e_2(z), Borg's theorem implies that V_e_2(z) is determined by the data uniquely. The equation (<ref>) then gives C_p_1 which proves the second claim.
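The numerical counterpart of this step is to locate the real zeros of λ↦ϕ_e(1,λ) recovered from the data, since these zeros are the Dirichlet eigenvalues to which Borg's theorem is applied; a simple sign-change scan with bisection suffices as an illustration (the free edge V_e = 0 is used here only as a test case).

```python
# Locate real zeros of lam -> phi_e(1, lam), i.e. the Dirichlet eigenvalues.
import numpy as np
from scipy.optimize import brentq

def dirichlet_eigenvalues(phi1, lam_min=0.5, lam_max=200.0, n_grid=4000):
    grid = np.linspace(lam_min, lam_max, n_grid)
    vals = np.array([phi1(l) for l in grid])
    zeros = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                        # sign change -> one zero in (a, b)
            zeros.append(brentq(phi1, a, b))
    return zeros

# For V_e = 0, phi_e(1, lam) = sin(sqrt(lam))/sqrt(lam) and the zeros are (k*pi)^2.
eigs = dirichlet_eigenvalues(lambda l: np.sin(np.sqrt(l)) / np.sqrt(l))
```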
As in Figure <ref>, we draw the line, denoted there as c, passing through the upper-left corner of the lattice 𝒱^o, and call it A_0, and later also B_n+1.
We need one more preparatory result before stating the following lemma. Comparing Figures <ref> and <ref>, we rewrite the equation (<ref>) above for the cross with the center a_k-1,i+1 in the form
u(a_k,i+1,λ) = - ϕ_e'_k,i(1,λ)/ϕ_e_k,i(1,λ)u(a_k,i,λ),
and the equation (<ref>) for the cross with center a_k,i+1 as
1/ϕ_e_k+1,i(1,λ)u(a_k+1,i,λ) +
1/ϕ_e'_k+1,i+1(1,λ)u(a_k+1,i+1,λ)
= (
ϕ'_e'_k,i(1,λ)/ϕ_e'_k,i(1,λ) +
ϕ'_e_k,i+1(1,λ)/ϕ_e_k,i+1(1,λ) +
ϕ'_e_k+1,i(1,λ)/ϕ_e_k+1,i(1,λ) +
ϕ'_e'_k+1,i+1(1,λ)/ϕ_e'_k+1,i+1(1,λ) + C_a_k,i+1)u(a_k,i+1,λ).
Let 1 ≤ k ≤ m-1, and assume that we know V_e(z) and C_v for all e and v above the line A_k (not including the points v∈ A_k). Then
one is able to determine V_e(z) and C_v for all e between A_k and A_k-1 and C_v for all v ∈ A_k.
Observe Figure <ref> and put i = 0. Considering the cross with the center a_k-1,1, one can employ the first claim of Lemma <ref> to determine V_e(z) for e = e'_k,0. Using further the second claim, one can determine V_e(z) for e = e_k,1 and C_v for v = a_k,1. This is the first step; proceeding then inductively with i = 1, 2, …, we obtain the lemma.
We have thus determined all the V_e(z) and C_v for the edges e and vertices v above the line A_0 (excluding v∈ A_0). Modifying the used argument, we can deal with the part below the line A_0 = B_n+1.
Let 2 ≤ℓ≤ n+1, and assume that we know V_e(z) and C_v for all the edges e and vertices v above B_ℓ, where v ∉B_ℓ. Then we can determine V_e(z) and C_v for all the e between B_ℓ and B_ℓ-1 and C_v for all v ∈ B_ℓ.
Consider the case ℓ = n+1 and denote the points in 𝒱∩ B_n+1 by
c, c_1, …, c^∗.
We use Lemma <ref> taking the partial Dirichlet data as
{ f(b_n) = 1,
f(v) = 0, v ∈∂𝒱∖ ((∂𝒱)_R ∪{c, c^∗}),
.
and the partial Neumann data as
{ g(b_n) = 0,
g(v) = 0, v ∈∂𝒱∖ ((∂𝒱)_L ∪{b_n}),
.
to which the solution u constructed there corresponds. Then u = 0 holds below B_n+1 except at b_n, in fact, this is true for the vertices adjacent to (∂𝒱)_L in view of the Neumann data. Inspecting the equation, the same is true on the next line. Arguing then as in Lemma <ref> we obtain the claim.
We have thus proven the following theorem.
Let Ω be a rectangular domain as in Figure <ref>. From the knowledge of the D-N map of - Δ_𝒱,λ + Q_𝒱,λ for any energy λ, one can then determine all the V_e(z) and C_v, provided we know V_e(z) for all the edges e adjacent to the boundary of 𝒱.
Theorems <ref> and <ref> for the square lattice then follow from Theorem <ref>.
§ HEXAGONAL LATTICE
§.§ Hexagonal parallelogram
Let us next consider the hexagonal lattice. The first three steps of the construction are parallel to the square case discussed above.
We identify ℝ^2 with ℂ, and put
ω = e^π i/3, v_1 = 1 + ω, v_2 = √(3) i,
p_1 = ω^-1 = ω^5, p_2 = 1,
Given n = n_1 + i n_2 ∈ℤ[i]= ℤ + iℤ, we denote
ℒ_0 = { v(n) ; n ∈ℤ[i]}, v(n) = n_1 v_1 + n_2 v_2,
and define the vertex set 𝒱_0 by
𝒱_0 = 𝒱_01∪𝒱_02, 𝒱_0i = p_i + ℒ_0.
Let 𝒟_0 be the Wigner-Seitz cell of 𝒱_0. It is a hexagon the six vertices of which are at the points ω^k, 0 ≤ k ≤ 5, and the center at the origin. Take the set D_N := {n ∈ℤ[i] ; 0 ≤ n_1 ≤ N, 0 ≤ n_2 ≤ N}, and put
𝒟_N = ⋃_n ∈ D_N( 𝒟_0 + v(n));
the number N we have to choose large enough. This is a parallelogram in the hexagonal lattice as sketched in Figure <ref>.
The piecewise linear loop which is the perimeter of 𝒟_N has interior angles switching between the values 2π/3 and 4π/3. Let 𝒜 be the set of vertices with the former angle, and to each z ∈𝒜 we attach a new edge e_z,ζ with the new vertex ζ = t(e_z,ζ) at its terminal point, naturally laying outside of 𝒟_N. Let
Ω = {v ∈𝒱_0 ; v ∈𝒟_N}
be the set of vertices in the interior of the resulting discrete graph; its boundary consists of the added vertices, ∂𝒱 = {t(e_z,ζ) ; z ∈𝒜}, and one is able to divide it into four parts, naturally labeled as top, bottom, right, and left:
(∂𝒱)_T = {α_0 , ⋯ , α_N },
(∂𝒱)_B = {2ω^5 + k(1 + ω) ; 0 ≤ k ≤ N},
(∂𝒱)_R = { 2+ N(1+ω ) + k√(3) i ; 1 ≤ k ≤ N }∪{ 2+N(1+ω ) +N√(3) i + 2 ω ^2 } ,
(∂𝒱)_L = {2ω^4}∪{β_0,⋯,β_N} ,
where α_k = β _N + 2ω + k (1+ω ) and β _k = -2 + k√(3) i for 0≤ k ≤ N. The perturbations we are going to consider are again supposed to be of compact support, hence taking N large enough one can ensure that they are supported within 𝒟_N.
For 0 ≤ k ≤ N, consider the line A_k as sketched in Figures <ref> and <ref>:
A_k = {x_1 + ix_2 ; x_1 + √(3)x_2 = a_k},
where a_k is chosen so that A_k passes through
α_k = α_0 + k (1+ ω ) ∈
(∂𝒱)_T.
The vertices belonging to A_k∩Ω are written as
α_k,ℓ = α_k + ℓ (1 + ω^5 ), ℓ = 0, 1, 2, ⋯.
§.§ Special solutions to the vertex Schrödinger equation
We consider the following Dirichlet problem for the discrete Schrödinger equation
{ ( - Δ_𝒱,λ + Q_𝒱,λ)u = 0 in 𝒱^o,
u = f on ∂𝒱.
.
As in the case of the square lattice, the following lemmata hold.
(1) Given partial Dirichlet data f on ∂𝒱∖(∂𝒱)_R, and partial Neumann data g on (∂𝒱)_L, there is a unique solution u on 𝒱 to the equation
{ (- Δ_𝒱,λ + Q_𝒱,λ)u = 0, in 𝒱^o,
u =f, on ∂𝒱∖(∂𝒱)_R,
∂_νu = g, on (∂𝒱)_L.
.
(2) Given the D-N map Λ_𝒱(λ), partial Dirichlet data f_2 on ∂𝒱∖(∂𝒱)_R and partial Neumann data g on (∂𝒱)_L, there exists a unique f on ∂𝒱 such that f = f_2 on ∂𝒱∖(∂𝒱)_R and Λ_𝒱(λ)f = g on (∂𝒱)_L. Moreover, f is uniquely determined by the D-N map.
Let A_k ∩∂𝒱 = {α_k,0,α_k,m}.
(1) There exists a unique solution u to the equation
(- Δ_𝒱,λ + Q_𝒱,λ)u = 0 in 𝒱^o,
with the partial Dirichlet data f such that
{ f(α_k,0) = 1,
f(z) = 0 for z ∈∂𝒱∖((∂𝒱)_R ∪{α_k,0}),
.
and the partial Neumann data g = 0 on (∂𝒱)_L.
(2) The said solution satisfies
u(x_1 + ix_2) = 0 if
x_1 + √(3) x_2 < a_k.
(3) Moreover, the values of f on (∂𝒱)_R and at α_k,m are uniquely determined by the D-N map.
The values of u on A_k can be computed by only using the D-N map, the edge potentials V_e(z) for the edges e in the halfplane x_1 + √(3)x_2 ≥ a_k, and the vertex potentials C_v for vertices v in the halfplane x_1 + √(3)x_2 > a_k.
The claim is obtained using equation (<ref>) and the D-N map. Specifically, we regard the said equation as the Cauchy problem with initial data on the top and left sides of 𝒱. More precisely, we argue in the same way as in <cit.>, Lemma 6.1. Let us inspect Figure <ref>. Given the D-N map, one can compute the values of u(v) at the vertices v on the line x_1 = -1. They are zero in the halfplane x_1 + √(3)x_2 < a_k, since the same is true for relevant boundary data. Next we observe the values of u(v) on the adjacent line, x_1 = - 1/2. We start from ω^4 and go up. Then, by using the D-N map for ω^4 and then the equation (<ref>) for the vertices above ω^4, we see that u(v) = 0 holds for v in the halfplane x_1 + √(3)x_2 < a_k when x_1 = -1/2. Repeating this procedure, we conclude that u(v) = 0 holds in the lower halfplane x_1 + √(3)x_2 < a_k. To compute the values of u(v) in the upper halfplane, x_1 + √(3)x_2 ≥ a_k, we observe the top
boundary (∂𝒱)_T. We first compute u(v) for vertices v adjacent to (∂𝒱)_T. They are determined by the D-N map. Next we observe u(v) on the neighboring line, the second adjacent to the boundary. This time, we start from the rightmost vertex (for which, by virtue of claim (3) from Lemma <ref>, the value u(v) is determined by the D-N map) and go down to the left. With the aid of the equation, we can then determine u(v) by using the values of V_e(z) and C_v in the halfplane x_1 + √(3)x_k > a_k, which are known by assumption. Finally, using the equation, the values of u(v) on the line x_1 + √(3)x_2 = a_k are computed from those in the halfplane x_1 + √(3)x_2 > a_k and V_e(z), C_v specified in the lemma.
Now we pass to the reconstruction procedure.
Let u be the solution to the equation (<ref>) specified in Lemma <ref>. If a_k is large enough, all the perturbations lie below the line A_k, hence the support of u is disjoint with them. Let us slide A_k downwards, and observe the equation when we first touch the perturbation. Let a, b, b', c ∈𝒱 and e, e' ∈ℰ be as in Figure <ref>.
Then, evaluating the equation (<ref>) at v = a, we obtain
u(b) = - ϕ_e0(1,λ)/ϕ_e'0(1,λ) u(b').
Let e_k,1, e'_k,1, e_k,2, e'_k,2, ⋯ be the series of edges just below A_k starting from the vertex α_k, and put
f_k,m(λ) = - ϕ_e_k,m0(1,λ)/ϕ_e'_k,m0(1,λ).
Then we see that the solution u in Lemma <ref> satisfies
u(α_k,ℓ) = f_k,1(λ)⋯ f_k,ℓ(λ) ;
note that the right-hand side does not depend on C_v.
To proceed, observe the graph detail sketched in Figure <ref>.
As in Lemma <ref>, assume that we know V_e(z) and C_v for all edges e and vertices v above A_k. Then, assuming that we know V_e(z) for e = e_k,1, one can determine V_e(z) for e = e'_k,1 and e = e_k,2, and also C_v for v = α_k,2.
In view of (<ref>), since we know u(α_k,ℓ) for all ℓ, we also know the corresponding f_k,ℓ(λ). Furthermore, since we know ϕ_e_k,1(1,λ), we get from (<ref>) ϕ_e'_k,10(1,λ) for all values of the energy λ. The zeros of ϕ_e'_k,10(1,λ) are the Dirichlet eigenvalues of - (d/dz)^2 + V_e(z), e = e'_k,1. By Borg's theorem, these eigenvalues determine the potential V_e(z) at the edge e = e'_k,1.
Making the equation (<ref>) explicit at the vertex v = α_k,2 and noting that u = 0 holds below A_k, we have
ϕ'_e_k,20(1,λ)/ϕ_e_k,20(1,λ) = 1/ϕ_e”_k,20(1,λ)u(v”)/u(v) - ϕ'_e”_k,20(1,λ)/ϕ_e”_k,20(1,λ) - ϕ'_e'_k,10(1,λ)/ϕ_e'_k,10(1,λ) - C_v,
where v = α_k,2 = e”_k,2(0), v” = e”_k,2(1). Both sides of (<ref>) are meromorphic functions in the complex plane, and we note that u is also meromorphic with respect to λ by construction. The singularities of the left-hand side are thus determined by those of the right-hand side. Furthermore, the singularities of the right-hand side can be computed without using C_v. Therefore, the zeros of ϕ_e_k,20(1,λ) are determined by V_e(z) for edges e in the halfplane x_1 + √(3)x_2 ≥ a_k and e = e_k,1, e'_k,1 as well as C_w for w in the open halfplane x_1 + √(3)x_2 > a_k. As before, Borg's theorem determines V_e_k,2(z), which in turn determines ϕ_e_k,20(1,λ) and ϕ'_e_k,20(1,λ). The value of C_v is then computed from the formula (<ref>) for regular points λ.
Lemma <ref> implies the following result.
Let u be as in Lemma <ref>, and assume that we know V_e(z) for all the edges e above A_k, and C_v for all the v above A_k but not on A_k. Then, one can determine all the V_e(z) for e between A_k and A'_k, and C_v for all the v on A_k.
Let A_k,B_l be as in Figure <ref>. By Lemma <ref>, we can compute (from the knowledge of the D-N map) the edge potentials between A_k,A_k' and between B_l,B_l', and also compute C_v for the vertices on A_k,B_l. We call this the initial procedure.
Next we make a step inside 𝒟_N. For notational reasons, we determine the edge potentials below B'_ℓ. To be able to repeat the described procedure after passing to the neighboring “lower” line B_l-1, it suffices to know the edge potential for the edges below B'_ℓ and A'_k with the endpoints on B'_ℓ, as well as C_v for such endpoints on B'_ℓ in Figure <ref>.
The following “local” version of Lemma <ref> holds.
In Figure <ref>, if one knows the edge potentials for the edges ab and ac, one can compute the edge potential of the edge ac' and the vertex potential C_v for v=a.
As in Lemma 3.5, consider the equation for A_k-1, as shown in Figure <ref>; for the sake of clarity and brevity, we denote it by u:=u_A_k-1. Using the edge identification e=ab, we also use the symbol ϕ_ab(1,λ) for ϕ_e0(1,λ).
The procedure “to compute ϕ” means to express it using the known quantities in the following successive steps:
(i) Evaluating the equation (<ref>) at b_2, we can compute u_A_k-1(b_1).
(ii) Evaluating the equation at b_4, we can compute u_A_k-1(b_5).
(iii) Evaluating the equation at b_5, we can compute u_A_k-1(b).
(iv) The potential at the edges bb_1,ba and the vertex potential C_v at b are found during the initial procedure for the lines B.
Hence solving the equation at b, we can compute u_A_k-1(a).
(v) The potentials at the edges ab,ac are found during the initial procedure for the lines B. Since u_A_k-1(c)=u_A_k-1(c')=0 and u_A_k-1 at a,b are known, one can determine the potential at ac' and C_v at v=a by the same argument as in Lemma 3.7.
Let us explain the step (v) in detail. We consider the equation for u at vertex a:
(-Δ_𝒱,λ+Q_a,λ)u(a)=0,
where
(Δ_𝒱,λ u)(a)=1/3∑_w∼ a1/ϕ_aw(1,λ)u(w)=1/31/ϕ_ab(1,λ)u(b),
since u(c)=u(c')=0. By (<ref>),
Q_a,λ = 1/3∑_w∼ aϕ_aw'(1,λ)/ϕ_aw(1,λ)+C_a/3
= 1/3(ϕ_ab'(1,λ)/ϕ_ab(1,λ)+ϕ_ac'(1,λ)/ϕ_ac(1,λ)+ϕ_ac''(1,λ)/ϕ_ac'(1,λ)+C_a).
This equation yields
ϕ_ac''(1,λ)/ϕ_ac'(1,λ)= 1/ϕ_ab(1,λ)u(b)/u(a)-ϕ_ab'(1,λ)/ϕ_ab(1,λ)-ϕ_ac'(1,λ)/ϕ_ac(1,λ)-C_a.
By the above construction, u(a) is a meromorhic function of λ. All the quantities on the right-hand side of (<ref>) except C_a are known. This means the singularities of the right-hand side of (<ref>) are known. Hence we can determine the singularities of the left-hand side of
(<ref>) without knowing C_a. The singularities of the left-hand side are the Dirichlet eigenvalues for the edge ac', which determine the edge potential of ac' by Borg's theorem. This shows ϕ_ac'(z,λ), z∈ [0,1] can be computed. Inserting ϕ_ac'(1,λ),ϕ'_ac'(1,λ) back into (<ref>) gives C_a.
Note that in the above proof, we use only the knowledge of edges between B_ℓ and B_ℓ'. We emphasize that the knowledge of edges below both of A_k and B_ℓ, which is still unknown, is not used. Then we repeat Lemma <ref> by taking the function u for A_k-2,A_k-3,⋯, and thus we recover the edge potential for all the edges just below B'_ℓ and to the left of A_k-1,
and C_v for all vertices between B_ℓ and B_ℓ' in Figure <ref>. This shows that we can push the lines B_l as “low” as needed.
By a symmetric reasoning, choosing the function u in the different direction (i.e., choosing u_B_j for the proper j), one can show that the lines A_k can also be pushed “down” as low as needed. But in fact, this is not needed, since pushing the lines B_l “down” alone can already recover the whole perturbed region.
We arrive thus at the following conclusions:
Let Ω be a hexagonal parallelogram as in Figure <ref>. Then from the D-N map of - Δ_𝒱,λ + Q_𝒱,λ, one can determine all the V_e(z) and C_v, provided we know V_e(z) for all the edges e adjacent to the boundary of 𝒱.
Theorems <ref> and <ref> for the hexagonal lattice then follow from Theorem <ref>.
99
A1 K. Ando, Inverse scattering theory for discrete Schrödinger operators on the hexagonal lattice. Ann. Henri Poincare 14 (2013), 347–383.
AIM16 K. Ando, H. Isozaki, H. Morioka, Spectral properties for Schrödinger operators on perturbed lattices. Ann. Henri Poincare 17 (2016), 2103–2171.
AIM18
K. Ando, H. Isozaki and H. Morioka, Inverse scattering for Schrödinger operators on perturbed periodic lattices, Ann. Henri Poincaré 19 (2018), 3397-3455.
AvdBelMat11
S. Avdonin, B. P. Belinskiy, and J. V. Matthews, Dynamical inverse problem on a metric tree, Inverse Porblems 27 (2011), 075011.
Bel87
M. Belishev, An approach to multidimensional inverse poblems for the wave equation. (Russian) Dokl. Akad. Nauk SSSR. 297 (1987), 524-527.
Bel04
M. I. Belishev, Boundary spectral inverse problem on a class of graphs (trees) by the BC method, Inverse Problems 20 (2004), 647-672.
BelKur92
M. Belishev and Y. Kurylev, To the reconstruction of a Riemannian manifold via its spectral data (BC-method), Comm. PDE. 17 (1992), 767-804.
Below85
J. von Below, A characteristsic equation associated to an eigenvalue problem on c^2-networks, Linear Algebra Appl. 71 (1985), 309-325.
BerkolaikoKuchment2013
G. Berkolaiko and P. Kuchment, Introduction to Quantum Graphs, Amer. Math. Soc., Providence, R.I. (2013).
BILL2021
E. Blåsten, H. Isozaki, M. Lassas and J. Lu, The Gel'fand's inverse problem for the graph Laplacian, J. Spectral Theory 13 (2023), 1-45.
BILL2
E. Blåsten, H. Isozaki, M. Lassas and J. Lu, Inverse problems for discrete heat equations and random walks for a class of discrete graphs, SIAM Discrete Math. 37 (2023), 831-863.
BEILL
E. Blåsten, P. Exner, H. Isozaki, M. Lassas and J. Lu, Inverse problems for locally perturbed lattices - Discrete Hamiltonian and quantum graph, to appear in Ann. Henri Lebesgue.
BondaShieh17
N. Bondarenko and C. T. Shieh, Partial inverse problem for Sturm-Liouville operators on trees, Proceedings of the Royal Society of Edingburgh 147A (2017), 917-933.
Bonda20
N. Bondarenko, Spectral data characterization for the Sturm-Liouville operator on the star-shaped graph, Anal. Math. Phys. 10 (2020), 83
BroWei09
B. M. Brown and R. Weikard, A Borg-Levinson theorem for trees, Proc. Royal Soc. Lond. Ser. A Math. Phys. Eng. Sci. 461 2062 (2005), 3231-3243.
BPG08
J. Brüning, V. Geyler and K. Pankrashkin, Spectra of self-adjoint extensions and applications to solvable Schrödinger operators, Rev. Math. Phys. 20 (2008), 1-70.
Cattaneo97
C. Cattaneo, The spectrum of the continuous Laplacian on a graph, Monatsh. Math. 124 (1997), 215–235.
ChExTu10
T. Cheon, P. Exner and O. Turek, Approximation of a general singular vertex coupling in quantum graphs, Ann. Phys. 325 (2010), 548-578.
Exner96
P. Exner, Weakly coupled states on branching graphs, Lett. Math. Phys. 38 (1996), 313–320.
Exner97
P. Exner,
A duality between Schrödinger operators on graphs and certain Jacobi matrices, Ann. Inst. Henri Poincaré 66 (1997), 359–371.
EKMN17
P. Exner, A. Kostenko, M. Malamud and H. Neidhardt, Spectral theory for infinite quantum graph, Ann. Henri Poincaré 19 (2018), 3457-3510.
ExnerKovarik2015
P. Exner and H. Kovařík, Quantum Waveguides, Springer, Cham Heidelberg New York Dordrecht London (2015).
ExnerPost09
P. Exner and O. Post, Approximation of quantum graph vertex couplings by scaled Schrödinger operators on thin branched manifolds, J. Phys. A: Math. Theor. 42 (2009), 415305.
Es
M.S. Eskina, The direct and the inverse scattering problem for a partial difference equation, Soviet Math. Doklady, 7 (1966), 193-197.
GutSmil01
B. Gutkin and U. Smilansky, Can one hear the shape of a graph? J. Phys. A 34 (2001), 6061-6068.
FrYu01
G. Freiling and V. Yurko, Inverse Sturm-Liouville Problems and their Applications, Nova Science Publishers, Hauppauge (2001).
IK H. Isozaki, E. Korotyaev, Inverse problems, trace formulae for discrete Schrödinger operators. Ann. Henri Poincare 13 (2012), 751–788.
IsoMo15
H. Isozaki and H. Morioka, Inverse scattering at a fixed energy for discrete Schrödinger operators on the square lattice. Ann. l'Inst. Fourier 65 (2015), 1153–1200.
KoLo07
E. Korotyaev and I. Lobanov, Schrödinger operators on zigzag nanotubes, Ann. Henri Poincaré 8 (2007), 1151-1076.
KoSa14
E. Korotyaev and N. Saburova, Schrödinger operators on periodic discrete graphs, J. Math. Anal. Appl. 420 (2014), 576-611.
KoSa15b
E. Korotyaev and N. Saburova, Estimates of bands for Laplacians on periodic equilateral metric graphs, Proc. Amer. Math. Soc. 114 (2016), 1605-1617.
KoSa15
E. Korotyaev and N. Saburova, Scattering on periodic metric graphs, Rev. Math. Phys. 32 (2020), 2050024 (51 p.).
KN22
A. Kostenko, N. Nicolussi, Laplacians on Infinite Graphs, Mem. EMS, Berlin 2022.
KostrykinSchrader1999
V. Kostrykin, R. Schrader, Kirchhoff's rule for quantum wires, J. Phys. A: Math. Gen. 32 (1999), 595-630.
Ku24
P. Kurasov, Spectral Geometry of Graphs, Birkhäuser, Berlin 2024.
MochiTroosh12
K. Mochizuki and I.Yu. Trooshin: On the scattering on a loop-shaped graph, Progress of Math. 301 (2012), 227-245.
Nakamura14
S. Nakamura, Modified wave operators for discrete Schrödinger operators with long-range perturbations, J. Math. Phys. 55 (2014), 112101.
Niikuni16
H. Niikuni, Spectral band structure of periodic Schrödinger operators with two potentials on the degenerate zigzag nanotube, J. Appl. Math. Comput. (2016) 50:453-482.
Pank06
K. Pankrashkin, Spactra of Schrödinger operators on equilateral quantum graphs, Lett. Math. Phys. 77 (2006), 139-154.
Pankrashkin13
K. Pankrashkin: An example of unitary equivalence between self-adjoint extensions and their parameters, J. Funct. Anal. 265 (2013), 2910–2936.
ParRich18
D. Parra and S. Richard, Spectral and scattering theory for Schrödinger operators on perturbed topological crystals,
Rev. Math. Phys. 30 (2018), Article No. 1850009, pp 1-39.
Pivo00
V. Pivovarchik, Inverse problem for the Sturm-Liouville equation on a simple graph, SIAM J. Math. Anal. 32 (2000), 801-819.
Post12
O. Post, Spectal Analysis on Graph-like Spaces, Lecture Notes in Mathematics 2039, Springer, Heidelberg (2012).
PoTru
J. Pöschel and E. Trubowitz, Inverse Spectral Theory,
Academic Press, Boston, (1987).
Tadano19
Y. Tadano, Long-range scattering for discrete Schrödinger operators, Ann. Henri Poincaré 20 (2019), 1439-1469.
VisComMirSor11
F. Visco-Comandini, M. Mirrahimi, and M. Sorine, Some inverse scattering problems on star-shaped graphs, J. Math. Anal. Appl. 387 (2011), 343-358.
YangXu18
X. C. Xu and C. F. Yang, Determination of the self-adjoint matrix Schrödinger operators without the bound state data,
Inverse Problems 34 (2018), 065002 (20pp).
Yurk05
V. Yurko, Inverse spectral problems for Sturm-Liouville operators on graphs, Inverse Problems 21 (2005), 1075-1086.
Yurko16(1)
V. Yurko, Inverse spectral problems for differential operators on spatial networks, Russ. Math. Surveys 71, No 3 (2016), 539-584.
|
http://arxiv.org/abs/2409.02810v1 | 20240904152825 | A hybrid FEM-PINN method for time-dependent partial differential equations | [
"Xiaodong Feng",
"Haojiong Shangguan",
"Tao Tang",
"Xiaoliang Wan",
"Tao Zhou"
] | math.NA | [
"math.NA",
"cs.AI",
"cs.NA"
] |
UIC]Xiaodong Feng
[email protected]
UIC]Haojiong Shangguan
[email protected]
GZ,UIC]Tao Tang
[email protected]
lu]Xiaoliang Wan
[email protected]
LSECL]Tao Zhou
[email protected]
[UIC]Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China
[GZ]School of Electrical and Computer Engineering, Guangzhou Nanfang College, Guangzhou 510970, China
[lu]
Department of Mathematics and Center for Computation and Technology, Louisiana State University, Baton Rouge 70803, USA
[LSECL]Institute of Computational Mathematics and Scientific/Engineering
Computing, Academy of Mathematics and Systems Science, Chinese Academy
of Sciences, Beijing, China
§ ABSTRACT
In this work, we present a hybrid numerical method for solving evolution partial differential equations (PDEs) by merging the time finite element method with deep neural networks. In contrast to the conventional deep learning-based formulation where the neural network is defined on a spatiotemporal domain, our methodology utilizes finite element basis functions in the time direction where the space-dependent coefficients are defined as the output of a neural network. We then apply the Galerkin or collocation projection in the time direction to obtain a system of PDEs for the space-dependent coefficients which is approximated in the framework of PINN. The advantages of such a hybrid formulation are twofold: statistical errors are avoided for the integral in the time direction, and the neural network's output can be regarded as a set of reduced spatial basis functions.
To further alleviate the difficulties from high dimensionality and low regularity, we have developed an adaptive sampling strategy that refines the training set.
More specifically, we use an explicit density model to approximate the distribution induced by the PDE residual and then augment the training set with new time-dependent random samples given by the learned density model.
The effectiveness and efficiency of our proposed method have been demonstrated through a series of numerical experiments.
Evolution equation, Finite element method, Deep learning, Adaptive sampling method
§ INTRODUCTION
Evolution equations, including both time-dependent ordinary and partial differential equations (ODEs/PDEs), are used to model a wide range of time-dependent processes in science and engineering.
Many numerical approaches have been developed for such problems, e.g. the finite difference method, the spectral method, and the finite element method.
Recently solving PDEs with deep learning methods has been receiving increasing attention <cit.>. Typical techniques include physics-informed neural networks (PINNs) <cit.>, the deep Ritz methods <cit.>, the weak adversarial networks <cit.>, etc.
Although deep learning-based approaches have shown a lot of potential in solving high-dimensional PDEs, there still exist many numerical issues in adapting the neural network approximation to the problem studied.
§.§ Related work
In this work, we pay particular attention to the error of PINNs for evolution equations which may grow too fast and limit the application of PINNs in long-term integration. Many efforts have been made to address this issue. We now briefly review the relevant works.
Improved PINNs: PINNs represent the approximate PDE solution as a single neural network, which takes a space-time tuple as input and is trained by minimizing
the PDE residual on random collocation points in the space-time domain. To improve the performance of PINNs on long-term integration, many approaches have been developed which mainly focus on seeking a more effective training strategy. In <cit.>, a marching-in-time strategy is proposed by splitting the time domain into many small segments, where the training is done segment by segment and the approximate solution at the end of one segment is used as the initial condition for the next segment.
In <cit.>, backward-compatibility PINNs (bc-PINNs) are proposed, where the obtained solution in previous time segments is used as a constraint for the model training in the current time segment.
In <cit.>, Causal PINNs are developed to incorporate causality into the training process by introducing causality weights. In <cit.>, a unified scalable framework for causal sweeping strategies is developed.
Evolution deep neural networks (EDNNs): EDNNs <cit.> are formulated with the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially in time, where the model parameters are time-dependent rather than global in the whole space-time domain.
Traditional time-marching methods can be used to update the model parameters. Compared to PINNs, the training of EDNNs is more expensive while it is more flexible to adapt EDNNs to constraints such as Hamiltonian conservation <cit.>. The efficiency of EDNNs is improved in <cit.> by making a part of model parameters time-dependent and in <cit.> by updating randomized sparse subsets of model parameters at each time step.
Operator learning: The main idea is to learn an operator that maps the solution from the current time step to the next time step.
For example, physics-informed DeepONet <cit.> can be used to learn a solution operator over a short time interval t∈[0,Δ t]. Starting with n=2, the model's prediction at nΔ t can be obtained from the trained model
using the approximate solution at (n-1)Δ t as the input. Other examples include auto-regressive networks <cit.>, transformer <cit.>, etc.
Hybrid strategies: These approaches try to hybridize classical numerical methods with deep learning techniques by either adapting neural networks to augment classical PDE solvers <cit.> or adapting classical numerical approaches to improve the performance of PINNs <cit.>. For example, in <cit.>, a coupled automatic and numerical differentiation approach is proposed to take advantage of the regularization induced by numerical discretization.
In <cit.>, a deep adaptive basis Galerkin approach is proposed where the orthogonal polynomial expansion is employed in time direction and the expansion coefficients are modeled as the output of a deep neural network.
§.§ Our contribution
The main contributions of this work are summarized as follows:
* We have developed a hybrid numerical method by merging the time finite element method with deep neural networks. The approximate solution is a linear combination of the time finite element basis functions, where the coefficients are given by the output of a neural network. We subsequently apply Galerkin or collocation projection to eliminate the time and use PINN to approximate the system of PDEs for the coefficients. This strategy has some advantages: First, the numerical difficulties induced by random sampling and causality are avoided in the time direction since the projection can be done accurately.
Second, all the coefficients define a set of reduced basis functions on the computation domain, which are learned through the neural network. The approximate solution can also be regarded as a time-dependent linear combination of these reduced basis functions, which shares similarities with the low-rank representation in the study of high-dimensional problems.
* We have proposed a deep adaptive sampling strategy to enhance the numerical efficiency. Mesh refinement in the time direction is straightforward. Particular attention needs to be paid to the random sampling in the physical space, especially for high-dimensional and low-regularity problems. Using a spatially conditional bounded KRnet and a discrete distribution in the time direction, we have constructed a joint density model to learn the distribution induced by the PDE residual, based on which new time-dependent samples are generated to refine the training set.
The remainder of the paper is organized as follows. In Section <ref>, we provide an overview of PINN. Then we briefly introduce the existing work in solving evolution PDEs and review classic finite element methods for the first-order ordinary differential equations, including Galerkin and collocation frameworks. In Section <ref>, we introduce a hybrid FEM-PINN method
and also develop some adaptive sampling strategies. Several numerical examples are demonstrated in Section <ref> to test the performance of the proposed method. The paper is concluded in Section <ref>.
§ PRELIMINARIES
§.§ Physics-informed neural networks (PINNs)
We begin with a brief overview of physics-informed neural networks (PINNs).
We consider a general time-dependent PDE
u_t(x,t) + 𝒩[u](x,t)=f(x,t), x∈Ω⊂ℝ^d, t∈[0,T],
subject to the initial and boundary conditions
u(x,0) =g(x), x∈Ω,
ℬ[u](x,t) = b(x, t), x∈∂Ω, t∈[0,T],
where 𝒩[·] is a linear or nonlinear differential operator, and ℬ[·] is a boundary operator corresponding to Dirichlet, Neumann, Robin or periodic boundary conditions.
Following the original work of Raissi et al. <cit.>, we represent the unknown solution u(x, t) with a deep neural network u_θ(x, t), where θ denotes all tunable parameters (e.g. weights and biases). Then, a physics-informed model can be trained by minimizing the following composite loss function
ℒ(θ) = λ_icℒ_ic(θ) + λ_bcℒ_bc(θ) + λ_rℒ_r(θ),
where
ℒ_ic(θ) = 1/N_ic∑_i=1^N_ic|u_θ(x_ic^i, 0) - g(x_ic^i)|^2,
ℒ_bc(θ) = 1/N_bc∑_i=1^N_bc|ℬ[u_θ]( x_bc^i, t_bc^i) - b(x_bc^i, t_bc^i)|^2,
ℒ_r(θ)=1/N_r∑_i=1^N_r|∂ u_θ/∂ t(x_r^i, t_r^i) + 𝒩[u_θ](x_r^i, t_r^i) - f(x_r^i, t_r^i)|^2.
Here {x_ic^i}_i=1^N_ic, {t_bc^i,x_bc^i}_i=1^N_bc and {t_r^i, x_r^i}_i=1^N_r can be the vertices of a fixed mesh or points that are randomly sampled
at each iteration of a gradient descent algorithm. The gradients with respect to both the input variables (t,x) and the model parameters θ can be efficiently computed via automatic differentiation <cit.>. Moreover, the hyper-parameters {λ_ic, λ_bc, λ_r} allow the flexibility of assigning a varying learning rate to each loss term to balance their interplay during the training process, which may be user-specified or tuned automatically.
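As a concrete illustration of this loss, the following JAX sketch assembles the residual and initial-condition terms with automatic differentiation for a toy operator 𝒩[u] = β u_x; the network architecture, the initial condition g, the penalty values, and the omission of the boundary term are illustrative simplifications rather than the settings used in this paper.

import jax, jax.numpy as jnp

def init_params(key, sizes=(2, 32, 32, 1)):
    # simple fully-connected network weights for input (x, t)
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n, m)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def u_theta(params, x, t):
    # scalar surrogate u_theta(x, t)
    h = jnp.array([x, t])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

beta = 1.0                      # hypothetical convection speed in N[u] = beta * u_x
g = lambda x: jnp.sin(x)        # hypothetical initial condition

def residual(params, x, t):
    # u_t + N[u] - f with f = 0 in this toy example
    u_x = jax.grad(u_theta, argnums=1)(params, x, t)
    u_t = jax.grad(u_theta, argnums=2)(params, x, t)
    return u_t + beta * u_x

def pinn_loss(params, xr, tr, xic, lam_ic=100.0, lam_r=1.0):
    # L_r: mean squared residual at collocation points, L_ic: initial-condition misfit
    L_r = jnp.mean(jax.vmap(residual, in_axes=(None, 0, 0))(params, xr, tr) ** 2)
    u0 = jax.vmap(u_theta, in_axes=(None, 0, None))(params, xic, 0.0)
    L_ic = jnp.mean((u0 - g(xic)) ** 2)
    return lam_r * L_r + lam_ic * L_ic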
§.§ Continuous time finite element method
Evolution PDEs are often approximated by spatial finite elements together with a sequential time-marching scheme. Another choice is to construct a finite element approximation space on the space-time domain. We briefly review the time finite element method for first-order ordinary differential equations. Consider the following model problem:
u'(t) + 𝒩[u](t) = f(t), t∈ [0,T],
u(0) = 0,
where 𝒩 is a linear or nonlinear operator and f(t)∈ L_2(I) with I=[0,T].
§.§.§ Galerkin projection
We let X := {u∈ H^1(I):u(0)=0} be the trial space
and Y := L_2(I) the test space.
The primal variational formulation of (<ref>) is as follows.
Find u ∈ X such that
(u', v) + (𝒩[u], v) = (f, v), ∀ v ∈ Y,
where (·,·) indicates the inner product of two functions. For approximation, we consider the Galerkin projection with X_N⊂ X and Y_N⊂ Y, i.e.,
Find u_N ∈ X_N such that
(u_N', v_N) + (𝒩[u_N], v_N) = (f, v_N), ∀ v_N ∈ Y_N.
Define
X_N = span{ϕ_j(t)|0≤ j≤ N}, Y_N = span{ψ_j(t)|0≤ j≤ N},
where ϕ_j(t) and ψ_j(t) are finite element basis functions. Let
u_N(t) = ∑_i=0^Nũ_iϕ_i(t) ∈ X_N
be the approximate solution with undetermined coefficients ũ_i. Taking v_N=ψ_j for j=0,⋯,N in (<ref>) leads to the following system
∑_i=0^N(∂_t ϕ_i(t),ψ_j(t)) ũ_i+ ( N[∑_i=0^Nϕ_i(t)ũ_i],ψ_j(t) )=(f(t),ψ_j(t)), ∀ j=0,1, ⋯, N.
The inner products in (<ref>) can be accurately evaluated with Gaussian quadrature formulas, where the required degree of exactness is determined by the nonlinearity of 𝒩.
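As a concrete illustration, the temporal matrices (∂_tϕ_i, ψ_j) and (ϕ_i, ψ_j) appearing in the system above can be assembled as follows for piecewise-linear hat functions on a uniform mesh of [0,T]. Taking the test functions ψ_j equal to the trial hats is an illustrative choice, and the quadrature order nq is a free parameter.

import numpy as np

def linear_fem_time_matrices(T=1.0, N=8, nq=3):
    """Assemble A[j, i] = (phi_i', psi_j) and M[j, i] = (phi_i, psi_j) for
    piecewise-linear hats on a uniform mesh, with psi_j = phi_j, using
    Gauss-Legendre quadrature on each element."""
    nodes = np.linspace(0.0, T, N + 1)
    h = T / N
    xq, wq = np.polynomial.legendre.leggauss(nq)   # nodes/weights on [-1, 1]

    def phi(i, t):
        # hat function centered at nodes[i]
        return np.clip(1.0 - np.abs(t - nodes[i]) / h, 0.0, None)

    def dphi(i, t):
        # derivative of the hat: +1/h on the rising part, -1/h on the falling part
        out = np.zeros_like(t)
        out[(t > nodes[i] - h) & (t <= nodes[i])] = 1.0 / h
        out[(t > nodes[i]) & (t < nodes[i] + h)] = -1.0 / h
        return out

    A = np.zeros((N + 1, N + 1))
    M = np.zeros((N + 1, N + 1))
    for e in range(N):                             # loop over elements
        a, b = nodes[e], nodes[e + 1]
        tq = 0.5 * (b - a) * xq + 0.5 * (a + b)
        w = 0.5 * (b - a) * wq
        for j in (e, e + 1):                       # local test functions
            for i in (e, e + 1):                   # local trial functions
                A[j, i] += np.sum(w * dphi(i, tq) * phi(j, tq))
                M[j, i] += np.sum(w * phi(i, tq) * phi(j, tq))
    return nodes, A, M

When 𝒩 is linear and time-independent, these two matrices are exactly the ones multiplying ω_i(x;θ) and 𝒩(ω_i(x;θ)) in the simplified system used later.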
§.§.§ Collocation projection
Collocation projection provides a flexible strategy, especially when 𝒩 is nonlinear <cit.>. Let {s_k}_k=1^K be Gaussian-type quadrature points on the reference interval [0,1]
0≤ s_1<s_2<⋯<s_K≤ 1.
Consider a partition of [0,T] with
0=t_0<t_1<⋯ <t_M̂=T, h_i = t_i - t_i-1, i=1,⋯,M̂.
Define
s_m,k = t_m-1 + h_ms_k, 1≤ k ≤ K, 1≤ m≤M̂.
We seek the approximate solution by enforcing the equation on the collocation points, i.e.,
Find u∈ X_N∩ C^1(I) such that
∂ _t u(s) + 𝒩[u](s) = f(s), s∈∪_m=1^M̂{s_m,k}_k=1^K,
where X_N∩ C^1(I) defines a finite element approximation space with C^1 elements and N+1 is equal to the total number of collocation points. It is shown in <cit.> that, by selecting the collocation points carefully, collocation projection yields the same order of accuracy as Galerkin projection. Typical piecewise polynomials with at least C^1 regularity include piecewise cubic Hermite polynomials and cubic spline functions.
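A short sketch of generating the collocation points s_m,k is given below; mapping Gauss-Legendre nodes from [-1,1] onto the reference interval [0,1] is one admissible choice of Gaussian-type points, and the partition is taken uniform for simplicity.

import numpy as np

def collocation_points(T=1.0, M_hat=4, K=3):
    """Gauss-Legendre points mapped from the reference interval [0,1] into each
    sub-interval of a uniform partition of [0,T] (the points s_{m,k})."""
    t = np.linspace(0.0, T, M_hat + 1)
    x, _ = np.polynomial.legendre.leggauss(K)      # nodes on [-1, 1]
    s_ref = 0.5 * (x + 1.0)                        # shift to [0, 1]
    s = np.concatenate([t[m - 1] + (t[m] - t[m - 1]) * s_ref
                        for m in range(1, M_hat + 1)])
    return np.sort(s)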
§ METHODOLOGY
Now we are ready to present our approach for evolution equations (<ref>).
We aim to seek an approximate solution of the following form
u_N(x,t;θ) = ∑_i=0^Nω_i(x;θ)ϕ_i(t),
where {ϕ_i(t)}_i is a pre-specified set of time finite element basis functions, ω_i: ℝ^d→ℝ are modeled by the output of a neural network ω(x,θ):ℝ^d→ℝ^N+1, and θ includes all tunable model parameters.
More precisely, ω(x,θ) is a fully-connected neural network defined as
ω(x;θ) := a^T h_L-1∘ h_L-2∘⋯∘ h_1(x) for x∈ℝ^d,
where L∈ℕ^+, a∈ℝ^M_L-1× (N+1), h_ℓ(x_ℓ) := σ(W_ℓx_ℓ+ b_ℓ) with W_ℓ∈ℝ^M_ℓ× M_ℓ-1 (M_0 := d) and b_ℓ∈ℝ^M_ℓ for ℓ=1,2,⋯,L-1. Then θ := {a, W_ℓ, b_ℓ:1≤ℓ≤ L-1 }.
σ(x) is an activation function which acts on x componentwise to return a vector of the same size as x. We let M_ℓ=M be a fixed number for all ℓ and ℱ_L,M the set consisting of all ω with depth L and width M.
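To make the hybrid representation u_N(x,t;θ) = ∑_i ω_i(x;θ)ϕ_i(t) concrete, the following JAX sketch evaluates it with piecewise-linear hats in time. The network sizes, the time mesh, and the helper names are illustrative assumptions rather than the configurations used later.

import jax, jax.numpy as jnp

def init_mlp(key, sizes):
    # fully-connected network parameters: x in R^d -> R^{N+1}
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n, m)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def hat_basis(t, nodes):
    # piecewise-linear hats phi_i(t) on a uniform time mesh, evaluated at scalar t
    h = nodes[1] - nodes[0]
    return jnp.clip(1.0 - jnp.abs(t - nodes) / h, 0.0, None)   # shape (N+1,)

def omega(params, x):
    # neural-network coefficients omega_i(x; theta)
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return W @ h + b

def u_N(params, x, t, nodes):
    # hybrid form: sum_i omega_i(x; theta) * phi_i(t)
    return jnp.dot(omega(params, x), hat_basis(t, nodes))

# illustrative usage: d = 2 spatial dimensions, N = 8 time elements
nodes = jnp.linspace(0.0, 1.0, 9)
params = init_mlp(jax.random.PRNGKey(0), (2, 64, 64, nodes.size))
value = u_N(params, jnp.array([0.3, 0.7]), 0.25, nodes)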
§.§ A hybrid FEM-PINN method
We consider the following hypothesis space
𝒰_N := {
u_N(x,t;θ) = ∑_i=0^Nω_i(x;θ)ϕ_i(t), ω=(ω_0,⋯,ω_N)∈ℱ_L,M}.
The Galerkin projection along the time direction yields that
Find u_N∈𝒰_N such that (∂_t u_N(x,·), v_N) + (𝒩[u_N](x,·), v_N) = (f(x,·), v_N), ∀ v_N∈span{ψ_j(t)|0≤ j≤ N}, ∀ x∈Ω,
where (·,·) indicates the inner product of two functions with respect to time. More specifically, we have
∑_i=0^N(∂_t ϕ_i(t),ψ_j(t)) ω_i(x;θ) + ( N[∑_i=0^Nϕ_i(t)ω_i(x;θ)],ψ_j(t) )=(f(x,t),ψ_j(t)), ∀ j=0,1, ⋯, N, ∀x∈Ω.
If 𝒩 is linear with respect to ω and time-independent, the above system can be further simplified as follows:
∑_i=0^N(∂_t ϕ_i(t),ψ_j(t)) ω_i(x;θ)+
∑_i=0^N(ϕ_i(t),ψ_j(t))𝒩(ω_i(x;θ))
=(f(x, t),ψ_j(t)), ∀ j=0,1, ⋯, N.
Now let us turn to the collocation projection along the time direction.
Let S_t=∪_m=1^M̂{s_m,k}_k=1^K, where s_m,k is defined in equation (<ref>). The collocation formulation can be written as
Find u_N ∈𝒰_N such that
∂_t u_N(x,s;θ) + 𝒩[u_N](x,s;θ) = f(x, s), ∀ s∈ S_t, ∀x∈Ω.
More specifically,
∑_i=0^N∂_t ϕ_i(s) ω_i(x;θ) + 𝒩[∑_i=0^Nϕ_i(s)ω_i(x;θ)] = f(x,s), ∀ s∈ S_t, ∀x∈Ω.
The Galerkin and collocation projections yield respectively two systems of PDEs for ω_i(x;θ). Due to the hybrid form of u_N(x,t;θ), all integrals for the Galerkin projection in the time direction can be done accurately by Gaussian quadrature formulas. Since the temporal basis functions are polynomials, collocation projection is also effective <cit.>. We then mainly focus on the integration in the physical space.
We subsequently approximate the PDE systems (<ref>) and (<ref>) in the framework of PINNs.
More specifically, we consider the following minimization problem
min_θℒ(θ) = ℒ_r(θ) + γ_1 ℒ_ic(θ) + γ_2 ℒ_bc(θ),
where
ℒ_ic(θ) = ‖ u(x,0;θ) - g(x)‖ _L_2(Ω)^2, ℒ_bc(θ) = ‖ℬ[u](x,t;θ) - b(x,t)‖ _L_2(∂Ω× [0,T])^2,
with 0<γ_1,γ_2<∞ being penalty parameters. For system (<ref>), we define
ℒ_r(θ) = ∑_j=0^Nℒ^g_r,j(θ),
ℒ^g_r,j(θ) = ‖∑_i=0^N(∂_t ϕ_i(t),ψ_j(t)) ω_i(x;θ)+ ( N[∑_i=0^Nϕ_i(t)ω_i(x;θ)],ψ_j(t) )-(f(x, t),ψ_j(t))‖^2_L_2(Ω)=r_j^g(x;θ)^2_L_2(Ω).
For system (<ref>), we define
ℒ_r(θ) = ∑_j=1^|S_t|ℒ^c_r,j(θ),
ℒ^c_r,j(θ) = ‖∑_i=0^N∂_t ϕ_i(s_j)ω_i(x;θ) + 𝒩[∑_i=0^Nω_i(x;θ)ϕ_i(s_j)] - f(x, s_j)‖^2_L_2(Ω)=r_j^c(x;θ)^2_L_2(Ω),
where we order all collocation points in S_t as s_j with j=1,…,|S_t|.
We note that |S_t|≥ (N+1) in general since there are N+1 time finite element basis functions. For simplicity, we let |S_t|=N+1 and consider the following form
ℒ_r(θ) = ∑_j=0^Nℒ_r,j(θ), ℒ_r,j(θ) = r_j(x;θ)^2_L_2(Ω),
which are shared by both the Galerkin and collocation projections, i.e., r_j=r_j^g or r_j^c.
The loss functional (<ref>) is usually discretized numerically before the optimization with respect to θ is addressed. In practice, one often chooses uniformly distributed collocation points S_r = {S_r,j}_j=0^N={{x_r,j^(i)}_i=1^N_r,j}_j=0^N, S_ic={x_ic^(i)}_i=1^N_ic on Ω and S_bc = {(x_bc^(i), t_bc^(i))}_i=1^N_bc on ∂Ω×[0,T] for the discretization of the three terms in the objective functional (<ref>), leading to the following empirical loss
ℒ(θ) = ∑_j=0^N‖ r_j(x;θ)‖^2_N_r,j + γ̂_1 ‖ℬ[u](x,t;θ) - b(x,t)‖ _N_bc ^2 + γ̂_2 ‖ u(x,0;θ) - g(x)‖^2_N_ic,
where 0<γ̂_1, γ̂_2<∞, and
‖ u(x) ‖_N_r,j = (1/N_r,j∑_i=1^N_r,ju^2(x_r,j^(i)))^1/2, ‖ u(x,0)‖ _N_ic = (1/N_ic∑_i=1^N_icu^2(x_ic^(i),0))^1/2, ‖ u(x,t)‖_N_bc = (1/N_bc∑_i=1^N_bcu^2(x_bc^(i), t_bc^(i)))^1/2.
We then seek an estimator θ̂ by minimizing the empirical loss (<ref>) via stochastic gradient descent methods, i.e.,
θ̂=_θℒ(θ).
As suggested by <cit.>, we can also employ a time-marching strategy to reduce optimization difficulties. Specifically, we partition the temporal domain [0,T] into sub-domains [0,Δ t], [Δ t, 2Δ t],⋯, [T-Δ t, T].
Neural networks are then trained on each sub-domain successively with the initial conditions given by the same model trained on previous sub-domains.
The schematic of the proposed approach is shown in Figure <ref>, and the corresponding algorithm is summarized as follows.
§.§.§ Some remarks on the hybrid form
PINN is formulated as a least-square method in terms of the hypothesis space given by the neural network. The error of u_N(x,t;θ̂) satisfies
𝔼u_exact(x,t)-u_N(x,t;θ̂)≤u_exact(x,t)-u_N(x,t;θ^*)+𝔼u_N(x,t;θ^*)-u_N(x,t;θ̂)
for a proper norm, where θ^* is the minimizer of the loss functional ℒ(θ), θ̂ is the minimizer of its empirical (discretized) counterpart, and the expectation 𝔼[·] is with respect to random samples. On the right-hand side, the first term is the approximation error determined by the hypothesis space and the second term is the statistical error introduced by the random samples.
It is well known that PINN may fail to predict convection when the frequency is large although the hypothesis space is capable of yielding a good approximate solution <cit.>. According to the inequality (<ref>), the reason is twofold: the non-convex structure of the loss landscape and the statistical error. First, the best approximation may not be obtained due to non-convexity of the loss function even the statistical error is zero. The change of the loss landscape can be achieved by adding a regularization term. For example, bc-PINNs have a penalty term to force the model to remember what was learned before <cit.>. Second, the available strategies that improve the performance of PINNs for time integration can be understood through the reduction of statistical error. Assume that N_t random samples are used in the time direction. The most straightforward strategy is to divide the time interval as [0,T]=∪_i=0^n-1[iΔ T,(i+1)Δ T] with Δ T=T/n and train the model sequentially in each time segment. After such a decomposition, the variation in time is reduced implying that the Monte Carlo approximation of the loss given by the random samples is more accurate due to variance reduction. The better the loss is discretized by random samples, the smaller the statistical error is. Another strategy, called causal training, is proposed in <cit.>. A weighted residual loss function is defined as
ℒ_r(θ)=1/N_t∑_i=1^N_tλ_iℒ_r(t_i,θ),
where ℒ_r(t_i,θ) is the residual loss at t=t_i, and
λ_i=exp(-ϵ∑_j=1^i-1ℒ_r(t_j,θ)), i=2,3,…,N_t
with 0<ϵ<∞. The intuition is that the residual at later times is not emphasized until the model is well trained at the earlier times t_i, which is consistent with the causality induced by evolution. Note that
1/N_t∑_i=1^N_tλ_iℒ_r(t_i,θ)≈1/T∫_t λ(t)ℒ_r(t,θ)dt
is the Monte Carlo approximation of a weighted loss with N_t uniform random samples. If λ(t)>0 and the exact solution is included in the hypothesis space, the same θ^* will be reached. λ(t) is a decreasing function by definition while ℒ_r(t,θ) is in general an increasing function due to the accumulation of errors with time. If λ(t) and ℒ_r(t,θ) are well balanced, their product varies much less in time, corresponding to a small variance in terms of the uniform distribution. Such a balance is mainly achieved by the selection of the so-called causality parameter ϵ. If ϵ fails to introduce a variance reduction for λ(t)ℒ_r(t,θ), the statistical error will not be reduced, implying that the training results may get worse. This explains the sensitivity of the training strategy to ϵ.
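For reference, the causal weights can be computed in a few lines; the array L_t below is assumed to hold the residual losses ℒ_r(t_i,θ) at the sorted temporal points, and the value of ϵ is illustrative.

import numpy as np

def causal_weights(L_t, eps=1.0):
    # lambda_1 = 1 and lambda_i = exp(-eps * sum_{j<i} L_r(t_j, theta)) for i >= 2
    partial_sums = np.concatenate([[0.0], np.cumsum(L_t)[:-1]])
    return np.exp(-eps * partial_sums)

# weighted residual loss: mean of lambda_i * L_r(t_i, theta)
# weighted_loss = np.mean(causal_weights(L_t, eps=1e-1) * L_t)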
Based on the above observations, we intend to use the hybrid form (<ref>) to alleviate the difficulties induced by the statistical errors in the time direction. We also note that the coefficients for the time finite element basis functions are given by the outputs of a neural network, which corresponds to learning a set of reduced basis functions in the physical space since the output layer of the neural network is a linear combination of these basis functions.
§.§ Deep adaptive sampling method
Random samples are used for the integration in the physical space. To reduce the statistical errors, we consider the adaptive sampling method <cit.>. For simplicity, we only consider the interior residual ℒ_r(θ), and the time interval is [0,1]. As suggested in <cit.>, we relax the objective function ℒ_r(θ) as:
ℒ_r(θ) = ∑_i=0^Nλ_iℒ_r,i(θ) = ∑_i=0^Nλ_i∫_Ω r_i^2(x;θ)p_i(x)dx≈1/N_r∑_i=0^N∑_j=1^N_rλ_ir_i^2(x^(i)_j;θ),
where λ_i>0, ∑_i=0^Nλ_i=1, the set {x^(i)_j}_j=1^N_r is generated with respect to the probability density function p_i(x)>0 instead of a uniform distribution. We associate ℒ_r,i(θ) with a weight λ_i. The minimizer of ℒ_r(θ) is also the solution to the problem if the exact solution is included in the hypothesis space. To reduce the error induced by the Monte Carlo approximation, we may adjust p_i(x) to make
the residuals r_i^2(x;θ) nearly uniform.
To do this, we refine the training set gradually by adding new samples according to the distribution induced by r_i^2(x;θ^(k)), where k indicates the adaptivity iteration and θ^(k) is the optimal model parameter given by the previous training set. Once the training set is updated, the model will be retrained, based on which the training set will be refined again. In a nutshell, the model and the training set are updated alternately.
The main problem is that we need to obtain new samples from N+1 distributions induced by r_i^2(x;θ^(k)).
To handle this issue, we adopt a time-dependent density estimation strategy. Specifically, we augment the spatial variable in different terms (ℒ_r,i) with an extra dimension s, and consider a set {s_i}_i=0^N of grid points on the time interval.
For the Galerkin approach, s_i can be interpreted as pre-defined nodes of finite element mesh; for the collocation approach, s_i can be viewed as a reordering of pre-defined Gaussian nodes s_m,k (see Section <ref>). We define
a weighted empirical measure
δ_λ(A)=∑_i=0^Nλ_iδ_s_i(A)
for any A⊂[0,1] with δ_s_i being the Dirac measure
and let r(x,s;θ) be an interpolation function satisfying
r(x,s;θ) = r_i(x;θ) if s = s_i, i=0,⋯,N.
Let p_X,S(x,s)=p_X|S(x|s)p_S(s) be a joint PDF. We choose p_S(s)ds=δ_λ(ds).
We have
∫∫ _Ω r^2(x,s;θ)p_X,S(x,s)dxds=∑_i=0^N∫_Ωr_i^2(x;θ)p_X|S(x|s_i)λ_idx,
which is consistent with equation (<ref>) if p_X|S(x|s_i)=p_i(x). Using p_X,S(x,s), the objective functional is discretized as
ℒ_r(θ)≈1/N_r∑_i=1^N_rr^2(x^(i),s^(i);θ),
where {(x^(i),s^(i))}_i=1^N_r are sampled from p_X,S(x,s). We will use a density model with the form p_X|S(x|s)p_S(s) to approximate the distribution induced by r^2(x,s;θ). New samples from the trained density model will be added to the training set for refinement.
§.§.§ Model p_S(s)
Without loss of generality, we assume that s∈[0,1]. We aim to find an invertible transformation z=f(s) such that
p_S(s) = p_Z(f(s))|∇_s f|, Z∼𝒰[0, 1],
where 𝒰 denotes the uniform distribution. We use the bounded polynomial spline layer f_poly <cit.> to parameterize f. Specifically, let 0=l_0<l_1<⋯<l_m-1<l_m=1 be a given partition of the unit interval and {k_j}_j=0^m be the corresponding weights satisfying ∑_j k_j=1. A piecewise linear polynomial can be defined as follows:
p(s) =k_j+1-k_j/l_j+1-l_j(s-l_j) + k_j, ∀ s∈[l_j,l_j+1].
Then the corresponding cumulative probability function f_poly admits the following formulation:
f_poly(s) = k_j+1-k_j/2(l_j+1-l_j)(s-l_j)^2+k_j(s-l_j)+∑_i=0^j-1k_i+k_i+1/2(l_i+1-l_i), ∀ s∈[l_j,l_j+1].
To satisfy ∫_0^1p(s) ds=1, we can model {k_j}_j=0^m as
k_j = exp(k̃_j)/C, ∀ j = 0, …, m,
where θ_f,1 = {k̃_j}_j=0^m are trainable parameters and C is a normalization constant:
C = ∑_i=0^m-1(exp(k̃_i)+exp(k̃_i+1))(l_i+1-l_i)/2.
Notice that the polynomial spline layer f_poly(·;θ_f,1) (<ref>)-(<ref>) yields explicit monotonous expressions, and its inverse can be readily computed. Then an explicit PDF model p_poly(s;θ_f,1) can be obtained by letting f=f_poly, i.e.,
p_poly(s;θ_f,1) = p_Z(f_poly(s))|∇ _sf_poly|.
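As an illustration, a NumPy sketch of this piecewise-linear density and its CDF is given below. The class name and the explicit trapezoid normalization are our own conventions, and sampling by inverting f_poly (which amounts to solving a quadratic per cell) is omitted for brevity.

import numpy as np

class PolySplineDensity:
    """Piecewise-linear density on [0,1] with knots l and trainable logits k_tilde."""
    def __init__(self, l, k_tilde):
        self.l = np.asarray(l, dtype=float)        # 0 = l_0 < ... < l_m = 1
        e = np.exp(np.asarray(k_tilde, dtype=float))
        C = np.sum((e[:-1] + e[1:]) * np.diff(self.l)) / 2.0   # normalization constant
        self.k = e / C                             # nodal density values k_j

    def pdf(self, s):
        # linear interpolation of the nodal values, i.e. the piecewise-linear p(s)
        return np.interp(s, self.l, self.k)

    def cdf(self, s):
        # closed-form CDF f_poly(s): quadratic within each cell plus the area to its left
        s = np.atleast_1d(np.asarray(s, dtype=float))
        cell_area = (self.k[:-1] + self.k[1:]) * np.diff(self.l) / 2.0
        cum = np.concatenate([[0.0], np.cumsum(cell_area)])
        j = np.clip(np.searchsorted(self.l, s, side='right') - 1, 0, len(self.l) - 2)
        lj, kj = self.l[j], self.k[j]
        slope = (self.k[j + 1] - self.k[j]) / (self.l[j + 1] - self.l[j])
        return cum[j] + kj * (s - lj) + 0.5 * slope * (s - lj) ** 2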
§.§.§ Model p_X|S(x|s)
For x∈ℝ^d, we seek an invertible transformation z=f(x, s)∈ℝ^d for any given s such that
p_X|S(x|s) = p_Z|S(z|s)| ∂ f(x,s)/∂x|, Z|S∼𝒰[-1, 1]^d, ∀ s.
Here we employ conditional bounded KR-net f_B-KRnet(·, s) <cit.> to parameterize f(·, s). The basic idea of conditional bounded KRnet is to define the structure of f(x,s) in terms of the Knothe-Rosenblatt rearrangement. The transformation f(·, s), inspired by the Knothe-Rosenblatt (K-R) rearrangement <cit.>, has a lower-triangular structure
z = f(x,s) = [[ f_1(x_1,s); f_2(x_1,x_2,s); ⋮; f_d(x_1,⋯,x_d,s) ]].
The sub-transformations f_1,⋯,f_d consist of polynomial spline layers and coupling layers <cit.>. More details can be found in <cit.>. Let f_B-KRnet(·,s;θ_f,2) indicate the conditional invertible transport map induced by bounded KR-net, where θ_f,2 includes the model parameters. Then an explicit PDF model p_B-KRnet(x,s;θ_f,2) can be obtained by letting f=f_B-KRnet in equation (<ref>)
p_B-KRnet(x|s;θ_f,2) = p_Z(f_B-KRnet(x,s)) |∇_x f_B-KRnet|.
§.§.§ Adaptive sampling approach
Now we model a continuous joint density distribution p_θ_f(x,t)
p_θ_f(x,t) = p_poly(t;θ_f,1) p_B-KRnet(x|t;θ_f,2),
where θ_f = {θ_f,1, θ_f,2}. To seek the "optimal" parameter θ_f, we can minimize the following objective
D_KL(r̂_θ(x,t) || p_θ_f(x,t))
= D_KL(r̂_θ(x,t)|| p_poly
(t;θ_f,1)p_B-KRnet(x|t;θ_f,2))
=∬r̂_θ(x,t) log(r̂_θ(x,t)) dxdt
- ∬r̂_θ(x,t) log( p_poly(t;θ_f,1)p_B-KRnet(x|t;θ_f,2)) dxdt,
where D_KL indicates the Kullback-Leibler (KL) divergence and r̂_θ(x,t)∝ r^2(x,t;θ) is the induced measure by continuous residual squared r^2(x,t;θ)
r(x,t;θ) = ∂_t u_N(x,t;θ) - 𝒩[u_N](x,t;θ) - f(x,t).
The first term on the right-hand side in (<ref>) corresponds to the differential entropy of
r̂_θ(x,t), which does not affect the optimization with respect to
θ_f. So minimizing the KL divergence is equivalent to minimizing the cross
entropy between r̂_θ(x,t) and p_θ_f(x,t)
H(r̂_θ(x,t), p_θ_f(x,t))
= - ∬r̂_θ(x,t) log( p_poly(t;θ_f,1)
p_B-KRnet(x|t;θ_f,2)) dxdt,
Since the samples from r̂_θ(x,t) are not available, we approximate the cross entropy using the importance sampling technique:
H(r̂_θ(x,t),p_θ_f(x,t))≈ -1/N_r∑_i=1^N_rr̂_θ(x_i, t_i)/p_poly(t_i;θ̂_f,1)
p_B-KRnet(x_i|t_i;θ̂_f,2)(log p_poly(t_i;θ_f,1) + log p_B-KRnet
(x_i|t_i;θ_f,2)),
where
t_i∼ p_poly(·;θ̂_f,1), x_i∼ p_B-KRnet(·|t_i;θ̂_f,2).
The choice of θ̂_f={θ̂_f,1,θ̂_f,2} will be specified as follows.
Once the well-trained parameters θ_f^* = {θ_f,1^*, θ_f,2^*} are obtained, one can refine the initial training set by adding new samples and updating the weights in (<ref>). Specifically, since the residuals r_i(x;θ) in (<ref>) are only needed at the times s_i, we need to construct the corresponding discrete distributions.
For system (<ref>) in Galerkin projection, we use the following discrete distribution
p_dis(s) ={[ ∫_Ωr_i^2(x;θ)dx/∑_i=0^N∫_Ωr_i^2(x;θ)dx, s=s_i,0≤ i≤ N; 0, otherwise. ].
For system (<ref>) in collocation projection, we simply use the following discrete distribution
p_dis(s;θ_f,1^*) = {[ ∫_0^(s_0+s_1)/2p_poly(s;θ_f,1^*)ds, s=s_0,; ∫_(s_i-1+s_i)/2^(s_i+s_i+1)/2p_poly(s;θ_f,1^*)ds, s = s_i,0<i<N; ∫_(s_N-1+s_N)/2^1p_poly(s;θ_f,1^*)ds, s=s_N,; 0, otherwise. ].
Then the weights in (<ref>) can be determined via the discrete distribution p_dis(·;θ_f,1^*).
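For the collocation case, the discrete weights can be obtained by integrating the trained temporal density over the midpoint cells around each node s_i. The sketch below assumes a callable density_cdf, such as the cdf method of the piecewise-linear density sketched earlier.

import numpy as np

def discrete_time_weights(density_cdf, s_nodes):
    # integrate p_poly over [0,(s_0+s_1)/2], [(s_{i-1}+s_i)/2,(s_i+s_{i+1})/2], ..., [(s_{N-1}+s_N)/2, 1]
    s = np.asarray(s_nodes, dtype=float)
    edges = np.concatenate([[0.0], 0.5 * (s[:-1] + s[1:]), [1.0]])
    F = density_cdf(edges)
    return np.diff(F)          # nonnegative weights that sum to 1 by construction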
For collocation projection, it is straightforward to refine the training set. We first generate N_new samples {t_j}_j=1^N_new via the well-trained model p_dis(·;θ^*_f,1), and then generate the corresponding spatial samples x_j∼ p_B-KRnet (·|t_j;θ^*_f,2) for each t_j. After that, we sort the newly generated data into the per-node spatial subsets
x_j ∈ S^(i)_r,new, if t_j = s_i, ∀ i=0,⋯,N, j=1,⋯,N_new.
For Galerkin projection, recall that we need to use the same training set for basis functions with the same support (see (<ref>)). Hence, for each s_i in (<ref>), one first identifies its corresponding basis function and support, denoted here by Ĩ_i. Then we generate spatial samples x_i,k∼ p_B-KRnet(·|t_i,k;θ_f,2^*), where the t_i,k are Gaussian quadrature points in the interval Ĩ_i. The total number of generated spatial points for each s_i should be proportional to the discrete weight p_dis(s_i). Again, we sort the newly generated data into the per-node spatial subsets
x_i,k∈ S^(i)_r,new, ∀ i=0,⋯,N.
We are now ready to present our algorithms.
Let {S_r,0^(i)}_i=0^N, S_ic and S_bc be three sets of collocation points that are
uniformly sampled from Ω^N+1, Ω×{0} and ∂Ω× [0,T], respectively.
Using {S_r,0^(i)}_i=0^N, S_ic and S_bc, we minimize the empirical
loss (<ref>) to obtain u(x,t;θ^*, (1)). With
u(x,t;θ^*,(1)), we minimize the cross entropy (<ref>)
to get p_θ^*,(1)_f(x,t), where we use uniform samples for importance
sampling. To refine the training set, a new set
S_r,new={S^(i)_r,new}_i=0^N is generated according to the model p_θ_f^*,(1)(x,t), and new training set can be updated as
S_r,1 = S_r,0∪ S_r,new. Then we continue to update the approximate solution u(x,t;θ^*,(1)) using S_r,1 as the training set, which yields a refined model u(x,t;θ^*,(2)). Starting from k=2, we seek p_θ^*,k(x,t) using the previous approach. We repeat the procedure to obtain an adaptive algorithm (see Algorithm <ref>).
§ NUMERICAL EXPERIMENTS
In this section, we conduct some numerical experiments to demonstrate the effectiveness of the proposed method, including one convection equation, one Allen-Cahn equation, one two-dimensional low-regularity test problem, and two high-dimensional linear and nonlinear problems. Throughout all benchmarks, we employ fully-connected neural networks equipped with hyperbolic tangent activation functions (Tanh) and initialized using the Glorot normal scheme <cit.>. All neural networks are trained via stochastic gradient descent using the Adam optimizer with default settings <cit.> and an exponential learning rate decay with a decay rate of 0.9 every 1,000 training iterations. All experiments are implemented in JAX <cit.>.
In order to test the validity of the method, we use the following relative L_2 error:
err_L_2 = √(∑_i=1^num|u(x_i,t_i;θ) - u(x_i,t_i)|^2)/√(∑_i=1^num|u(x_i,t_i)|^2),
where num represents the total number of test points chosen randomly in the domain, and u(x_i,t_i;θ) and u(x_i, t_i) represent the predicted and the exact solution values, respectively.
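In code, this metric amounts to the following short NumPy sketch:

import numpy as np

def relative_l2_error(u_pred, u_true):
    # relative L2 error over the set of test points, as defined above
    u_pred, u_true = np.ravel(u_pred), np.ravel(u_true)
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)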
§.§ Convection equation
We start with a one-dimensional linear convection equation of the form
∂ u/∂ t + β∂ u/∂ x = 0, (x,t)∈ [0, 2π] × [0,1],
subject to periodic boundary conditions and an initial condition u(0,x)=sin(x).
The precise solutions for varying values of β are depicted in Figure <ref>. One can observe that as β increases, the solution exhibits increasingly pronounced temporal variations.
We initially evaluate the performance with linear finite element basis functions in the Galerkin projection. The latent coefficients, denoted as ω_i(x;θ), are represented using a fully-connected neural network with tanh activation function, 4 hidden layers and 128 neurons per hidden layer. To simplify the training objective, we strictly impose the periodic boundary conditions by embedding the input coordinates into a Fourier expansion (see <ref>). Since the above equation is linear, we use the linear form of the loss function (<ref>) and set N_segment=1. We create a uniform mesh of size 400 in the spatial computational domain [0,2π], yielding 400 initial points and 400 collocation points for enforcing the PDE residual. We proceed by training the resulting model via full-batch gradient descent using the Adam optimizer for 40,000 iterations. As shown in Figure <ref>, for fixed N, the relative L_2 error gradually increases as β increases; for fixed β=50, the relative L_2 error exhibits linear convergence with respect to the number of basis functions (N). In particular, when β=30, the proposed method attains a relative L_2 error of 2.85e-3.
Furthermore, we investigate the impact of linear and quadratic finite element basis functions on the performance of the proposed model. Specifically, we set β to 50 and vary the number of mesh elements N, ranging from 32 to 128. We then train the proposed model under same hyperparameter configurations. Figure <ref> and Table <ref> present a summary of the relative L_2 errors observed in the trained models. It is not surprising that the error diminishes when we replace linear basis functions with quadratic ones, in accordance with the classical results of finite element theory.
To investigate the effects of increasing the number of basis functions, we compare the relative L_2 errors given by various settings. It is seen in Table <ref> that relative L_2 error decreases as N increases for all cases.
In Table <ref> we compare the performance of the proposed method with the DABG <cit.>
at different β. One can observe that for these two methods, when β is relatively small, as N increases, the error first decreases and then increases; when β is relatively large, the error consistently decreases as N increases. We conjecture that the reason for this phenomenon is that when β is relatively small (i.e., the solution is smoother), a small N can already achieve good accuracy, while a larger N may lead to greater optimization challenges. Another message from Table <ref> is that the proposed approach exhibits reduced sensitivity to temporal frequency variations in the solution compared to DABG.
We also compare the performance and computational time of the proposed method with causal PINNs. We set the same hyperparameters for the two methods, including the architecture of the neural network, the initial learning rate, the decay rate, the number of training points in space, and the number of iterations. The ϵ in causal PINNs is set to {1e-2, 1e-1, 1e0, 1e1, 1e2}. All runtime statistics were computed on the same hardware, an NVIDIA Tesla V100 with 32 GB of memory. It is shown in Table <ref> that as β increases, the proposed method demonstrates superior accuracy compared to causal PINNs. The proposed method (Galerkin projection) runs approximately 100 times faster than causal PINNs. Such a significant difference in efficiency is due to the distinct computational graphs adopted by these two methods. For the proposed method, because the residual originates from the linear system defined in (<ref>), we can pre-calculate the finite element sparse matrix along the time direction. Consider the computational cost of automatic differentiation in calculating the residual: for any given spatial point x, causal PINNs need n back-propagation computations for n temporal points, while our method only requires a single back-propagation computation and one matrix-vector multiplication of the above-mentioned finite element sparse matrix and the neural network output.
§.§ Allen-Cahn equation
The next example aims to illustrate the effectiveness of time marching technology in our proposed method. Consider the following Allen-Cahn equation
[ u_t -c_1^2 ∇ ^2 u +f(u) = 0, x∈[-1, 1],t∈ [0,1],; f(u) = c_2 (u^3 - u),; u(x,0) = x^2 cos(π x),; u(1,t) = u(-1, t),; u_x(1, t) = u_x(-1, t), ]
where c_1^2=0.0001 and c_2=5.
We take the number of linear mesh elements as 100 and represent the latent coefficients {ω_i(x;θ)} by a fully-connected neural network with tanh activation function, 4 hidden layers and 128 neurons per hidden layer. Similarly, we strictly impose the periodic boundary conditions by embedding the input coordinates into Fourier expansion (see <ref>). We create a uniform mesh of size 1,000 in the spatial computational domain [-1, 1], yielding 1,000 initial points and 1,000 collocation points for enforcing the PDE residual. We proceed by training the resulting model via full-batch gradient descent using the Adam optimizer for
a total of 100,000 iterations. The resulting L_2 errors for different numbers of time segments are shown in Table <ref>. We observe that the relative L_2 error decreases significantly as the number of time segments increases. One can see that the predicted solution achieves excellent agreement with the ground truth, yielding an approximation error of 7.67e-3 measured in the relative L_2 norm.
In addition, we employ quadratic basis functions and set N to 30. All other parameters remain consistent with the previous configuration. In Table <ref> similar results are observed compared to the previous scenario. Moreover, we also present a representative predicted solution in Figure <ref>, one can see that the predicted solution is in good agreement with the reference solution.
We summarize some relative L_2 errors for N_segment=2 in
Table <ref>. We observe that the collocation framework with piecewise spline functions achieves the best accuracy. Moreover, we compare the Galerkin projection with causal PINNs in Table <ref>. Again, we set the same hyperparameters for the two methods. The ϵ in causal PINNs is set to {1e-2, 1e-1, 1e0, 1e1, 1e2}. It is shown that the accuracy of these two methods is comparable. Note that, due to the presence of the nonlinear term f(u), we must employ a Gaussian quadrature formula of a sufficient degree of exactness for the Galerkin projection in the time direction. Even so, the proposed method runs approximately 40 times faster than causal PINNs.
§.§ High-dimensional linear problem
To demonstrate the effectiveness of the proposed approaches in tackling high-dimensional PDEs, we consider the following parabolic equation with non-constant coefficients
u_t - ∇· (a(x)∇ u)=f, Ω× (0,1],
u(x,0)=0, Ω,
u = 0, ∂Ω× [0,1],
with a(x)=1+ |x|^2/2. The domain is set to be the 20-D unit ball Ω = {x∈ℝ^20| |x|<1}, and the true solution is set to be
u(x,t) = sin(sin (2πω t)(|x|^2 - 1)).
We set ω to be 3 and N_segment to be 1. For simplicity, here we only test the performance of the proposed approach with quadratic finite element basis functions. We represent the latent coefficients {ω_i(x;θ)} by a fully-connected neural network with activation function tanh, 5 hidden layers, and 128 neurons per hidden layer. We impose exactly the Dirichlet boundary conditions by transforming the output into the following form:
ω_i(x;θ) = ω̂_i(x;θ) (|x| -1), where ω̂_i denotes the raw network output.
In addition, we set ũ_0(x;θ) to be zero such that the initial conditions are exactly satisfied. To obtain a set of training data for evaluating PDE residual, we randomly sample 100,000 collocation points in Ω. Since the problem is linear, equation (<ref>) is used to define the loss. We set the size of mini-batch to 5,000 and train the model via mini-batch stochastic descent with the Adam optimizer for 40,000 iterations. The corresponding results for different numbers of mesh elements are summarized in Table <ref>. We observe that the resulting relative L_2
error is 9.67e-4, which is more accurate than the ones in recent work <cit.> (3.07e-3 for the deep adaptive basis Galerkin approach and 4.54e-2 for the PINNs).
§.§ High-dimensional nonlinear problem
In this case, we solve the Allen-Cahn equation
u_t - Δ u + u^3 - u = f_AC, Ω× [0,1],
u(x,0) = 0, Ω,
u = 0, ∂Ω× [0,1].
The domain is set as Ω = {x∈ℝ^20: |x|<1}, and the true solution is set to be
u(x,t) = sin(sin(2πω t)(|x|^2 -1)).
Here we set ω to be 3 and N_segment to be 1. For simplicity, we only test the performance of the proposed approach with quadratic finite element basis functions. We represent the latent coefficients {ũ_i(x;θ)} by a fully-connected neural network with tanh activation function, 5 hidden layers and 128 neurons per hidden layer. The initial condition and the Dirichlet boundary condition can be exactly embedded into the network structure (as described in Section <ref>). We set K in equation (<ref>) to be 30 and randomly sample 100,000 collocation points for evaluating the PDE residual. We set the mini-batch size to 5,000 and proceed by training the resulting model via stochastic gradient descent using the Adam optimizer for 40,000 iterations. The obtained errors for different numbers of finite elements N are shown in Table <ref>. The resulting relative L_2 error is 1.16e-3, which is more accurate than the recent ones in <cit.> (2.07e-3 for the deep adaptive basis Galerkin approach and 7.79e-2 for the PINNs).
§.§ Low regularity test problems
Our last example aims to demonstrate the effectiveness of the proposed adaptivity strategy. We consider the following low-regularity equation
u_t - Δ u =f Ω× (0,T]
u(x_1, x_2;0) = g(x_1, x_2) Ω×{0},
u(x_1, x_2;t) = h(x_1,x_2;t) ∂Ω× (0,T].
where Ω= [0, 1]^2, T=0.5 and the true solution is chosen as
u(x_1,x_2,t)=exp(-β[(x_1 - t - 1/4)^2 + (x_2 - t - 1/4)^2]).
This solution has one peak at the point (t+1/4, t+1/4) and decreases rapidly away from (t+1/4,t+1/4), see Figure <ref>.
We first consider the case with β=200. We represent the latent coefficients
{ω̃_i(x;θ)} by a fully-connected neural network with
activation function tanh, 5 hidden layers and 128 neurons per hidden layer.
To simplify the training objective, we precisely embed the Dirichlet
boundary conditions into the neural network architecture through the
following transformation
ω̃_i(x;θ) = x_1x_2(1-x_1)(1-x_2) ω̂_i(x;θ), where ω̂_i denotes the raw network output.
For B-KRnet, we take 8 CDF coupling layers, and two fully connected layers with 32
neurons for each CDF coupling layer. The corresponding activation function is tanh. To assess the effectiveness of our adaptive sampling
approach, we generate a uniform meshgrid with size 256× 256×
100 in [0,1]^2× [0,0.5] and compute relative L_2 error on these grid points.
For collocation projection, we set the number of piecewise cubic spline basis functions to 20, and randomly sample 1,000 collocation points in Ω as our initial training set. We let N_adaptive=5 and
N_new=500. The number of epochs for training
u(x,t;θ) and p_θ_f(x,t) is set to 40,000 and
5,000, respectively.
In Figure <ref>, we plot the approximation errors with respect to the adaptivity iterations. It is clearly seen that the error rapidly decreases as the adaptivity iteration step k increases.
Figure <ref> shows the evolution of the temporal distribution p_dis(t)
for k=1,3,5. It is seen that the largest density at the first adaptivity iteration is around the terminal time. After the training set is refined,
the error profile becomes flatter at k=3 and 5. Similar results are observed in Figure <ref>, which indicates that the
distribution of the residual becomes more uniform as the adaptivity iteration increases.
For Galerkin projection, we set the number of piecewise quadratic basis functions
to 20, the remaining hyper-parameter settings remain the same. The corresponding numerical results, included in <ref>, are very similar to the results for collocation projection.
We also investigate the case when β=1000. Here we take the Galerkin projection as an example.
In Figure <ref>, we compare the performance of uniform sampling and adaptive sampling using the relative L_2 error, which demonstrates the necessity of adaptive sampling for this case.
Figure <ref> shows the evolution of the temporal weights λ_i for k=1,3,5. The observed temporal weights exhibit an alternating high-low pattern, which can be attributed to the properties of the test basis functions in the time dimension. One also finds that the error profiles become flatter as the adaptivity iterations increase.
Figure <ref> shows the
generated samples of bounded KRnet with respect to adaptivity iterations k=1, 5.
It is seen that the largest density of generated samples at the first update is
around the peak of the reference solution, which is consistent with the residual-induced
distribution. Moreover, as k increases, we expect the tail of the residual-induced
distribution to become heavier since the adaptivity tries to make the residual-induced
distribution more uniform. This is illustrated by the last update in Figure
<ref>, which is consistent with previous findings
reported in <cit.>.
§ CONCLUSION
In this paper we have developed a hybrid numerical method for solving evolution partial differential equations
(PDEs) by merging the time finite element method with deep neural networks. The key idea of the proposed method is to represent the
solution as a tensor product comprising a series of
predetermined local finite element basis functions in
time and a sequence of unspecified neural networks in
space. Subsequently, we apply the Galerkin or collocation
formulation to the original evolution equation, eliminating
the temporal variable and resulting in a differential system
exclusively involving unknown neural networks with respect to
the spatial variable. Furthermore, to address the evolution
problems characterized by high dimensionality and low regularity, we have developed an adaptive
sampling strategy where the training set and the model are updated alternately to improve both efficiency and accuracy.
Numerical experiments have demonstrated the effectiveness of our proposed approaches.
Several issues deserve further investigation. First, a rigorous error analysis is still missing especially for the collocation projection. Second, while herein we only consider linear and quadratic finite element methods over a uniform mesh, the strategy can also be generalized to hp-finite element method in the temporal domain. Finally, the proposed methods need to be further investigated for convection-dominated problems.
§ ACKNOWLEDGEMENTS
T. Tang was partially supported by the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science under UIC 2022B1212010006 and National Natural Science Foundation of China (Grants Nos. 11731006 and K20911001).
§ EXACT PERIODIC BOUNDARY CONDITIONS
Following the work from <cit.>, we can exactly enforce C^∞ periodic boundary conditions by applying a Fourier feature encoding to the spatial input of the network. The spatial encoding is
v(x) = { 1,cos(ω x),sin(ω x) ,⋯,cos(Mω x),sin(Mω x)}
where ω =2π/L, L = x_max - x_min, and M is a non-negative integer setting the highest sinusoidal frequency of the encoding. In this work, we take M=10.
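A minimal sketch of this encoding, with the domain endpoints and M as inputs, could read:

import numpy as np

def periodic_encoding(x, x_min=0.0, x_max=2.0 * np.pi, M=10):
    # v(x) = {1, cos(w x), sin(w x), ..., cos(M w x), sin(M w x)}, w = 2 pi / L
    w = 2.0 * np.pi / (x_max - x_min)
    feats = [np.ones_like(x)]
    for m in range(1, M + 1):
        feats.extend([np.cos(m * w * x), np.sin(m * w * x)])
    return np.stack(feats, axis=-1)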
§ SUPPLEMENTARY FIGURES
|
http://arxiv.org/abs/2409.02406v1 | 20240904031448 | Hadamard Row-Wise Generation Algorithm | ["Brayan Monroy", "Jorge Bacca"] | cs.DS | ["cs.DS", "cs.CC", "cs.CV"] |
Hadamard Row-Wise Generation Algorithm
Brayan Monroy & Jorge Bacca
Universidad Industrial de Santander, Bucaramanga, Colombia
<https://github.com/bemc22/hadamard-spc>
September 9, 2024
===========================================================================================================================================
§ ABSTRACT
We present an efficient algorithm for generating specific Hadamard rows, addressing the memory demands of pre-computing the entire matrix. Leveraging Sylvester's recursive construction, our method generates the required i-th row on demand, significantly reducing computational resources. The algorithm uses the Kronecker product to construct the desired row from the binary representation of the index without creating the full matrix. This approach is particularly useful for single-pixel imaging systems that need only one row at a time.
§ METHOD
Computing the i-th row h_i of a Hadamard matrix H of order 2^n usually requires pre-computing the entire matrix. This process can be memory-intensive, particularly when n is large. However, in certain applications, such as Hadamard single-pixel imaging, only individual Hadamard rows are needed at any given time <cit.>. To address this, we have developed an algorithm for row-wise generation that calculates the specific coefficients of the i-th row without generating the entire matrix. Specifically, following Sylvester's construction, a Hadamard matrix of order 2^n can be recursively constructed from a base matrix of order two and Kronecker products as follows
H_2^n =
[ H_2^n-1 H_2^n-1; H_2^n-1 - H_2^n-1 ] = H_2 ⊗H_2^n-1,
with 2 ≤ n ∈ℕ, where ⊗ denotes the Kronecker product. Hence, each row of the Hadamard matrix of order 2^n can be expressed as the Kronecker product of n copies of the first or second row of the Hadamard matrix of order 2. The sequence of Kronecker products is derived from the binary representation i_10 = (b_n-1b_n-2… b_1 b_0)_2 of the row index i as follows
h_i = ⊗_k=n-1^0 h_b_k = h_b_n-1⊗h_b_n-2⊗⋯⊗h_b_1⊗h_b_0
In this context, we present Algorithm <ref>, which involves setting the base 2-order Hadamard matrix and mapping the specified index to its binary representation. An iterative loop processes the binary digits to select the appropriate rows of the 2-order Hadamard matrix, ultimately constructing the desired Hadamard row via the cumulative Kronecker product, as presented in Figure <ref>. Additionally, this algorithm can be adapted to other Hadamard ordering strategies, as it relies mainly on permuting the ordering indexes. The code implementation is available on GitHub.
It is important to note that the generation of a Hadamard matrix row without constructing the entire matrix is less discussed in the literature, although it is a natural extension of their recursive nature, as proposed in <cit.>. We believe that the algorithm provided contributes to a more detailed documentation of this strategy.
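A compact NumPy sketch of the row-wise generation is given below; the function and variable names are ours, independent of the released implementation, and the final block cross-checks the output against the full Sylvester construction for a small order.

import numpy as np

def hadamard_row(i, n):
    """Return the i-th row of the 2**n-order Sylvester Hadamard matrix without
    building the full matrix, via Kronecker products selected by the bits of i."""
    H2 = np.array([[1, 1], [1, -1]])
    bits = [(i >> k) & 1 for k in range(n - 1, -1, -1)]   # most-significant bit first
    row = np.array([1])
    for b in bits:
        row = np.kron(row, H2[b])
    return row

# sanity check against the full Sylvester construction (illustrative usage)
if __name__ == "__main__":
    n = 4
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    assert all(np.array_equal(hadamard_row(i, n), H[i]) for i in range(2 ** n))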
Computational complexity. The computational complexity of Algorithm <ref> is determined by the cost of the n successive Kronecker products with vectors of size 2. For each Kronecker product between the vector of size 2^k-1 (resulting from the previous k-1 Kronecker products) and a vector of size 2, the number of multiplications required is 2 × 2^k-1 = 2^k. Thus, the total computational complexity 𝒞(n) for performing n Kronecker products is the sum of the number of multiplications over all steps
𝒞(n) = 2^1 + 2^2 + 2^3 + ⋯ + 2^n = ∑_k=1^n 2^k,
which consists of a geometric series that can be simplified as 𝒞(n) = 2^n+1 - 2, with the dominant term in 𝒞(n) being 2^n+1, so the asymptotic computational complexity of Algorithm <ref> is 𝒪(2^n+1) ∼𝒪(2^n).
|
http://arxiv.org/abs/2409.02360v1 | 20240904011227 | A Comprehensive Study of Open Cluster Chemical Homogeneity using APOGEE and Milky Way Mapper Abundances | ["Amaya Sinha", "Gail Zasowski", "Peter Frinchaboy", "Katia Cunha", "Diogo Souto", "Jamie Tayar", "Keivan Stassun"] | astro-ph.GA | ["astro-ph.GA", "astro-ph.SR"] |
Amaya Sinha (ORCID 0009-0005-0182-7186), Department of Physics & Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112, USA
Gail Zasowski (ORCID 0000-0001-6761-9359), Department of Physics & Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112, USA
Peter Frinchaboy (ORCID 0000-0002-0740-8346), Department of Physics & Astronomy, Texas Christian University, Fort Worth, TX 76129, USA
Katia Cunha (ORCID 0000-0001-6476-0576), Steward Observatory, University of Arizona, Tucson, AZ 85721, USA; Observatório Nacional, Rua General José Cristino, 77, 20921-400 São Cristóvão, Rio de Janeiro, RJ, Brazil
Diogo Souto (ORCID 0000-0002-7883-5425), Departamento de Física, Universidade Federal de Sergipe, Av. Marcelo Deda Chagas, Cep 49.107-230, São Cristóvão, SE, Brazil
Jamie Tayar (ORCID 0000-0002-4818-7885), Department of Astronomy, University of Florida, Bryant Space Science Center, Stadium Road, Gainesville, FL 32611, USA
Keivan Stassun (ORCID 0000-0002-3481-9052), Department of Physics and Astronomy, Vanderbilt University, VU Station 1807, Nashville, TN 37235, USA
§ ABSTRACT
Stars in an open cluster are assumed to have formed from a broadly homogeneous distribution of gas, implying that they should be chemically homogeneous. Quantifying the level to which open clusters are chemically homogeneous can therefore tell us about ISM pollution and gas-mixing in progenitor molecular clouds. Using SDSS-V Milky Way Mapper and SDSS-IV APOGEE DR17 abundances, we test this assumption by quantifying intrinsic chemical scatter in up to 20 different chemical abundances across 26 Milky Way open clusters.
We find that we can place 3σ upper limits on open cluster homogeneity within 0.02 dex or less in the majority of elements, while for neutron capture elements, as well as those elements having weak lines, we place limits on their homogeneity within 0.2 dex. Finally, we find that giant stars in open clusters are ∼0.01 dex more homogeneous than a matched sample of field stars.
§ INTRODUCTION
Immediately after the Big Bang, the only elements in the universe were hydrogen, helium, and trace lithium. It took the formation of stars and galaxies to populate the universe with the rest of the periodic table. Therefore understanding where and how stars produce and disperse heavy elements is essential to understanding the enrichment of the universe. However, many questions regarding the chemical enrichment of the universe still remain unanswered. Specifically, there is still much uncertainty on how well-mixed giant molecular clouds are or how heavy elements get from their production sites into stars <cit.>. Fortunately, with a few exceptions, the surface abundances of stars are fossil records of the gas composition from the molecular cloud in which they formed. As a result, we can use the present-day chemistry of stars to learn about the chemistry of the Milky Way in the past.
In the age of large astronomical surveys such as GALAH <cit.>, LAMOST <cit.>, RAVE <cit.>, APOGEE <cit.>, and Gaia <cit.>, we can now probe the chemistry of stars in the Milky Way on the scale of ∼ 0.1 dex or smaller in multiple elements across different nucleosynthetic families, allowing us to trace different chemical enrichment pathways. Furthermore, we can now study the chemistry of the Milky Way at multiple different scales, from the simplest population in conatal binaries <cit.> to large populations of dispersed field stars <cit.>.
Stars in an open cluster (OC) are assumed to have formed from a broadly homogeneous distribution of gas at the same time, implying that they should all have the same age and be at the same distance <cit.>. Using the chemistry of OC stars, we can infer the chemistry of the gas available at that point in the Milky Way's history, in particular within the thin disk. In the past, it has been suggested that using assumptions of chemical homogeneity from simple stellar populations like open clusters, it would be possible to reconstruct a dissolved cluster purely by its members' chemistry <cit.>. This technique, known as chemical tagging, has been a strong motivator for studies of cluster chemistry.
While many studies support this assumption of OC chemical homogeneity <cit.>, there has been work showing that at least some clusters are chemically inhomogeneous <cit.>. For example, <cit.> argued that NGC 6791 may not be chemically homogeneous, due to the presence of a potential Na-O anti-correlation, a relationship most commonly found in globular clusters <cit.>. This would be an exciting result as NGC 6791 already unique in the Milky Way, as both the most massive and the most metal rich open cluster. However other studies, such as <cit.>, have shown that it is chemically homogeneous within measurement uncertainties. For a more detailed discussion on previous studies of open cluster chemical homogeneity, see Section <ref>.
There are reasons why OCs could be heterogeneous in specific elements. Slow neutron capture element abundances, such as Sr, Ba, and Zr, can change over a star's lifetime as it enters the AGB phase of its evolution <cit.>. Dredge-up, which occurs in stars on the giant branch, causes the star's convective envelope to expand, and it eventually gets deep enough to pull CNO-cycled elements to the surface, thereby altering the surface abundances we measure <cit.>. NGC 6705 is an interesting OC regarding this effect, as it has been observed to also be enhanced in Na due to dredge up <cit.>.
The surface abundances of elements such as Mg can be affected by effects like mass transfer and atomic diffusion <cit.>. However, the latter only weakly impacts the upper giant branch. Lastly, mass transfer <cit.> and pollution events such as planetary engulfment <cit.>
can also alter a star's surface abundances.
However, if an open cluster were measured to have nonzero chemical scatter even after accounting for these factors, that could point to interesting and understudied physics that may have occurred during the formation of the OC. Simulations have shown that turbulent mixing during cloud assembly naturally produces a stellar abundance scatter that is ∼ 5 times smaller than that in the natal gas, and <cit.> suggest that this mixing could explain the observed chemical homogeneity of stars forming from the same molecular cloud. This is supported by recent work by <cit.>, who find open clusters in FIRE-2 simulations <cit.> to have chemical scatter within 0.02 dex on average.
However, chemical inhomogeneity in real clouds could be due to effects not fully captured by simulations, related to internal turbulence and gas mixing within the progenitor molecular cloud or pollution events such as core collapse supernovae (CCSNe) that occurred earlier in the cluster's lifetime <cit.>.
Quantifying the level of chemical homogeneity in open clusters across a broad set of elements from various nucleosynthetic families would provide the basis for understanding the physics of early OC formation.
This work aims to constrain the chemical homogeneity in a large set of abundances and clusters to disentangle the causes of those chemical variances. The structure of the paper is as follows: Section <ref> outlines the survey data, verification of the abundance uncertainties, and determination of the cluster membership. Section <ref> details the methodology and calculation of the intrinsic scatter within each [X/Fe] across the final cluster sample. Section <ref> presents the results of our work, and Section <ref> compares our results to previous findings.
§ DATA
§.§ SDSS
§.§.§ SDSS-V/MWM
The abundances and radial velocities (RVs) we use are primarily drawn from the Milky Way Mapper (MWM; J.A. Johnson, in prep), a component of the fifth generation of the Sloan Digital Sky Survey <cit.>. We use data from Internal Product Launch 3 (IPL-3), which will form the basis for SDSS DR19 (K. Hawkins, in prep). This dataset builds off of
the observing strategies and survey goals outlined in SDSS Data Release 18 and includes observations of over a million targets <cit.>.
SDSS-V/MWM uses two telescopes: the Sloan Foundation Telescope at APO <cit.> and the duPont Telescope at LCO <cit.>. Both are outfitted with nearly identical custom-built 300-fiber APOGEE spectrographs <cit.>, which reach a resolution of R ∼ 22,500, spanning the range of wavelengths between 1.51-1.70 μm. Unlike in SDSS-IV, which used a plug-plate system, SDSS-V now uses robotic fiber positioners <cit.>, which benefited from the adoption of a three-element corrector for the Sloan telescope at APO <cit.>.
Within IPL-3, three different data pipelines were used to analyze the data taken from APO and LCO:
The Payne <cit.>, The Cannon <cit.>, and the APOGEE Stellar Parameters and Abundances Pipeline <cit.>. Both The Payne and The Cannon are label-transfer methods that determine stellar labels from spectra after being trained on a set of spectra with known labels. They are differentiated by the fact that The Cannon is a data-driven model which requires no direct information about stellar models; rather, measurements from The Cannon inherit information from the models underlying its training data. The Payne, in contrast, incorporates physical models directly into its analysis. While these datasets are similar in many aspects, comparing the limits derived through each of them will provide a stronger constraint on the true homogeneity of the OCs in our sample.
§.§.§ SDSS-IV/APOGEE
In addition to IPL-3, we use abundances and RVs from the seventeenth and final data release <cit.> of SDSS-IV's <cit.> Apache Point Observatory Galaxy Evolution Experiment <cit.>, which contains over 700,000 stars. The initial targeting strategy for APOGEE-1 and APOGEE-2 are outlined in <cit.> and <cit.>, respectively, and the final targeting for APOGEE-2 is outlined in <cit.> for APOGEE-2 north, and <cit.> for APOGEE-2S.
The details for the APOGEE data reduction pipeline are described in <cit.>, and the details for ASPCAP are found in <cit.>. The description for the updates to these pipelines for DR17 is included in Holtzman et al. (in prep). The MARCS model atmospheres and interpolation methodology used in APOGEE are described by <cit.> and <cit.>. The line lists used for DR17 are outlined in <cit.>, and the spectral fitting used for ASPCAP is described in <cit.>. The details describing the APOGEE spectral grids can be found in <cit.>, <cit.>, and <cit.>, and lastly, the details for Turbospectum can be found in <cit.> and <cit.>.
We also include abundances from the BACCHUS Analysis of Weak Lines in APOGEE Spectra <cit.>, a value-added catalog (VAC) in DR17. This VAC provided abundances for several chemical species having weak and blended lines that cannot be reliably analyzed using ASPCAP. This sample consists of high signal to noise (SNR > 150) red giant stars with no flags in either or and analyzed using the BACCHUS code <cit.>, which measures line-by-line elemental abundances from on-the-fly spectral synthesis. High quality measurements are stacked to create a sample of elemental abundances for elements with weak or blended lines. We use the BAWLAS VAC abundances and uncertainties for the following elements: Na, P, S, V, Cu, Ce, and Nd.
Two separate uncertainties are reported for each BAWLAS abundance measurement. One is the describing the measured uncertainty from the combined spectra using the same methodology as ASPCAP. The other is , which describes the uncertainty derived from the spectral lines themselves. Here to remain consistent in our analysis we use as it is the closest to the uncertainty calculation method that we verify in Section <ref>.
The calculation of the abundances in the BAWLAS catalog is outlined fully in <cit.>. Each spectrum has an associated , with values , , or . Both and indicate either suspicion with the final fit or total failure, and only indicates that the spectral fit is trustworthy. To ensure the highest quality sample, we require all the stars used in this study to have measurements with = 1.
§.§.§ Quality Cuts
We limit our sample in APOGEE DR17 to stars with 0.1 km s^-1 and 5000 km s^-1 to ensure our stars have reliable radial velocities. We also restrict our sample to stars with 1 km s^-1, to remove potential binaries within our sample <cit.>. Here refers to the average radial velocity derived from individual RVs that are drawn from cross-correlation of individual spectra with combined spectrum. refers to the uncertainty on that radial velocity, and refers to the scatter of individual visit RVs around the average. To ensure the sample has reliable measurements we enforce a . These limits are identical between DR17 and IPL-3. We also limit our sample to stars between 3000 K and 6500 K, and -1 ≤ [Fe/H] ≤ 1.
We also exclude from our sample any stars that have [X/Fe] flags in more than 2 elements. Lastly, we only sample from stars with in order to ensure that every star we study is a member of the giant branch. These requirements result in a sample of 305,201 stars. When using IPL-3 we use the corresponding columns and limits, with the exception of which is not included in IPL-3.
In DR17's allStar file, we enforce quality cuts on the and columns, the details of which are included here: <https://data.sdss.org/datamodel/files/APOGEE_ASPCAP/APRED_VERS/ASPCAP_VERS/allStar.html>. The details of the APOGEE bitmasks are located here <https://www.sdss4.org/dr17/algorithms/bitmasks/>. Within the column we enforced the following requirements:
* == 0; BAD overall for star: set if any of , , , , , SN error are set, or any parameter is near grid edge ( is set in any )
* == 0; FERRE failed to return a value for metals.
* == 0; Elemental abundance from window differs > 0.5 dex from parameter abundance for [α/Fe].
and within the we made the following cuts:
* == 0; Star has neighbor more than 10 times brighter.
* == 0; Star has neighbor more than 100 times brighter.
* == 0; WARNING: cross-correlation peak with template significantly broader than autocorrelation of template.
The IPL-3 ASPCAP allStar file does not have a published column. As such, we require that all the stars in our sample have and . For all three IPL-3 pipelines we use we enforce a requirement. Where possible we enforce a requirement within each pipeline. Within the three IPL-3 allStar files we use, we enforce these quality cuts on the column, which has bits that correspond to DR17's column.
§.§ Cluster Membership
We start from the catalog of cluster members published in <cit.>, hereafter CG2018, which contains membership information for over 200,000 stars across ∼2000 OCs using Gaia DR2 <cit.>. Due to the more recent availability of Gaia DR3 <cit.>, we first match the stars identified as cluster members in CG2018 to their DR3 kinematics and positions. CG2018 used two spatial (RA α, DEC β) and two kinematic (proper motion-RA δ_α*, proper motion-DEC δ_β*) parameters, as well as parallax ϖ, as inputs into an unsupervised machine-learning algorithm to determine cluster membership. We limit the initial cluster-membership candidacy to stars within three cluster radii of their cluster centers. Using each cluster's distribution in radial velocity, we find that a minimum probability cut of P ≥ 0.5 in the CG2018 catalog maximizes the overlap in membership between DR17 and IPL-3 while minimizing contamination from non-cluster members. Of the 2019 OCs identified in CG2018, only 145 clusters have any members within both DR17 and IPL-3.
§.§.§ Kinematic Selection
To further ensure that the clusters identified had minimal contamination, we use a Kernel Density Estimator with a variable bandwidth, following Silverman's Rule, to measure the dispersion in four dimensions (radial velocity, δ _α *, δ _β *, and ϖ). We reject stars further than two standard deviations from the cluster median. For an example of this methodology, see Figure <ref>, which shows the final distributions after these cuts in M67. The kinematic selection plots for all clusters in our sample can be found in Appendix <ref>.
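A minimal stand-in for this kinematic selection is sketched below: instead of the full variable-bandwidth KDE, it simply rejects candidates more than two standard deviations from the cluster median in the four kinematic dimensions. The array layout and the mock numbers are assumptions for illustration.

```python
import numpy as np

def kinematic_clip(members, n_sigma=2.0):
    """Reject candidate members more than n_sigma from the cluster median
    in radial velocity, both proper motions, and parallax.

    `members` is an (N, 4) array with columns (RV, pmRA, pmDEC, parallax).
    This is a simplified stand-in for the variable-bandwidth KDE selection
    described in the text."""
    med = np.nanmedian(members, axis=0)
    std = np.nanstd(members, axis=0)
    ok = np.all(np.abs(members - med) < n_sigma * std, axis=1)
    return members[ok], ok

# Example with mock data: a tight cluster plus a few interlopers
rng = np.random.default_rng(0)
cluster = rng.normal([34.0, -2.1, 1.3, 0.9], [0.8, 0.05, 0.05, 0.02], size=(50, 4))
field = rng.normal([10.0, 0.0, 0.0, 0.5], [30.0, 2.0, 2.0, 0.3], size=(5, 4))
clean, mask = kinematic_clip(np.vstack([cluster, field]))
print(f"kept {mask.sum()} of {mask.size} candidates")
```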
We plot the [Fe/H] distribution of each cluster against the distribution of nearby non-cluster members between 2–5 cluster radii, as published in CG2018. We visually inspect the HR diagrams of the cluster members against those of the annulus to ensure no contamination, as each cluster should follow a single distinct isochrone. We use MIST isochrones <cit.>, generated using ages from <cit.> and the median cluster [Fe/H]. Lastly, we only select clusters with N ≥ 6 members, resulting in a final sample of 26 open clusters. The determination of this minimum membership limit, and hence the final sample size, is discussed in Section <ref>.
§.§.§ Final Cluster Sample
The distribution of the final cluster sample in radius, age, and [Mg/Fe] versus [Fe/H] can be seen in Figure <ref>.
We calculate the positional, kinematic, and orbital information for each cluster in our final cluster sample. Using Astropy SkyCoords <cit.> we calculate the Cartesian X, Y, Z galactocentric coordinates for all the clusters in our sample, as well as the galactocentric radius.
We integrate each cluster's orbit to measure its Z_max, eccentricity, and guiding radius using Galpy, with the gravitational potential of <cit.>.
While this potential lacks a bar, no member of our sample is close enough to the Galactic center to produce a noticeable difference in the final measured eccentricity, guiding radius, or maximum height above the galactic plane. While there have been measured effects on these parameters due to the Milky Way's spiral arms, as seen in <cit.>, as we do not use these parameters in the final science results, we do not include spiral arms in our potential.
We use a Monte Carlo method with N = 100 iterations to estimate uncertainties on these parameters. We measure the 3D space-velocity dispersion as a proxy for cluster mass and, using the methodology outlined in <cit.>, we quantify the ratio of nucleosynthetic enrichment within each cluster from CCSNe and Type Ia supernovae.
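The sketch below illustrates one way such a Monte Carlo orbit analysis could look with galpy and MWPotential2014; the input uncertainties, time sampling, and exact method names (e.g., rguiding) are assumptions that may need adjusting to the galpy version in use.

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

def orbit_params_mc(ra, dec, dist, pmra, pmdec, vlos, errs, n_mc=100, seed=0):
    """Monte Carlo estimate of (z_max, eccentricity, guiding radius).

    `errs` holds 1-sigma uncertainties in the same order as the inputs
    (deg, deg, kpc, mas/yr, mas/yr, km/s). Sketch only, not the exact
    implementation used in the paper."""
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, 5.0, 1001) * u.Gyr
    draws = []
    for _ in range(n_mc):
        ra_i, dec_i, d_i, pmra_i, pmdec_i, v_i = rng.normal(
            [ra, dec, dist, pmra, pmdec, vlos], errs)
        o = Orbit([ra_i, dec_i, d_i, pmra_i, pmdec_i, v_i], radec=True)
        o.integrate(ts, MWPotential2014)
        zmax = u.Quantity(o.zmax(), u.kpc).to_value(u.kpc)
        ecc = float(u.Quantity(o.e(), u.dimensionless_unscaled))
        rg = u.Quantity(o.rguiding(), u.kpc).to_value(u.kpc)
        draws.append([zmax, ecc, rg])
    draws = np.array(draws)
    return draws.mean(axis=0), draws.std(axis=0)
```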
Lastly, we include age estimates on all our clusters from <cit.>. These ages were derived using an artificial neural network trained on a sample of reference clusters. For further description see <cit.>. At this stage we also flag red clump stars by eye within each cluster. All the cluster parameters are included in Table <ref>, and will be included as a machine readable table.
§.§ Verification of Abundances & Uncertainties
Reported values of intrinsic scatter in OC abundances are on the order of ∼0.01 dex <cit.>, as are the abundance uncertainties within DR17. It is therefore necessary to verify that the uncertainties on the abundances we study are neither underestimated nor overestimated. We follow the same method as <cit.>. Since each star has some true intrinsic abundance, repeated observations of the same star should produce a normal distribution around that value, whose width is due only to the measurement uncertainties. We can therefore use stars with multiple visits in different APOGEE fields to measure the empirical uncertainty and compare it to the values reported in DR17 and IPL-3. To quantify this multiple-visit empirical uncertainty, we use Equation <ref> from <cit.>.
e_[X/Fe],k = √(π)/2 · median(|[X/Fe]_j - [X/Fe]_i|),
where [X/Fe]_j and [X/Fe]_i are the abundance measurements from the same star's ith and jth visits. e_[X/Fe],k is the resulting [X/Fe] uncertainty after median stacking the pairwise measurements in the kth bin.
We group stars by SNR in bins spanning 50–70, 70–100, 100–130, 130–200, and greater than 200, as shown in Fig. <ref>. Within each SNR range, stars are binned by T_eff and [M/H], with Δ[M/H] = 0.2 dex and ΔT_eff = 200 K. We use the Kolmogorov–Smirnov (K–S) test to ensure the distribution of abundance differences in each bin is consistent with a normal distribution, and flag those that are not so that they do not contaminate our sample. Bins that are inconsistent with a normal distribution have two main causes: first, some lie at the edge of the [M/H] parameter space, in particular at low metallicities where ASPCAP is less reliable; second, some bins are poorly populated, with fewer than 10 measurements.
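As an illustration, the following snippet computes the empirical uncertainty of the equation above for a single (SNR, T_eff, [M/H]) bin, given per-star lists of repeat-visit measurements; the input format is an assumption for the example.

```python
import numpy as np

def empirical_uncertainty(xfe_by_star):
    """Empirical [X/Fe] uncertainty from repeat observations (equation above).

    `xfe_by_star` maps a star identifier to the list of its per-visit
    [X/Fe] measurements; stars observed only once are ignored."""
    diffs = []
    for visits in xfe_by_star.values():
        v = np.asarray(visits)
        if v.size < 2:
            continue
        i, j = np.triu_indices(v.size, k=1)   # all unique visit pairs
        diffs.extend(np.abs(v[j] - v[i]))
    return np.sqrt(np.pi) / 2.0 * np.median(diffs)

# toy example: three stars, two of them with repeat visits
print(empirical_uncertainty({
    "star_a": [0.02, 0.03, 0.01],
    "star_b": [-0.10, -0.08],
    "star_c": [0.15],
}))
```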
Within those bins where the empirical uncertainties are well-measured and normally distributed, we find very good agreement with the native pipeline uncertainties; thus for the rest of this work, we adopt those pipeline uncertainties directly from DR17 and IPL-3.
§ MEASURING CHEMICAL SCATTER
§.§ Paired Stars Method
Our primary method of determining the intrinsic scatter uses the difference in abundances between stars close to one another on the HR diagram: ΔT_eff < 100 K and Δlog g < 0.1 dex. These limits are chosen because the maximum abundance offset induced between pair members by the systematics discussed in Section <ref> is on the order of ∼0.001 dex.
We measure the intrinsic dispersion within each pair using Equation <ref>, where e_1 and e_2 are the abundance uncertainties of each star in the pair, |Δ[X/Fe]| is the absolute value of the difference in abundance measurements between the pair, and σ is the inferred intrinsic dispersion.
σ = √( ( (π/2)|Δ[X/Fe]|^2 - (e_1^2 + e_2^2) ) / 2 )
We use a Monte Carlo method with N = 100 iterations to vary the abundance measurement of each star within the pair and estimate a final uncertainty on the measured pairwise scatter. Within each cluster, we separate the red clump and RGB stars so as not to induce any scatter from potential evolutionary effects. Within each sub-sample, we sort the stars into pairs and then measure the intrinsic scatter between them. Finally, we take the median scatter of all the pairs within an OC as the true intrinsic scatter of the cluster. This method allows for extremely precise results, with final uncertainties on the order of ∼0.001 dex in most elements.
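A simplified sketch of this pairwise estimator is given below. The pairing of consecutive stars and the handling of negative arguments (returned as NaN) are illustrative choices, not the exact implementation.

```python
import numpy as np

def pair_scatter(x1, x2, e1, e2):
    """Intrinsic dispersion inferred from one stellar pair (equation above).
    Returns NaN when the measured difference is smaller than expected from
    the reported uncertainties alone (an illustrative convention)."""
    arg = (np.pi / 2.0) * (x1 - x2) ** 2 - (e1 ** 2 + e2 ** 2)
    return np.sqrt(arg / 2.0) if arg > 0 else np.nan

def cluster_scatter(xfe, err, n_mc=100, seed=0):
    """Median pairwise scatter with a Monte Carlo uncertainty.
    Assumes the input stars are already matched in Teff and log g as in
    the text; here consecutive stars are paired purely for illustration."""
    xfe, err = np.asarray(xfe), np.asarray(err)
    rng = np.random.default_rng(seed)
    pairs = [(i, i + 1) for i in range(0, len(xfe) - 1, 2)]
    med = []
    for _ in range(n_mc):
        perturbed = rng.normal(xfe, err)   # vary abundances within errors
        vals = [pair_scatter(perturbed[i], perturbed[j], err[i], err[j])
                for i, j in pairs]
        med.append(np.nanmedian(vals))
    return np.nanmean(med), np.nanstd(med)
```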
To determine the number of pairs needed for a reliable measurement with this method, we use a synthetic “cluster” of points, with “true” and “observed” [X/H] abundances, and the same temperature and log(g) distribution as our real clusters. The true abundances for the synthetic stars reflect a given intrinsic scatter for the synthetic cluster. The observed abundances are generated by perturbing each true abundance by a random value drawn from a normal distribution with σ set to the uncertainty of a real APOGEE star with the same temperature, log(g), and metallicity.
We then pair the stars as described in Section 3.1 and perform the intrinsic scatter measurement described using N=3 to N=20 pairs. We find that the measured cluster scatters are noisy and have both systematic offsets and larger uncertainties up until N=8, at which point the difference between the true and measured scatter does not change at larger N. Thus, we require a minimum of eight stellar pairs for clusters using this method.
Due to the restrictions outlined above regarding the separation of the pairs in T_ eff and logg, as well as the minimum number of pairs required, this method can only be applied to 15 of the 26 clusters studied in this paper. However due to the significantly higher precision of these values as compared to those derived using the method outlined in Section <ref>, within these 15 clusters we only publish results from this method.
§.§ Maximum Likelihood Estimator
We adopt a second method in the form of a Maximum Likelihood Estimator (MLE) to calculate the intrinsic scatter across our sample. This method produces larger uncertainties than our pairwise method, but it also has fewer sampling restrictions and can be applied to a larger set of OCs. The form of the MLE is shown in Equation <ref> below.
ln L = ∑_i=1^N ln[ 1/√(2π(σ_[X/Fe]^2 + e_i^2)) · exp( -(x_i - μ_[X/Fe])^2 / (2(σ_[X/Fe]^2 + e_i^2)) ) ].
In Equation <ref>, σ_[X/Fe] is the intrinsic scatter being tested, and e_i is the uncertainty on the [X/Fe] measurement for the ith star in that cluster. We sample a narrow range of mean [X/Fe] (μ), with Δμ = 0.05 dex around the calculated mean of the cluster members, and an initial range of 0.1 dex for the intrinsic scatter with a spacing of 0.003 dex. We then perform a second iteration with finer spacing in the intrinsic-scatter dimension, on the order of 10^-4 dex, centered on the likeliest value from the coarser grid. An example of this is shown in Figure <ref>. We calculate a variance from the Fisher information matrix, and from that we calculate the uncertainty on the intrinsic scatter. We apply the MLE method to all twenty-six clusters in our sample; of these, ten use the MLE intrinsic scatters for their final measurement.
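The following sketch evaluates the log-likelihood above on a coarse (σ, μ) grid; the grid ranges and spacings follow the values quoted in the text, while the function interface is an assumption for illustration.

```python
import numpy as np

def mle_intrinsic_scatter(x, e, sigma_grid=None, mu_halfwidth=0.025):
    """Grid evaluation of the log-likelihood in the equation above.

    Returns the (sigma, mu) pair maximizing the likelihood. A finer second
    pass can then be run around the returned sigma, as described in the text."""
    x, e = np.asarray(x), np.asarray(e)
    if sigma_grid is None:
        sigma_grid = np.arange(0.0, 0.1, 0.003)          # coarse first pass
    mu_grid = np.linspace(x.mean() - mu_halfwidth, x.mean() + mu_halfwidth, 51)
    best = (-np.inf, None, None)
    for sigma in sigma_grid:
        var = sigma ** 2 + e ** 2                        # per-star total variance
        for mu in mu_grid:
            lnl = np.sum(-0.5 * np.log(2 * np.pi * var)
                         - (x - mu) ** 2 / (2 * var))
            if lnl > best[0]:
                best = (lnl, sigma, mu)
    return best[1], best[2]
```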
§.§.§ MLE Corrections for Small Samples
Based on tests with synthetic data, we find that the measured intrinsic scatter is unreliable when the number of cluster stars (N) is low, with a consistent systematic bias at N<9 in the measured scatter. This is due to the fact that the MLE is a biased estimator and at small sample sizes has a negative bias that causes it to underestimate the true parameter value, which can be accounted for with a multiplicative factor <cit.>:
σ_true = √( N / (N - 1) ) · σ_MLE
This correction is applied to clusters with 6 ≤ N < 9 members. Below six members, the measured scatters are too unpredictably noisy. This small sample size correction results in an additional six clusters added to our sample, creating the final sample of 26 OCs.
§.§.§ Systematics
As discussed in <cit.>, the methodology for measuring a star's [X/Fe] involves a multi-parameter fit that includes fitting the observed stellar spectrum to synthetic models. First a global fit to the spectrum is done to determine the best fit values for temperature, surface gravity, microturbulent velocity, and [M/H]. Holding these parameters constant, individual elemental abundances are extracted from narrow spectral windows.
To test for any systematic trend between the global stellar parameters and elemental abundances, we quantify the slope of the uncalibrated log(g) vs. [X/Fe] relation within all the clusters in each pipeline, approximating it as linear, as seen in Figure <ref>. We exclude red clump stars from the fit, as they are further along their evolutionary track than red giant branch stars and potentially have slightly different surface abundances due to evolutionary effects or internal systematics; including them in the slope measurement could therefore artificially bias our measurements of chemical homogeneity. However, these RC stars are corrected afterwards along with the other giants in their cluster, ensuring a uniform abundance correction across the entire cluster. From this, we find that the existing slopes in the cluster sample are nonzero, with a median slope of ∼0.02 dex/dex.
This systematic bias is present in DR17, and in the Cannon and ASPCAP allStar files from IPL-3. However it is not present in the IPL-3 allStar release analyzed using the Payne. We adjust the measured [X/Fe] for each cluster star using the following equation:
[X/Fe]_i,corrected = [X/Fe]_i - m · log g_i + ZP,
where each ith index is a star in a cluster and m is the best fit slope of [X/Fe] and log(g) within a specific cluster. We set the zero-point (ZP) of the cluster [X/Fe] after the correction using the abundances of the stars on the giant branch below the
red clump to ensure that the median cluster [X/Fe] is reflective of its true value. The fitting uncertainties are propagated to uncertainties on the correction, which are then added in quadrature with the existing abundance uncertainties.
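A minimal sketch of this per-cluster correction is given below; the masks used to exclude red clump stars from the fit and to define the zero-point sample are assumed inputs.

```python
import numpy as np

def correct_logg_trend(xfe, logg, is_rc, zero_point_mask):
    """Remove the [X/Fe] vs. log g trend within one cluster (equation above).

    The slope is fit on RGB stars only (red clump excluded), but the
    correction is applied to every member. `zero_point_mask` selects the
    giant-branch stars below the red clump used to set the zero point."""
    rgb = ~is_rc
    m, b = np.polyfit(logg[rgb], xfe[rgb], 1)     # linear fit: [X/Fe] = m*logg + b
    corrected = xfe - m * logg
    # choose ZP so the median corrected abundance of the zero-point stars
    # matches their median uncorrected abundance
    zp = np.median(xfe[zero_point_mask]) - np.median(corrected[zero_point_mask])
    return corrected + zp, m
```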
§ RESULTS
Within one standard deviation, the only abundance that shows evidence of inhomogeneity, i.e., consistent nonzero intrinsic scatter across multiple clusters, is [M/H]. This is because [M/H] has small uncertainties compared to the other [X/Fe] measurements (∼0.008 dex, as compared to ∼0.015 dex for the rest of the DR17 and IPL-3 abundances and ≥0.03 dex for the BAWLAS abundances). Given that the [M/H] uncertainties are smaller but not under-reported, as verified in Section <ref>, this suggests that [M/H] may reveal low-level inhomogeneities that are masked by the larger uncertainties in the other abundances.
However, within three standard deviations (a 99.7% confidence interval), none of the clusters show measurable inhomogeneities in any of the measured elements. Furthermore, the limits derived using the paired-stars method are on the order of ∼0.001 dex in many elements, and with that method we find no significant scatter in any element or cluster. As a result, we are confident that in the majority of elements we can constrain the homogeneity of the OCs to less than 0.01 dex at the 99.7% confidence level.
All the measured quantities for each [X/Fe] are presented in Table <ref>. We show that across DR17 and all MWM pipelines, we do not find any elements with consistent chemical inhomogeneity across our cluster sample, and in Figure <ref> we show that we do not find any clusters with consistent scatter across their abundance samples. We only show the results from the IPL-3 ASPCAP data here because it includes values for the weak-lined elements. The literature comparison plots using the DR17, IPL-3 Cannon, and IPL-3 Payne releases are comparable and are shown in Appendix <ref>.
The upper limits on the intrinsic scatter measured in elements included in the BAWLAS catalog are higher than the limits on intrinsic scatter placed on the more well-measured elements, such as Mg or Ni. The reasons for this are twofold: Firstly, while many of the clusters studied did have enough stars to measure an intrinsic scatter, the number of stars that contained BAWLAS abundances within each cluster was smaller than the number of stars used to calculate α and iron-peak elements. Secondly, the associated uncertainties for the weak-lined elements were appreciably larger (0.03–0.08 dex) than the ones included in DR17 and MWM (0.01–0.04 dex).
§ DISCUSSION
§.§ Comparison to Milky Way Field Stars
To quantify the difference in chemical homogeneity between our OCs and the Milky Way field, we create a matched field star sample (MFS) that mirrors our existing cluster sample.
For each of the 15 clusters in our study with enough members to apply the pairwise method, we match each star within the cluster to a field star within two sigma[We tested the effects of using 1σ and 3σ to match stars and found no difference in our conclusions.] in Galactocentric radius, [M/H], [α/M], T_eff, and logg. Here we consider the parameter uncertainties of both the cluster star and any candidate field stars. We use each star's [M/H] and [α/M] values as reported prior to the stellar parameter correction. We then measure the intrinsic scatter in each of the MFS samples, replicating the methodology outlined in Section <ref>. Finally, we compare the difference in intrinsic scatter between our MFS sample and the OC sample (Figure <ref>).
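The matching step could look roughly like the sketch below; the array layout and the random tie-breaking among valid matches are assumptions for illustration.

```python
import numpy as np

def match_field_star(cluster_star, cluster_err, field, field_err, n_sigma=2.0):
    """Pick one field star within n_sigma of a cluster star in every
    matching dimension (R_gal, [M/H], [alpha/M], Teff, log g).

    `cluster_star` and `cluster_err` are length-5 arrays; `field` and
    `field_err` are (N, 5) arrays. Returns the index of a randomly chosen
    match, or None if no field star qualifies."""
    tol = n_sigma * np.sqrt(cluster_err ** 2 + field_err ** 2)
    ok = np.all(np.abs(field - cluster_star) < tol, axis=1)
    idx = np.flatnonzero(ok)
    return None if idx.size == 0 else int(np.random.default_rng(0).choice(idx))
```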
We find that on average, across all abundances, the matched field star samples have +0.012_-0.01^+0.02 dex more intrinsic scatter than the open clusters in our sample. This is in strong agreement with <cit.>, which states that stars in the Milky Way are chemically similar (within ∼0.01–0.02 dex) when given a fixed Galactocentric radius, [M/H], and [α/Fe]. The median difference between OC intrinsic scatter and field star intrinsic scatter (Δσ_[X/Fe]) for each nucleosynthetic family is given below:
* α-elements (Mg, Si, Ca, Ti, P, S): 0.002 dex
* Iron-peak elements (Cr, Mn, Fe, Co, Ni, V): 0.012 dex
* Odd-Z (Na, Al, K): 0.023 dex
* Neutron-capture (Nd, Ce): 0.02 dex
Due to our selection criteria for the field star sample, we expect similar intrinsic scatter in the α, and iron-peak elements.
Interestingly, two of the odd-Z elements, Al and K, both have nonzero scatter in the majority of our MFS samples despite being measured as homogeneous in our OCs. While in [Na/Fe] the distribution in Δσ_[X/Fe] is roughly symmetric, Na is an element with weak lines in APOGEE's wavelength range and was included in our sample via the BAWLAS catalog. As a result, not only does it have comparatively larger uncertainties than the other odd-Z elements, but it also has poorer completion, as only a subset of high-SNR stars in our study have BAWLAS abundances. This implies that odd-Z elements may be a useful tool in differentiating otherwise chemically similar populations. However, given that the distribution for all of these elements is consistent with zero in at least a subset of clusters and field star comparisons, more precise limits are needed to accurately test this.
Neutron capture elements also show slightly larger differences in scatter than their field star counterparts. <cit.> showed that neutron-capture elements have more discriminatory power in distinguishing “doppelganger stars”. What we find potentially corroborates that, but we also show that for co-eval and co-natal stars within an OC, the expectation of chemical homogeneity within neutron capture elements is broadly comparable to that of elements from other well-studied and well-measured nucleosynthetic families. In a field star sample with a high expectation of chemical similarity to an OC, as shown by the relative lack of difference in measured α-element intrinsic scatter, there is a 0.02 ± 0.02 dex difference in measured scatter for neutron capture elements. Due to this, it seems possible that neutron capture elements could be useful in distinguishing otherwise highly similar stellar populations but significantly more precise limits would be required to accomplish that goal.
In both the odd-Z and neutron-capture cases, the differences between each OC and its respective MFS are consistent with zero within 3σ. More precise limits are required to make any conclusive statements on their effectiveness in distinguishing co-natal and co-eval stellar populations.
§.§ Previous Studies of Chemical Homogeneity
Numerous studies have measured the chemical homogeneity of star clusters; however, most have focused on larger and more complicated globular clusters. Within open clusters, different studies have found a wide range of limits on inhomogeneity. This is further complicated by the fact that not every study investigates the same set of abundances, nor is every analysis method directly comparable. One difference between this study and many others is that previously published limits on homogeneity are typically 68% or 95% limits, whereas the values we publish and show in Figure <ref> and Appendix <ref> are all 99.7% limits.
Given the number of studies done on M67, NGC 6791, and NGC 6819, they are shown in Figure <ref>, while the literature comparisons for the remaining clusters are located in Appendix <ref>. The figures comparing the intrinsic scatter measured using DR17, Cannon, and Payne abundances for all 26 OCs are also shown in Appendix <ref>.
We compare our results to four previous studies <cit.>. We find that within the α and iron peak elements, which are well-measured in APOGEE and MWM, the upper limits derived in this study are consistent with previous findings. In the weak-lined elements such as V, Cu, Ce, and Nd, there is more variance. But even within those elements, in many clusters we find comparable limits to previous works.
It is worth outlining the differences in the analyses and sample sets between these different studies. <cit.> is the most similar to ours in terms of sample size, analysis method, and dataset <cit.>. While they measure upper limits that are far less constraining than ours, they also measure lower limits that strongly imply the existence of true intrinsic scatter. However, the stellar parameter systematic that we found in DR17 and MWM is also present in DR16 but was unaccounted for in <cit.>'s final results. Therefore, it is possible that the measured scatters in <cit.> are impacted by a relationship between stellar parameters and abundance measurements. Our uncorrected σ_[X/Fe] values (not shown in this paper) are in strong agreement with the ones published in <cit.>, which lends evidence to this hypothesis.
<cit.> uses high-resolution spectroscopy (R ∼ 50,000) to study two solar twins in M67. This makes it less likely that their final abundances are driven by systematics due to stellar atmospheres or poor line fitting, as the stars are in very similar parts of parameter space. This method is similar, but not identical, to the pairwise method of deriving intrinsic scatter outlined in Section <ref>. They derive abundances for a total of 26 elements as well as [Fe/H], with an average measurement uncertainty of e_[X/Fe] ≤ 0.02 dex. As a result, within their sample they were able to very tightly constrain the difference in [X/Fe] between the two stars, showing that for elements with Z < 30 there was an average difference of < 0.06 dex, and for the neutron capture elements an average difference of < 0.05 dex. It is encouraging that in the strong-lined elements included in our study, we place similar upper limits on abundance scatter.
<cit.> uses APOGEE DR13 <cit.> abundances and spectra to measure intrinsic dispersions within a set of seven open clusters, six of which are included in this study. However, their methodology in constraining scatter was noticeably different from this work's. Using The Cannon <cit.>, they derive abundances and uncertainties in 20 different elements from APOGEE spectra and a training set of open cluster stars in APOGEE DR13. <cit.> notes that while their Cannon abundance measurements were broadly comparable to ASPCAP's, the uncertainties are between 20–50% smaller. Beyond that, using a chi-squared fit they determined that the uncertainties, a quadrature sum of formal abundance uncertainty and cross-validation uncertainty, are overestimated given the widths of the calculated abundance distributions. Therefore, they derive a scaling factor to correct the uncertainties to match the theoretically predicted value; however, this methodology also introduces the risk of artificially down-scaling the measured limits on the intrinsic scatter. Nevertheless, it is worth noting that in the majority of elements, the values they publish are comparable to the ones derived in this work.
<cit.> studied the abundance spread of 15 different elements in three clusters, all of which are included in this study: M67, NGC 2420, and NGC 6819. The data came from APOGEE DR12 <cit.>, with cluster membership from <cit.>. However, the key difference between their study and this one is the methodology. <cit.> made the assumption that in the absence of any intrinsic chemical scatter, the main driver for variation in the photometric and spectroscopic attributes of OC stars is their mass, which can be modeled as a one dimensional sequence. They correct for any systematic variations in the spectra driven by mass, using T_ eff as a proxy. They then use detailed forward modeling of the spectra and Approximate Bayesian Computation to measure the intrinsic scatter as well as upper limits. Overall <cit.> find no indication of chemical inhomogeneity in any of the three clusters they studied; the upper limits they derived are largely in agreement with the ones calculated using the MLE method in this work. However, using the paired stars method we can constrain tighter upper limits in the majority of elements. Similar to studies discussed previously, the limits placed on the BAWLAS neutron capture elements by <cit.> are lower than ours.
Finally, <cit.> use spectroscopic data from APOGEE DR14 <cit.> to measure intrinsic scatter in M67, NGC 6791, and NGC 6819 in 15 different elemental abundances. The analysis method is very similar to the one outlined in <cit.>, though there are a few differences — notably, that they use DR14 instead of DR12, which includes several differences in the line lists <cit.>. Furthermore, unlike <cit.>, they use spectroscopic effective temperatures in their one-dimensional model as opposed to photometric effective temperatures. <cit.> found the clusters to be chemically homogeneous, placing upper limits comparable to this study across its sample of elements. While they measured fewer abundances than this work, in the three clusters studied the upper limits they derive are similar to <cit.>.
Thus, within the majority of the elements included in DR17 and IPL-3, such as the α-elements and iron-peak elements, the upper limits we calculate here are in agreement with what has been previously found across all the clusters studied. This lends credibility to the limits placed on the numerous clusters studied in this work that did not have previously derived limits in the literature. However, the limits derived in this study for the neutron capture elements are larger than what has been previously found in any of the literature. This is likely driven by the comparatively large uncertainties on the elements (0.03–0.08 dex).
§ CONCLUSION
The purpose of this study was to quantify the level of chemical homogeneity within the largest sample to date of Milky Way open clusters for a broad set of elements. Using SDSS-V Milky Way Mapper IPL-3 abundances and Gaia DR3 kinematics, we identify a sample of 26 open clusters with large enough membership to measure the intrinsic scatter in up to 20 elements. Using the abundance differences between paired stars along the HR diagram, as well as a Maximum Likelihood Estimator, we then measure the intrinsic scatter within each element for each cluster. We find the following:
* We assemble a sample of 26 open clusters across a broad range of metallicity, age, mass, and galactic radii. Within a 99.7% confidence interval, we do not find any evidence of intrinsic scatter on the giant branch or in the red clump in any element across all the open clusters in our sample.
* Within the majority of abundances included in APOGEE DR17 and Milky Way Mapper IPL-3, we constrain the chemical homogeneity to ≤0.02 dex within a 99.7% confidence interval, and within ≤0.2 dex for the weak lined elements, such as those included in the DR17 BAWLAS catalog. Our limits are consistent with those in the literature for well-studied elements and clusters, and we add roughly a dozen clusters to this literature sample. Given the limited dataset in some of the elements, we recommend follow up measurements to better quantify their upper limits.
* When compared to a sample of field stars with similar Galactocentric radii, [α/M], and [M/H], we find our OCs to be more chemically homogeneous, with an average difference of ∼0.012 dex between the two samples. This corroborates previous findings that the dimensionality of chemical enrichment of the Milky Way is low, and can likely be explained through a few processes. In the future this could be useful in placing constraints on radial mixing and azimuthal variations within the Milky Way.
* We identify surface-gravity-dependent abundance shifts within APOGEE DR17 and Milky Way Mapper IPL-3 (corrected for in this analysis). This systematic needs to be accounted for in similar future work. We also find that the abundance uncertainties within both APOGEE and MWM are accurately estimated.
* These findings have implications for attempts to implement chemical tagging, especially strong chemical tagging, specifically showing that within the light elements alone it is not possible to confidently separate field stars and co-natal stars given similar stellar parameters and Galactic radii. The tightest abundance variation constraints in OCs may also help set limits on the rate of binary interactions and planetary engulfment in different environments.
§ ACKNOWLEDGMENTS
We thank the anonymous referee for their helpful and insightful comments. This material is based upon work supported by the National Science Foundation under Grant No. AST-2206542. A.S. also acknowledges support from the University of Utah's J. Irvin and Norma K. Swigart First-Year Summer Graduate Research Fellowship. P.F. acknowledges support from NSF Grant AST-2206541, and K.C. acknowledges support from NSF Grant AST-2206543.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is <www.sdss4.org>.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Funding for the Sloan Digital Sky Survey V has been provided by the Alfred P. Sloan Foundation, the Heising-Simons Foundation, the National Science Foundation, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is <www.sdss.org>. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration, including the Carnegie Institution for Science, Chilean National Time Allocation Committee (CNTAC) ratified researchers, the Gotham Participation Group, Harvard University, Heidelberg University, The Johns Hopkins University, L’Ecole polytechnique fédérale de Lausanne (EPFL), Leibniz-Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Extraterrestrische Physik (MPE), Nanjing University, National Astronomical Observatories of China (NAOC), New Mexico State University, The Ohio State University, Pennsylvania State University, Smithsonian Astrophysical Observatory, Space Telescope Science Institute (STScI), the Stellar Astrophysics Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Illinois at Urbana-Champaign, University of Toronto, University of Utah, University of Virginia, and Yale University.
astropy <cit.>, scipy <cit.>, matplotlib <cit.>, numpy <cit.>, pandas <cit.>, and seaborn <cit.>.
§ LITERATURE COMPARISON
Here we show the literature comparison plots for the well-studied OCs in our sample, as discussed in Section <ref>.
§ OPEN CLUSTER SAMPLE MEMBERSHIP
Here we show the plots outlining the kinematic selection of our open cluster sample, as described in Section <ref>.
Unifying Causal Representation Learning with the Invariance Principle
Dingling Yao, Dario Rancati, Riccardo Cadei, Marco Fumero, Francesco Locatello
arXiv:2409.02772v1 [cs.LG], 4 September 2024
§ ABSTRACT
Causal representation learning aims at recovering latent causal variables from high-dimensional observations to solve causal downstream tasks, such as predicting the effect of new interventions or more robust classification.
A plethora of methods have been developed, each tackling carefully crafted problem settings that lead to different types of identifiability.
The folklore is that these different settings are important, as they are often linked to different rungs of Pearl's causal hierarchy, although not all neatly fit.
Our main contribution is to show that many existing causal representation learning approaches methodologically align the representation to known data symmetries.
Identification of the variables is guided by equivalence classes across different “data pockets” that are not necessarily causal.
This result suggests important implications, allowing us to unify many existing approaches in a single method that can mix and match different assumptions, including non-causal ones, based on the invariances relevant to our application.
It also significantly benefits applicability, which we demonstrate by improving treatment effect estimation on real-world high-dimensional ecological data. Overall, this paper clarifies the role of causality assumptions in the discovery of causal variables and shifts the focus to preserving data symmetries.
§ INTRODUCTION
Causal representation learning <cit.> posits that many real-world high-dimensional perceptual data can be described through a simplified latent structure specified by a few interpretable low-dimensional causally-related variables. Discovering hidden causal structures from data has been a long-standing goal across many scientific disciplines, spanning neuroscience <cit.>, communication theory <cit.>, economics <cit.> and social science <cit.>. From the machine learning perspective, algorithms and models integrated with causal structure are often proven to be more robust at distribution shift <cit.>, providing better out-of-distribution generalization results and reliable agent planning <cit.>.
Formally, the general goal of causal representation learning approaches is formulated as to
provably identify ground-truth latent causal variables and their causal relations (up to certain ambiguities).
Many existing approaches in causal representation learning carefully formulate their problem settings to guarantee identifiability and justify the assumptions within the framework of Pearl's causal hierarchy, such as “observational, interventional, or counterfactual CRL" <cit.>. However, some causal representation learning works may not perfectly fit within this causal language framework; for instance, the problem setting of temporal CRL works <cit.> does not always align straightforwardly with existing categories.
They often assume that an individual trajectory is “intervened” upon, but this is not an intervention in the traditional sense, as noise variables are not resampled. It is also not a counterfactual as the value of non-intervened variables can change due to default dynamics.
Similarly, domain generalization <cit.> and certain multi-task learning approaches <cit.> are sometimes framed as informally related to causal representation learning.
However, the precise relation to causality is not always clearly articulated. This has resulted in a variety of methods and findings, some of which rely on assumptions that might be too narrowly tailored for practical, real-world applications.
For example, <cit.> collected a data set for estimating treatment effects from high-dimensional observations in real-world ecology experiments. Despite the clear causal focus of the benchmark, they note that, despite having access to multiple views and being able to perform some interventions, neither existing mutli-view nor intervertional causal representation learning methods are directly applicable due to mismatching assumptions.
This paper contributes a unified rephrasing of many existing nonparametric CRL works through the lens of invariance.
We observe that many existing causal representation approaches share methodological similarities, particularly in aligning the representation with known data symmetries, while differing primarily in how the invariance principle is invoked.
This invariance principle is usually formulated implicitly in the assumed data-generating process.
Instead, we make this explicit and show that latent causal variable identification broadly originates from multiple data pockets with certain underlying equivalence relations. These are not necessarily causal and (with minor exceptions) have to be known apriori.
Unifying causal representation learning approaches using the invariance principle brings several potential benefits:
First, it helps clarify the alignment between seemingly different categories of CRL methods, contributing to a more coherent and accessible framework for understanding causal representation learning.
This perspective may also allow for the integration of multiple invariance relations in latent variable identification, which could improve the flexibility of these methods in certain practical settings.
Additionally, our theory underscores a gap in the necessary causal assumptions for graph learning, which is essential for generalizing to unseen interventions and helps distinguish it from the problem of identifying causal variables.
These invariances can be expressed in causal terms, such as interventions, but do not always need to be.
Last but not least, this formulation of invariance relation links causal representation learning to many existing representation learning areas outside of causality, including invariant training <cit.>, domain adaptation <cit.>, and geometric deep learning <cit.>.
We highlight our contributions as follows:
* We propose a unified rephrasing of existing nonparametric causal representation learning approaches leveraging the invariance principle and prove latent variable identifiability in this general setting.
We show that 31 existing identification results can be seen as special cases directly implied by our framework.
This approach also enables us to derive new results, including latent variable identifiability from one imperfect intervention per node in the non-parametric setting.
* In addition to employing different methods, many works in the CRL literature use varying definitions of “identifiability." We formalize these definitions at different levels of granularity, highlight their connections, and demonstrate how various definitions can be addressed within our framework by leveraging different invariance principles.
* Upon the identifiability of the latent variables, we discuss the necessary causal assumptions for graph identification and the possibility of partial graph identification using the language of causal consistency.
With this, we draw a distinction between the causal assumptions necessary for graph discovery and those that may not be required for variable discovery.
* Our framework is broadly applicable across a range of settings.
We observe improved results on real-world experimental ecology data using the causal inference benchmark from high-dimensional observations provided by <cit.>.
Additionally, we present a synthetic ablation to demonstrate that existing methods, which assume access to interventions, actually only require a form of distributional invariance to identify variables. This invariance does not necessarily need to correspond to a valid causal intervention.
§ PROBLEM SETTING
This section formalizes our problem setting and states our main assumptions. We first summarize standard definitions and assumptions of causal representation learning in <ref>. Then, we describe the data generating process using the language of invariance properties and equivalence classes in <ref>.
Notation. [N] is used as a shorthand for {1, …, N}. We use bold lower-case (e.g., 𝐳) for random vectors and normal lower-case (e.g., z) for their realizations. A vector can be indexed either by a single index i ∈ [dim(𝐳)] via 𝐳_i or by an index subset A ⊆ [dim(𝐳)] with 𝐳_A := {𝐳_i: i ∈ A}. P_𝐳 denotes the probability distribution of the random vector 𝐳 and p_𝐳(z) denotes the associated probability density function. By default, a "measurable" function is measurable w.r.t. the Borel sigma algebras and defined w.r.t. the Lebesgue measure. A more comprehensive summary of notations and terminologies is provided in <ref>.
§.§ Preliminaries
In this subsection, we revisit the common definitions and assumptions in identifiability works from causal representation learning. We begin with the definition of a latent structural causal model:
Let 𝐳 = {𝐳_1, …, 𝐳_N} denote a set of causal "endogenous" variables, with each 𝐳_i taking values in ℝ, and let 𝐮 = {𝐮_1, …, 𝐮_N} denote a set of mutually independent "exogenous" random variables. The latent SCM consists of a set of structural equations
{𝐳_i := m_i(𝐳_pa(i), 𝐮_i)}_i=1^N,
where 𝐳_pa(i) are the causal parents of 𝐳_i and the m_i are deterministic functions termed "causal mechanisms". We indicate with P_𝐮 the joint distribution of the exogenous random variables, which, due to the independence assumption, is the product of the probability measures of the individual variables. The associated causal diagram is a directed graph with vertices {𝐳_1, …, 𝐳_N} and edges 𝐳_i → 𝐳_j iff 𝐳_i ∈ 𝐳_pa(j); we assume the graph to be acyclic.
The latent SCM induces a unique distribution P_𝐳 over the endogenous variables as a pushforward of P_𝐮 via <ref>. Its density p_𝐳 follows the causal Markov factorization:
p_𝐳(z) = ∏_i=1^N p_i(z_i | z_pa(i))
Instead of directly observing the endogenous and exogenous variables 𝐳 and 𝐮, we only have access to some "entangled" measurements 𝐱 of 𝐳, generated through a nonlinear mixing function:
A deterministic smooth function f: ℝ^N → ℝ^D mapping the latent vector 𝐳 ∈ ℝ^N to its observable 𝐱 ∈ ℝ^D, where D ≥ N denotes the dimensionality of the observational space.
[Diffeomorphism]
The mixing function f is diffeomorphic onto its image, i.e., f is C^∞, f is injective, and f^-1|_ℐ(f): ℐ(f) → ℝ^N is also C^∞.
Remark: Settings with noisy observations (𝐱 = f(𝐳) + ϵ, 𝐳 ⊥ ϵ) can easily be reduced to our denoised version by applying a standard deconvolution argument as a pre-processing step, as indicated by <cit.>.
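As a toy illustration of this data-generating process (not the construction used in our proofs), the snippet below samples latents from a small linear-Gaussian SCM with an assumed graph z1 → z2 → z3 and pushes them through a smooth nonlinear mixing into a higher-dimensional observation space.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 3, 10                           # latent and observed dimensions (assumed)

def sample_latents(n):
    """Toy linear-Gaussian SCM with causal graph z1 -> z2 -> z3."""
    u = rng.normal(size=(n, N))        # independent exogenous noise
    z = np.empty_like(u)
    z[:, 0] = u[:, 0]
    z[:, 1] = 0.8 * z[:, 0] + u[:, 1]
    z[:, 2] = -0.5 * z[:, 1] + u[:, 2]
    return z

# A smooth, generically injective mixing f: R^N -> R^D
# (random linear map, elementwise tanh, then an invertible linear map)
A1, A2 = rng.normal(size=(N, D)), rng.normal(size=(D, D))
def mix(z):
    return np.tanh(z @ A1) @ A2

z = sample_latents(1000)
x = mix(z)                             # high-dimensional observations
print(z.shape, x.shape)
```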
§.§ Data Generating Process
We now define our data generating process using the mathematical framework introduced in <ref>. Unlike prior works in causal representation learning, which typically categorize their settings using established causal language (such as 'counterfactual,' 'interventional,' or 'observational'), our approach introduces a more general invariance principle that aims to unify diverse problem settings.
In the following, we introduce the following concepts as mathematical tools to describe our data generating process.
Let A ⊆ [N] be an index subset of the Euclidean space ℝ^N and let ∼_ι be an equivalence relation on ℝ^|A|, with A of known dimension. Let 𝒴 := ℝ^|A| / ∼_ι be the quotient of ℝ^|A| under this equivalence relation; 𝒴 is a topological space equipped with the quotient topology. Let ι: ℝ^|A| → 𝒴 be the projection onto the quotient induced by the equivalence relation ∼_ι.
We call this projection ι the invariant property of this equivalence relation. We say that two vectors 𝐳, 𝐳̃ ∈ ℝ^|A| are invariant under ι if and only if they belong to the same ∼_ι equivalence class, i.e.:
ι(𝐳) = ι(𝐳̃) ⇔ 𝐳 ∼_ι 𝐳̃.
Extending this definition to the whole latent space ℝ^N, a pair of latents 𝐳, 𝐳̃ ∈ ℝ^N are non-trivially invariant on a subset A ⊆ [N] under the property ι only if
* the invariance property ι holds on the index subset A ⊆ [N] in the sense that ι(𝐳_A) = ι(𝐳̃_A);
* for any smooth functions h_1, h_2: ℝ^N → ℝ^|A|, the invariance property between 𝐳, 𝐳̃ breaks under the h_1, h_2 transformations if h_1 or h_2 directly depends on some other component 𝐳_q with q ∈ [N] ∖ A. Taking h_1 and 𝐳 as an example, we have:
∃ q ∈ [N] ∖ A, 𝐳^* ∈ ℝ^N, s.t. ∂h_1/∂𝐳_q(𝐳^*) exists and is non-zero ⇒ ι(h_1(𝐳)) ≠ ι(h_2(𝐳̃)),
which means: given that the partial derivative of h_1 w.r.t. some latent variable 𝐳_q ∈ 𝐳_[N]∖A is non-zero at some point 𝐳^* ∈ ℝ^N, the pair h_1(𝐳), h_2(𝐳̃) violates the invariance principle in the sense that ι(h_1(𝐳)) ≠ ι(h_2(𝐳̃)). That is, the invariance principle is non-trivial in the sense of not being always satisfied.
Intuition: The invariance property ι maps the invariant latent subset 𝐳_A to the space representing the identified factor of variation. For example, in the multi-view literature <cit.>, it is the identity map because the pre- and post-action views share the exact values of the invariant latents; for interventional and temporal CRL <cit.>, this invariant property holds on a distributional level, and the property manifold can play the role of the parameter space for parametric latent distributions or of the general distribution space in the nonparametric case; for the multi-task line of work <cit.>, ι can be interpreted as the ground-truth relation between the task-related latent variables and the task label, mapping the latents to the space of labels 𝒴.
Remark: <Ref> (ii) is essential for latent variable identification on the invariant partition A, which is further justified in <ref> by showing a non-identifiable example violating (ii). Intuitively, <ref> (ii) present sufficient discrepancy between the invariant and variant part in the ground truth generating process, paralleling various key assumptions for identifiability in CRL that were termed differently but conceptually similar, such as sufficient variability <cit.>, interventional regularity <cit.> and interventional discrepancy <cit.>. On a high level, these assumptions guarantee that the intervened mechanism sufficiently differs from the default causal mechanism to effectively distinguish the intervened and non-intervened latent variables, which serves the same purpose as <ref> (ii). We elaborate this link further in <ref>.
We denote by 𝒮_𝐳 := {𝐳^1, …, 𝐳^K} the set of latent random vectors with 𝐳^k ∈ ℝ^N and write its joint distribution as P_𝒮_𝐳.
The joint distribution P_𝒮_𝐳 has a probability density p_𝒮_𝐳(z^1, …, z^K). Each individual random vector 𝐳^k ∈ 𝒮_𝐳 follows the marginal density p_𝐳^k with non-degenerate support 𝒵^k ⊆ ℝ^N, whose interior is a non-empty open set of ℝ^N.
Consider a set of random vectors 𝒮_𝐳 := {𝐳^1, …, 𝐳^K} with 𝐳^k ∈ ℝ^N; the corresponding set of observables 𝒮_𝐱 := {𝐱^1, …, 𝐱^K} is generated by:
𝒮_𝐱 = F(𝒮_𝐳),
where the map F := (f_1, …, f_K) defines a push-forward measure F_#(P_𝒮_𝐳) on the image of F as:
F_#(P_𝒮_𝐳)(x_1, …, x_K) = P_𝒮_𝐳(f_1^-1(x_1), …, f_K^-1(x_K)),
with support 𝒳 := Im(F) ⊆ ℝ^(K × D). Note that F satisfies <ref> as each f_k is a diffeomorphism onto its image according to <ref>.
Intuition. <ref> formulates the generating process of the set of observables as a joint pushforward of a set of latent random vectors, providing a formal definition of the non-iid. data pockets employed in causal representation learning algorithms. It conveniently explains various underlying data symmetries given inherently by individual problem settings. For example, in the multiview scenario <cit.>, we can observe the joint data distribution P_𝒮_ because the data are “paired" (non-independent). In the interventional CRL that relies on multi-environment data, the joint data distribution can be factorized as a product of individual non-identical marginals {P_^k}_k ∈ [K], originating from partially different latent distributions P_^k that are modified by, e.g., interventions. In the supervised setting, such as multi-task CRL, we have an extended data pocket augmented by the task labels that is formally defined as S_ := {_1, …, _K} with _k := (, ^k). Note that the observable is shared across all tasks k ∈ [K] whereas the tasks labels ^k are specific to individual tasks, thus introducing different joint data-label distributions P__k.
In the following, we denote by ℑ := {ι_i: ℝ^|A_i| → 𝒴_i} a finite set of invariance properties with their respective invariant subsets A_i ⊆ [N] and equivalence relations ∼_ι_i, each inducing a projection onto its quotient, i.e., an invariant property ι_i (<ref>).
For a set of observables 𝒮_𝐱 := {𝐱^1, …, 𝐱^K} ∈ 𝒳 generated from the data generating process described in <ref>, we assume:
For each ι_i ∈ ℑ, there exists a unique known index subset V_i ⊆ [K] with at least two elements (i.e., |V_i| ≥ 2) s.t. 𝒮_𝐱,V_i = F([𝐳]_∼_ι_i) forms the set of observables generated from an equivalence class [𝐳]_∼_ι_i := {𝐳̃ ∈ ℝ^N: 𝐳̃_A_i ∼_ι_i 𝐳_A_i}, as given by <ref>. In particular, if ℑ = {ι} consists of a single invariance property ι: ℝ^|A| → 𝒴, we have 𝒮_𝐱 = F([𝐳]_∼_ι).
Remark: While the equivalence relation does not need to be fully described, it is known which observables belong to the same equivalence class (denoted V_i ⊆ [K] for the invariance property ι_i ∈ ℑ). This is a standard assumption and is equivalent to knowing, e.g., that two views are generated from partially overlapping latents <cit.>.
Problem setting.
Given a set of observables 𝒮_𝐱 ∈ 𝒳 satisfying <ref>, we show that we can simultaneously identify multiple invariant latent blocks A_i under a set of weak assumptions. In the best case, if each individual latent component is represented as a single invariant block through an individual invariance property ι_i ∈ ℑ, we can learn a fully disentangled representation and further identify the latent causal graph under additional technical assumptions.
§ IDENTIFIABILITY THEORY VIA THE INVARIANCE PRINCIPLE
This section contains our main identifiability results using the invariance principle, i.e., aligning the learned representation with known data symmetries.
First, we present a general proof for latent variable identification that brings together many identifiability results from existing CRL works, including multiview, interventional, temporal, and multitask CRL.
We compare different granularities of latent variable identification and show their transitions through certain assumptions on the causal model or mixing function (<ref>).
Afterward, we discuss the identification level of the causal graph depending on the granularity of latent variable identification under certain structural assumptions (<ref>).
Detailed proofs are deferred to <ref>.
§.§ Identifying latent variables
High-level overview.
Our general theory of latent variable identifiability, based on the invariance principle, consists of two key components:
(1) ensuring the encoder's sufficiency, thereby obtaining an adequate representation of the original input for the desired task;
(2) guaranteeing the learned representation to preserve known data symmetries as invariance properties.
The sufficiency is often enforced by minimizing the reconstruction loss <cit.> in auto-encoder based architecture,
maximizing the log likelihood in normalizing flows or maximizing entropy <cit.> in contrastive-learning based approaches.
The invariance property in the learned representations is often enforced by minimizing some equivalence relation-induced pseudometric between a pair of encodings <cit.> or by some iterative algorithm that provably ensures the invariance property on the output <cit.>.
As a result, all invariant blocks A_i, i ∈ [n_] can be identified up to a mixing within the blocks while being disentangled from the rest.
This type of identifiability is defined as block-identifiability <cit.> which we restate as follows:
Definition (Block-identifiability <cit.>).
A subset of latent variables 𝐳_A := {𝐳_j}_j ∈ A with A ⊆ [N] is block-identified by an encoder g: ℝ^D → ℝ^N on the invariant subset A if the learned representation 𝐳̂_Â := [g(𝐱)]_Â with Â ⊆ [N], |A| = |Â| contains all and only information about the ground truth 𝐳_A, i.e., 𝐳̂_Â = h(𝐳_A) for some diffeomorphism h: ℝ^|A| → ℝ^|A|.
Intuition: Note that the inferred representation _Â can be a set of entangled latent variables rather than a single one. Block-identifiability can be considered as a coarse-grained definition of disentanglement <cit.>, which seeks to disentangle individual latent factors. In other words, disentanglement can be considered as a special case of block-identifiability with each latent constituting a single invariant block. Notably, in <cit.> disentangled factors were identified in blocks, with fine-grained identifiability achieved by intersecting different blocks.
The encoders G := {g_k: 𝒳^k → 𝒵^k}_k ∈ [K] consist of smooth functions mapping from the respective observational support 𝒳^k to the corresponding latent support 𝒵^k, as elaborated in <ref>.
Intuition: For the purpose of generality, we design the encoder g_k to be specific to individual observable ^k ∈_.
However, multiple g_k can share parameters if they work on the same modality.
Ideally, we would like the encoders to preserve as much invariance (from ) as possible.
Thus, a clear separation between different encoding blocks is needed.
To this end, we introduce selectors.
A selection ⊘ operates between two vectors a ∈ {0, 1}^d, b ∈ ℝ^d s.t.
a ⊘ b := [b_j: a_j = 1, j ∈ [d]]
The invariant block selectors Φ := {ϕ^(i, k)}_i ∈ [n_ℑ], k ∈ V_i with ϕ^(i, k) ∈ {0,1}^N perform selection (<ref>) on the encoded information:
for any invariance property ι_i ∈ ℑ and any observable 𝐱^k, k ∈ V_i, we have the selected representation:
ϕ^(i, k) ⊘ 𝐳̂^k = ϕ^(i, k) ⊘ g_k(𝐱^k) = [ [g_k(𝐱^k)]_j: ϕ^(i, k)_j = 1, j ∈ [N] ],
with ‖ϕ^(i, k)‖_0 = ‖ϕ^(i, k^')‖_0 = |A_i| for all ι_i ∈ ℑ, k, k^' ∈ V_i.
Intuition: Selectors are used to select the relevant encoding dimensions for each individual invariance property ι_i ∈ ℑ.
Each selector ϕ^(i, k) gives rise to an index subset Â_i^k := {j: ϕ^(i, k)_j = 1} ⊆ [N] that is specific to the invariance property ι_i and the observable 𝐱^k.
The assumption of known invariance size |A_i| can be lifted in certain scenarios by, e.g., enforcing sharing between the learned latent variables, as shown by <cit.>, or leveraging sparsity constraints <cit.>.
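The selection operation itself is just boolean masking, as the small example below illustrates.

```python
import numpy as np

def select(phi, v):
    """Selection operator: keep the entries of v where the binary mask phi is 1."""
    phi = np.asarray(phi, dtype=bool)
    return np.asarray(v)[phi]

phi_ik = np.array([1, 0, 1, 0])          # selector with ||phi||_0 = |A_i| = 2
encoding = np.array([0.3, -1.2, 0.7, 2.0])
print(select(phi_ik, encoding))          # -> [0.3 0.7]
```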
[Invariance constraint]
For any ι_i ∈ ℑ, i ∈ [n_ℑ], the selected representations ϕ^(i, k) ⊘ g_k(𝐱^k), k ∈ [K] must be ι_i-invariant across the observables from the subset V_i ⊂ [K]:
ι_i(ϕ^(i, k) ⊘ g_k(𝐱^k)) = ι_i(ϕ^(i, k^') ⊘ g_k^'(𝐱^k^')) ∀ i ∈ [n_ℑ], ∀ k, k^' ∈ V_i
[Sufficiency constraint]
For any encoder g_k, k ∈ [K], the learned representation preserves at least as much information as any of the invariant partitions 𝐳_A_i that we aim to identify, in the sense that
I(𝐳_A_i, g_k(𝐱^k)) = H(𝐳_A_i) ∀ i ∈ [n_ℑ], k ∈ V_i.
Remark: The regularizer enforcing this sufficiency constraint can be tailored to suit the specific task of interest.
For example, for self-supervised training, it can be implemented as the mutual information between the input data and the encodings, i.e., I(𝐱, g(𝐱)) = H(𝐱), to preserve the entropy of the observations;
for classification, it becomes the mutual information between the task labels and the learned representation, I(𝐲, g(𝐱)).
Sometimes, sufficiency does not have to be enforced on the whole representation.
For example, in the multiview line of work <cit.>, when considering a single invariant block A,
enforcing sufficiency on the shared partition (implemented as entropy on the learned encoding H([g(𝐱)]_1:|A|)) is enough to block-identify the shared latent variables 𝐳_A.
Theorem (Identifiability of multiple invariant blocks).
Consider a set of observables 𝒮_𝐱 = {𝐱^1, 𝐱^2, …, 𝐱^K} with 𝐱^k ∈ 𝒳^k generated from <ref> satisfying <ref>.
Let G, Φ be the sets of smooth encoders (<ref>) and selectors (<ref>) that satisfy <ref>; then the invariant component 𝐳^k_A_i is block-identified (<ref>) by ϕ^(i, k) ⊘ g_k for all ι_i ∈ ℑ, k ∈ [K].
Discussion: Intuitively, <ref> enforces all invariance properties ι_i ∈ jointly and thus learns a representation that block-identifies all invariance blocks simultaneously.
It allows mixing multiple invariance principles, thus better adapting to complex real-world scenarios in which various invariance relations typically occur.
In practice, this constrained optimization problem can be solved in many different flavors, e.g., <cit.> employ a two-stage learning process first to solve the sufficiency constraint, then the invariance constraint, <cit.> instead formulate it as a bi-level constrained optimization problem. Some works <cit.> propose a loss that directly solves the constrained optimization problem, while the others <cit.> develop step-by-step algorithms as solutions.
=-1 What about the variant latents? Intuitively, the variant latents are generally not identifiable, as the invariance constraint cons:invariance is applied only to the selected invariant encodings, leaving the variant part without any weak supervision <cit.>. This non-identifiability result is formalized as follows:
[General non-identifiability of variant latent variables]propositionvarNonID
Consider the setup in <ref>, let A := ⋃_i ∈ [n_] A_i denote the union of block-identified latent indexes and A^c := [N] ∖ A the complementary set where no ι-invariance ι∈ applies, then the variant latents _A^c cannot be identified.
Although variant latent variables are generally non-identifiable, they can be identified under certain conditions. The following demonstrates that variant latent variables can be identified under invertible encoders when the variant and invariant partitions are mutually independent.
[Identifiability of variant latent under independence]propositionvarIDindependence
Consider an optimal encoder g∈ G^* and optimal selector ϕ∈Φ^* from <ref> that jointly identify an invariant block _A (we omit subscripts k, i for simplicity), then _A^c (A^c := [N] ∖ A) can be identified by the complementary encoding partition (1 - ϕ) ⊘ g only if
* g is invertible in the sense that I(, g()) = H();
* _A^c is independent of _A.
Discussion: Generalization to new interventions has been a long-standing goal in causal representation learning. It can be categorized into two layers: (1) generalizing to unseen interventional values and (2) generalizing to non-intervened nodes. The former includes out-of-distribution values of the intervened node in the training set, or combinations of multiple singly intervened nodes during training, which has been successfully demonstrated in various existing works <cit.>. However, we argue that the second layer of generalization, namely generalizing to unseen nodes, is fundamentally impossible, as shown by <ref>; only under certain conditions, such as independence and a latent representation sufficient for reconstruction, can nodes that were not intervened on during training be identified at inference time prop:id-variant-indep. This result aligns with the identifiability algebra given by <cit.> and is evidenced by numerous previous works, including disentanglement <cit.> and temporal causal representation learning without instantaneous effects <cit.>.
§.§ On the granularity of identification
Different levels of identification can be achieved depending on the degree of underlying data symmetry. Below, we present three standard identifiability definitions from the CRL literature, each offering stronger identification results than block-identifiability defn:blockID.
Let the learned representation be given; suppose that for a subset A⊆ [N] it satisfies:
_π(A) = D ·_A + ,
where D ∈^|A| × |A| is an invertible matrix, π(A) denotes the index permutation of A, then _A is block affine-identified by _π(A).
The learned representation ∈^N satisfies that:
= _π· h(),
where _π∈^N × N is a permutation matrix and h():= (h_1(_1), … h_N(_N)) ∈^N is an element-wise diffeomorphism.
The learned representation ∈^N satisfies that:
= Λ·_π· + ,
where _π∈^N × N is a permutation matrix, Λ∈^N × N is a diagonal matrix with nonzero diagonal entries.
=-1Remark: Block affine-identifiability defn:block_affineID is defined by <cit.>, stating that the learned representation is related to the ground truth latents through some sparse matrix with zero blocks.
<Ref> indicates element-wise identification of latent variables up to individual diffeomorphisms.
Element-identifiability for the latent variable identification together with the graph identifiability defn:graphID is defined as ∼_CRL-identifiability <cit.>, perfect identifiability <cit.>.
Affine identifiability defn:affineID describes when the ground truth latent variables are identified up to permutation, shift, and linear scaling. In many CRL works, affine identifiability defn:affineID is also termed as follows: perfect identifiability under linear transformation <cit.>, CD-equivalence <cit.>, disentanglement <cit.>.
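Empirically, these granularities are often diagnosed by how well the ground-truth latents can be predicted from the learned ones: a linear fit probes (block-)affine identification, a nonlinear fit probes identification up to a diffeomorphism. A hedged scikit-learn sketch of such a diagnostic follows; the choice of regressor, the hyperparameters, and the use of R^2 as the criterion are our own assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

def block_r2(z_hat, z_true, nonlinear=False):
    # Predict the ground-truth block from the learned block; a high R^2 is
    # commonly read as evidence of block-(affine-)identification.
    reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000) if nonlinear \
        else LinearRegression()
    reg.fit(z_hat, z_true)
    return r2_score(z_true, reg.predict(z_hat))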
[Granularity of identification]propositiongranularityID
Affine-identifiability defn:affineID implies element-identifiability defn:crlID and block affine-identifiability defn:block_affineID while element-identifiability and block affine-identifiability implies block-identifiability defn:blockID.
[Transition between identification levels]propositiontransID
The transition between different levels of latent variable identification fig:id_granularity can be summarized as follows:
* Element-level identifiability defn:crlID,defn:affineID can be obtained from block-wise identifiability defn:block_affineID,defn:blockID when each individual latent constitutes an invariant block;
* Identifiability up to an affine transformation defn:block_affineID,defn:affineID can be obtained from general identifiability on arbitrary diffeomorphism defn:crlID,defn:blockID by additionally assuming that both the ground truth mixing function and decoder are finite degree polynomials of the same degree.
=-1 Discussion.
We note that the granularity of identifiability results is primarily determined by the strength of invariance and parametric assumptions (such as those on mixing functions or causal models) rather than by the specific algorithmic choice.
For example, for settings that can achieve -identifiability <cit.>, affine-identifiability results can be obtained by additionally assuming finite degree polynomial mixing function (proof see <ref>).
Similarly, one reaches -identifiability from block-identifiability by enforcing invariance properties on each latent component <cit.> instead of having only one fat-hand invariant block <cit.>.
<ref> provides an overview of recent identifiability results along with their corresponding invariance and parametric assumptions, illustrating the direct relationship between these assumptions and the level of identifiability they achieve.
§.§ Identifying the causal graph
In addition to latent variable identification, another goal of causal representation learning is to infer the underlying latent dependency, namely the causal graph structure. Hence, we restate the standard definition of graph identifiability in causal representation learning.
The estimated graph is isomorphic to the ground truth through a bijection h : V() → V() in the sense that two vertices _i, _j ∈ V() are adjacent in if and only if h(_i), h(_j) ∈ V() are adjacent in .
We remark that the “faithfulness" assumption <cit.> is a standard assumption in the CRL literature, commonly required for graph discovery. We restate it as follows:
[Faithfulness (or Stability)]
P_ is a faithful distribution induced by the latent SCM defn:latent_scm in the sense that P_ contains no extraneous conditional independence; in other words, the only conditional independence relations satisfied by P_ are those given by {_i ⊥i | i} where i denotes the non-descendants of _i.
As indicated by <ref>, a prerequisite for identifying the causal graph is an element-wise correspondence between the vertices in the ground truth graph (i.e., the ground truth latents) and the vertices of the estimated graph.
Therefore, the following assumes that the learned encoders G defn:encoders achieve element-identifiability defn:crlID, that is, for each _i ∈, we have a diffeomorphism h_i: → such that _i = h_i(_i).
However, to identify the graph structure, additional assumptions are needed: either on the source of invariance or on the parametric form of the latent causal model.
Graph identification via interventions.
Under the element-identifiability defn:crlID of the latent variables , the causal graph structure can be identified up to its isomorphism defn:graphID, given multi-domain data from paired perfect interventions per-node <cit.>.
Using data generated from imperfect interventions is generally insufficient to identify the direct edges in the causal graph; it can only identify the ancestral relations, i.e., up to the transitive closure of <cit.>.
Unfortunately, even imposing the linear assumption on the latent SCM does not provide a solution <cit.>.
Nevertheless, by adding sparsity assumptions on the causal graph and polynomial assumption on the mixing function f, <cit.> has shown isomorphic graph identifiability defn:graphID under imperfect intervention per node.
In general, access to the interventions is necessary for graph identification if one is not comfortable making other parametric assumptions about the graph structure. Conveniently, in this setting, the graph identifiability is linked with that of the variables since the latter leverages the invariance induced by the intervention.
Graph identification via parametric assumptions.
It is well known in causal discovery that the additive noise model <cit.> is identifiable under certain mild assumptions <cit.>. In the following, we assume an additive exogenous noise in the latent SCM defn:latent_scm:
[Additive noise]
The endogenous variable _i ∈ in the previously defined latent SCM defn:latent_scm relates to the corresponding exogenous noise variable _i ∈ through additivity. Namely, the causal mechanism eq:scm can be rewritten as:
{_i = m_i(i) + _i}.
As a generalization of the additive noise model, the post-nonlinear acyclic causal model <cit.> allows extra nonlinearity on the top of the additive causal mechanism, providing additional flexibility on the latent model assumption:
The following causal mechanism describes a post-nonlinear acyclic causal model:
_i = h_i(m_i(i) + _i),
where h_i: → is a diffeomorphism and m_i is a non-constant function.
Assume the latent variable _i is element-wise identified through a bijective mapping h_i: → for all i ∈ [N], define the estimated causal parents _pa(i) := {h_j(_j): _j ∈i}, then the latent SCM defn:latent_scm is translated to a post-nonlinear acyclic causal model defn:post-nonlinear-model because
_i
= h_i(_i)
= h_i(m_i(i) + _i)
= h_i(m_i({h^-1_j(_j): _j ∈i}) + _i )
= h_i(m̃_i(i) + _i ),
where
m̃_i(i) := m_i({h^-1_j(_j): _j ∈i}).
Thus, the underlying causal graph can be identified up to an isomorphism defn:graphID following the approach given by <cit.>.
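For intuition, the residual-independence idea behind additive-noise-model discovery can be sketched in the bivariate case as follows. This is our own minimal illustration (kernel ridge regression plus a hand-rolled HSIC dependence score), not the algorithms of the cited works; the kernel bandwidth and regularization strength are arbitrary choices.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def hsic(x, y, sigma=1.0):
    # Biased HSIC estimate with RBF kernels; small values indicate independence.
    n = len(x)
    def rbf(a):
        d = (a[:, None] - a[None, :]) ** 2
        return np.exp(-d / (2.0 * sigma ** 2))
    K, L, H = rbf(x), rbf(y), np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def anm_residual_dependence(cause, effect):
    # Fit effect = m(cause) + residual and score how dependent the residual
    # still is on the putative cause; the causal direction typically yields
    # the more independent residual under an additive noise model.
    m = KernelRidge(kernel="rbf", alpha=0.1).fit(cause[:, None], effect)
    residual = effect - m.predict(cause[:, None])
    return hsic(cause, residual)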
What happens if variables are identified in blocks? Consider the case where the latent variables cannot be identified up to element-wise diffeomorphism; instead, one can only obtain a coarse-grained version of the variables (e.g., as a mixing of a block of variables defn:blockID). Nevertheless, certain causal links between these coarse-grained block variables are of interest. These block variables and their causal relations in between form a “macro" level of the original latent SCM, which is shown to be causally consistent under mild structural assumptions <cit.>. In particular, the macro-level model can be obtained from the micro-level model through an exact transformation <cit.> and thus produces the same causal effect as the original micro-level model under the same type of interventions, providing useful knowledge for downstream causal analysis. More formal connections are beyond the scope of this paper. Still, we see this concept of coarse-grained identification on both causal variables and graphs as an interesting avenue for future research.
§ REVISITING RELATED WORKS AS SPECIAL CASES OF OUR THEORY
=-1 This section reviews related causal representation learning works and frames them as specific instances of our theory sec:identifiability. These works were originally categorized into various causal representation learning types (multiview, multi-domain, multi-task, and temporal CRL) based on the level of invariance in the data-generating process, leading to varying degrees of identifiability results subsec:granularity_id.
While the practical implementation of individual works may vary, the
methodological principle of aligning representations with known data symmetries remains consistent, as shown in <ref>.
In this section, we revisit the data-generating process of each category and explain how they can be viewed as specific cases of the proposed invariance framework subsec:dgp.
We then present individual identification algorithms from the CRL literature as particular applications of our theorems, based on the implementation choices needed to satisfy the invariance and sufficiency constraints cons:invariance,cons:sufficiency. A more detailed overview of the individual works is provided in <ref>.
§.§ Multiview Causal Representation Learning
High-level overview.
The multiview setting in causal representation learning <cit.> considers multiple views that are concurrently generated by an overlapping subset of latent variables, and thus having non-independently distributed data. Multiview scenarios are often found in a partially observable setup.
For example, multiple devices on a robot measure different modalities, jointly monitoring the environment through these real-time measurements. While each device measures a distinct subset of latent variables, these subsets probably still overlap as they are measuring the same system at the same time.
In addition to partial observability, another way to obtain multiple views is to perform an “intervention/perturbation" <cit.> and collect both pre-action and post-action views on the same sample. This setting is often improperly termed “counterfactual"[Traditionally, counterfactual in causality refers to non-observable outcomes that are “counter to the fact” <cit.>. In the works we refer to here, they rather represent pre- and post- an action that affect some latent variables but not all. This can be mathematically expressed as a counterfactual in a SCM, but is conceptually different as both pre- and post- action outcomes are realized <cit.>. The “counterfactual” terminology silently implies that this is a strong assumption, but nuance is needed and it can in fact be much weaker than an intervention.] in the CRL literature, and this type of data is termed “paired data". From another perspective, the paired setting can be cast in the partial observability scenario by considering the same latent before and after an action (mathematically modelled as an intervention) as two separate latent nodes in the causal graph, as shown by <cit.>. Thus, both pre-action and post-action views are partial because neither of them can observe pre-action and post-action latents simultaneously. These works assume that the latents that are not affected by the action remain constant, an assumption that is relaxed in temporal CRL works. See <ref> for more discussion in this regard.
Data generating process.
In the following, we introduce the data-generating process of a multi-view setting in the flavor of the invariance principle as introduced in <ref>.
We consider a set of views {^k}_k∈ [K] with each view ^k ∈^k generated from some latents ^k ∈^k. Let S_k ⊆ [N] be the index set of generating factors for the view ^k, we define ^k_j = 0 for all j ∈ [N]∖ S_k to represent the uninvolved partition of latents.
Each entangled view ^k is generated by a view-specific mixing function f_k: ^k →^k:
^k = f_k(^k) ∀ k ∈ [K]
Define the joint overlapping index set A := ⋂_k ∈ [K] S_k, and assume A ⊆ [N] is a non-empty subset of [N].
Then the value of the sharing partition _A remain invariant for all observables {^k}_k ∈ [K] on a sample level.
By considering the joint intersection A, we have one single invariance property ι: ^|A|→^|A| in the invariance set ; and this invariance property ι emerges as the identity map id on ^|A| in the sense that id(^k_A) = id(^k^'_A) and thus ^k_A = ^k^'_A for all k, k^'∈ [K]. Note that <ref> (ii) is satisfied because any transformation h_k that involves other components _q with q ∉ A violates the equality introduced by the identity map.
For a subset of observations V_i ⊆ [K] with at least two elements |V_i| > 1, we define the latent intersection as A_i := ⋂_k ∈ V_i S_k ⊆ [N], then for each non-empty intersection A_i, there is a corresponding invariance property ι_i: ^|A_i|→^|A_i| which is the identity map specified on the subspace ^|A_i|. By considering all these subsets := {V_i ⊆ [K]: |V_i| > 1, |A_i| > 0}, we obtain a set of invariance properties := {ι_i: ^|A_i|→^|A_i|} that satisfy <ref>.
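A minimal simulation of this partially observable multiview process, with two views and an overlapping block, might look as follows; the dimensions, index sets, and the random two-layer mixing are illustrative choices of ours.

import numpy as np

rng = np.random.default_rng(0)
N = 5                                 # number of ground-truth latents
S = {1: [0, 1, 2], 2: [1, 2, 3, 4]}   # generating index sets S_k for two views
A = sorted(set(S[1]) & set(S[2]))     # shared block A = S_1 ∩ S_2 = [1, 2]

def mix(z_sub, seed):
    # A view-specific nonlinear mixing f_k, here a random two-layer map.
    r = np.random.default_rng(seed)
    W1 = r.normal(size=(8, z_sub.shape[1]))
    W2 = r.normal(size=(6, 8))
    return np.tanh(z_sub @ W1.T) @ W2.T

z = rng.normal(size=(1000, N))        # one shared latent sample per pair of views
x1, x2 = mix(z[:, S[1]], 1), mix(z[:, S[2]], 2)
# By construction z[:, A] enters both views, so it is the invariant block.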
Identification algorithms.
Many multiview works <cit.> employ the L_2 loss as a regularizer to enforce sample-level invariance on the invariant partition, cooperated with some sufficiency regularizer to preserve sufficient information about the observables cons:sufficiency.
Aligned with our theory thm:ID_from_multi_sets, these works have shown block-identifiability on the invariant partition of the latents across different views.
Following the same principle, there are certain variations in the implementations to enforce the invariance principle, e.g. <cit.> directly average the learned representations from paired data g(^1), g(^2) on the shared coordinates before forwarding them to the decoder; <cit.> enforces L_2 alignment up to a learnable sparse perturbation δ. As each latent component constitutes a single invariant block in the training data, these two works element-identifies defn:crlID the latent variables, as explained by <ref>.
§.§ Multi-environment Causal Representation Learning
High-level overview.
Multi-environment / interventional CRL considers data generated from multiple environments with respective environment-specific data distributions; hence, the considered data is independently but non-identically distributed.
In the scope of causal representation learning,
multi-environment data
is often instantiated through interventions on the latent structured causal model <cit.>. Recently, several papers have attempted to provide a more general identifiability statement where multi-environment data does not necessarily originate from interventions; instead, it can consist of individual data distributions that preserve certain symmetries, such as marginal invariance or support invariance <cit.> or sufficient statistical variability <cit.>.
Data generating process.
The following presents the data generating process described in most interventional causal representation learning works.
Formally, we consider a set of non-identically distributed data {P_^k}_k ∈ [K] that are collected from multiple environments (indexed by k ∈ [K]) with a shared mixing function f: ^k = f(^k) defn:mixing_fn satisfying <ref> and a shared latent SCM defn:latent_scm.
Let k=0 denote the non-intervened environment and _k ⊆ [N] denotes the set of intervened nodes in k-th environment, the latent distribution P_^k is associated with the density
p_^k(z^k) = ∏_j ∈_kp̃(z_j^k | z_pa(j)^k) ∏_j ∈ [N] ∖_k p(z_j^k | z_pa(j)^k),
where we denote by p the original density and by p̃ the intervened density.
Interventions naturally introduce various distributional invariance that can be utilized for latent variable identification: Under the intervention _k in the k-th environment, we observe that both (1) the marginal distribution of _A with A:=[N] ∖TC(_k), with TC denoting the transitive closure and (2) the score [S(^k)]_A^' := ∇_^k_A^'log p_^k on the subset of latent components A^' := [N]∖pa(_k) with pa(_k):={j: j ∈_k ∪pa(_k)} remain invariant across the observational and the k-th interventional environment. Formally, under intervention _k, we have
* Marginal invariance:
p_^0(z_A^0) = p_^k(z_A^k) A:=[N] ∖TC(_k);
* Score invariance:
[S(^0)]_A^' = [S(^k)]_A^' A^' := [N]∖pa(_k).
According to our theory <ref>, we can block-identify both _A, _A^' using these invariance principles eq:marginal_invariance,eq:score_invariance.
Since most interventional CRL works assume at least one intervention per node <cit.>, more fine-grained variable identification results, such as element-wise identification defn:crlID or affine-identification defn:affineID, can be achieved by combining multiple invariances from these per-node interventions, as we elaborate below.
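As a toy illustration of the score invariance above, consider the special case in which the latents are Gaussian in every environment, so the score is available in closed form; the Gaussian restriction and the thresholding rule are our own simplifications for exposition.

import numpy as np

def gaussian_score(z, mean, cov):
    # Score function ∇_z log p(z) of a multivariate Gaussian, for row vectors z.
    return -(z - mean) @ np.linalg.inv(cov)

def score_changed_coordinates(z, mean0, cov0, meank, covk, tol=1e-2):
    # Coordinates whose score differs between the observational and the k-th
    # environment, evaluated at the same points z; their complement plays the
    # role of the invariant set A'.
    diff = np.abs(gaussian_score(z, mean0, cov0)
                  - gaussian_score(z, meank, covk)).mean(axis=0)
    return np.where(diff > tol)[0]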
Identifiability with one intervention per node.
By applying <ref>, we demonstrate that latent causal variables can be identified up to element-wise diffeomorphism defn:crlID under single node imperfect intervention per node, given the following assumption.
[Topologically ordered interventional targets]
Specifying <ref> in the interventional setting, we assume there are exactly N environments
{k_1, …, k_N}⊆ [K] where each node j ∈ [N] undergoes one imperfect intervention in the environment k_j ∈ [K]. The interventional targets 1≼…≼ N preserve the topological order, meaning that i ≼ j only if there is a directed path from node i to node j in the underlying causal graph .
Remark: <ref> is directly implied by <ref> as we need to know which environments fall into the same equivalence class.
We believe that identifying the topological order is another subproblem orthogonal to identifying the latent variables, which is often termed “uncoupled/non-aligned problem" <cit.>.
As described by <cit.>, the topological order of unknown interventional targets can be recovered from single-node imperfect intervention by iteratively identifying the interventions that target the source nodes.
This iterative identification process may require additional assumptions on the mixing functions <cit.> and the latent structured causal model <cit.>, or on the interventions, such as perfect interventions that eliminate parental dependency <cit.>, or the need for two interventions per node <cit.>.
Given N environments {k_1, …, k_N}⊆ [K] satisfying <ref>, the ground truth latent variables can be identified up to element-wise diffeomorphism defn:crlID by combining both marginal and score invariances eq:marginal_invariance,eq:score_invariance under our framework thm:ID_from_multi_sets.
We consider a coarse-grained version of the underlying causal graph consisting of a block-node _[N-1] and the leaf node _N with _[N-1] causing _N (i.e., _[N-1]→_N).
We first select a pair of environments V = {0, k_N} consisting of the observational environment and the environment where the leaf node _N is intervened upon. According to <ref>, the marginal invariance holds for the partition A = [N-1], implying identification on _[N-1] from <ref>.
At the same time, when considering the set of environments V^' = {0, k_1, …, k_N-1},
the leaf node N is the only component that satisfy score invariance across all environments V^', because N is not the parent of any intervened node (also see <cit.>).
So here we have another invariant partition A^' = {N}, implying identification on _N thm:ID_from_multi_sets.
By jointly enforcing the marginal and score invariance on A and A^' under a sufficient encoder cons:sufficiency, we identify both _[N-1] as a block and _N as a single element.
Formally, for the parental block _[N-1], we have:
^k_[N-1] = g_:N-1(^k) ∀ k ∈{0, k_1, …, k_N}
where g_:N-1(^k) := [g(^k)]_:N-1 relates to the ground truth _[N-1] through some diffeomorphism h_[N-1]: ^N-1→^N-1 defn:blockID.
Now, we can remove the leaf node N as follows:
For each environment k ∈{0, k_1, …, k_N-1}, we compute the pushforward of P_^k using the learned encoder g_:N-1: ^k →^N-1:
P_^k_[N-1] = g_# (P_^k)
Note that the estimated distribution P_^k_[N-1] can be seen as a new observed data distribution for each environment k that is generated from the subgraph _-N without the leaf node N.
Using an iterative argument, we can identify all latent variables element-wise defn:crlID, concluding the proof.
Upon element-wise identification from single-node intervention per node, existing works often provide more fine-grained identifiability results by incorporating other parametric assumptions, either on the mixing functions <cit.> or the latent causal model <cit.> or both <cit.>.
This is explained by <ref>, as element-wise identification can be refined to affine-identification defn:affineID given additional parametric assumptions.
Note that under this milder setting, the full graph is not identifiable without further assumptions, see <cit.>.
Identifiability with two interventions per-node
Current literature in interventional CRL targeting
the general nonparametric setting <cit.> typically assumes a pair of sufficiently different perfect interventions per node.
Thus, any latent variable _j, j ∈ [N], as an interventional target, is uniquely shared by a pair of interventional environments k, k^'∈ [K], forming an invariant partition A_i = {j} consisting of the individual latent node j ∈ [N].
Note that this invariance property on the interventional target induces the following distributional property:
[S(^k) - S(^k^')]_j ≠ 0 only if _k = _k^' = {j}.
According to <ref>, each latent variable can thus be identified separately, giving rise to element-wise identification, as shown by <cit.>.
Identifiability under multiple distributions.
More recently, <cit.> explains previous interventional identifiability results from a general weak distributional invariance perspective.
In a nutshell, a set of variables _A can be block-identified if certain invariant distributional properties hold: The invariant partition _A can be block-identified defn:blockID from the rest by utilizing the marginal distributional invariance or invariance on the support, mean or variance.
<cit.> additionally assume the mixing function to be finite degree polynomial, which leads to block-affine identification defn:block_affineID, whereas we can also consider a general non-parametric setting;
they consider one single invariance set, which is a special case of <ref> with one joint ι-property.
=-1 Identification algorithm.
Instead of iteratively enforcing the invariance constraint across the majority of environments as described in <ref>,
most single-node interventional works develop equivalent constraints between pairs of environments to optimize.
For example, the marginal invariance eq:marginal_invariance implies the marginal of the source node is changed only if it is intervened upon, which is utilized by <cit.> to identify latent variables and the ancestral relations simultaneously.
In practice, <cit.> propose a regularized loss that includes Maximum Mean Discrepancy(MMD) between the reconstructed "counterfactual" data distribution and the interventional distribution, enforcing the distributional discrepancy that reveals graphical structure (e.g., detecting the source node).
Similarly, by enforcing sparsity on the score change matrix, <cit.> restricts score changes to only the intervened node and its parents.
In the nonparametric case, <cit.> optimize for the invariant (aligned) interventional targets through model selection, whereas <cit.> directly solve the constrained optimization problem formulated using score differences.
Considering a more general setup, <cit.> provides various invariance-based regularizers as plug-and-play components for any losses that enforce a sufficient representation cons:sufficiency.
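A typical ingredient of such implementations is a kernel two-sample discrepancy between distributions. Below is a short PyTorch sketch of a biased squared-MMD estimate with an RBF kernel; the fixed bandwidth is an assumption, and practical implementations often average over several bandwidths.

import torch

def rbf_gram(a, b, sigma=1.0):
    # RBF Gram matrix between two batches of samples.
    return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between samples x and y.
    return (rbf_gram(x, x, sigma).mean()
            + rbf_gram(y, y, sigma).mean()
            - 2.0 * rbf_gram(x, y, sigma).mean())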
§.§ Temporal Causal Representation Learning
High-level overview. Temporal CRL <cit.> focuses on retrieving latent causal structures from time series data, where the latent causal structure is typically modeled as a Dynamic Bayesian Network (DBN) <cit.>.
Existing temporal CRL literature has developed identifiability results under varying sets of assumptions. A common overarching assumption is to require the Dynamic Bayesian Network to be first-order Markovian, allowing only causal links from t-1 to t and eliminating longer dependencies <cit.>. While many works assume that there is no instantaneous effect, restricting the latent components of ^t to be mutually independent <cit.>, some approaches have lifted this assumption and prove identifiability allowing for instantaneous links among the latent components at the same timestep (<cit.>).
=-1 Data generating process. We present the data generating process followed by most temporal causal representation works and explain the underlying latent invariance and data symmetries. Let ^t ∈^N denotes the latent vector at time t and ^t = f(^t) ∈^D the corresponding entangled observable with f: ^N →^D the shared mixing function defn:mixing_fn satisfying <ref>. The actions ^t with cardinality |^t| = N mostly only target a subset of latent variables while keeping the rest untouched, following its default dynamics <cit.>.
Intuitively, these actions ^t can be interpreted as a component-wise indicator for each latent variable ^t_j, j ∈ [N] stating whether _j follows the default dynamics p(^t+1_j | ^t) or the modified dynamics induced by the action _j^t. From this perspective, the non-intervened causal variables at time t can be considered the invariant partition under our formulation, denoted by ^t_A_t with the index set A_t defined as A_t:= {j: _j = 0}. Note that this invariance can be considered as a generalization of the multiview case because the realizations z_j^t, z_j^t+1 are not exactly identical (as in the multiview case) but are related via a default transition mechanism p(^t+1_j | ^t). To formalize this intuition, we define ^t := ^t | ^t as the conditional random vector conditioning on the action ^t at time t. For the non-intervened partition A_t ⊆ [N] that follows the default dynamics, the transition model should be invariant:
p(^t_A_t | ^t-1) = p(^t_A_t | ^t-1),
which gives rise to a non-trivial distributional invariance property defn:invariance_projector. Note that the invariance partition A_t could vary across different time steps, providing a set of invariance properties :={ι_t: ^|A_t|→_t}_t=1^T, indexed by time t.
Given by <ref>, all invariant partitions ^t_A_t can be block-identified;
furthermore, the complementary variant partition can also be identified under an invertible encoder and mutual independence within ^t prop:id-variant-indep, aligning with the identification results without instantaneous effect <cit.>.
On the other hand, temporal causal variables with instantaneous effects are shown to be identifiable only if “instantaneous parents” (i.e., nodes affecting other nodes instantaneously) are cut by actions <cit.>, reducing to the setting without instantaneous effect where the latent components at t are mutually independent. Upon invariance, more fine-grained latent variable identification results, such as element-wise identifiability, can be obtained by incorporating additional technical assumptions, such as the sparse mechanism shift <cit.> and parametric latent causal model <cit.>.
=-1 Identification algorithm.
From a high level, the distributional invariance eq:temp_invariance indicates full explainability and predictability of _A_t^t from its previous time step ^t-1, regardless of the action ^t. In principle, this invariance principle can be enforced by directly maximizing the information content of the proposed default transition density between the learned representation p(^t_A_t | ^t-1) <cit.>. In practice, the invariance regularization is often incorporated together with the predictability of the variant partition conditioning on actions, implemented as a KL divergence between the observational posterior q(^t | ^t) and the transitional prior p(^t | ^t-1, ^t) <cit.>, estimated using variational Bayes <cit.> or normalizing flow <cit.>. We additionally show that minimizing this KL-divergence (q(^t | ^t) p(^t | ^t-1, ^t)) is equivalent to maximizing the conditional entropy p(^t_A_t | ^t-1) in <ref>.
§.§ Multi-task Causal Representation Learning
=-1 High-level overview.
Multi-task causal representation learning aims to identify latent causal variables via external supervision, in this case, the label information of the same instance for various tasks. Previously, multi-task learning <cit.> has been mostly studied outside the scope of identifiability, mainly focusing on domain adaptation and out-of-distribution generalization. One of the popular ideas that was extensively used in the context of multi-task learning is to leverage interactions between different tasks to construct a generalist model that is capable of solving all classification tasks and potentially better generalizes to unseen tasks <cit.>. Recently, <cit.> systematically studied under which conditions the latent variables can be identified in the multi-task scenario and correspondingly provided identification algorithms.
=-1 Data generating process.
Multi-task causal representation learning considers a supervised setup: given a latent SCM as defined in <ref>, we generate the observable ∈^D through some mixing function f: ^N →^D satisfying <ref>.
Given a set of tasks = {t_1, …, t_K}, let ^k ∈_k denote the corresponding task label with respect to the task t_k. Each task directly depends only on a subset of latent variables S_k ⊆ [N], in the sense that the label ^k can be expressed as a function that contains all and only the information about the latent variables _S_k:
^k = r_k(_S_k),
where r_k: ^|S_k|→_k is some deterministic function which maps the latent subspace ^|S_k| to the task-specific label space _k, and which is often assumed to be linear and implemented using a linear readout in practice <cit.>.
For each task t_k, k ∈ [K], we observe the associated data distribution P_, ^k. Considering two different tasks t_k, t_k^' with k, k^'∈ [K], the corresponding data , ^k and , ^k^' are invariant on the intersection of task-related features _A with A = S_k ∩ S_k^'. Formally, let r_k^-1({^k}) denote the pre-image of ^k, for which it holds
r_k^-1({^k})_A = r_k^'^-1({^k^'})_A,
showing alignment on the shared partition of the task-related latents.
In the ideal case, each latent component j ∈ [N] is uniquely shared by a subset of tasks, and all factors of variation can be fully disentangled, which aligns with the theoretical claims by <cit.>.
=-1 Identification algorithms.
We remark that the sharing mechanism in the context of multi-task learning fundamentally differs from that of multi-view setup, thus resulting in different learning algorithms.
Regarding learning, the shared partition of task-related latents is enforced to align up to the linear equivalence class (given a linear readout) instead of sample-level L_2 alignment.
Intuitively, this invariance principle can be interpreted as a soft version of that in the multiview case.
In practice, under the constraint of perfect classification, one employs (1) a sparsity constraint on the linear readout weights to enforce the encoder to allocate the correct task-specific latents and (2) an information-sharing term to encourage reusing latents across various tasks.
Equilibrium can be obtained between these two terms only when the shared task-specific latent is element-wise identified defn:crlID.
Thus, this soft invariance principle is jointly implemented by the sparsity constraint and information sharing regularization <cit.>.
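A rough sketch of such an objective is given below, assuming one linear readout (an nn.Linear module) per task and a mean-squared-error task loss purely for illustration; the sparsity weight and the way tasks are batched are our own choices.

import torch
import torch.nn as nn

def multitask_objective(encoder, readouts, batches, lam_sparse=1e-3):
    # Sum of per-task prediction losses plus an L1 penalty on each linear
    # readout, encouraging every task to use only a small subset of latents.
    total = 0.0
    for k, (x, y) in batches.items():
        z = encoder(x)
        total = total + nn.functional.mse_loss(readouts[k](z), y)
        total = total + lam_sparse * readouts[k].weight.abs().sum()
    return total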
§.§ Domain Generalization Representation Learning
=-1 High-level overview. Domain generalization aims at out-of-distribution performance.
That is, learning an optimal encoder and predictor that perform well on some unseen test domain that preserves the same data symmetries as in the training data. At a high level, domain generalization representation learning <cit.> considers a similar framework as introduced for interventional CRL, with independent but non-identically distributed data, additionally incorporating external supervision and focusing more on the model robustness perspective. While interventional CRL aims to identify the true latent factors of variation (up to some transformation), domain generalization learning focuses directly on out-of-distribution prediction, relying on some invariance properties preserved under the distributional shifts.
Due to the non-causal objective, new methodologies are motivated and tested on real-world benchmarks (e.g., VLCS <cit.>, PACS <cit.>, Office-Home <cit.>, Terra Incognita <cit.>, DomainNet <cit.>)
and could inspire future real-world applicability of causal representation learning approaches.
=-1 Data generating process. The problem of domain generalization is an extension of supervised learning where training data from multiple environments are available
<cit.>.
An environment is a dataset of i.i.d. observations from a joint distribution P_^k, ^k of the observables ^k ∈^D and the label ^k ∈.
The label ^k ∈^m only depends on the invariant latents through a linear regression structural equation model <cit.>, described as follows:
^k = ^* _A^k + ϵ_k, _A^k ⊥ϵ_k
^k = f(^k)
where ^* ∈^D × m represents the ground truth relationship between the label ^k and the invariant latents ^k_A. ϵ_k is some white noise with bounded variance and f: ^N →^D denotes the shared mixing function for all k ∈ [K] satisfying <ref>.
The environment distributions {P_^k, ^k}_k ∈ [K] generally differ from each other because of interventions or other distributional shifts such as covariate shift and concept shift. However, as the relationship between the invariant latents and the labels ^* and the mixing mechanism f are shared across different environments, the optimal risk remains invariant in the sense that
_k^*(^* ∘ f^-1) = _k^'^*(^* ∘ f^-1),
where ^* denotes the ground truth relation between the invariant latents _A^k and the labels ^k, and f^-1 is the inverse of the diffeomorphic mixing f (see <ref>).
Note that this is a non-trivial ι property as the labels ^k only depend on the invariant latents _A^k, thus satisfying <ref> (ii).
=-1 Identification algorithm. Different distributional invariances are enforced by interpolating and extrapolating across various environments. Among the countless contributions to the literature, mixup <cit.> linearly interpolates observations from different environments as a robust data augmentation procedure, Domain-Adversarial Neural Networks <cit.> support the main learning task by discouraging the learning of domain-discriminant features, Distributionally Robust Optimization (DRO) <cit.> replaces the vanilla Empirical Risk objective by minimizing only with respect to the worst modeled environment, Invariant Risk Minimization <cit.> combines the Empirical Risk objective with an invariance constraint on the gradient, and Variance Risk Extrapolation <cit.>, similar in spirit, combines the empirical risk objective with an invariance constraint using the variance among environments. For a more comprehensive review of domain generalization algorithms, see <cit.>.
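For concreteness, the V-REx objective used later in our experiments can be written in a few lines; the trade-off weight beta and the use of the default (unbiased) variance are illustrative choices, and at least two environments are assumed.

import torch

def vrex_objective(per_env_risks, beta=10.0):
    # V-REx: mean empirical risk plus the variance of the per-environment risks.
    risks = torch.stack(per_env_risks)   # one scalar risk tensor per environment
    return risks.mean() + beta * risks.var()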
§ EXPERIMENTS
This section demonstrates the real-world applicability of causal representation learning under the invariance principle, evidenced by superior treatment effect estimation performance on the high-dimensional causal inference benchmark <cit.> using a loss for the domain generalization literature <cit.>. Additionally, we provide ablation studies on existing interventional causal representation learning methods <cit.>, showcasing that non-trivial distributional invariance is needed for latent variable identification. This distributional invariance could, but does not have to arise from a valid intervention in the sense of causality.
§.§ Case Study: ISTAnt
Problem. Despite the majority of causal representation learning algorithms being designed to enforce the identifiability of some latent factors and tested on controlled synthetic benchmarks, there are a plethora of real-world applications across scientific disciplines requiring representation learning to answer causal questions <cit.>. Recently, <cit.> introduced ISTAnt, the first real-world representation learning benchmark with a real causal downstream task (treatment effect estimation). This benchmark highlights different challenges (sources of biases) that could arise from machine learning pipelines even in the simplest possible setting of a randomized controlled trial. Videos of ant triplets are recorded, and a per-frame representation has to be extracted for supervised behavior classification to estimate the Average Treatment Effect of an intervention (exposure to a chemical substance).
Beyond desirable identification results on the latent factors (implying that the causal variables are recovered without bias), no clear algorithm has been proposed yet for minimizing the Treatment Effect Bias (TEB) <cit.>. One of the challenges highlighted by <cit.> is that, in practice, there are both covariate and concept shifts due to effect modification from training on a non-random subset of the RCT because, for example, ecologists do not label individual frames but whole video recordings.
=-1 Solution. Relying on our framework, we can explicitly aim for low TEB by leveraging known data symmetries from the experimental protocol. In fact, the causal mechanism P(Y^e | do(X^e=x)) stays invariant among the different experiment settings (i.e., individual videos or position of the petri dish).
This condition can be easily enforced by existing domain generalization algorithms. For exemplary purposes, we choose Variance Risk Extrapolation <cit.>, which directly enforces both the invariance and sufficiency constraints cons:invariance,cons:sufficiency by minimizing the Empirical Risk together with the variance of the risk across environments.
=-1 Experiment settings. In our experiments, we consider both experiment and position hard annotation sampling criteria, as described by <cit.>, defining environments as different video recordings (with different experiment settings and treatment).
We varied the strength of the invariance component in the objective by sweeping the regularization multiplier λ_INV from 0 (ERM) to 10 000, repeating the experiments 20 times for each value to estimate confidence intervals. All other implementation details follow <cit.>.
We evaluate the performance with the Treatment Effect Relative Bias (TERB), which is defined by <cit.> as the ratio between the bias in the predictions across treatment groups and the true average treatment effect estimated with ground-truth annotations over the whole trial. We also report the balanced accuracy for the discussion.
=-1 Results. The results on both TERB and balanced accuracy for the best-performing model (pretrained model DINO, with a one-hidden-layer head with 256 hidden nodes, learning rate 0.0005 and batch size 128) are reported in <ref>.
As expected, the balanced accuracy initially increases with the invariance regularization strength λ_INV, as our prediction problem benefits from the invariance. At some point, the sufficiency component is no longer sufficiently balanced against the invariance term, and performance decreases. Similarly, the TERB improves as the invariance component is weighted more strongly, up to a certain threshold.
In particular, on average with λ_INV=100 the TERB decreases to 20% (from 100% using ERM) with experiment subsampling.
In agreement with <cit.>, a naive estimate of the TEB on a small validation set is a reasonable (albeit not perfect) model selection criterion.
Although it performs slightly worse than model selection based on ERM loss in the position sampling case, it proves to be more reliable overall. This experiment underscores the advantages of flexibly enforcing known invariances in the data, corroborating our identifiability theory sec:identifiability.
Discussion. Interestingly, <cit.> empirically demonstrated that no domain generalization algorithm consistently outperforms Empirical Risk Minimization in out-of-distribution prediction. However, in this application, our goal is not to achieve high out-of-distribution accuracy but rather to identify a representation that is invariant to the effect modifiers introduced by the data labeling process.
This experiment serves as a clear example of the paradigm shift of CRL via the invariance principle.
While existing CRL approaches design algorithms based on specific assumptions that are often challenging to align with real-world applications, our approach begins from the application perspective. It allows for the specification of known data symmetries and desired properties of the learned representation, followed by selecting an appropriate implementation for the distance function (potentially from existing methods).
§.§ Synthetic Ablation with “Ninterventions”
This subsection presents identifiability results under controversial (non-causal) conditions using simulated data. We consider the synthetic setup with full control over the latent space and the data-generating process. We consider a simple graph of three causal variables as _1 →_2 →_3. The corresponding joint density has the form of
p_(z_1, z_2, z_3) = p(z_3 | z_2) p(z_2 | z_1) p(z_1),
The goal of this experiment is to demonstrate that existing methods for interventional CRL rely primarily on distributional invariance, regardless of whether this invariance arises from a well-defined intervention or some other arbitrary transformation. To illustrate this, we introduce the concept of a “nintervention," which has a similar distributional effect to a regular intervention, maintaining certain conditionals invariant while altering others, but without a causal interpretation.
We define a “nintervention” on a causal conditional as the process of changing its distribution while cutting all incoming and outgoing edges. Child nodes condition on the old, pre-intervention, random variable. Formally, considering the latent SCM as defined in <ref>, an nintervention on a node j ∈ [N] gives rise to the following conditional factorization
p̃_(z) = p̃(z_j) ∏_i ∈ [N]∖{j} p(z_i | z_pa(i)^old)
Note that the marginal distribution of all non-nintervened nodes P__[N] ∖ j remains invariant after the nintervention.
In the previous example, we perform an nintervention by replacing the conditional density p(z_2 | z_1) with a sufficiently different marginal distribution p(z̃_2) that satisfies <ref> (ii), which gives rise to the following new factorization:
p̃_(z_1, z_2, z_3) = p(z_3 | z_2^old) p̃(z_2) p(z_1).
Note that _3 conditions on the random variable _2 before the nintervention, whose realization is denoted as z_2^old. Differing from an intervention in the causal sense, we cut not only the incoming link of _2 but also its outgoing connection by keeping the marginal distribution of _3 the same.
Clearly, this is a non-sensical intervention from the causal perspective because we eliminate the causal effect from _2 to its descendants.
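A minimal sampler for this three-node example is sketched below, assuming a linear-Gaussian chain; the coefficients, noise scales, and the ninterventional marginal are arbitrary illustrative choices, not the exact values used in our experiments.

import numpy as np

rng = np.random.default_rng(0)

def sample_chain(n, nintervene_z2=False):
    # Linear-Gaussian chain z1 -> z2 -> z3. Under the nintervention, z2 is
    # replaced by a fresh marginal while z3 still conditions on the *old* z2,
    # so the marginals of z1 and z3 are unchanged.
    z1 = rng.normal(size=n)
    z2_old = 0.8 * z1 + 0.5 * rng.normal(size=n)
    z3 = -0.6 * z2_old + 0.5 * rng.normal(size=n)
    z2 = rng.normal(2.0, 1.5, size=n) if nintervene_z2 else z2_old
    return np.stack([z1, z2, z3], axis=1)

z_obs, z_nint = sample_chain(5000), sample_chain(5000, nintervene_z2=True)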
=-1 Experimental setting.
As a proof of concept, we choose a linear Gaussian additive noise model and a nonlinear mixing function implemented as a 3-layer invertible MLP with activation. We average the results over three independently sampled ninterventional densities p̃(z_2) while guaranteeing all ninterventional distributions satisfy <ref> (ii). We remark that the marginal distribution of both _1, _3 remains the same after a nintervention. Hence, we expect _1, _3 to be block-identified defn:blockID according to <ref>.
In practice, we enforce the marginal invariance constraint cons:invariance by minimizing the MMD loss, as implemented by the interventional CRL works <cit.> and train an auto-encoder for a sufficient representation cons:sufficiency. Further details are included in <ref>.
=-1 Results. To validate block-identifiability, we perform a regression between the estimated block [_1, _3] and the ground truth latents _1, _2, _3 respectively. The results indicate that both _1, _3 are block-identified, showing a high R^2 score of 0.863 ± 0.031 and 0.872 ± 0.035, respectively. By contrast, the latent variable _2 is not identified, evidenced by a low R^2 of 0.065 ± 0.017.
=-1 Discussion. By showing the block-identifiability results of marginal invariant latent variables _1, _3 under nintervention, we successfully validate <ref>, demystifying the spurious link between latent variable identification and causal interventions.
Throughout the experiment, we find that a sufficiently different ninterventional distribution (<ref> (ii)) is key to properly validating the identification theory and observing the expected outcome, as elaborated in <ref>.
§ CONCLUSIONS
In this paper, we take a closer look at the wide range of causal representation learning methods.
Interestingly, we find that the differences between them may often be more related to “semantics" than to fundamental methodological distinctions.
We identified two components involved in identifiability results: preserving information of the data and a set of known invariances.
Our results have two immediate implications. First, they provide new insights into the “causal representation learning problem," particularly clarifying the role of causal assumptions.
We have shown that while learning the graph requires traditional causal assumptions such as additive noise models or access to interventions, identifying the causal variables may not.
This is an important result, as access to causal variables is useful on its own for downstream tasks, e.g., for training robust downstream predictors or even extracting pre-treatment covariates for treatment effect estimation <cit.>, even without knowledge of the full causal graph.
Second, we have exemplified how causal representation can lead to successful applications in practice.
We moved the goalposts from a characterization of specific assumptions that lead to identifiability, which often do not align with real-world data, to a general recipe that allows practitioners to specify known invariances in their problem and learn representations that align with them.
In the domain generalization literature, it has been widely observed that invariant training methods often do not consistently outperform empirical risk minimization (ERM).
In our experiments, instead, we have demonstrated that the specific invariance enforced by vREX <cit.> entails good performance in our causal downstream task subsec:istant.
Our paper leaves out certain settings concerning identifiability that may be interesting for future work, such as discrete variables and finite-sample guarantees.
One question the reader may ask, then, is “so what is exactly causal in causal representation learning?”.
We have shown that the identifiability results in typical causal representation learning are primarily based on invariance assumptions, which do not necessarily pertain to causality. We hope this insight will broaden the applicability of these methods.
At the same time, we used causality as a language describing the “parameterization” of the system in terms of latent causal variables with associated known symmetries. Defining the symmetries at the level of these causal variables gives the identified representation a causal meaning, important when incorporating a graph discovery step or some other causal downstream task like treatment effect estimation. Ultimately, our representations and latent causal models can be “true” in the sense of <cit.> when they allow us to predict “causal effects that one observes in practice”.
Overall, our view also aligns with “phenomenological” accounts of causality <cit.>, which define causal variables from a set of elementary interventions. In our setting too, the identified latent variables or blocks thereof are directly defined by the invariances at hand. From the methodological perspective, all that is needed to learn causal variables is for the symmetries defined over the causal latent variables to entail some statistical footprint across pockets of data. If variables are available, learning the graph has a rich literature <cit.>, with assumptions that are often compatible with learning the variables themselves.
Our general characterization of the variable learning problem opens new frontiers for research in representation learning:
§.§ Representational Alignment and Platonic Representation
Several works (<cit.>) have highlighted the emergence of similar representations in neural models trained independently. In <cit.> it is hypothesized that neural networks, trained with different objectives on various data and modalities, are converging toward a shared statistical model of reality within their representation spaces. To support this hypothesis, they measure the alignment of representations, proposing a mutual nearest-neighbor metric that measures the mean intersection of the k-nearest neighbor sets induced by two kernels defined on the two spaces, normalized by k.
This metric can be an instance of the distance function in our formulation in <ref>.
Despite not being optimized for directly, several models in multiple settings (different objectives, data, and modalities) appear to be aligned, hinting at the fact that their individual training objectives may be respecting some unknown symmetries.
A precise formalization of the latent causal model and identifiability in the context of foundational models remains open and will be objective for future research.
§.§ Environment Discovery
Domain generalization methods generalize to distributions potentially far away from the training distribution via learning representations invariant across distinct environments.
However, this can be costly, as it requires label information on the partition of the data into environments. Automatic environment discovery (<cit.>) attempts to solve this problem by learning to recover the environment partition. This is an interesting new frontier for causal representation learning: discovering data symmetries as opposed to only enforcing them. For example, this would correspond to having access to multiple interventional distributions without knowing which samples belong to the same interventional or observational distribution. Discovering that a data set is a mixture of distributions, each being a different intervention on the same causal model, could help increase the applicability of causal representations to large observational data sets. We expect this to be particularly relevant to downstream tasks where biases to certain experimental settings are undesirable, as in our case study on treatment effect estimation from high-dimensional recordings of a randomized controlled trial.
§.§ Connection with Geometric Deep Learning
Geometric deep learning (GDL) (<cit.>) is a well-established learning paradigm which involves encoding a geometric understanding of data as an inductive bias in deep learning models, in order to obtain more robust models and improve performance. One fundamental direction for these priors is to encode symmetries and invariances to different types of transformations of the input data, e.g. rotations or group actions (<cit.>), in representational space. Our work can be fundamentally related to this direction, with the difference that we don't aim to model explicitly the transformations of the input space, but rather the invariances defined at the latent level. While an initial connection has been developed for disentanglement <cit.>, a precise connection between GDL and causal representation learning remains an open direction. We expect this to benefit the two communities in both directions: (i) by injecting geometric priors in order to craft better CRL algorithms and (ii) by incorporating causality into successful GDL frameworks, which have been fundamentally advancing challenging real-world problems, such as protein folding (<cit.>).
§ ACKNOWLEDGEMENTS
We thank Jiaqi Zhang, Francesco Montagna, David Lopez-Paz, Kartik Ahuja, Thomas Kipf, Sara Magliacane, Julius von Kügelgen, Kun Zhang, and Bernhard Schölkopf for extremely helpful discussion. Riccardo Cadei was supported by a Google Research Scholar Award to Francesco Locatello. We acknowledge the Third Bellairs Workshop on Causal Representation Learning held at the Bellairs Research Institute, February 9/16, 2024, and a debate on the difference between interventions and counterfactuals in disentanglement and CRL that took place during Dhanya Sridhar's lecture, which motivated us to significantly broaden the scope of the paper. We thank Dhanya and all participants of the workshop.
http://arxiv.org/abs/2409.03683v1 | 20240905163821 | Towards a self-consistent evaluation of gas dwarf scenarios for temperate sub-Neptunes | ["Frances E. Rigby", "Lorenzo Pica-Ciamarra", "Måns Holmberg", "Nikku Madhusudhan", "Savvas Constantinou", "Laura Schaefer", "Jie Deng", "Kanani K. M. Lee", "Julianne I. Moses"] | astro-ph.EP | ["astro-ph.EP"] |
These authors contributed comparably to this work: Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK (affiliation repeated for five co-authors)
Corresponding author: Nikku Madhusudhan, [email protected]
Department of Geological Sciences, School of Earth, Energy, and Environmental Sciences, Stanford University, Stanford, CA 94305, USA
Department of Geosciences, Princeton University, Princeton, NJ 08544, USA
Department of Physics, United States Coast Guard Academy, New London, CT 06320, USA
Space Science Institute, 4765 Walnut Street, Suite B, Boulder, CO 80301, USA
§ ABSTRACT
The recent JWST detections of carbon-bearing molecules in a habitable-zone sub-Neptune have opened a new era in the study of low-mass exoplanets. The sub-Neptune regime spans a wide diversity of planetary interiors and atmospheres not witnessed in the solar system, including mini-Neptunes, super-Earths, and water worlds. Recent works have investigated the possibility of gas dwarfs, with rocky interiors and thick H_2-rich atmospheres, to explain aspects of the sub-Neptune population, including the radius valley. Interactions between the H_2-rich envelope and a potential magma ocean may lead to observable atmospheric signatures. We report a coupled interior-atmosphere modelling framework for gas dwarfs to investigate the plausibility of magma oceans on such planets and their observable diagnostics. We find that the surface-atmosphere interactions and atmospheric composition are sensitive to a wide range of parameters, including the atmospheric and internal structure, mineral composition, volatile solubility and atmospheric chemistry. While magma oceans are typically associated with high-temperature rocky planets, we assess if such conditions may be admissible and observable for temperate sub-Neptunes. We find that a holistic modelling approach is required for this purpose and to avoid unphysical model solutions. We find using our model framework and considering the habitable-zone sub-Neptune K2-18 b as a case study that its observed atmospheric composition is incompatible with a magma ocean scenario. We identify key atmospheric molecular and elemental diagnostics, including the abundances of CO_2, CO, NH_3 and, potentially, S-bearing species. Our study also underscores the need for fundamental material properties for accurate modelling of such planets.
§ INTRODUCTION
Sub-Neptune planets, with radii 1 R_⊕≲ R_p≲ 4 R_⊕, have emerged as the new frontier of exoplanet science and constitute the most numerous class of planets detected to date (e.g., ). The nature of the sub-Neptune population remains debated, as their bulk densities can be explained by a number of degenerate interior compositions <cit.>. These include rocky planets with diverse atmospheric compositions, mini-Neptunes with volatile-rich interiors and deep H_2-rich atmospheres, and water worlds with substantial water mass fractions, including Hycean worlds <cit.>.
The James Webb Space Telescope (JWST) is revolutionising our understanding of sub-Neptunes through high-precision atmospheric spectroscopy. JWST observations have led to confident detections and precise abundance constraints for CH_4 and CO_2 in the atmospheres of the habitable-zone sub-Neptune and candidate Hycean world <cit.> K2-18 b <cit.>, demonstrating the promise of JWST for detailed atmospheric characterisation. Furthermore, such observations are starting to be available for other temperate sub-Neptunes, including TOI-270 d <cit.> – where abundance constraints for CH_4 and CO_2 were also retrieved – and LHS 1140 b <cit.>. Such precise abundance measurements pave the way towards understanding the interactions between the planet's atmosphere and interior, including the presence and nature of an underlying surface, as well as the planetary formation processes that give rise to such planets.
One of the most distinct features of the sub-Neptune population is the radius valley, a bimodal distribution of sub-Neptune radii with a minimum around 1.8 R_⊕ <cit.>. Two competing hypotheses have been proposed to explain the origin of the radius valley. One explanation suggests that the valley is a consequence of differential atmospheric mass loss between planets of different masses. In this hypothesis, both populations would be composed of planets with predominantly rocky interiors. The more massive planets would retain their primary H_2-rich atmospheres, while the less massive ones would instead largely lose their envelope and hence have a smaller radius. We refer to the larger population, with rocky interiors and deep H_2-rich atmospheres, as gas dwarfs. The mechanism for the mass loss is debated, with the predictions of two hypotheses – photoevaporation <cit.> and core-powered mass loss <cit.> – both proposed to explain the observations <cit.>. A second explanation <cit.> suggests that the valley could instead be due to planets having different interior compositions. The smaller radius population would be rocky, as in the atmospheric mass-loss scenario, while the larger population would be composed of planets with water-rich interiors due to significant accumulation of icy planetesimals/pebbles during their formation and migration. Atmospheric observations of planets in the sub-Neptune range may be able to distinguish between these two scenarios <cit.>.
While the gas dwarf hypothesis has garnered significant attention in the literature <cit.>, several open questions remain. Firstly, it is unclear whether it is possible for rocky cores to accrete a substantial H_2-rich envelope without significant accretion of other volatiles and ices <cit.>. Secondly, should such planets exist, would the atmosphere-interior interactions give rise to distinct atmospheric signatures? This might be expected if the rocky surface were to be molten, giving rise to a magma ocean scenario <cit.>. It is, however, not fully clear whether this scenario is possible, particularly for planets with a low equilibrium temperature. For these planets, only a subset of atmospheric structures, combining sufficient but not exceedingly high surface pressure and very high surface temperature, could result in magma at the base of the atmosphere.
Several recent studies have explored the implications of a magma ocean on the atmosphere and interior compositions of diverse planets, both with terrestrial-like <cit.> and H_2-rich atmospheres <cit.>. These works identify several key factors, including temperature and oxygen fugacity at the bottom of the atmosphere, that influence the composition of the atmosphere, driven by thermochemical equilibrium at the gas-melt interface. For example, some notable atmospheric signatures of reduced conditions in a rocky interior include potential nitrogen depletion <cit.> and high CO/CO_2 ratio for H_2-rich atmospheres <cit.>. However, the interplay between the atmosphere, interior, and the corresponding surface-atmosphere interactions in sub-Neptunes is only beginning to be explored in a realistic manner <cit.>.
In this work, we develop an integrated magma ocean framework for temperate, H_2-rich sub-Neptunes. Our framework, presented in Section <ref>, includes atmospheric and internal structure modelling, melt-gas interactions, and both equilibrium and disequilibrium processes in the atmosphere, resulting in spectroscopic predictions of atmospheric observables. We consider thermochemical equilibrium at the magma-atmosphere interface, and the solubility of volatile (H, C, N, O, S) bearing species in magma. We explore the extreme case of the habitable-zone sub-Neptune and Hycean candidate K2-18 b <cit.> to investigate the plausibility of a magma ocean <cit.> and, if present, its atmospheric signatures. In doing so, we first use our framework to perform a comparative assessment of previous works in this direction in Section <ref>, both on terrestrial-like atmospheres <cit.> and on H2-rich ones <cit.>, with a focus on the case study of the candidate Hycean world K2-18 b. We then present our model predictions in Section <ref>. Finally, we summarize our findings and discuss future work in Section <ref>, highlighting the need for physically consistent models, and new experimental and theoretical work to derive accurate fundamental material properties.
§ METHODS
We develop an integrated modelling framework to evaluate gas dwarf scenarios for planets in the sub-Neptune regime. A schematic flowchart of the framework is shown in Figure <ref>.
We start by considering the constraints that the observed bulk parameters (mass, radius, and hence density) and known atmospheric properties impose on the planet's atmospheric and internal structure. This enables us to infer the possible conditions at the surface-atmosphere boundary, and, by considering a relevant mineral phase diagram, assess whether such conditions can in principle lead to a magma ocean scenario. If they can, we proceed by modelling the chemistry at the magma-atmosphere interface, which is determined by equilibrium processes including the solubility of relevant volatiles in the silicate melt, providing us with the elemental abundances in the gas phase at the interface.
These are then evolved to the rest of the atmosphere, assuming chemical equilibrium in the lower atmosphere, and non-equilibrium processes (photochemistry and vertical mixing) in the upper atmosphere.
This allows us to compute the observable composition of the atmosphere, which can be compared with the molecular abundances retrieved through observations to finally assess the plausibility of a magma ocean scenario for the planet.
We now describe in detail each of the steps outlined above.
§.§ Atmospheric Structure and Composition
We begin by modelling the atmospheric temperature structure in a self-consistent manner. In order to do so, the atmospheric chemical composition needs to be assumed. This can be done either by assuming the elemental abundances and atmospheric chemistry, or by directly assuming the molecular mixing ratios in the atmosphere. Other parameters that need to be taken into account include the internal temperature T_int representing an internal heat flux, the incident irradiation, the stellar properties, the presence and characteristics of clouds/hazes in the planet's atmosphere, and the efficiency of day-night energy redistribution. The self-consistent calculation will yield a pressure-temperature (P-T) profile, which will be coupled to the internal structure model, as discussed in Section <ref>.
In order to carry out the self-consistent modelling of the atmospheric structure, we use the GENESIS framework <cit.> adapted for sub-Neptunes (). GENESIS solves for radiative-convective equilibrium throughout the atmosphere, which is assumed to be plane-parallel, using the Rybicki scheme. It carries out line-by-line radiative transfer calculations through the Feautrier method <cit.> and the discontinuous finite element method <cit.>, while taking into account all of the parameters mentioned earlier in this section.
For the atmospheric composition, we adopt uniform mixing ratios of molecular species based on the retrieved values at the terminator region of K2-18 b <cit.>. We use the median retrieved abundances for the one-offset case: logX_CH4 = -1.72 and logX_CO2 = -2.04. For H2O, we consider the 95% one-offset upper limit, logX_H2O = -3.01. We also assume the incident irradiation and stellar properties of K2-18 b, and uniform day-night energy redistribution. We then explore the remaining parameter space. In particular, we consider two end-member values for T_int, 25 K and 50 K, following <cit.> and <cit.>, and three values for a, the hazes' Rayleigh enhancement factor: 100, 1500 and 10000. We consider four combinations of these parameters, obtaining a cold case (designated C1, corresponding to T_int = 25 K, a = 10000), two canonical cases (both with a = 1500, designated C2 for T_int = 25 K and C3 for T_int = 50 K) and a hot case (C4, with T_int = 50 K and a = 100). These profiles are shown in Figure <ref>.
We place the upper boundary of the atmosphere at 10^-6 bar, and calculate the P-T profile self-consistently down to 10^3 bar, below the radiative-convective boundary. At higher pressures, we extrapolate the profile as an adiabat, using the H2/He equation of state (EOS), ρ=ρ(P,T), and adiabatic gradient from <cit.>.
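To illustrate this step, the sketch below extends a P-T profile adiabatically below the radiative-convective boundary by integrating dlnT/dlnP = ∇_ad. The gradient function and all numerical values are placeholders rather than the actual H_2/He EOS tables used in the model.

```python
import numpy as np

def extend_adiabat(p_rcb, t_rcb, p_deep, grad_ad, n=200):
    """Extend a P-T profile adiabatically from (p_rcb, t_rcb) down to p_deep.

    grad_ad(p, t) should return the adiabatic gradient dlnT/dlnP; here it is a
    placeholder for an EOS-based lookup (e.g. tabulated for an H2/He mixture).
    """
    lnp = np.linspace(np.log(p_rcb), np.log(p_deep), n)
    lnt = np.empty_like(lnp)
    lnt[0] = np.log(t_rcb)
    for i in range(1, n):
        dlnp = lnp[i] - lnp[i - 1]
        lnt[i] = lnt[i - 1] + grad_ad(np.exp(lnp[i - 1]), np.exp(lnt[i - 1])) * dlnp
    return np.exp(lnp), np.exp(lnt)

# Illustrative values only: a constant gradient of 0.3 from 10^3 bar down to 10^6 bar
pressures, temperatures = extend_adiabat(1e3, 2500.0, 1e6, lambda p, t: 0.3)
```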
We note that, in principle, an appropriate P-T profile may be even colder than C1, considering the constraints on clouds/hazes at the terminator from observations of K2-18 b <cit.>.
§.§ Internal Structure Modelling
We model planetary internal structures using the HyRIS framework, outlined in <cit.>. The model calculates the planet radius (R_p) from the planet mass (M_p), the mass fractions of the planet's components (x_i=M_i/M_p), and the corresponding EOS and P-T profile. HyRIS solves the equations for mass continuity and hydrostatic equilibrium using a fourth-order Runge-Kutta method, and solves for R_p using a bisection procedure. For the purpose of this study investigating magma-ocean scenarios, the internal structure model includes an H_2-rich envelope, a silicate mantle, and an iron core.
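As a minimal illustration of this procedure, the sketch below integrates the mass continuity and hydrostatic equilibrium equations inward from a trial radius; a bisection on the trial radius (not shown) would then adjust it until the enclosed mass vanishes at the centre. The simple Euler stepping and the toy constant-density EOS are stand-ins for the fourth-order Runge-Kutta scheme and the layered EOSs used in practice.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_mass_residual(r_trial, p_outer, rho_of_p, m_planet, n=4000):
    """Integrate dP/dr = -G m rho / r^2 and dm/dr = 4 pi r^2 rho inward.

    Returns the leftover enclosed mass near the centre; a root of this residual
    in r_trial corresponds to a self-consistent planet radius.
    """
    radii = np.linspace(r_trial, r_trial / n, n)
    dr = radii[1] - radii[0]           # negative step (integrating inward)
    p, m = p_outer, m_planet
    for r in radii:
        rho = rho_of_p(p)
        p += (-G * m * rho / r**2) * dr
        m += (4.0 * np.pi * r**2 * rho) * dr
    return m

# Toy example: constant-density interior of 4000 kg m^-3 (illustrative only)
residual = central_mass_residual(1.6e7, 1e9, lambda p: 4000.0, 5.2e25)
```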
The silicate mantle is described by EOSs valid for the liquid and solid phases – for simplicity, we adopt a separate EOS prescription on either side of a melting curve. The composition is nominally assumed to be peridotitic. The magma is described by an EOS for peridotitic melt compiled similarly to <cit.> by combining the densities of molten enstatite, forsterite, fayalite, anorthite and diopside, described by third-order Birch-Murnaghan/Mie-Gruneisen EOSs from <cit.>, weighted by their mass fractions. For the purpose of this initial study, we assume complete melting occurs at the liquidus, and hence do not include an EOS prescription for the partial melt between the solidus and liquidus curves. We use the peridotite liquidus from <cit.>, based on <cit.> – both the liquidus and solidus are shown in Figure <ref> <cit.>. The solid portion of the silicate mantle is described by the EOS of <cit.> for the high-pressure peridotite assemblage. At extreme mantle pressures beyond the pressure range of these experiments (107 GPa), we use the temperature-independent EOS of <cit.> for MgSiO_3 perovskite, originally derived at room temperature. The thermal effects for solid silicates at these pressures are small <cit.> with negligible effect on the internal structure. The iron core is described by the EOS of <cit.> for hexagonal close-packed Fe.
The temperature structure in the melt is assumed to be adiabatic. The adiabatic gradient is calculated using the specific heat for peridotite from <cit.> and the volume expansion coefficient that we calculate from the combined peridotite melt EOS. The adiabatic gradient in the upper portion of the solid mantle is calculated following <cit.>. Following previous studies <cit.>, the remaining solid portion of the interior is taken to be isothermal, as the EOSs used are temperature-independent <cit.>.
The mass of the magma ocean follows from the adiabatic temperature profile in the melt, similarly to the calculation of water ocean depths by <cit.> and <cit.>. The melt adiabat and hence the magma base pressure are defined by the surface pressure and temperature. For a given interior composition and surface conditions, the mass of the melt can thus be calculated. We adapt HyRIS to automate the extraction of the relevant melt characteristics, similar to the methods for water oceans in <cit.>. The mass fraction of the melt is an important quantity for considerations of the available volatile reservoir, as discussed below. We note that the moderate increase of the magma ocean mass fraction that may result from partial melting is partly accounted for by our range of considered melt masses in Section <ref>.
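A schematic version of this calculation is shown below: the melt adiabat is followed downward from the surface until it falls below the liquidus, which marks the base of the magma ocean. Both the adiabatic gradient and the liquidus are passed in as placeholder functions, and the numbers in the example call are purely illustrative rather than the peridotite data cited above.

```python
import numpy as np

def magma_base_pressure(p_surf, t_surf, grad_ad_melt, liquidus, p_max, n=500):
    """Follow the melt adiabat from (p_surf, t_surf) until T drops below the liquidus.

    grad_ad_melt(p, t) is dlnT/dlnP in the melt and liquidus(p) the melting
    temperature; both are placeholders for tabulated peridotite properties.
    """
    lnp = np.linspace(np.log(p_surf), np.log(p_max), n)
    t = t_surf
    for i in range(1, n):
        t *= np.exp(grad_ad_melt(np.exp(lnp[i - 1]), t) * (lnp[i] - lnp[i - 1]))
        if t < liquidus(np.exp(lnp[i])):
            return np.exp(lnp[i])      # base of the magma ocean
    return p_max                       # melt persists to the bottom of the grid

# Illustrative call with a toy gradient and a linear-in-log-pressure liquidus (bar, K)
p_base = magma_base_pressure(2e5, 3100.0, lambda p, t: 0.02,
                             lambda p: 1500.0 + 300.0 * np.log10(p), 5e6)
```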
§.§ Melt-atmosphere Interface Chemistry
The atmospheric chemistry is constrained by the elemental composition at the bottom of the atmosphere, which is governed by the interactions at the magma ocean and atmosphere interface. At this boundary, we model the reactions and solubility of the gas species in thermochemical equilibrium. We include 82 H-C-N-O-S gas species and He, the set of which we denote 𝐗, and their equilibrium reactions, nominally excluding other effects such as condensation and exsolution. Of these volatile species, we consider the solubility in the melt of H_2 <cit.>, H_2O <cit.>, CO <cit.>, CO_2 <cit.>, CH_4 <cit.>, N_2 <cit.>, S_2 <cit.> and H2S <cit.>, as further motivated in Appendix <ref>. We note that the solubility of H2S is uncertain at high temperatures/pressures and may be higher if, for example, its solubility approaches that of S2. Furthermore, we remark that we are not considering the possible exsolution of FeS, which may affect the abundance of sulfur in the atmosphere. Likewise, the overall solubility of nitrogen is calculated here through N2, and may be higher if the solubility of NH3 is significant. The data on NH3 solubility in magma is currently limited and it is difficult to make any quantitative estimates of NH3 solubility. For the explicitly composition-dependent laws, we use the <cit.> Etna basalt melt composition. Similarly to <cit.>, we assume that the magma is well-stirred such that the equilibration at the surface sets the volatile abundance throughout the melt.
These solubility laws relate the partial pressures in the atmosphere to the concentrations of the volatiles in the melt. The amount of volatiles in the melt thus depends on the equilibrium chemistry, the solubility and the total mass of the melt, M_melt. For a given mass of the atmosphere and the melt, we have the following mass balance condition for each species i <cit.>,
M_tot w_i = M_atm w_i, atm + M_melt w_i, melt ,
where w_i is the total mass fraction of each species i.
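A minimal sketch of this mass balance for a single species is given below, assuming a hypothetical power-law solubility w_i,melt = α p_i^β and treating the species as a trace gas whose partial pressure scales with its atmospheric mass fraction; the constants are illustrative, not the laboratory-calibrated laws cited above.

```python
from scipy.optimize import brentq

def partition_volatile(m_i_total, m_atm, m_melt, mu_i, mu_atm, p_surf, alpha, beta):
    """Solve M_tot w_i = M_atm w_i,atm + M_melt w_i,melt for the partial pressure p_i.

    Assumes a trace species (w_i,atm ~ (p_i/P_s) * mu_i/mu_atm) and an
    illustrative solubility law w_i,melt = alpha * p_i**beta, with p_i in bar.
    """
    def residual(p_i):
        w_atm = (p_i / p_surf) * (mu_i / mu_atm)
        w_melt = alpha * p_i**beta
        return m_atm * w_atm + m_melt * w_melt - m_i_total
    return brentq(residual, 1e-12, p_surf)

# Illustrative N2-like species in an H2-dominated envelope (all numbers made up)
p_i = partition_volatile(m_i_total=1e20, m_atm=5e23, m_melt=1e24,
                         mu_i=28.0, mu_atm=2.3, p_surf=2e5,
                         alpha=1e-8, beta=0.5)
```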
To determine the chemical composition of the atmosphere and the melt, we solve the element conservation equations
ε_j = ∑_i ∈ 𝐗ν_ijn_i/n_⟨ H ⟩ ,
where n_i is the total amount of moles of species i, n_⟨ H ⟩ = n_H + 2n_H_2 + 2n_H_2O + … is the total amount of moles of hydrogen, ν_ij are the coefficients of the stoichiometric matrix, and ε_j is the elemental abundance of element j relative to hydrogen. Equation (<ref>) is coupled to Equation (<ref>) via n_i ∝ w_i / μ_i, where μ_i is the molar mass, which in turn is coupled to the law of mass action
p_i/p^∘ = K_i ∏_j ∈ E( p_j/p^∘)^ν_ij ,
and the solubility laws, determining both w_i, atm and w_i, melt. Here, E is the set of all elements, p_i is the partial pressure of species i, K_i is the temperature-dependent equilibrium constant, and p^∘ is a standard pressure of 1 bar. For each gas species, we approximate the equilibrium constant as
ln K(T) = a_0/T + a_1 ln T + b_0 + b_1 T + b_2 T^2 ,
using the coefficients provided by FASTCHEM <cit.>, mainly derived using thermochemical data from <cit.>.
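For reference, evaluating this fit is straightforward; the coefficients below are placeholders and not the published FastChem values.

```python
import numpy as np

def ln_k(T, a0, a1, b0, b1, b2):
    """Equilibrium-constant fit ln K(T) = a0/T + a1 ln T + b0 + b1 T + b2 T^2."""
    return a0 / T + a1 * np.log(T) + b0 + b1 * T + b2 * T**2

# Placeholder coefficients for a single hypothetical reaction, evaluated at T = 3000 K
K = np.exp(ln_k(3000.0, a0=5.0e4, a1=-1.2, b0=3.4, b1=-1.0e-4, b2=2.0e-9))
```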
Overall, Equation (<ref>) depends on the elemental partial pressures, with 6 unknowns, corresponding to the 6 elements considered. Nominally, we solve for these using the 5 equations in (<ref>) for all elements apart from H, together with
P_s = ∑_i ∈ 𝐗 p_i ,
to fix the total pressure. This treatment of oxygen yields a first-order estimate of the redox state as set by the atmosphere. Alternatively, we consider oxygen fugacity (f_O2) as a free parameter, determining p_O in Equation (<ref>) via f_O2 = p_O2 = K_O2 p_O^2 / p^∘, which allows us to consider different redox conditions. In this framework, we assume ideal gas behavior such that fugacity and partial pressure are equivalent <cit.>.
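The sketch below illustrates the structure of this solve for a reduced two-element (H-O) system: the species partial pressures are built from the elemental ones via the law of mass action, and a root finder enforces the element-conservation and total-pressure constraints. The equilibrium constants are hypothetical stand-ins, and the solubility terms are omitted for brevity.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical equilibrium constants (p in bar) for a toy H-O system
K = {"H2": 1.0e4, "O2": 1.0e6, "H2O": 1.0e9}

def species_pressures(p_H, p_O):
    """Law of mass action: p_i = K_i * prod_j p_j^nu_ij (standard pressure of 1 bar)."""
    return {"H": p_H, "O": p_O,
            "H2": K["H2"] * p_H**2,
            "O2": K["O2"] * p_O**2,
            "H2O": K["H2O"] * p_H**2 * p_O}

def residuals(x, eps_O, P_s):
    p = species_pressures(*np.exp(x))          # work in log space for positivity
    n_H = p["H"] + 2 * p["H2"] + 2 * p["H2O"]  # total hydrogen normalisation
    n_O = p["O"] + 2 * p["O2"] + p["H2O"]
    return [n_O / n_H - eps_O,                 # element conservation for O
            sum(p.values()) - P_s]             # total surface pressure

# Rough initial guess in log space; eps_O and P_s are illustrative values
p_H, p_O = np.exp(fsolve(residuals, x0=[1.5, -16.0], args=(5.0e-3, 2.0e5)))
```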
As a cross-check, we validate our new framework against a self-consistent atmosphere composition model <cit.> which uses the Gibbs energy minimization code IVTANTHERMO <cit.>. IVTANTHERMO uses a thermodynamic database based on <cit.>, which we modify to include the silicate-melt dissolved volatile species H_2, OH^-, O^2-, CO, CO_2, CH_4, N^3-, and S^2-. We calculate equilibrium between a total possible 366 gas species and 201 condensed species. For the dissolution reactions, we assume ΔC_p = 0 and that any temperature dependence in the equilibrium constant is due to the heat of the reaction. However, data is available only in limited temperature ranges for most dissolution reactions, so we assume a simple Henry's law solubility relation for all of the dissolved species except S^2-, OH^-, H_2, and CH_4. We also neglect non-ideality in both the gas phase and melt. Using IVTANTHERMO, we then compute self-consistent equilibrium between the gas phase and melt species as a function of pressure and temperature.
For this comparison, we use 50×solar bulk elemental abundances (not including He), P_s=10^4 bar, T_s=3000 K, and M_melt / M_atm = 0.20, which is given by the gas-to-melt mass ratio as calculated by IVTANTHERMO. We find that all major H-C-N-O-S gas species agree to within at most 0.35 dex (standard deviation of 0.1 dex), with the largest deviation coming from CO2. This deviation mostly stems from the oxygen fugacities being somewhat different between the two approaches, with IVTANTHERMO yielding a 0.35 dex lower value. Furthermore, we verify that we recover the atmospheric abundances given by FASTCHEM 2 <cit.> and GGCHEM <cit.> when setting M_melt = 0.
§.§ Atmospheric Chemistry
We carry out equilibrium and disequilibrium chemistry calculations to determine the atmospheric composition above the magma/rock surface. We use the VULCAN photochemical kinetics framework <cit.>, with the initial atmospheric chemistry obtained using the FASTCHEM equilibrium chemistry code <cit.>.
For equilibrium chemistry calculations, we consider thermochemical equilibrium involving H-C-N-O-S species as well as He, along with H_2O condensation. For calculations considering disequilibrium processes, we additionally include the effects of vertical mixing and photochemistry. We follow the K_zz parameterisation of <cit.>:
K_zz / cm^2 s^-1 =
min(5.6 × 10^4/ (P/bar)^1/2, 10^10), P ≤ 0.5 bar
10^6, P > 0.5 bar,
although we note that the K_zz in the troposphere could be higher (e.g., ∼ 10^7-10^8 cm^2 s^-1 in the deep convective region of the atmosphere) or lower (e.g., ∼10^4 cm^2 s^-1) in any radiative regions if moist convection is inhibited by the high molecular weight of water in the H_2-rich atmosphere (see ). Accordingly, we consider a wider range of K_zz values than our canonical treatment in Section <ref> and Appendix <ref>.
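Expressed as a function of pressure, this parameterisation reads as follows (a direct transcription of the piecewise profile quoted above):

```python
def kzz_cm2_per_s(p_bar):
    """Eddy diffusion coefficient (cm^2 s^-1) as a function of pressure in bar."""
    if p_bar > 0.5:
        return 1.0e6
    return min(5.6e4 / p_bar**0.5, 1.0e10)
```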
Additionally, we consider photochemical reactions including H-C-N-O-S species, using a nominal stellar spectrum from the HAZMAT spectral library <cit.> corresponding to a median 5 Gyr star of radius 0.45 R_⊙ following previous work <cit.>. We also specifically consider the condensation of H_2O to liquid and solid droplets, which fall at their terminal velocity, as described in <cit.>. We note that while the H-C-N-O chemistry has been extensively explored for sub-Neptunes in various studies <cit.>, the S chemistry has not been explored in significant detail and may be incomplete. Nevertheless, we include S using the VULCAN framework <cit.> for completeness.
With the above calculations we obtain the vertical mixing ratio profiles for a number of relevant chemical species in the atmosphere. The abundances of key species in the observable part of the atmosphere can then be compared against the constraints retrieved from an atmospheric spectrum.
§.§ Spectral Characteristics
We use the results of the chemistry calculation described in Section <ref> to simulate how such an atmosphere would appear in transmission spectroscopy, including the spectral contributions of relevant species. For this, we use the forward model generating component of the VIRA retrieval framework <cit.>, which treats the planet’s terminator as a 1D atmosphere in hydrostatic equilibrium. We consider atmospheric opacity contributions from H_2O <cit.>, CH_4 <cit.>, NH_3 <cit.>, CO <cit.>, CO_2 <cit.>, C_2H_2 <cit.>, HCN <cit.>, H_2S <cit.> and SO_2 <cit.>. We do not include N_2 in the model, as it has no significant absorption features in the near-infrared and it is not present in significant enough quantities to affect the atmospheric mean molecular weight. We additionally consider atmospheric extinction arising from H_2-H_2 and H_2-He collision-induced absorption <cit.>, which provide the spectral baseline, as well as H_2 Rayleigh scattering. We simulate transmission spectra using the vertical mixing ratio profiles computed using VULCAN as described above, and the P-T profile appropriate to each case considered.
§ RESULTS: COMPARISON WITH PREVIOUS WORK
We now apply the framework described in Section <ref> and compare with previous works on both terrestrial-like and sub-Neptune atmospheres.
§.§ Terrestrial-like Atmospheres
Many previous studies have investigated surface-atmosphere interactions for magma oceans underneath terrestrial-like atmospheres <cit.>. Recent studies have explored the implications of diverse interiors of exoplanets for their atmospheric compositions. <cit.> proposed that, for reduced conditions, nitrogen is expected to be preferentially sequestered in the mantle, providing a valuable way to study the interior composition of such exoplanets. More recently, <cit.> investigated the primordial distribution of volatiles within the framework of melt-atmosphere interactions and discussed applications for Venus and Earth. For the early Earth, they find that reduced conditions, with oxygen fugacity two dex below the iron-wüstite (IW) buffer, f_O2≲IW -2, result in an atmosphere abundant in H2, CO and CH4 but depleted in CO2 and N2. On the other hand, for f_O2≳IW+2, CO2 becomes the main atmospheric component, with significant levels of SO2, N2 and H2O. In particular, the behaviour of nitrogen is a consequence of the high solubility of N2 as N^3- in silicate melt at reducing conditions <cit.>, via the following reaction
1/2 N_2 (gas) + 3/2 O^2- (melt) ⇌ N^3- (melt) + 3/4 O_2 (gas) .
As a result, the melt concentration of N^3- is proportional to f_N2^1/2 f_O2^-3/4, thus favouring low f_O2. In conclusion, these works predict that the abundance of atmospheric nitrogen may be used as a diagnostic for the redox state of a rocky planet's mantle.
As a benchmark, we compare our melt-atmosphere equilibrium chemistry framework with <cit.>. We use their case with a magma ocean mass of half the bulk silicate mantle at T = 1773 K and with volatile contents of 90, 102, 3.3, and 126 ppm-wt for C, H, N, and S, respectively. With this, we reproduce their atmospheric composition as shown in Figure <ref>. Compared to our nominal setup in Section <ref>, we added a constraint for the hydrogen abundance and solved for the resulting mass of the atmosphere, coupled to the surface pressure using <cit.>
P_s = g M_atm / (4 π R_p^2) ,
where g is the gravitational acceleration at R_p. For a like-to-like comparison, we added the condensation of graphite and used the same gas species (excluding Ar) and solubility laws as <cit.>. Overall, we find good agreement between both implementations, with the most deviation coming from N2 and CH4. We find that the N2 discrepancy comes from an inconsistency in the code by <cit.>, whereby they use a molar mass of 14 g/mol for N2 instead of 28 g/mol. The remaining discrepancy is likely a result of minor differences in the implementations of the different reactions. We find that by accounting for some of these differences we can better match the result by <cit.>, as shown in Figure <ref>. For this purpose, in addition to considering their adopted molar mass, we implemented the reactions CH4 + 2 O2⇌ 2 H2O + CO2 and H2O⇌ 0.5 O2 + H2 using the equilibrium constants by <cit.> to obtain the partial pressures of CH4 and H2O, instead of deriving these from the elemental partial pressures as described in Section <ref>. We also used the oxygen fugacity of the IW buffer from <cit.> instead of <cit.>.
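For concreteness, the coupling between the atmospheric mass and the surface pressure in Equation (<ref>) can be written as the short helper below (SI units; g is evaluated at R_p, and the example numbers are illustrative).

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def surface_pressure(m_atm, m_planet, r_planet):
    """P_s = g M_atm / (4 pi R_p^2), with g = G M_p / R_p^2 evaluated at R_p."""
    g = G * m_planet / r_planet**2
    return g * m_atm / (4.0 * np.pi * r_planet**2)

# Illustrative numbers: a 6e20 kg atmosphere on an Earth-mass, Earth-radius planet
p_s_pa = surface_pressure(m_atm=6e20, m_planet=5.97e24, r_planet=6.371e6)
```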
§.§ Sub-Neptunes with H_2-rich Atmospheres
Several recent studies have also explored magma-atmosphere interactions in sub-Neptunes with rocky interiors and H_2-rich atmospheres. <cit.> considered the impact of H2 solubility in silicate melts on the radius distribution of sub-Neptunes, addressing the radius cliff, a sharp decline in the abundance of planets with R_p ≳ 3 R_⊕.
They find that the high solubility of H2 in magma, especially at high pressure, limits the maximum radius that can be attained by sub-Neptunes through accretion of atmospheric H2.
For a 10 M_⊕ core they find a limiting mass fraction of 1.5 wt% H2 in the atmosphere – corresponding to > 20 wt% H2 in the planet – as any additional H2 would be stored almost exclusively in the interior. Looking at smaller planets (2 R_⊕≤ R_p≤ 3 R_⊕), <cit.> find that magma-atmosphere interactions would significantly affect the atmosphere's composition and mass. For example, a key insight is that the H2O/H2 ratio in the atmosphere reflects not only external water delivery, but also water production as a result of atmosphere-magma interactions. This would make the H2O/H2 ratio a good diagnostic for atmospheric origin, as well as for magma composition. In particular, it is found to be proportional to the FeO content of the magma.
Further investigations were carried out by <cit.>, <cit.> and, most recently, <cit.>.
Considering a surface temperature T_s = 4500 K, 1 % to 14 % H mass fractions (of overall planet mass) and model parameters resulting in f_O2≲IW - 2, <cit.> find that the atmosphere is expected to be dominated across the explored parameter space by H2, SiO, CO, Mg and Na, followed by H2O, which should exceed CO2 and CH4 by two to three orders of magnitude. It should be noted that they do not include N in their model. <cit.> and <cit.>, instead, consider total hydrogen pressures ranging between 10^-6 and 10^6 bar, temperatures between 1800 and 3500 K, and do not include any volatiles in their calculations, but also show that detectable absorption features of H2O and SiO should be expected. Additionally, the volatile-free investigation by <cit.> finds that silane (SiH4) should also be expected, dominating over SiO at P ≳ 0.1 bar for an isothermal T = 1000 K pressure-temperature (P-T) profile in the upper atmosphere. Most recently, <cit.> also investigated the outgassing mechanism for hybrid atmospheres in sub-Neptunes, but without considering solubilities in magma.
§.§ End-member Scenario of K2-18 b
Some of the principles described above were recently applied to the habitable-zone sub-Neptune K2-18 b by <cit.>, hereafter S24. Similarly to <cit.> and <cit.>, S24 point to a high CO/CO2 ratio and, like <cit.> and <cit.>, a depletion in atmospheric N as signatures for the presence of a magma ocean and/or a reduced interior. It should be noted that the case of K2-18 b constitutes an end-member scenario. While most of the work on magma oceans has focused on very hot planets <cit.>, K2-18 b is a temperate sub-Neptune with equilibrium temperature T_eq = 272 K (assuming an albedo of 0.3), close to that of the Earth. Here, we assess the findings of S24 using the framework described in Section <ref> and Figure <ref>.
We briefly note that in addition to gas dwarf and Hycean world scenarios, a mini-Neptune scenario with a thick H_2-rich atmosphere has also been proposed for K2-18 b <cit.>. <cit.> conduct photochemical modelling of mini-Neptune cases for K2-18 b, suggesting a plausible solution. However, as noted in <cit.>, the calculated abundances are unable to match the retrieved abundances <cit.>. In particular, the mixing ratios of CO and NH_3 are too large compared to the retrieved abundances, and so is the CO/CO_2 ratio.
§.§.§ Consistency with bulk parameters
At the outset, it is important to ensure that any assumption about the internal structure is consistent with the planetary bulk parameters. Previous studies have shown that the bulk parameters of K2-18 b allow a degenerate set of solutions between a mini-Neptune, a Hycean world, or a rocky world with a thick H_2-rich atmosphere, i.e. a gas dwarf <cit.>. Considering the present gas dwarf scenario, a purely rocky interior would require a minimum H_2-rich envelope mass fraction of ∼1% <cit.>, as discussed below.
The model grid of S24 contains four values of mantle mass fraction relative to the total planet mass (0.001, 0.01, 0.1, and 1) and five values of the hydrogen mass fraction relative to the mantle mass (1, 10, 100, 1000 and 10000 ppm). Firstly, all the cases with a mantle mass fraction of 1 violate mass balance, as the sum of the mantle and atmospheric masses would exceed the total planet mass. Secondly, for the gas dwarf scenario, as noted above, the bulk density of K2-18 b requires an H_2-rich atmosphere with a minimum mass fraction of ∼1%. In the S24 model grid, there is only one model which has an atmospheric mass fraction of 1%, and it corresponds to a mantle mass fraction of 1, as noted above. It follows that all the remaining cases, with H_2 mass fraction below 1%, are incompatible with the planet's bulk density.
In order to estimate the allowed atmospheric mass fractions for K2-18 b in the gas dwarf scenario, we consider four possible interior compositions, illustrated in Table <ref>: f_silicate=100 %, Earth-like (f_silicate=67 %), Mercury-like (f_silicate=30 %) and f_silicate=5 %, where f_silicate is the mass fraction of the interior (i.e. excluding the envelope) in the silicate mantle. We include f_silicate=5 % as an end-member case, close to the upper limit for the allowed envelope mass fraction. Similarly, the extreme pure-silicate interior case is included as an end-member, yielding the lower limit on the allowed envelope mass fraction for a gas dwarf scenario. We adopt the median planetary mass M_p=8.63 M_⊕ <cit.> and radius R_p=2.61 R_⊕ <cit.> of K2-18 b. The allowed envelope mass also depends on the choice of P-T profile, with hotter profiles leading to lower envelope masses for a given interior composition, as shown in Table <ref>.
Considering the four self-consistent P-T profiles described in Section <ref>, we find that an envelope mass fraction x_env≥ 1.34 % is required for consistency with the bulk parameters. This limit corresponds to the extreme case of a 100 % silicate interior, adopting C4 for the envelope P-T profile. For a like-to-like comparison with the S24 model grid, we also consider their P-T profile, which is the profile from <cit.> log-linearly extrapolated to higher pressures. For this profile, we find envelope mass fractions of x_env≥ 0.90% are required, again corresponding to the extreme 100 % silicate interior case. Overall, we find that all the models in the model grid of S24 are incompatible with mass balance and/or the bulk density of the planet considered. We demonstrate a self-consistent approach of accounting for the observed bulk parameters of K2-18 b in such calculations in Section <ref>.
§.§.§ Feasibility of a magma ocean
As described in Section <ref> and shown in Figure <ref>, given an interior composition, the choice of P-T profile affects the resulting envelope mass fraction. This, in turn, determines the surface pressure and temperature and the liquid/solid phase of the rocky surface underneath. Therefore, it is important to consider a physically motivated P-T profile in the envelope. As mentioned above, S24 consider the P-T profile from <cit.> at low pressures (P ≤ 4 bar) and perform a log-linear extrapolation to the deep atmosphere (P ≳ 10^5 bar). The resulting temperature gradient can be significantly different from other self-consistent model P-T profiles for the H_2-rich envelope <cit.>; an example is shown in Figure <ref>.
We also note, however, that the actual surface temperature at the magma-atmosphere interface (T_s) used in S24 appears to be a free parameter rather than self-consistently determined from their P-T profile. The T_s ranges between 1500 K and 3000 K, but the corresponding pressure is not clear, considering their assertion that the maximum surface pressure allowed by the model is 10^8 bar. This pressure also appears to be inconsistent with their maximum envelope mass fraction of 1%. Across the range of rocky compositions we consider, the maximum pressure reached is ∼5-7×10^5 bar for envelope mass fractions ∼5-7% depending on the P-T profile as shown in Table <ref> and Figure <ref>.
Nevertheless, in order to establish the feasibility of achieving melt conditions in the S24 model, we consider the five highest envelope mass fractions used in S24. We adopt their mantle mass fraction of 1 and the corresponding five H2 mass fractions in their model grid, with a maximum of 1%. We then use these envelope mass fractions and the S24 P-T profile to determine the corresponding expected surface pressures and temperatures, independent of satisfying the planetary bulk properties. These model points are shown in Figure <ref> along with the liquidus and solidus curves for peridotite <cit.>. We find that only two of these five cases result in a magma surface in our framework. Finally, since we considered only the five highest envelope mass fractions of S24, it follows that all of the other models would also be unlikely to result in melt. We further note that for the two cases that result in a magma surface in S24, the magma mass fraction they consider is equal to the planet mass. However, based on the temperature structures shown in Figure <ref>, we find that the maximum magma mass fraction across the different interior compositions is ∼13%, potentially somewhat higher as a result of partial melting, but not 100%.
§.§.§ Magma-atmosphere interactions
If the plausibility of a magma ocean is established, the melt-atmosphere interaction must be considered to determine its effect on the atmospheric composition. As described in Section <ref>, the gas phase composition depends on the pressure and temperature at the interface, the elemental abundances, the amount of magma available, the solubilities of the chemical species, and the chemical properties of the melt.
For the case of K2-18 b, S24 consider oxygen fugacity as a free parameter and assess the abundances of several H-C-N-O species in the lower atmosphere following melt-atmosphere interactions. They determine the atmospheric composition by considering three reactions, CO2 + 2 H2⇌CH4 + O2, 2CO2⇌ 2 CO + O2, and 2H2O⇌ 2 H2 + O2, in thermochemical equilibrium, and solubilities of CH_4, N_2, CO_2, and H_2O in the magma. However, we note that these reactions do not encompass all the prominent H-C-N-O molecules at the considered conditions. In particular, NH_3 is expected to be the dominant N-bearing species at the base of the atmosphere.
By not including NH_3 and its equilibrium with N_2 and H_2, S24 may be overestimating the nitrogen depletion in the atmosphere, given that all of the nitrogen is assumed to be in N_2, which is very soluble in magma at reducing conditions, as we show in Figure <ref> in Appendix <ref>.
In our framework, described in Section <ref>, we find that nitrogen depletion in the atmosphere increases by several orders of magnitude by not including NH_3. Ultimately, this highlights the importance of the completeness of the reactions and solubilities considered. Finally, we note that it is also possible to not have significant N depletion even in the presence of a molten surface depending on the pressure and temperature, as shown in Table <ref>.
§.§.§ Atmospheric composition and observables
The properties at the surface determine the composition in the upper layers of the atmosphere, and hence its observable characteristics. These are strongly influenced by model assumptions on elemental abundances. S24 allow the C/H ratio to vary between 0.01 × solar and 100 × solar, while keeping the N/H ratio fixed to solar, i.e., N/H = 6.76 × 10^-5 by number. This itself limits the NH3 log-mixing ratio to at most logX_NH3∼ -4, close to the upper bound of -4.46 found by <cit.>, and biases the model by construction to allow for up to 100 × more (or down to 100 × fewer) C-based molecules than N-based ones. The dependence of the S24 model outcomes on the choice of C/H values is not reported. It should be noted that a 100× enhancement or depletion of C/H without any change in N may be difficult to reconcile with potential formation mechanisms.
We note two further points regarding the abundance of C- and N-bearing species predicted by S24. Firstly, S24 appear to indicate that the total abundances of carbon in their models reach up to 3.8 wt% of the planet mass. It is, however, unclear how this may be compatible with their assumptions of a C/H ratio of at most 100 × solar and an H mass fraction ≤ 1 %, given they adopt the <cit.> value for (C/H)_⊙, i.e., 3.2 × 10^-3 by mass. Secondly, as argued in Section <ref>, only the largest atmospheric mass fractions S24 consider can potentially lead to a magma ocean. At the resulting surface pressures, however, their model predicts a log-mixing ratio for CO2 of log X_CO2≲ -3. This is at the lowest end, if not outside, of the 1 σ confidence interval presented in <cit.>. Furthermore, the CO abundance or the CO2/CO ratio are not reported in S24, making it difficult to assess the validity of the chemical estimates.
Finally, S24 argue that the model spectra from their model ensemble provide a qualitatively reasonable match to the data. Even if the model spectra were taken at face value, the lack of a reported goodness-of-fit metric precludes a reliable assessment of the match to data. More generally, a limited grid of forward models is insufficient to robustly explore the full model space taking into account all the degeneracies involved in an atmospheric spectral model and to obtain a statistically robust fit to the data; that is the purpose of atmospheric retrievals <cit.>. A more reliable approach in the present context is to compare the model-predicted chemical abundances with the abundance constraints obtained from robust atmospheric retrievals of the observed spectra. As discussed above, the cases of S24 with the highest surface pressure, i.e. those that may allow a magma surface, still predict lower CO_2 abundances than those retrieved for K2-18 b <cit.>. The CO and H_2O abundances are not reported in S24, which prevents a clear assessment of the agreement between the chemical predictions and the retrieved abundances.
§ RESULTS: A CASE STUDY OF K2-18 B
After having established the consistency of our results with <cit.>, and having discussed the S24 findings for K2-18 b, we proceed to apply our framework ex novo. We do so for K2-18 b in the present section, starting, as outlined in Figure <ref>, with internal and atmospheric structure modelling that ensures consistency with the known bulk parameters.
Through considering magma-atmosphere interactions, equilibrium chemistry in the lower atmosphere and non-equilibrium processes in the upper atmosphere, we make predictions for the observable composition and spectral signatures of a sub-Neptune magma world.
§.§ Atmospheric Structure
As discussed in Section <ref>, the dayside atmospheric structure is calculated self-consistently from the atmospheric constraints retrieved in the one-offset case of <cit.>: the median logX_CH4 = -1.74, logX_CO2 = -2.04, and the 2σ upper bound logX_H2O = -3.01. The P-T profile depends on a wide range of parameters, not all of which are observationally well-constrained: these include the internal temperature T_int, the properties of clouds/hazes if present, and the efficiency of day-night heat redistribution. A detailed exploration of the temperature profiles in deep H2-rich sub-Neptune atmospheres has been carried out before, in <cit.>. Here, we assume uniform day-night heat redistribution, and consider four cases for the P-T profiles, varying the internal temperature T_int and the Rayleigh enhancement factor (a) for the hazes: C1, corresponding to T_int = 25 K, a = 10000; C2 and C3, both with a = 1500, with T_int = 25 K and T_int = 50 K, respectively; C4, with T_int = 50 K and a = 100. We note that, in principle, even colder profiles are plausible, given the clouds/haze properties retrieved from observations <cit.>. All the P-T profiles are shown in Figure <ref>.
§.§ Internal Structure
For each of these profiles, we obtain the permitted H_2-rich envelope mass fraction (x_env) and corresponding surface conditions (P_s, T_s) based on the bulk properties of the planet, as discussed in Sections <ref> and <ref> and shown in Table <ref>. We vary the interior composition from f_silicate=5% to f_silicate=100%, adopting the median M_p=8.63 M_⊕ <cit.> and R_p=2.61 R_⊕ <cit.>. We note that the pure silicate and 95% iron (f_silicate=5%) interior cases are unrealistic end-member interior compositions, but we consider them for completeness. We adopt P_0=0.05 bar as the outer boundary condition for the internal structure modelling, corresponding to the pressure at R_p, based on <cit.>.
In Figure <ref> we show the P-T profiles considered, along with the surface conditions (black circles) and adiabatic profiles in the melt for our nominal C2 and C3 scenarios, which we further discuss below. The results for all P-T profiles are given in Table <ref>.
The presence and amount of magma depend on the adopted P-T profile. We start by considering one of the colder profiles, C2. For an Earth-like interior, we find x_env=3.76%, with surface conditions P_s=2.00×10^5 bar and T_s=3084 K. The melt mass fraction (x_melt) in this case is 1.81%. For a Mercury-like interior, i.e. with higher Fe content, we find x_env=5.29%, with surface conditions P_s=3.79×10^5 bar and T_s=3290 K. Based on our assumption of the liquidus as the melt curve, we class this as having 0% melt in Table <ref>. In reality, these surface conditions lie between the liquidus and solidus, which would lead to a partially molten surface. This is also the case for the f_silicate=5% interior, with x_env=6.60%, with surface conditions P_s=6.27×10^5 bar and T_s=3461 K. On the other hand, for the extreme case of a pure silicate interior, we find a melt mass fraction of 3.16%, for x_env=2.62%, P_s=1.12×10^5 bar and T_s=2819 K.
We next consider the higher-temperature P-T profile C3, which permits solutions with a magma ocean surface for all the interior compositions considered. For each interior composition, the permitted envelope mass fraction, and hence the surface pressure, is lower for this hotter P-T profile. For an Earth-like interior, we find x_env=2.52%, with surface conditions P_s=1.36×10^5 bar and T_s=3870 K. The melt mass fraction in this case is 10.16%. For a Mercury-like interior, we find x_env=3.83%, with surface conditions P_s=2.83×10^5 bar and T_s=4200 K. The corresponding x_melt is 5.78%. For the extreme case of a pure silicate interior, we find a lower x_env=1.62%, with P_s=6.97×10^4 bar and T_s=3512 K. The melt mass fraction in this case is larger, at 11.91%. For the other extreme of f_silicate=5%, we obtain x_env=5.06%, for P_s=5.05×10^5 bar and T_s=4503 K, with x_melt=2.62%. We note that including modelling of partial melt would somewhat increase the melt mass fraction in all cases.
As shown in Table 1, the envelope mass fractions and surface conditions we find for profiles C1 and C4 are very similar to C2 and C3 respectively. This is despite the differences in envelope temperature structure resulting from differing haze properties; the difference between C2 and C3 is primarily due to the differing T_int.
§.§ Volatile Abundances at the Interface
At the surface-atmosphere interface, the interactions between the gas phase equilibrium reactions and solubility of the gases in the magma, if any is present, drive the elemental abundances in the atmosphere. We consider the four P-T profiles presented in Table <ref> and assume 50×solar metallicity, using solar abundances by <cit.>. The assumed metallicity is approximately based on the median retrieved CH4 abundance for K2-18 b <cit.>. Across all considered cases, we find that the dominant H-C-N-O-S gas species at the surface are H2, H2O, CH4, NH3, and H2S. The resulting atmospheric elemental abundances from these scenarios are shown in Table <ref>. As expected, the atmosphere is highly reduced, with oxygen fugacities varying between IW-8.8 and IW-4.9 <cit.> among the 12 cases with magma. We note that although our calculations of the oxygen fugacities agree to within 0.35 dex with the self-consistent IVTANTHERMO code at P_s=10^4 bar and T_s=3000 K, as described in Section <ref>, the redox state at higher pressures/temperatures is not well understood. Future work is needed to better understand the redox state of gas dwarfs at these conditions.
Overall, we find that H2O and molecules containing N and S are the most dominant volatile species in the magma ocean, with high surface pressures strongly favouring the solubility of N2. As such, for a given interior composition, we find that cooler P-T profiles, resulting in higher P_s, act to increase the depletion of nitrogen in the atmosphere - until the temperature is too low to support a molten surface. The dependence of N depletion on P_s is stronger than that on the melt fraction. Therefore, a hotter temperature profile does not necessarily result in higher N depletion. In terms of the internal structure, we find that the interior needs to be more iron-rich than Earth's interior to result in nitrogen depletion larger than ∼2 dex.
Whilst we find that nitrogen can be depleted under certain conditions, in line with previous works investigating the solubility of nitrogen in reduced interiors <cit.>, we do not reproduce the six orders of magnitude depletion found by S24. Additionally, we also identify sulfur as a potential atmospheric tracer of a magma ocean; however, the depletion is less than that of nitrogen. Finally, we find that the solubility of H_2, CO, CO_2, and CH_4 is less prominent at the considered conditions and does not drive the abundances of these species far from chemical equilibrium expectations without a magma ocean. However, we note that, as further detailed in Appendix <ref>, many molecular species lack solubility data at the extreme conditions considered here. Hence, further work is needed to improve our knowledge of the solubility of prominent volatiles in silicate melts.
In Figure <ref>, we show the mixing ratios of the major C-H-O-N-S species in the lower atmosphere and the corresponding elemental abundances for a range of oxygen fugacities using the C3 profile and a Mercury-like interior (f_silicate=30 %). This represents the case with the strongest nitrogen depletion, excluding the extreme 5% silicate interior cases, with atmospheric N/H being ∼2.5 dex lower than the assumed metallicity of 50×solar. We also see the onset of sulfur depletion in the atmosphere due to the solubility of S2 at very reducing conditions (∼IW-6 in this case). On the other hand, the carbon abundance remains unchanged, as mentioned above. We also highlight the potential effect of partial melt by doubling the melt mass fraction, shown by the dotted line in Figure <ref>, leading to an approximately linear increase in the depletion of nitrogen.
§.§ Atmospheric Chemistry
We now use the elemental abundances obtained above to determine the atmospheric composition above the surface, using equilibrium and non-equilibrium calculations. From across all the models shown in Table <ref>, we focus on two realistic cases, one with and one without melt. For the molten case, we consider the C3 profile with 30% silicate fraction, which gives a significant N depletion. For the case with no melt we consider the C2 profile with 30% silicate which has no N depletion. For each case, we set the atmospheric elemental budget to that obtained in Section <ref> and reported in Table <ref>. As expected from the model set-up, the no-melt scenario results in all elemental abundances being identical to those of a 50×solar metallicity gas.
Across all cases considered, the primary O, C, N, and S reservoirs are H_2O, CH_4, NH_3 and H_2S over most of the atmosphere, as indicated by the dashed lines in Figure <ref>. This is seen for a pressure range spanning over 10 orders of magnitude and a temperature profile ranging between ∼260-2700 K.
The mixing ratio profiles obtained for the no-melt case are shown on the left-hand-side of Figure <ref>. In both the equilibrium and disequilibrium cases, the abundance of H_2O in the upper atmosphere is significantly depleted by a cold trap below the ∼1 bar pressure level. While CO and CO_2 are absent from the photosphere in the equilibrium case, they are present in the disequilibrium case, arising from photochemical processes. However, their abundance is significantly hindered by the limited availability of O, with the main carrier H_2O being depleted by the cold trap. The abundance of CO_2 is lower than that of CO throughout the atmosphere.
Compared to the retrieved atmospheric composition of K2-18 b <cit.>, shown as errorbars and arrows in Figure <ref>, the computed CH_4 abundance is consistent with the retrieved constraint. However, there is a substantial difference of ∼8 dex between the computed abundance of CO_2 and the measured value across the observable pressure range. Additionally, the retrieved upper limits for H_2O and CO are consistent with the computed amounts. Lastly, the computed value of NH_3 is higher than, and therefore inconsistent with, the retrieved upper limit.
The right-hand-side of Figure <ref> shows the case with a molten surface. This configuration results in very similar abundances for O- and C-carrying molecules as the no-melt case. This includes the significant depletion of H_2O due to a cold trap, the limited production of CO and CO_2, and CO being more abundant than CO_2. The main difference from the no-melt case is the notable depletion of NH_3, due to N dissolving in the magma. Specifically NH_3 and N_2 are at much lower mixing ratios than in the no-melt case, by ∼2 dex. Compared to constraints from observations, CH_4, H_2O, CO and in this case NH_3 as well are consistent with the retrieved constraints and upper limits. However, the resulting CO_2 abundance is still substantially lower than the observed abundance.
In summary, we find that even for the case with significant melt, corresponding to our hotter P-T profile with a high T_ int, the NH_3 abundance is close to the observed 95% upper limit, while the CO_2 abundance is still significantly discrepant from the observed value and lower than CO. Therefore, we find that the retrieved atmospheric composition of K2-18 b <cit.> is inconsistent with a magma ocean scenario, or more generally with a deep H_2-rich atmosphere with or without melt, for this planet. In principle, the absence of a cold trap could lead to higher H_2O abundance in the troposphere, which in turn could lead to higher CO_2 abundance. However, such a scenario would also give rise to a significant amount of H_2O and CO, which are presently not detected.
§.§ Sensitivity to Atmospheric Parameters
We also explore other values for the three key atmospheric parameters that may influence the observable composition: the metallicity, the eddy diffusion coefficient K_zz, and the internal temperature T_int. We consider the two cases shown in Figure <ref> as the canonical cases corresponding to the two P-T profiles (C2 and C3). Both cases assume a median metallicity of 50×solar and K_zz of 10^6 cm^2s^-1 in the deep atmosphere. It may be argued that a higher metallicity could result in higher CO_2 abundances than the canonical cases and better match the observed abundances. Similarly, a broader range of K_zz may also influence the abundances. Therefore, for each of the two canonical cases, we investigate models with different values for the metallicity and K_zz. We consider metallicities of 100×solar and 300×solar, representing cases with significantly higher metallicities beyond the median retrieved value of ∼50×solar. For K_zz, we explore two end-member scenarios of 10^4 cm^2s^-1 and 10^8 cm^2s^-1. Based on <cit.> and <cit.>, for our canonical cases we considered values of 25 K and 50 K for T_int. We additionally consider the effect of using a P-T profile with a higher T_int of 60 K, as has been considered by <cit.>. As found for our canonical cases, we find that the observed CO and CO_2 abundances remain unexplained by these models with different values of metallicity, K_zz and T_int. These results are discussed in full in Appendix <ref>.
§.§ Spectral Characteristics
We use the atmospheric compositions computed in Section <ref> to examine the spectral signatures of CH_4, NH_3, CO and CO_2, which have been previously identified as key diagnostics of the presence of a magma surface. Using the VIRA retrieval framework's <cit.> capability of considering non-uniform vertical mixing ratios, we directly use the atmospheric composition profiles computed using the VULCAN <cit.> non-equilibrium code described above and shown in Figure <ref>. We specifically consider the melt case discussed above, to evaluate the spectral implications for the presence of a magma layer. For all cases, we consider parametric grey cloud and Rayleigh-like haze properties corresponding to the median retrieved constraints of <cit.>, to facilitate a qualitative comparison with the observations. Specifically, we set log(a) = 10^7.31, γ = -11.67, P_c = 10^-0.55 and ϕ_c = 0.63.
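The haze and cloud prescription can be made concrete with a short sketch; note that the exact functional form used by the retrieval framework is not spelled out here, so the enhanced-Rayleigh form below (haze cross-section a·σ_0·(λ/λ_0)^γ relative to H_2 Rayleigh scattering at λ_0 = 350 nm, an opaque grey deck below P_c, terminator coverage ϕ_c), the interpretation of the quoted parameter values, and the reference cross-section are assumptions made for illustration only.

import numpy as np

# Median retrieved values quoted above (assumed interpretation: log10(a), gamma, log10(P_c [bar]), phi_c)
log_a, gamma, log_P_c, phi_c = 7.31, -11.67, -0.55, 0.63
SIGMA_0_H2 = 5.31e-31   # m^2, nominal H2 Rayleigh cross-section at 350 nm (illustrative value)
LAMBDA_0 = 0.35         # micron, reference wavelength

def haze_cross_section(wavelength_um):
    # Rayleigh-like haze: enhancement 10**log_a over H2 Rayleigh at lambda_0, with spectral slope gamma
    return 10.0 ** log_a * SIGMA_0_H2 * (wavelength_um / LAMBDA_0) ** gamma

def grey_cloud_opaque(pressure_bar):
    # Grey cloud deck: opaque for pressures deeper than the cloud-top pressure P_c
    return pressure_bar > 10.0 ** log_P_c

for lam in (1.0, 2.8, 4.3):   # representative NIRISS / NIRSpec wavelengths in micron
    print(lam, phi_c * haze_cross_section(lam), grey_cloud_opaque(1.0))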
The resulting spectral contributions and transmission spectrum are shown in Figure <ref>. As can be seen in the top panel, CH_4 has prominent spectral features throughout the 1-5 μm wavelength range, while CO_2 and CO give rise to absorption features at ∼4.3 and ∼4.7 μm respectively. NH_3 shows a spectral feature at ∼3 μm. Due to the depletion of atmospheric nitrogen arising from its dissolution in the magma, the NH_3 spectral contribution is relatively weak and not detected in the present data. Without such a depletion, i.e. with nitrogen at a solar elemental abundance ratio, NH_3 would have prominent spectral features across the wavelength range of comparable strength to CH_4. While CO is more abundant than CO_2 in the observable atmosphere, as described in Section <ref>, the low absolute abundances of both molecules give rise to comparably weak spectral features.
The resulting transmission spectrum provides a reasonable match for the NIRISS observations of K2-18 b at shorter wavelengths due to the strong CH_4 features. However, the spectrum does not fit the prominent CO_2 absorption feature seen in the NIRSpec G395H data. Moreover, the spectral contribution of CO is also minimal. Together, the two molecules are present at abundances that do not provide a good fit to the data in the 4-5 μm range.
Overall, we find that the gas dwarf scenario of a thick H_2-rich atmosphere of K2-18 b in equilibrium with a magma ocean at depth is not consistent with the existing JWST observations. In particular, irrespective of the NH_3 depletion, the models predict a low CO_2 abundance and CO > CO_2 which are inconsistent with the retrieved abundances. Future studies need to investigate if other effects may contribute to the observed composition. For example, similar to that discussed in <cit.>, in order for the detected abundance of CO_2 to be compatible with a deep H_2-rich atmosphere, an unphysically low C/O ratio of ∼ 0.02–0.06, together with a moderate C/H ratio (∼30-50× solar) and vertical quenching may be required. However, such an atmosphere could also lead to significant CO abundances that may not be consistent with the observations, and the deep atmosphere would have more H_2O than H_2.
§ SUMMARY AND DISCUSSION
In this study, we report an integrated framework to investigate the plausibility of magma oceans on temperate gas dwarfs, and their potential atmospheric signatures. Our framework models the various components of a planet, and their interplay. Specifically, it includes atmospheric and internal structure modelling, magma-atmosphere chemical interactions, and equilibrium as well as disequilibrium (photochemistry and vertical mixing) processes in the atmosphere. Considering all these coupled factors, it predicts the observable abundances of molecular species in the atmosphere and the expected spectral features.
We apply our framework to perform a comparative assessment of previous works, validating our modelling of magma-atmosphere interactions against <cit.> and assessing the model predictions of <cit.>, S24, for a temperate sub-Neptune. Our findings highlight the importance of considering physically plausible models, set up in a holistic framework. In particular, we note that the use of stand-alone magma-atmosphere interaction models, which do not consider the complex interplay of interior and atmospheric factors, can lead to erroneous results.
§.§ Summary
Magma oceans are normally expected for rocky planets with high equilibrium temperatures. In the present work, we have tested the limits of this scenario by exploring whether K2-18 b, a habitable-zone sub-Neptune, can host a magma ocean, as previously suggested by S24, and what the observable signatures could be. We summarise our key findings as follows:
* An integrated framework is essential to obtain physically plausible and self-consistent results for modelling sub-Neptune gas dwarfs. Our framework includes an atmosphere and interior structure model, including phase diagrams and equations of state of appropriate silicates; thermochemical equilibrium calculations for the silicates-atmosphere interface and lower atmosphere; and disequilibrium processes throughout the atmosphere.
* The melt fraction admissible in a gas dwarf depends on atmospheric and interior properties, specifically the interior composition and the atmospheric P-T profile. The P-T profile, in turn, depends strongly on the internal temperature T_int, as well as on the presence and properties of clouds/hazes and on the molecular absorbers present in the atmosphere. For a gas dwarf scenario assuming the bulk parameters of K2-18 b, we find that, with an Earth-like interior composition, maximal melt mass fractions of ∼10% are possible, and may increase somewhat if partial melting is considered.
* A planet's bulk parameters and temperature structure place both upper and lower limits on the envelope mass fraction, assuming a gas dwarf scenario. For the K2-18 b models considered in this work, these limits are ∼1% and 7% of the planet mass, corresponding to a pure silicate and a 95% iron interior, respectively. The envelope mass fraction affects the surface pressure at the rock-atmosphere boundary, which, in turn, affects the potential melt conditions.
* We find using our framework that the current chemical constraints for K2-18 b are inconsistent with a magma ocean scenario or any gas dwarf scenario, contrary to S24. Firstly, the high observed abundance of CO2 along with low H2O is inconsistent with the chemical expectations for the gas dwarf scenario. Secondly, we find CO to be higher than CO2 by over 1 dex which is also inconsistent with the observations. We find this to be the case with or without a magma ocean, and relatively independent of the uncertainties in magma-atmosphere interactions at the extremely reduced conditions as described in Appendix <ref>. Finally, we find that N depletion in the atmosphere depends on a wide range of atmospheric and interior parameters, and can range between no depletion and ∼2.5 dex for a realistic model space, given available solubility data.
* Overall, we find that key atmospheric signatures for identifying a gas dwarf include the CO and CO2 abundances, and, if melt is present, possible nitrogen depletion, consistent with some previous studies (cf. Section <ref>). In particular, we expect that CO/CO2> 1 if no H2O is observed (as a result, e.g., of condensation), or, in the presence of H2O, CO/CO2 ≲ 1, due to photolysis of H2O making more oxygen available for the formation of CO2. Furthermore, we find that N depletion is more sensitive to the surface pressure than to the amount of melt present, provided this is non-zero. Thus, the presence of a magma ocean does not ensure a significant N depletion in the atmosphere.
* Our models predict significant H2S for a deep H2-rich atmosphere scenario. Hence, a lack of H2S may be indicative of a shallow atmosphere. However, we note that there are significant uncertainties in the behaviour of S-bearing species in silicate melts at such extremely reducing conditions. Therefore, more robust data for such conditions is needed in order for this signature to be used with a higher degree of confidence. We also note that there is uncertainty in the sulfur photochemical network for such planetary conditions.
* As discussed below, a number of important unknowns remain. In particular, as discussed in Appendix <ref>, the solubility of NH3 in magma remains poorly understood, especially at extremely reducing conditions, as is also the case for H2S at high pressures and temperatures.
§.§ Future Work
In order to aid accurate modelling of potential gas dwarf magma ocean planets, further developments are needed in three areas: (1) solubility laws for volatiles at extremely high pressures and temperatures and very reducing conditions, (2) equations of state (EOS) of silicates at the conditions relevant to temperate sub-Neptunes, and (3) complete reactions lists for all relevant atmospheric species.
As discussed in Appendix <ref>, there is a pressing need for further experimental data and/or ab initio simulations on the solubility of volatile species in silicate melt at the physical and chemical conditions that we have shown in this study to be relevant to the magma-atmosphere interface on sub-Neptunes. This includes high pressure and temperature, and low oxygen fugacity. In particular, the availability of NH3 solubility laws at these conditions would allow more precise prescriptions than assuming its solubility to be negligible, avoiding the resulting likely overestimation of the abundance of N-bearing species in the atmosphere.
In general, present laws are expected to give an order-of-magnitude estimate of the solubility at the conditions explored in this study; future work is needed to improve the solubility data.
Furthermore, once more accurate and precise solubility laws become available, the non-ideality of gas behaviour at the high pressures relevant at the interface may become a notable source of error if ignored, and will thus need to be appropriately treated <cit.>. We also note that, as a result of the lack of knowledge on the solubility of volatiles in the melt, the phase of the melt itself is not well-constrained. In particular, it is possible that some of the models considered here fall in a regime where there is no surface, and the atmosphere and magma become a single continuous phase at some lower pressure. This would happen if the volatiles were completely miscible in the melt, as is the case for water above a few GPa <cit.>. It is however not known whether this behaviour applies to H2-dominated atmospheres such as the one considered here. Furthermore, even if complete miscibility is not achieved, it is possible that the presence of volatiles in the magma may lead to a change in its EOS, which has not been accounted for here, where we have instead assumed a volatile-free melt for the internal structure calculations.
There is also scope for future work on the internal structure modelling, including the melt. This includes implementing the partial melting that would occur due to the magma's heterogeneous nature between the solidus and liquidus, as shown in Figures <ref> and <ref>. This is expected to result in a larger fraction of the mantle being at least partially melted than when considering the fully melted region alone, hence further depleting the atmosphere of the most soluble species. This effect is however in part addressed in this work, by considering the impact of a doubled melt mass fraction, as shown in Figure <ref>. Furthermore, future work will include more detailed prescriptions for the mantle, including alternate mineral compositions, and a fully temperature-dependent EOS for the solid portion.
Overall, JWST provides a promising avenue for atmospheric characterisation of sub-Neptune exoplanets. The high quality of the observations means that concomitant advances need to be made in theoretical models to maximise the scientific return from the data. In this work, we have outlined an end-to-end framework for gas dwarf sub-Neptunes to enable an evaluation of this scenario given high precision JWST data, and highlight the need for more accurate inputs for these models. Such advancements in both observations and theory promise a new era in the characterisation of low-mass exoplanets with JWST.
Acknowledgements: We thank the reviewer for their helpful comments on the manuscript. N.M., L.P.C. and F.R. acknowledge support from the Science & Technologies Facilities Council (STFC) towards the PhD studies of L.P.C. and F.R. (UKRI grants 2886925 and 2605554). N.M. and M.H. acknowledge support from STFC Center for Doctoral Training (CDT) in Data Intensive Science at the University of Cambridge (grant No. ST/P006787/1), and the MERAC Foundation, Switzerland, towards the doctoral studies of M.H. N.M. and S.C. acknowledge support from the UK Research and Innovation (UKRI) Frontier Grant (EP/X025179/1, PI: N. Madhusudhan). J.D. acknowledges support from Grant EAR‐ 2242946 of National Science Foundation. K.K.M.L acknowledges support from the US Coast Guard Academy Faculty Research Fellowship. J.M. acknowledges NASA grant 80NSSC23K0281. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (<www.csd3.cam.ac.uk>), provided by Dell EMC and Intel using Tier-2 funding from STFC (capital grant EP/P020259/1), and DiRAC funding from STFC (<www.dirac.ac.uk>).
§ AVAILABILITY OF SOLUBILITY LAWS
We discuss here the availability of silicate melt solubility laws for the volatile species of interest, at the chemical and physical conditions relevant for magma oceans on temperate sub-Neptunes in the gas dwarf scenario. These findings motivate our choices for the solubility laws adopted in this work. We compile a bibliography of the solubility laws consulted for the preparation of this work in Table <ref>, and show a selection of them in Figure <ref>. For most composition-dependent laws, we adopt a basalt composition for the melt – specifically, when a law explicitly depends on melt composition parameters, we set these to the values corresponding to the Mt Etna basalt from <cit.>. This choice is due to the wide availability of solubility laws for basaltic melt and because of the association of basalt with peridotite, which we assume to be the mantle composition.
§.§ Nitrogen Species
The solubility of N2 has been explored for a wide range of parameters (e.g., ), at pressures up to 14.8 GPa and temperatures up to 2800 K <cit.>. By compiling the available data at P ≤ 8.2 GPa and adding their own measurements, <cit.> proposed the solubility law which we use in our calculations.
This law, however, does not appear to extrapolate well at higher pressures and moderately reduced conditions (f_O2∼IW - 2). As warned by <cit.>, experimental data indicate the solubility seems to reach a plateau, while the law predicts solubility to monotonically increase with pressure. A direct comparison with <cit.>'s 10 GPa and 14.8 GPa data points reveals indeed a true solubility ∼ 1 order of magnitude lower than predicted using <cit.>'s law for pure nitrogen vapor. At the extremely reduced conditions explored here, the plateau effect is expected to already be significant at lower pressures <cit.>.
It is also noteworthy that <cit.>, whose data was included in the <cit.> dataset, find some indication of a decrease in the physical solubility of N2 already at P=8 GPa. We note that physical solubility is expected to be the dominant solubility mechanism at the oxidized conditions explored in those experiments, as opposed to the chemical solubility relevant at reduced conditions <cit.>.
Nevertheless, as the relevant quantity is not the total pressure, but rather the nitrogen partial pressure, we believe that the <cit.> law can still be a reasonably good approximation even at the reducing, high-pressure conditions that apply at the magma-atmosphere interface, given that the expected N2 mixing ratio in the atmosphere is ≲ 10^-4 in the present models.
The lack of data or simulations for the solubility of NH3 in silicate melt leads us to neglect it, with the caveat that this will lead to our calculations setting only an upper limit on the abundance of N-bearing species in the atmosphere.
§.§ Carbon Species
Of the three prominent carbon species (CO2, CO, CH4), CO2 is by far the one for which the most complete experimental data on solubility in magma is available (e.g., ).
Considering this wide dataset, for the case of T = 2273 K and a bulk silicate Earth (BSE) melt composition, <cit.> find that the solubility of CO2 is well-approximated by Henry's Law, which they fit to the data. <cit.>'s law is in excellent agreement with high pressure (P ≥ 8 GPa) molecular dynamics simulations by <cit.> for the T = 2273 K and rhyolite case, and in good agreement with the corresponding mid-ocean ridge basalt (MORB) case. Interestingly, the agreement is slightly worse with the kimberlite melt case, where instead the melt is closest to Suer's BSE composition.
At lower pressures, the agreement with the simulations is worse, but still within a factor of order unity. In any case, the agreement between <cit.>'s law and <cit.>'s simulations is always satisfactory, which also highlights the weak dependence of the solubility of CO2 on the melt composition, particularly at P ≤ 8 GPa <cit.>.
Despite the fact that the <cit.> law is intended for lower temperatures than those relevant in this study, due to the lack of more appropriate alternatives we adopt it in our calculations.
For CO, there is a lack of solubility data at high pressure and temperature. Solubility laws are provided by, e.g., <cit.> and <cit.> (for both MORB and rhyolite melts), both of whom carried out experiments at P ∼ 1 GPa and T ∼ 1500 ^∘C.
The lack of data may be explained by the fact that exploring the solubility of CO at high pressures is especially complicated, because the 2CO = C + CO2 reaction gets skewed to the right as pressure grows, making an initially pure CO vapour spontaneously become mostly CO2 at P ≳ 1 GPa <cit.>.
An alternative prescription, used by <cit.> as informed by <cit.>, is to instead set the solubility of CO to be one third of that of CO2. This method, taking <cit.>'s BSE law for the CO2 solubility, yields a CO solubility significantly higher than any of the other laws mentioned so far. This might be due to <cit.>'s law being tested for different temperatures and melt compositions. This, however, seems unlikely: on the one hand, the solubility of CO is only weakly dependent on temperature <cit.>; on the other hand, <cit.>'s two laws for MORB and rhyolite yield results less than an order of magnitude apart. This indicates a comparatively weak dependence of solubility on melt composition, while <cit.>'s prescription applied to <cit.>'s law results in a solubility between 1 and 2 orders of magnitude higher, depending on the pressure.
Ultimately, we use the <cit.> MORB law in our calculations, due to it being more recent and calibrated at higher pressures than <cit.>.
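To illustrate how such prescriptions are applied in practice, the sketch below evaluates a Henrian (linear-in-partial-pressure) solubility for CO2 together with the one-third-of-CO2 scaling for CO mentioned above (which, as discussed, we do not ultimately adopt); the Henry coefficient is a placeholder and not the fitted value of any of the cited laws.

# Minimal sketch of applying Henrian solubility prescriptions (placeholder coefficients).
K_H_CO2 = 1.0e-9   # dissolved mole fraction per bar of partial pressure, illustrative only

def x_CO2(p_CO2_bar):
    # Henry's Law: dissolved CO2 mole fraction proportional to its partial pressure
    return K_H_CO2 * p_CO2_bar

def x_CO_one_third(p_CO_bar):
    # Alternative prescription discussed above: CO solubility set to one third of the CO2 value
    return x_CO2(p_CO_bar) / 3.0

for p in (1e2, 1e4, 1e5):   # partial pressures (bar) spanning possible interface conditions
    print(p, x_CO2(p), x_CO_one_third(p))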
The data on CH4 are even sparser: the only solubility laws we encountered in the literature – which we use here – are the one in <cit.>, resulting from experiments at 0.7 ≤ P ≤ 3 GPa and 1400 ≤ T ≤ 1450 ^∘C, and the Henry's Law fit to their data by <cit.>; the law by <cit.>, indeed, follows Henry's law for total pressures P ≲ 10^4 bar (at T=3000 K, regardless of the CH4 partial pressure).
§.§ Other Volatiles
The other major volatiles of note are expected to be sulfur species, water, and H2, as well as He, which, however, as a noble gas, has no impact on the atmospheric chemistry.
Chemical equilibrium calculations indicate that, at conditions relevant at the interface, sulfur will be mostly in H2S, with little S2.
For S2 we use the law by <cit.>.
It should be noted that this law is calibrated only with data collected at atmospheric pressure, relatively low temperature (T ≤ 1673 K), and not very reducing conditions (ΔIW≥ -1). As such, its extrapolation to the extreme conditions explored in this paper should be considered only as a zeroth-order estimate of the true solubility. No significant high-pressure/high-temperature data to compare with <cit.>'s predictions were found either, the <cit.> high-pressure data being for a carbonate-silicate melt.
For H2S, we found two laws in the literature, by <cit.> – for rhyolite – and by <cit.> – for basaltic melts. The former is calibrated for 1073 ≤ T ≤ 1273 K and P = 2 × 10^3 bar, while the latter for 1323 ≤ T ≤ 1473 K and 250 ≤ P ≤ 2 × 10^3 bar. The two laws differ significantly in their temperature dependence: <cit.> find that the solubility of H2S moderately increases with increasing temperature, while the law by <cit.> indicates an extremely strong and negative temperature dependence. Furthermore, <cit.> include a dependence on the mole fraction of FeO in the magma, while the law by <cit.> only depends on thermodynamic parameters.
However, when extrapolated to high temperature (T ∼ 3000 K) and pressure (P ∼ 10^5 bar), both laws predict negligibly small solubility for H2S at the expected mixing ratios (shown in Figure <ref>). Hence, we do not expect the results of our investigation to be noticeably impacted by the choice of one law over the other. In this investigation, we chose to use the law by <cit.>.
For H2, the law most used in the literature we reviewed is by <cit.>, who carry out experiments at 0.7 ≤ P ≤ 3 GPa and 1400 ≤ T ≤ 1500 ^∘C, and give two expressions, for basaltic and andesitic melt. Their law for basaltic melt is in excellent agreement with that given by <cit.> for BSE melt up to P ∼ 1 GPa, and so is, to a slightly lesser extent, their andesitic melt law. At higher pressures, however, they diverge, with <cit.> predicting Henrian behaviour to arbitrary pressure, while <cit.>'s laws predict a decline in solubility as pressure increases, consistently with their experimental results.
As the laws given in <cit.> have a robust high-pressure experimental background, we use those, and here, specifically, the basaltic melt case.
For H2O, there is a great deal of experimental data on the solubility in silicate melts <cit.>, a complete review of which is beyond the scope of this work. We focus here on two solubility laws: <cit.>, the most recent law available, and <cit.>, which is the one we choose to implement in our study.
<cit.> provide two slightly different estimates depending on the value of the molar absorption coefficient ϵ_3550, each depending linearly on the square roots of the atmospheric fugacities of both water and molecular hydrogen. These are the result of experiments carried out at very low pressure (P = 1 atm) and high temperature (T=2173 K).
Higher-pressure experiments are carried out in <cit.>, who also propose a solubility law, calibrated upon a vast but low-temperature experimental database (10^2 ≤ P ≤ 10^4 bar, 1100 ≤ T ≤ 1400 ^∘C), which is in rough agreement with that of <cit.> for an H2-rich envelope.
The fact that <cit.>'s law depends on a linear combination of the square roots of the fugacities of both H2 and H2O, however, risks breaking element conservation for oxygen: indeed, it would predict some dissolved O in the magma even if no O is present – in any species – in the initial atmospheric composition. This effect is expected to be particularly relevant at the very reduced conditions explored here, where the abundance of O is expected to be low.
We thus consider extrapolating the law of <cit.> to higher temperatures to be a more accurate prescription than extrapolating that by <cit.> to high pressures, and hence do so here, assuming an Etna basalt composition for the melt. This choice is also consistent with that in <cit.>.
§.§ Summary
Data on solubility in silicate melt are available, at some conditions, for several species of interest, with the one exception being NH3, for which we were unable to find any solubility laws. We list the bibliography on solubility laws and/or data points we have explored for this study in Table <ref>, and we show a selection of them in Figure <ref>. In general, the scenario explored in this study, relevant for magma oceans on temperate gas dwarfs, is extreme in a threefold way: it leads to high temperatures (T ≳ 2500 K), high pressures (P ≳ 10^5 bar), and very reduced melts compared to Earth (ΔIW≲ - 5). There are no data, for any species, at conditions that are extreme in all three of these ways. Only for N2 do data at both very high temperature and pressure exist, but those are for relatively oxidised conditions <cit.>. High-pressure (P ≥ 10^5 bar) simulations exist for CO2, but only at T ≤ 2273 K. For S2, for a co-existing fluid phase, high-pressure data only exist at low temperature, and only for carbonate-silicate melt <cit.>. All other species seem to lack high-pressure data.
Exploring this region of the parameter space, either experimentally or through simulations, will be crucial for improving our understanding of potential magma oceans in sub-Neptunes, and our ability to lift observational degeneracies with other possible internal structures.
§ SENSITIVITY TO ATMOSPHERIC PARAMETERS
As described in Section <ref>, we explore a range of values for three key atmospheric parameters that could influence the observable composition: the metallicity, the eddy diffusion coefficient K_zz, and the internal temperature T_int. Our canonical cases, shown in Figure <ref>, correspond to P-T profiles C2 and C3 with median metallicity of 50×solar, with and without elemental depletion respectively, and K_zz of 10^6 cm^2s^-1 in the deep atmosphere. We investigate if a higher metallicity, a broader range of K_zz values and/or a higher T_int could better match the observed abundances than our canonical cases, for example with higher CO_2 abundance. We therefore consider models with higher metallicities of 100×solar and 300×solar, and two end-member scenarios of 10^4 cm^2s^-1 and 10^8 cm^2s^-1 for K_zz in the deep, convective region. We also consider the effect of using a higher value of T_int of 60 K, as previously considered by <cit.>. Disequilibrium effects due to photochemistry and vertical mixing are included in all cases discussed here.
We start with investigating departures from the canonical C2 case, as shown in Figure <ref>. We first fix the K_zz profile to that used in the canonical case and vary the metallicity as described above. The resulting vertical mixing ratio profiles are shown in Figure <ref>, along with those for 50× metallicity from Figure <ref> for comparison. For both the 100× and 300× solar cases, the abundance of CO_2 remains lower than that of CO throughout the atmosphere, as for the 50× solar case. Similarly, the CO_2 and NH_3 abundances are inconsistent with the retrieved values in the photosphere, between ∼0.01-10 mbar, in all cases. Additionally, the CH_4 abundance for 300× solar metallicity is higher than the retrieved abundance.
Next we consider a range of K_zz values in the deep atmosphere, using the C2 P-T profile. We vary K_zz at P>0.5 bar from 10^4 to 10^8 cm^2s^-1, with our canonical value at 10^6 cm^2s^-1. The metallicity remains fixed at the canonical value of 50× solar. As shown in Figure <ref>, both the higher and lower K_zz values negligibly affect the computed mixing ratios at observable pressures. Increasing K_zz shifts the quench point to higher (deeper) pressures, as shown by the mixing ratio profile for CO_2 in the right-hand panel of Figure <ref>.
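The direction of this shift follows from the usual timescale argument, sketched below: the quench level sits roughly where the mixing timescale τ_mix ≈ H^2/K_zz equals the chemical timescale, so increasing K_zz pushes the balance to deeper pressures. The scale height and the toy chemical-timescale law in the sketch are placeholders, not values from our models.

import numpy as np

H = 5e6                       # cm, assumed scale height (placeholder)
def tau_mix(kzz):             # vertical mixing timescale H^2 / K_zz, in seconds
    return H ** 2 / kzz

def tau_chem(p_bar):          # toy chemical timescale, shorter (faster chemistry) at depth
    return 1e12 * p_bar ** -2.0

def quench_pressure(kzz, p=np.logspace(-2, 4, 600)):
    # deepest level at which mixing is still faster than chemistry
    faster = tau_mix(kzz) < tau_chem(p)
    return p[faster].max() if faster.any() else None

for kzz in (1e4, 1e6, 1e8):   # cm^2/s, the end-member and canonical values considered here
    print(f"K_zz = {kzz:.0e} cm2/s -> quench near {quench_pressure(kzz):.0f} bar")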
We now consider the hotter P-T profile case with NH_3 depletion due to magma; this is the C3 profile with 30% silicates, as discussed above. The higher metallicities of 100× and 300× solar are implemented by proportionately enhancing the canonical elemental abundances for this case. These originally corresponded to 50× solar, hence we increase the relevant elemental abundances in Table <ref> by factors of 2 and 6, respectively. The results are shown in Figure <ref>. As for the C2 profile, the predicted CO_2 abundance remains significantly below the retrieved value in both cases, with the CO mixing ratio exceeding that of CO_2 throughout the atmosphere. The CH_4 abundance for 300× solar metallicity is additionally too high compared to the retrieved abundance.
As an end-member case, we consider each of the C2 and C3 profiles discussed above and adopt our extreme values of 300×solar metallicity and K_zz=10^8 cm^2s^-1 in the deep atmosphere. The resulting vertical mixing ratio profiles are shown in Figure <ref> along with the canonical cases. These end-member cases are similarly unable to match the retrieved CO_2 abundance constraints. A higher K_zz would further increase the abundances of both CO and CO_2, however CO remains more abundant than CO_2.
Thus far we have considered values of 25 K and 50 K for T_int, corresponding the C2 and C3 profiles, respectively. Lastly, we explore the effect of increasing T_int to a higher value of 60 K for completeness, as has been considered by other works for K2-18 b <cit.>. We adopt the P-T profile of <cit.> with 100× solar metallicity, extrapolated to higher pressures (1000 bar) using an adiabat. We consider two cases: 1) 100× solar metallicity with depletion (i.e. twice the C3 30% silicates abundances from Table <ref>) and our canonical K_zz treatment, and 2) a high K_zz=10^8 cm^2s^-1 and a high metallicity of 300× solar (i.e. 6× the C3 30% silicates abundances). With the canonical K_zz, we find that the computed CO abundance exceeds the retrieved upper limit, while the computed CO_2 abundance remains significantly lower than the retrieved abundance. The retrieved CH_4 abundance and NH_3 upper limits can be explained by this model. For the high K_zz and high metallicity case, the computed CO abundance similarly exceeds the retrieved abundance. In this case the retrieved CO_2 abundance can be explained by the model. However, the computed CH_4 abundance exceeds the retrieved value. Due to the higher temperatures in this P-T profile, the H_2O abundance exceeds the retrieved value for both cases of metallicity and K_zz considered.
Overall, we have explored a wide parameter space for the atmospheric chemistry, considering a range of values for K_zz, metallicity and T_int. In this exploration, we do not find a case resulting in CO_2 > CO that would satisfy the retrieved atmospheric abundance constraints for K2-18 b <cit.>.
|
http://arxiv.org/abs/2409.02541v1 | 20240904085842 | A host-pathogen coevolution model. Part I: Run straight for your life | [
"Matthieu Alfaro",
"Florian Lavigne",
"Lionel Roques"
] | math.AP | [
"math.AP"
] |
|
http://arxiv.org/abs/2409.02688v1 | 20240904132028 | Pointwise and uniform bounds for functions of the Laplacian on non-compact symmetric spaces | [
"Yulia Kuznetsova",
"Zhipeng Song"
] | math.AP | [
"math.AP",
"math.FA",
"22E30, 42B15, 35L05, 43A85, 43A90"
] |
[1] Yulia Kuznetsova
[1] Université de Franche Comté, CNRS, LmB (UMR 6623), F-25000 Besançon, France
[1,2] Zhipeng Song
[2] Ghent University, Department of Mathematics: Analysis, Logic and Discrete Mathematics, 9000 Ghent, Belgium
Yulia Kuznetsova: [email protected]; Zhipeng Song: [email protected]
[2020]22E30; 42B15; 35L05; 43A85; 43A90
Pointwise and uniform bounds for functions of the Laplacian on non-compact symmetric spaces
September 9, 2024
===========================================================================================
Let L be the distinguished Laplacian on the Iwasawa AN group associated with a semisimple Lie group G. Assume F is a Borel function on ℝ^+. We give a condition on F such that the kernels of the functions F(L) are uniformly bounded. This condition involves the decay of F only and not its derivatives.
By a known correspondence, this implies pointwise estimates for a wide range of functions of the Laplace-Beltrami operator on symmetric spaces. In particular, when G is of real rank one and F(x)= e^it√(x)ψ(√(x)), our bounds are sharp.
§ INTRODUCTION
An important part of classical harmonic analysis is the study of multipliers. Let k be a tempered distribution on ℝ^n and m=k̂ its Fourier transform. The celebrated Hörmander-Mikhlin multiplier theorem <cit.> asserts that the convolution operator Tf:=f*k is bounded on L^p(ℝ^n) (1< p<∞) if the symbol m satisfies the Mikhlin type condition:
sup_ξ∈ℝ^n |ξ|^|α| |∂_ξ^α m(ξ)| ≤ C< +∞, ∀ |α|≤ n/2+1.
Much of this theory makes sense on Lie groups, if we restrict our attention to spherical multipliers. Let G be a semisimple, connected, and noncompact Lie group. Assuming G has a finite center, it has an Iwasawa decomposition G=KAN, and there exists the Harish-Chandra transform ℋ, also called the spherical transform, taking K-biinvariant functions into functions on 𝔞_ℂ^*. Here 𝔞_ℂ^* is the dual of the complexification 𝔞_ℂ of the Lie algebra 𝔞 of A. Let k be a K-biinvariant tempered distribution on G and ℋk its Harish-Chandra transform. Then the operator Tf=f*k is a spherical multiplier on the symmetric space S=G/K, of kernel k and symbol m=ℋk.
We also refer the reader to <cit.>.
The conditions on m are however very different from the Euclidean case: by a famous result of Clerc and Stein <cit.>, m must be holomorphic and bounded in an open tube around the real part ^* of _^*, in order to make T bounded on L^p(S) for 1<p<2. Conversely, if this condition is satisfied, then a Mikhlin-type bound completes the picture to give a sufficient condition.
This was proved by Clerc and Stein for G complex, Stanton and Tomas <cit.> in the rank-one case, and finally by Anker <cit.> in the case of a higher rank.
From the PDEs point of view, the most important class of multipliers consists of those generated by the Laplace operator. We state the problem in the case of symmetric spaces directly. To the Riemannian structure on S, there corresponds canonically its Laplace–Beltrami operator Δ. It is self-adjoint on L^2(S). Consequently, by the spectral theorem, every bounded Borel function F on [0, ∞) defines a bounded operator on L^2(S):
F(Δ)=∫_0^∞ F(ξ) dE(ξ),
where E is the spectral measure of Δ. Moreover, being left-invariant, F(Δ) acts as a convolution on the right with a kernel k_F(Δ) (a priori defined in the distributional sense).
An interesting question is, under what conditions on F, the operator F(Δ) is also bounded from L^p(S) to L^q(S)?
This requires estimating L^p norms of the kernel, which often requires finding its pointwise bounds.
Such pointwise bounds are known for a variety of functions F and are obtained case by case. The heat kernel corresponding to exp(-t|Δ|) is very well studied, see the book <cit.> or a more recent survey <cit.>; the latest results <cit.> describe the asymptotics of solutions of the heat equation at large time. Moreover, bounds are available (most of them by Anker and coauthors)
for:
the resolvents (z-Δ)^-s <cit.>;
the Poisson kernel corresponding to exp(-t√(|Δ|)) <cit.>;
the Schrödinger kernel of the semigroup exp(itΔ) <cit.>;
oscillating functions of the type (|Δ|+q)^-τ (|Δ|+q̃)^ σexp(it√(|Δ|+q)) with different exponents τ and σ, see <cit.> in rank one and recently <cit.> in higher ranks.
* In rank one: Tataru <cit.>: the symbol is (sin(λ t)/λ) (λ^2+β^2)^-ρ/2+is so that the operator is
(sin(t√(|Δ_ρ|))/√(|Δ_ρ|)) (Δ_ρ+β^2)^-ρ/2+is; on the hyperbolic space (ℓ=1), the uniform bound is c(sinh t)^-ρ.
* Anker, Pierfelice (on Damek-Ricci): D^-τD̃^τ-σ e^itD where D=√(|Δ|-ρ^2), D̃=√(|Δ|+ρ̃^2-ρ^2), ρ̃>ρ, σ∈ℝ_+, τ∈[0,3/2). That is, the operator is
Δ_ρ^-τ/2 (Δ_ρ+ρ̃^2)^(τ-σ)/2 e^it√(Δ_ρ)
<cit.>
* Higher rank: Hassani <cit.>: |Δ|^-σ/2 e^it√(|Δ|); the bounds are for |t|≥1:
|k_0(x)| ≤ C |t|^-l/2ϕ_0(x) (3+|x|)^a if a∈ 2 and l<a (integral over |λ|≤1);
|k_∞(x)| ≤ C |t|^-dϕ_0(x) (3+|x|)^d if σ >n (integral over |λ|≥1).
* Anker and Zhang <cit.>: |Δ|^-σ/2 e^it√(|Δ|), d=ℓ+∑_α∈Σ^+ m_α≥3 (but ℓ may be 1). Boundary point: σ = (d+1)/2. Since d=2n-l, this value is n-l/2+1/2.
Theorem 3.7: for large t and |x|≥|t|/2, the bound is |t|^-N_1 (1+|x^+|)^N_2 e^-ρ(x^+) for every N_1∈ and N_2>N_1+2(d+1)+ (1/2) (max(d,D) -l).
Theorem 3.3: For |x/t|<C_Σ<1/2, |ω̃^σ,0_t(x)| ≤ |t|^-D/2 (1+|x|)^(D-l)/2ϕ_0(x).
In this paper we obtain pointwise estimates for F(Δ) with a very general function F, on spaces of arbitrary rank. More precisely, we get estimates for the shifted Laplacian Δ_ρ = -Δ - |ρ|^2, which has no spectral gap; ρ is a fundamental linear functional on , see Section 2. Our main theorem (Corollary <ref> in the text) reads:
Let F be a Borel function on ℝ^+ such that
I_F = ∫_𝔞^* |F(|λ|^2)| (1+|λ|)^7(n-l)+1 dλ < ∞.
Then for every x∈ G
|k_F(Δ_ρ)(x)| ≤ C I_F e^-ρ(H(x)),
where C is a constant depending on G only, and H(x) is the radial part of x in the Iwasawa decomposition.
See Section 2 for definition of the dimensions n, l.
This is equivalent to Theorem <ref> below. We show also that the upper limit of e^ρ k_F(Δ_ρ) at infinity is bounded by a similar integral with the power n-l instead of 7(n-l)+1.
It is clear that for F positive, one cannot hope for a much better bound: it is sufficient to evaluate k_F(Δ_ρ) at identity.
But for oscillating functions (<ref>) can be close to optimal as well: we show in Theorem <ref> that for F(|λ|^2)= e^it|λ|ψ(|λ|) and in rank one, the lower limit of e^ρ k_F(Δ_ρ) at infinity is never zero and is estimated from below by a similar integral.
This generalizes results of <cit.> obtained for hyperbolic spaces.
From known estimates of spherical functions, it is easy in fact to obtain bounds for the kernel of the type |k_F(Δ_ρ)(x)| ≲ (1+|H(x)|)^d e^-ρ(H(x)), see discussion at the end of Section 4. Our result is that it is possible to remove the polynomial factor in H(x).
To achieve this, we refine the asymptotic estimates of Harish-Chandra for spherical functions φ_λ. In the technical Section 3, we obtain explicit constants controlling the growth in λ in these estimates.
A polynomial factor in H(x) is often non-significant in the analysis of the Laplace-Beltrami operator. But it becomes essential in concern with the following operator L which has attracted much attention.
We can define it as L = δ^-1/2τΔ_ρτδ^1/2, where τ is the inversion: (τ f)(g) = f(g^-1), and δ is the modular function of S. This makes L act as a convolution on the left, and the kernel of F(L) is linked to that of F(Δ_ρ) by k_F(L) = δ^-1/2 k_F(Δ_ρ). Importantly, L can be written as the sum of squares L=∑ X_j^2 of vector fields generating the Lie algebra of S, in a full analogy with the Euclidean case; whereas Δ has a necessary linear term in addition, due to non-unimodularity of S.
It turns out that the theory of L is very different from that of Δ: Hebisch <cit.> was first to prove that if G is complex, then F(L) is bounded on L^p(S) for 1≤ p<∞ as soon as F satisfies the Mikhlin condition. In other words, no holomorphy is needed.
In 1994 Cowling et al. <cit.> extended this result (with other constants replacing n/2+1) to all real Lie groups; they gave a sufficient condition on F such that F(L) is bounded on L^p for 1<p<∞ and is of weak type (1,1).
Sikora <cit.> obtained a stronger optimal order of differentiability, which is 1/2 smaller than the result of Cowling et al. Time-dependent bounds on L^1 for special oscillating functions were obtained by Gadziński <cit.>.
In the rank one case, estimates were obtained first by Müller and Thiele <cit.> on ax+b groups, and then by the same authors with Vallarino <cit.> on more general Damek-Ricci spaces, including all symmetric spaces of rank one. They show that for ψ ...., the kernel k_t of F(L) of type <ref> is bounded by ‖k_t‖_1≲ 1+t and ‖k_t‖_∞≲ 1. Akylzhanov, Kuznetsova, Ruzhansky, and Zhang <cit.> proved that on ax+b groups, the estimates of Müller and Thiele are sharp.
Even earlier, but in an unpublished thesis, Gadziński <cit.> investigated L^1-estimates of the function F(L) = e^it√(|L|) e^-bL with t,b>0. ...
We are interested in continuing this comparison. Thus, our Theorem <ref> is stated in terms of L: the kernel k_F(L) is uniformly bounded as soon as the integral (<ref>) converges.
We show finally that in rank one, this estimate is best possible even for oscillating functions of the type F(x) = exp(it√(x)) ψ(√(x)): the uniform norm of the kernel k_F(L) is bounded but does not decay at large time t. This generalizes the results of <cit.> for ax+b groups (more precisely, their parts concerning uniform norms); we note also a related subsequent result of Müller and Vallarino <cit.> on Damek-Ricci spaces, a class not covering all rank one symmetric spaces but containing many non-symmetric ones.
Acknowledgements. This work is supported by the EIPHI Graduate School (contract ANR-17-EURE-0002). The first author is also supported by the ANR-19-CE40-0002 grant of the French National Research Agency (ANR). The second author is also supported by the Methusalem programme of the Ghent University Special Research Fund (BOF), grant number 01M01021.
§ NOTATIONS AND PRELIMINARIES
Our main interest is in semisimple groups, but the proofs involve results on reductive groups. This class, also called the class of Harish-Chandra, is defined differently from one author to another. We stick to the following definition of Gangolli and Varadarajan <cit.>, and term it class ℋ accordingly:
A real Lie group G with Lie algebra 𝔤 is in class ℋ if
* 𝔤 is reductive;
* G has only finitely many connected components;
* Ad(G) ⊂ Int(𝔤_ℂ);
* the analytic subgroup of G with Lie algebra [𝔤, 𝔤] has finite center.
Every semisimple, connected Lie group with finite center is in class ℋ.
§.§ Iwasawa decomposition
Let G be a Lie group in class ℋ with the Lie algebra 𝔤.
Let K be a maximal compact subgroup of G and 𝔨 the Lie algebra of K. We have the Cartan decomposition 𝔤 = 𝔨⊕𝔭 and the involution θ acting as 1 on 𝔨 and -1 on 𝔭.
Let 𝔞 be a maximal abelian subspace of 𝔭, that is, 𝔞 is an abelian Lie subalgebra of 𝔤 such that 𝔞⊂𝔭, maximal with these properties.
We denote by 𝔞^* the real dual space of 𝔞 and by 𝔞_ℂ^* the dual of its complexification 𝔞_ℂ.
Let Σ⊆𝔞^* be the set of restricted roots of (𝔤, 𝔞). We have the restricted root space decomposition
𝔤 = 𝔤_0 ⊕⊕_α∈Σ𝔤_α
where 𝔤_α={X∈𝔤 | (ad H)X=α(H)X, ∀ H∈𝔞}. Setting 𝔫=⊕_α∈Σ^+𝔤_α, we obtain the Iwasawa decomposition 𝔤 = 𝔨⊕𝔞⊕𝔫 with 𝔞 abelian and 𝔫 nilpotent.
Denote by m_α=dim 𝔤_α the multiplicity of α.
By choosing a lexicographic ordering of the roots we can define the set of positive roots Σ^+. Let Σ_r^+ be the set of reduced (also called short or indivisible) roots, that is, roots α∈Σ for which 1/2α is not a root.
Let Σ_s^+⊂Σ^+ be the set of simple roots.
Note that Σ_s^+ ⊆Σ_r^+ ⊆Σ^+ ⊆Σ; if G is semisimple, then Σ_s^+ is a basis of 𝔞^*.
Let A and N be the analytic subgroups of G with the Lie algebras 𝔞 and 𝔫.
On the Lie group level, we get the Iwasawa decomposition G=KAN, and the multiplication map K × A × N → G given by (k, a, n)↦ kan is a diffeomorphism onto.
Similarly, we also have the decomposition G=NAK.
Let H(g) denote the unique 𝔞-component of g∈ G in the decomposition g=k exp(H(g))n with k∈ K, n∈ N;
and let A(g) denote the unique 𝔞-component in the decomposition g=n exp(A(g))k.
The decomposition (<ref>) is orthogonal with respect to the inner product defined on 𝔤 as follows:
⟨X,Y⟩_θ := B_θ(X, Y)=-B(X,θ Y), X,Y∈𝔤.
On 𝔭×𝔭 we shall write ⟨·, ·⟩ because B|_𝔭×𝔭 = B_θ|_𝔭×𝔭.
The Killing form B of G is positive definite on 𝔭. For every λ∈𝔞^*, there exists H_λ∈𝔞 such that λ(H)= ⟨H_λ, H⟩ for all H∈𝔞; this defines an inner product on 𝔞^*,
⟨λ, μ⟩:= ⟨H_λ, H_μ⟩, λ, μ∈𝔞^*.
We denote in the sequel by |λ| the corresponding norm of λ∈^*.
Let W be the Weyl group of the pair (G, A).
For any root α∈Σ, the operator
s_α (φ)= φ-2(⟨φ, α⟩/|α|^2)α, φ∈𝔞^*,
is a reflection in the hyperplane σ_α={φ∈𝔞^* | ⟨φ, α⟩=0} and carries Σ to itself.
Now let us transfer s_α to 𝔞 by setting
s_α' (H)= H-2(α(H)/α(H_α))H_α, H∈𝔞.
Let denote the finite group generated by all reflections s_α' (α∈Σ).
For any s' ∈ and λ∈^*, we define s'λ:=λ^s'=λ∘ s'^-1 and then s_α(λ)= s_α ' λ.
Let us consider the mapping u(k)=(k), k∈ M'.
By the definition, the mapping (k)|_ : → only depends on the coset kM.
Then u induces an homomorphism kM⟼(k)|_ of W into GL().
More precisely this is an isomorphism of W with ⊂ GL().
For convenience, we will use the same symbol sλ=s'λ to denote both reflections s'∈ and s generated by s_α. We also will use the same symbol to denote W and .
Set
𝔞_ℂ'={H∈𝔞_ℂ | α(H)≠ 0 for all α∈Σ}
and 𝔞'=𝔞∩𝔞_ℂ'.
We define the positive Weyl chamber of 𝔞 associated with Σ^+ as
𝔞^+={H∈𝔞 | α(H)> 0 for all α∈Σ^+}.
We transfer this definition to 𝔞_ℂ^*:
𝔞_ℂ^*'={μ∈𝔞_ℂ^* | ⟨α, μ⟩≠ 0 for all α∈Σ}, 𝔞^*'=𝔞^* ∩𝔞_ℂ^*'.
Vectors in 𝔞_ℂ^*' are said to be regular.
Let 𝔞_+^* ⊆𝔞^* be the positive Weyl chamber associated with Σ^+, that is
𝔞_+^*={λ∈𝔞^* | ⟨λ, α⟩>0 for all α∈Σ^+}.
We define the Weyl vector by
ρ(H)=1/2 tr(ad H|_𝔫) =1/2∑_α∈Σ^+ m_α α(H)
for H∈𝔞.
Denote by 𝔞̄^+={H∈𝔞: α(H)≥ 0 for all α∈Σ^+} the closure of 𝔞^+ in 𝔞. The polar decomposition of G is given by G=K Ā^+ K where Ā^+=exp(𝔞̄^+), and we denote by x^+ the 𝔞̄^+-component of x∈ G in this decomposition, so that x∈ K exp(x^+) K.
§.§ Spherical functions
Harmonic analysis on symmetric spaces is built upon spherical functions.
It is important to note that our main references, Helgason <cit.> and Gangolli and Varadarajan <cit.>, have different conventions in their notations. We chose to adopt Helgason's notations <cit.> in our work; translation from <cit.> to <cit.> is done by λ↦ iλ.
Let f be a complex-valued function on G/K such that f̃(e)=1 and f∈ C^∞(G/K). f is called a spherical function if
* f is K-invariant, that is, f∘τ(k^-1)=f for all k∈ K.
* f is a joint eigenfunction of all invariant differential operators, that is, for all D∈(G/K), Df=λ_Df where λ_D∈.
For convenience, we say that f̃ is a spherical function on G if f is a spherical function on G/K.
For each λ∈_^*, let φ_λ(x) be the elementary spherical function given by
φ_λ(x)= ∫_K e^(iλ-ρ)(H(xk)) dk ∀ x∈ G.
Similarly, under the decomposition G=NAK we set
φ_λ(x)= ∫_K e^(iλ+ρ)(A(kx)) dk ∀ x∈ G.
These are exactly the joint eigenfunctions of all invariant differential operators on G/K.
We have the following functional equation: φ_λ= φ_sλ for all λ∈𝔞_ℂ^*, s∈ W.
In particular, the basic spherical function φ_0(x) =∫_K e^-ρ(H(xk)) dk, also denoted by Ξ(x), has the following estimate
φ_0(exp H)≤ C(1+|H|)^d e^-ρ(H) ∀ H∈𝔞^+.
Moreover, we have
∫_B_r|φ_0(x)|^2dx ≤ C r^n if r ≤ 1, and ∫_B_r|φ_0(x)|^2dx ≤ C r^ν if r > 1.
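For orientation, consider the simplest rank one example, the real hyperbolic space H^3 (so n=3, l=1, d=1, ν=3 and ρ(H)=|H|): there one has the classical closed formula φ_0(exp H)=|H|/sinh|H|, which is comparable to (1+|H|) e^-ρ(H), while |φ_0(exp H)|^2 multiplied by the volume density (sinh|H|)^2 equals |H|^2, so that ∫_B_r|φ_0|^2 dx ≍ r^3 = r^n = r^ν; both estimates above are thus attained in this example.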
More information on spherical functions is given by the Harish-Chandra c-function. An explicit expression for it as a meromorphic function on 𝔞_ℂ^* is given by the Gindikin-Karpelevich formula: For each λ∈𝔞_ℂ^*,
c(λ) = c_0 ∏_α∈Σ_r^+ 2^-i⟨λ, α_0⟩ Γ(i⟨λ, α_0⟩) / [Γ(1/2 (1/2 m_α +1+ i⟨λ, α_0⟩)) Γ(1/2(1/2 m_α +m_2α+ i⟨λ, α_0⟩))],
where α_0=α/⟨α, α⟩, Γ is the classical Γ-function, and c_0 is a constant given by c(-iρ)=1.
If we denote A(z, y)=Γ(z+y)/Γ(z), by the Legendre duplication formula the c-function can be rewritten as
c(λ)^-1=C·∏_α∈Σ_r^+ A(i⟨λ, α_0⟩, m_α/2) A(1/2(i⟨λ, α_0⟩ +m_α/2), m_2α/2).
From the formula above, one can deduce for λ∈𝔞^*:
|c(λ)|^-2≤ C |λ|^ν-l if |λ|≤ 1, and |c(λ)|^-2≤ C |λ|^n-l if |λ|> 1,
where n:= dim G/K, l:= dim A, d denotes the cardinality of Σ_r^+, and ν:=2d+l is called the 'pseudo-dimension'.
This implies a less precise, but shorter estimate
|c(λ)|^-1≤ C (1+|λ|)^(n-l)/2.
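As a quick check in the simplest case: for the real hyperbolic space H^n one has rank one, Σ_r^+={α}, m_α=n-1, m_2α=0 and l=1, so the product above reduces to one factor and, writing z=i⟨λ, α_0⟩,
c(λ)^-1=C· A(z, (n-1)/2)=C·Γ(z+(n-1)/2)/Γ(z).
Since |Γ(z+a)/Γ(z)|≍ |z|^a as |z|→∞ along the imaginary axis, this gives |c(λ)|^-1≍ |λ|^(n-1)/2=|λ|^(n-l)/2, in agreement with the exponent above.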
The other two properties of the c-function we will use are that, for λ∈𝔞^* and s∈ W,
|c(sλ)|=|c(λ)|, and c(-λ) is the complex conjugate of c(λ).
Let now Σ_s^+=α_1,⋯, α_r be the simple system in Σ^+, and Λ=^+α_1+⋯+^+ α_r the set of all linear combinations n_1α_1 + ⋯ + n_rα_r (n_i∈^+). Set Λ^+=Λ∖{0} and Λ̃=Λ=α_1+⋯+α_r.
For any μ∈Λ̃ and s, t∈ (s≠ t) we define the following hyperplanes in _^*:
σ_μ=λ∈_^*: μ, μ=2iμ,λ,
τ_μ(s,t)=λ∈_^*:i(sλ-tλ)=μ.
Set σ=⋃_μ∈Λ^+σ_μ and σ^c=_^*\σ.
For μ=n_1α_1 + ⋯ + n_rα_r ∈Λ, we denote by
m(μ)=n_1 + ⋯ + n_r
the level of μ.
If λ∈𝔞_ℂ^* does not lie in any of the hyperplanes σ_μ or τ_ν(s,t), then φ_λ is decomposed into the Harish-Chandra series
φ_λ(exp H)= ∑_s∈ W c(s λ) e^(isλ-ρ)(H) ∑_μ∈Λ Γ_μ(sλ) e^-μ(H), H∈𝔞^+,
where Γ_μ are coefficient functions on 𝔞^* determined by the recurrent relation
(⟨μ, μ⟩-2i⟨μ, λ⟩) Γ_μ(λ)= 2∑_α∈Σ^+ m_α ∑_k≥ 1, μ-2kα∈Λ Γ_μ-2kα(λ) (⟨μ+ρ-2kα, α⟩-i⟨α, λ⟩),
with initial function Γ_0≡ 1.
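To illustrate the recursion, take the rank one case with a single positive root α (so Σ^+={α}, m_2α=0, ρ=(m_α/2)α and Λ=ℤ^+α); this special case is chosen only as an example. The relation above forces Γ_μ=0 for odd multiples μ of α, and the first nontrivial coefficient comes from μ=2α, k=1:
(⟨2α, 2α⟩-2i⟨2α, λ⟩) Γ_2α(λ)= 2m_α(⟨ρ, α⟩-i⟨α, λ⟩),
that is, Γ_2α(λ)= m_α(⟨ρ, α⟩-i⟨α, λ⟩)/(2(⟨α, α⟩-i⟨α, λ⟩)), a quantity which stays bounded as |λ|→∞.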
Let C(G//K) denote the space of continuous functions on G which are bi-invariant under K.
Let C_c(G//K) be the set of all functions in C(G//K) with compact support.
The spherical transform, called also the Harish-Chandra transform, of f∈ C_c(G//K) is defined by
(ℋ f)(λ)=∫_G f(x)φ_-λ(x) dx, λ∈𝔞_ℂ^*.
The Harish-Chandra inversion formula is given by
f(x)=C∫_𝔞^*(ℋ f)(λ)φ_λ(x) |c(λ)|^-2 dλ, x∈ G,
where C is a constant associated with G.
§.§ Symmetric space
When G is a semisimple, connected Lie group with finite center, the homogeneous space G/K can be identified with the solvable Lie group S=AN with the Lie algebra 𝔞⊕𝔫.
The multiplication and inverse for (a,x), (b,y)∈ S are given by
(a,x) (b,y)=(ab,bxb^-1y)
and
(a,x)^-1=(a^-1, ax^-1 a^-1),
which is well defined as is stable under ().
The group G is unimodular, while S is not (unless G=A) and has the modular function given by
δ(s)= e^-2ρ(log a), s= (a,x)∈ S.
It admits an extension to G defined by δ(g):= e^-2ρ(H(g)) (g∈ G).
The left and right Haar measures of S are given by
d_ls=d a d n,
d_rs=δ^-1(s) d_ls= e^2ρ(log a) da dn.
As G is semisimple, it is unimodular, and the Haar measure dg on G is invariant and given by
dg=d_lk d_r(an)=dk d_r(an)= e^2ρ(log a) dk da dn.
or
d g=d_l(an)d_rk=d_l(an)dk=d ad nd k.
§.§ Distinguished Laplacian and Laplace-Beltrami operator
Keep the assumptions of Section <ref>.
The Killing form B of G is positive definite on 𝔭. This defines a Riemannian structure on G/K and the Laplace–Beltrami operator Δ. It also has a coordinate expression, described below.
The bilinear form B_θ: (X,Y)↦ -B(X,θ Y) is positive definite on 𝔤 and the Iwasawa decomposition 𝔤=𝔨⊕𝔞⊕𝔫 is orthogonal with respect to it. We choose an orthonormal basis (H_1,…,H_l,X_1,…,X_n-l) in 𝔞⊕𝔫 so that (H_1,…,H_l) is a basis of 𝔞 and (X_1,…,X_n-l) a basis of 𝔫. With these viewed as left-invariant vector fields on S, we have then <cit.>
Δ = ∑_i=1^l H_i^2+2∑_j=1^n-l X_j^2+∑_i=1^l H_i · (H_iδ)(e).
For a function f on G, set τ (f) = f̌ and f̌(s):=f(s^-1). Setting -X = τ∘ X ∘τ for X∈, we get a right-invariant vector field; the operator L defined by
-L = ∑_i=1^lH_i^2+2∑_j=1^n-lX_j^2
Denote by C_c^∞(S) the set of complex-valued continuous functions with compact support on S.
Let be the Lie algebra corresponding to S and left-invariant vector fields X_1,⋯, X_n be an orthogonal basis of .
For each 1≤ i ≤ n, set X_i as the right-invariant vector fields on G that coincide with X_i at the identity which is denoted by e.
One can have -X_i = τ∘ X_i ∘τ where τ (f) := f̌, for all f∈ C_c^∞(S) and f̌(s):=f(s^-1).
The inner products on C_c^∞(S) are defined by
f, g_l = ∫_S f(s)g(s)d_ls, f, g_r = ∫_S f(s)g(s)d_rs
for f,g∈ C_c^∞(S).
We define two operators L_r and L_l as follows
L_r f, g_l =∑_j=1^nX_j f, X_j g_l, L_l f, g_l =∑_j=1^n X_jf, X_jg_l
By <cit.> , we have
L_r=-∑_j=1^ nX_j^2 and L_l=-∑_j=1^ nX_j^2-∑_j=1^ n(X_jδ)(e)X_j.
Particularly,
If we chose H_1, …, H_l to be an orthonormal basis of and √(2)X_1, …, √(2)X_ to be an orthogonal basis of , define
-L_r = ∑_i=1^lH_i^2+2∑_j=1^ñX_j^2,
-L_l= ∑_i=1^l H_i^2+2∑_j=1^ñ X_j^2+∑_i=1^l H_i · (H_iδ)(e).
One can prove that Δ acting on C^∞(G/K) can be identified with the operator -L_l. We call L_r the (minus) distinguished Laplacian on S and conveniently denote it by L.
It is a distinguished Laplace operator on S=G/K and
has a special relationship with Δ <cit.>:
δ^-1/2∘τ L τ∘δ^1/2 = -Δ - ρ^2=: Δ_ρ.
The shifted operator Δ_ρ has the spectrum [0, ∞). Both Δ_ρ and L extend to positive and self-adjoint operators on L^2(S, d_l), with the same notations for their extensions.
Let F be a bounded Borel function on [0, ∞).
We can define bounded operators F(L) and F(Δ_ρ) on L^2(S) via Borel functional calculus.
Since L is right-invariant while Δ_ρ is left-invariant, we can define k_F(Δ_ρ) and k_F(L) to be the kernels of F(Δ_ρ) and F(L) respectively, a priori distributions, such that
F(Δ_ρ)f=f *k_F(Δ_ρ) and
F(L)f=k_F(L)*f, f∈ L^2(S).
The connection between L and Δ implies that
k_F(Δ_ρ)=δ^1/2 k_F(L),
which was pointed out by Giulini and Mauceri <cit.> (also see <cit.>).
It is known that
k_F(Δ_ρ) is K-biinvariant. Thus, not only k_F(Δ_ρ) but also k_F(L) can be regarded as a function on G by the formula above.
By the inversion formula for the spherical transform, we obtain the exact formula of the kernel as follows:
k_F(Δ_ρ)(x)=C∫_𝔞^* F(|λ|^2)φ_λ(x) |c(λ)|^-2 dλ, x∈ G.
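As a concrete illustration of this formula, the following numerical sketch evaluates the kernel in the simplest rank one example, the three-dimensional real hyperbolic space (n=3, l=1, ρ(H)=|H|), where φ_λ(exp H)=sin(|λ||H|)/(|λ|sinh|H|) and |c(λ)|^-2 is proportional to |λ|^2; the overall constant is omitted and the choice F(x)= e^-x is made only for illustration. The printed quantity e^r k(r) stays bounded, in line with the pointwise estimate stated in the Introduction.

import numpy as np

def phi(lam, r):
    # elementary spherical function on the 3-dimensional real hyperbolic space
    return np.sin(lam * r) / (lam * np.sinh(r))

def kernel(F, r, lam_max=50.0, n=200_000):
    # k(r) ~ integral of F(lam^2) * phi_lam(r) * lam^2 d lam  (normalizing constant omitted)
    lam = np.linspace(1e-6, lam_max, n)
    y = F(lam ** 2) * phi(lam, r) * lam ** 2
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam))   # trapezoid rule

F = lambda x: np.exp(-x)          # illustrative choice (heat semigroup at time 1)
for r in (0.5, 2.0, 5.0, 10.0):
    print(f"r = {r:4.1f},  e^r * k(r) = {np.exp(r) * kernel(F, r):.4e}")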
§ ASYMPTOTIC BEHAVIOR OF Φ_Λ
In this section, we shall introduce some notations and properties about the asymptotics of the elementary spherical functions. The main result is by Harish-Chandra <cit.>
, but we are systematically citing the book of Gangolli and Varadarajan <cit.>.
In this section, G is a group in class ℋ. We use all the notations introduced above.
Fix H_0∈𝔞 which is not in the center of 𝔤. This is equivalent to saying that the subset F⊂Σ_s^+ of simple roots vanishing at H_0 is not equal to Σ_s^+. Set 𝔞_F = {H∈𝔞: α(H)=0 for all α∈ F}, and let 𝔪_1F be the centralizer of 𝔞_F in 𝔤. If we denote by M_1F the centralizer of 𝔞_F in G, then 𝔪_1F is the corresponding Lie algebra of M_1F.
Denote by Σ_F^+ ⊂Σ^+ the set of positive roots vanishing at H_0 and Σ_F:=Σ_F^+∪ -Σ_F^+. Then F and Σ_F^+ are the simple and positive systems of the pair (𝔪_1F, 𝔞) respectively.
With 𝔪=Z_𝔨(𝔞), we have the restricted root space decomposition
𝔪_1F=𝔪⊕𝔞⊕⊕_λ∈Σ_F 𝔤_λ,
where as expected
_λ = X∈_1F: H,X=λ(H)X for all H∈
for each λ∈Σ_F.
Let _F=_1F∩
and _1F=_1F∩=⊕_λ∈Σ_F^+_λ.
Then we have
the Iwasawa decomposition _1F=_F⊕⊕_1F.
Let K_F=K∩ M_1F and N_1F=exp_1F, on Lie group level, we also have the decomposition
M_1F=K_F A N_1F.
We define θ_F=θ|_M_1F and B_F=B|__1F_×_1F_, then θ_F is a Cartan involution of M_1F. It is important that M_1F is also of class ℋ and (M_1F,K_F,θ_F,B_F) inherit all the properties of (G,K,θ,B)<cit.>.
We denote the Weyl group of pair (_1F,) by _F and define τ_F(H)=min_α∈Σ^+\Σ_F^+α(H) for all H∈.
Let c_F denote the Harish-Chandra c-function with respect to M_1F.
The elementary spherical functions on M_1F corresponding to λ∈𝔞_ℂ^* are given by
θ_λ(m)=∫_K_F e^(iλ-ρ_F)(H(mk_F))dk_F,
where ρ_F:=1/2∑_α∈Σ_F^+m_αα.
An important part of our proofs relies on Theorem <ref> below, which is <cit.>.
To state it, we need the following result. It is a proposition in <cit.> since the functions ψ_λ on M_1F are defined there by another formula <cit.>; we take it however for a definition.
[<cit.>]
Let λ∈𝔞^* be regular. Then
ψ_λ(m)=|W_F|^-1 ∑_s∈ W (c(sλ)/c_F(sλ)) θ_sλ(m)
for all m∈ M_1F.
Next, we introduce the following theorem about the asymptotic behavior of φ_λ in a region including a wall of the positive chamber.
For any ζ> 0, define
A^+(H_0:ζ):={a∈ A : τ_F(log a) > ζ|log a| }.
This is a conical open set in A, and it contains exp H_0 when ζ is small enough.
Fix ζ >0. Then we can find constants κ=κ(ζ)>0,
C=C(ζ)>0, and ι=ι(ζ)>0 such that for all λ∈^* and all a∈ A^+(H_0: ζ),
|e^ρ(log a)φ_λ(a)- e^ρ_F(log a)ψ_λ(a)|
≤ C(1+|λ|)^ι e^-κ|log a|.
§.§ Estimates of the constants in Theorem <ref>
We are interested in explicit values of the constants in the theorem above, and especially of ι. Theorem <ref> as stated is in fact a particular case of <cit.>: the latter contains estimates for a differential operator b acting on φ_λ, and we apply it in Section <ref> with b=𝕀. We would like to estimate the constant ι, which is denoted by s in the book <cit.>, in the case of a general operator b, since the proof is almost the same.
Extracting constant from the proofs of <cit.> and its preceding statements is a long task requiring a lot of citations.
It is impossible to explain every symbol appearing in the discussion and to keep a reasonable volume, so we invite the reader to consult the book <cit.> in parallel. Triple-numbered citations like Proposition 5.8.3 or (5.9.2) will be always from this book. Any new notations introduced in this subsection will be used only within it.
First to mention are
the notations concerning parabolic subgroups. Those introduced above suit our proofs in Lemma <ref>, but are different from <cit.>. In this subsection we denote, following the book <cit.>: M_10=M_1F, _10=_1F, _0=_F, τ_0=τ_F and ρ_0=ρ_F.
Let _10 denote the algebra _10∩ (we note that in <cit.>, the complement of in the Cartan decomposition is denoted by rather than ).
Denote by U() and U(_10) the universal enveloping algebras of and _10 respectively. There is a projection β_0 of U() onto U(_10) defined in (5.3.32) and (5.3.33).
Next, a map γ_0 is defined in (5.9.2) by
γ_0(b) = d_P_0∘β_0(b) ∘ d_P_0^-1, b∈ U()
where d_P_0 is a homomorphism from M_10 to _+ defined on p. 208 referring to the formula (2.4.8). For a∈ A, we have d_P_0(a) = e^(ρ-ρ_0)(log a) by Lemma 5.6.1.
The space ^*_ is denoted by ℱ in <cit.>, and is decomposed as ℱ = ℱ_R⊕ℱ_I with ℱ_R=^* and ℱ_I = i^*. For λ∈ℱ this gives λ=λ_ R+λ_ I.
For any κ>0, the set ℱ_I(κ) is defined in Lemma 4.3.3 as
ℱ_I(κ) = {λ∈ℱ: |λ_ R| <κ}.
In particular, it contains ℱ_I.
It is important to recall again that a spherical function denoted by φ_λ in <cit.> would be denoted φ_iλ in <cit.>, so that φ_λ of <cit.> is positive definite when λ∈ℱ_I (Proposition 3.1.4), and the integral in the inversion formula of the spherical transform is over ℱ_I as well (Theorem 6.4.1). Thus, in Proposition <ref> below we consider the case λ∈ℱ_I which implies λ_ R=0; in Corollary <ref>, we return to the notations of the rest of the paper.
We can now state the theorem:
Theorem 3.2' <cit.>
Fix ζ >0. Then we can find constants κ=κ(ζ)>0, and, for each b∈ U(), constants
C=C(ζ,b)>0, s=s(ζ,b)>0 such that for all λ∈ℱ_I(κ) and all a∈ A^+(H_0: ζ),
|^ρ(log a)(bφ_λ)(a)-^ρ_0(log a) (γ_0(b) ψ_λ)(a)|
≤ C(1+λ)^s^-κlog a.
Our result is now:
For regular λ∈ℱ_I, the inequality (<ref>) holds with the constant s = 6d +1 + b where d=Σ_r^+ is the number of indivisible roots.
As we will see, this constant is determined by the degrees of differential operators involved, so we will follow the proofs very briefly while paying particular attention to this point.
Two more remarks on notations are also needed. If μ is a differential operator and f a function (on G, or its subgroup), then we denote its value at a point x by (μ f)(x), but in <cit.> it is denoted also by f(x;μ). Also, the spherical functions in <cit.> are denoted both φ_λ(x) and φ(λ:x).
The proof of Theorem 5.9.3(b) relies on Proposition 5.9.2(b) with the same power s.
In its proof in turn, estimates with (1+|λ|)^s come from two sides:
* Formula (5.8.8) of Proposition 5.8.3 estimating |Φ_0(λ:mexp H;μ) - Θ (λ:mexp H;μ)|, with μ=γ_0(b).
The function Φ_0 is defined in (5.4.12), and Θ in Proposition 5.8.1. Formula (5.8.8) is a vector-valued inequality: by (5.4.12), for λ∈^*_, m∈ M_10
Φ_0(λ:m) = [ (v_1∘ d_P_0) φ_λ(m); ⋮; (v_k∘ d_P_0) φ_λ(m) ]
with operators v_1, …, v_k∈ U(_10) defined in (5.4.9). In Proposition 5.9.2, only the first coordinate is used, in which v_1=1 by (5.4.5).
* The estimates of (μ_i φ_λ)(m exp H) with m∈ M_10, H∈, which are obtained by Lemma 5.6.3 with μ=d_P_0∘μ_i ∘ d_P_0^-1; here μ_i, 1≤ i≤ N, are operators in U(_10) whose existence follows from Proposition 5.3.10 with g=b (since φ_λ is K-biinvariant, we have (b φ_λ)(m)=(δ'(b)φ_λ)(m) by (5.3.26), at m∈ M^+_10A_+ as in the proof of Proposition 5.9.2). We get also from Proposition 5.3.10 that μ_i≤ b for all i.
Discussion of the case (b).
Case (a) will require a longer discussion, but it is eventually reduced to the same Lemma 5.6.3, so we continue first with the case (b).
Let us state the lemma in question
<cit.>: Denote by U(_10) the universal enveloping algebra of _10.
Fix μ∈ U(_10). Then there are constants C=C(μ)>0, s=s(μ)≥ 0 such that
| (μ∘ d_P_0) φ_λ(m) | ≤ C e^|λ_ R| σ(m) (1+|λ|)^s (1+σ(m))^s Ξ_0(m)
for all λ∈_^*, m∈ M_10.
The function σ is defined in (4.6.24), but in fact we do not need to consider it since this factor does not depend on λ. We recall again that notations of <cit.> are different from that of Helgason and of those used in our paper, so that for λ∈^*⊂^*_ we have λ_ R=0 in the cited Lemma.
In the proof of Lemma 5.6.3, the constant s comes from the formula (4.6.5) in Proposition 4.6.2:
|φ(λ; ∂(v):x; u)| ≤ C (1+|λ|)^ u(1+h(x))^ vφ(λ_ R:x).
When applied in Lemma 5.6.3, one puts v=1 so that ∂(v)=𝕀, then x=m and u=d_P_0^-1∘μ∘ d_P_0, so that u = μ. The function h is defined in the same Proposition 4.6.2 but it is not significant in our case since v=0.
We see now that the constant s appearing in Lemma 5.6.3 is μ, and by our previous considerations, it is bounded by b in the case (b).
Discussion of the case (a).
In the proof of Proposition 5.8.3, the estimate (5.8.8) follows from (5.8.16), where one can replace exp(Γ_0(λ:H)) Θ^μ(λ:m) by Θ(λ:mexp H;μ) according to (5.8.18) and (5.8.20).
And (5.8.16) is an application of Proposition 5.7.4, precisely (5.7.26). One should note that (5.7.26) is an inequality involving functions of t∈^q, whereas in (5.8.16) this variable is hidden: we suppose that H=∑_i=1^q t_i H_i, where H_1,…,H_q are chosen on p.224.
The factor (1+|λ|)^s in (5.8.16) is a product of two others. The first one comes from the factor (1+Γ)^2(k-1) in (5.7.26) since Γ_i=Γ_0(λ:H_i), with Γ_0 defined in (5.5.18). For every i, the norm of the matrix Γ_0(λ:H_i) is bounded by |λ| |H_i| since its eigenvalues are of the form H_i(s^-1λ), s∈, by Theorem 5.5.13. Moreover, k in this formula is the dimension of the space of values of the function f; in (5.8.16), the function under study is Φ_0 (with λ, m, μ fixed), and its values are k-dimensional with k as in (5.4.12), that is, k=[:_0], by (5.4.4).
The second factor contributing to (1+|λ|)^s in (5.8.16) is G_i,λ,m_β |λ_ R| ,s estimated in (5.8.9), with the function G_i,λ,m defined in (5.6.4), and the power s is given by Lemma 5.6.6; recall that this lemma is applied with μ=γ_0(b). Note also that (5.8.9) does not add an exponential factor in λ since λ_ R=0 for λ∈^*.
Altogether, we get (1+|λ|) in the power 2([:_0]-1)+s, where s is given by Lemma 5.6.6 (second inequality) with μ=γ_0(b).
The index [:_0] can, however, be large and is not satisfactory for us. But actually it can be replaced by a better estimate; let us see how.
Improving the estimate (1+Γ)^2(k-1).
We have seen that this factor comes from (5.7.26), with Γ_i = Γ_0(λ:H_i), i=1,…,q. Formula (5.7.26) should read
| f(t) - exp(t_1 Γ_1+…+t_q Γ_q) θ| ≤ C (1+Γ)^2(k-1) (1+| t|)^s+2k-1max_i g _i_b,s e^-(aη/3)| t|
(there is a misprint in the book where the exponent is exp(-t_1 Γ_1+…+t_q Γ_q); the first minus sign is unnecessary, as can be figured out from the proof, or compared with (5.7.14); when applied in Proposition 5.8.3, this minus sign is not assumed there either).
This inequality is obtained by estimating first | f^*( t)-θ| with f^*( t) = exp(-t_1 Γ_1-…-t_q Γ_q) f( t) (notations of (5.7.15)) then multiplying by exp(t_1 Γ_1+…+t_q Γ_q). In our case,
t_1 Γ_1+…+t_q Γ_q = t_1 Γ_0(λ:H_1)+…+t_q Γ_0(λ:H_q) = Γ_0(λ:H),
since Γ_0 is linear in H_i (see (5.5.16)-(5.5.19)). Let us write L=Γ_0(λ:H) till the end of this proof. Thus, we are estimating
| f(t) - exp(L) θ| ≤exp(L) | f^*(t)-θ|.
The estimate for | f^*(t)-θ| is obtained as | f^*(t)- f^*(t,…,t)| + | f^*(t,…,t)-θ| where t=min( t), and both are eventually reduced to Corollary 5.7.2 of Lemma 5.7.1. We would like to replace it by the following:
Claim. If λ∈^* is regular, then exp(L)≤ C_k (1+|λ|)^2ξ for every H
∈, where ξ = max_1≤ j≤ k u_j and u_j are defined in (5.5.14).
Let denote the coset space /_0. By Theorem 5.5.13, L=Γ_0(λ:H) is diagonalizable: for s̅∈, the vector f_s̅(λ) of (5.5.34) is its eigenvector with the eigenvalue H(s^-1λ), and these vectors form a basis of ^k (recall that k=||). The matrix F(λ) with columns f_s̅(λ) is the transition matrix to the Jordan form J(λ) of L=F(λ) J(λ) F(λ)^-1, so that
exp L=F(λ) exp( J(λ) ) F(λ)^-1. We have now
exp L≤ F(λ) exp( J(λ) ) F(λ)^-1≤F(λ) F(λ)^-1
since J(λ) is diagonal with purely imaginary values on the diagonal (recall that we assume λ∈
i ^*).
The norm of F(λ) is up to a constant (depending on k) equal to the maximum of its coordinates, that is, of u_j(s^-1λ) over 1≤ j≤ k and s∈, by Theorem 5.5.13. Every u_j is a polynomial function on ^* of degree u_j, chosen in (5.5.14). We can thus estimate F(λ)≤ C_k max_j (1+|λ|)^ u_j.
The inverse matrix has, by Lemma 5.5.10, coordinates |_0| u^i(s̅^-1λ) where (u^i) is a basis dual to (u_j) with respect to the bilinear form (5.5.25). Both bases span the same linear space. Belonging to the linear span of (u_j), every u^i has degree at most max_j u_j, and this allows to estimate F(λ)^-1 by C_k max_j (1+|λ|)^ u_j and finally
exp L≤ C_k max_j (1+|λ|)^2 u_j, what proves the claim.
Return now to the proof of (5.7.26). We can use now the Claim above instead of every reference to Lemma 5.7.1 or Corollary 5.7.2. One occurrence is (5.7.27), but let us first recall the notations (5.7.7)-(5.7.8) which imply that for any continuous ^k-valued function h on ^q_+ and any t∈^q_+,
| h(t)| ≤ k^1/2 e^b| t| - amin ( t) (1+| t|)^s h_b,s.
Note that | t| = |t_1|+…+|t_q| and min( t)=min (t_1,…, t_q) by (5.6.6). In particular,
| h((t,…, t))| ≤ k^1/2 e^bqt-at (1+ qt)^s h_b,s.
This together with the Claim implies that (5.7.27) can be improved to
∑_1≤ i≤ q | g_i^*(t,…,t)| ≤ C (1+qt)^s (1+λ)^2ξ e^qbt-atmax_1≤ i≤ q g_i_b,s.
By (5.7.23), qb-a<aη/3-a<-2a/3. The existence of the limit θ follows as in the original proof, and this limit is given by (5.7.12). Up to a change of notations, this is also (5.7.20). We can now estimate
| f^*(t,…,t)-θ| ≤ C max_1≤ i≤ q g_i_b,s (1+|λ|)^2ξ∫_t^∞ (1+qu)^s e^(qb-a)u du,
with the integral
∫_0^∞ (1+qt+qv)^s e^(qb-a)(t+v) dv ≤ C_a,b,s (1+t)^s e^(qb-a)t.
Next comes the estimate of | f^*( t)- f^*(t,…,t)|, done similarly to the calculations between (5.7.21) and (5.7.22). We arrive at
| f^*( t)- f^*(t,…,t)| ≤ C max_1≤ i≤ q g_i_b,s (1+|λ|)^2ξ (1+| t|)^s e^b| t|-a min( t)
instead of (5.7.22), and conclude that for t∈ S_η (this set is defined in (5.7.4), and within it t≥η| t|)
| f( t) - exp(L) θ| ≤ (1+|λ|)^4ξmax_1≤ i≤ q g_i_b,s (1+t)^s e^(b-η)| t|,
allowing to replace (1+Γ)^2(k-1) by (1+|λ|)^4ξ in (5.7.26).
Collecting the powers of (1+|λ|). The final power of (1+|λ|) is the maximum between two cases: in case (b), s≤ b; in case (a), s is bounded by the sum of 4ξ = 4max_j u_j with u_j of (5.5.14 ), and of the power s given by Lemma 5.6.6 (second inequality) with μ=γ_0(b), of degree μ≤ b. We will proceed now to estimate this last power.
Lemma 5.6.6 with its two inequalities follows directly from Lemma 5.6.4, first and second inequalities respectively. The power s and the operator μ are the same in both. And the second inequality of Lemma 5.6.4 is reduced to the first one, with another μ; we need to estimate its degree. For a given i, the operator E_i=E_H_i is defined in (5.4.14) as a k× k matrix with nonzero entries (only in the first column) of the type
∑_p=1^p_iψ_H_i,jpμ_H_i,jp,
j=1,…,k, where ψ_H_i,jp are functions in R^+ (that is, smooth functions on A^+, see a remark after formula (5.2.7) and notations after (4.1.26)), and μ_H_i,jp are differential operators in U(_10) of degree
μ_H_i:jp≤ H_i + v_j = 1 + v_j,
by (5.4.13).
The operators v_j are introduced by the formula (5.4.9), and we will yet return to them. The product μ E_i is a matrix again with entries (only in the first column)
∑_p=1^p_iμ∘ (ψ_H_i,jpμ_H_i,jp) = ∑_p=1^p_i∑_l=1^l_iψ_ijpl' μ'_ijplμ_H_i,jp,
j=1,…, k, with some ψ_ijpl'∈ R^+, μ'_ijpl∈ U(_10), and each entry in (<ref>) has degree ≤μ+1+max_j v_j. This is thus the maximal degree of μ appearing in the first inequality of Lemma 5.6.4. The last reduction to make is to use the formula (5.4.12) defining Φ_0, also cited above; estimates for Φ_0(λ:m;μ) follow from those for φ(λ:m; μ∘ d_P_0) in Lemma 5.6.3, but with μ∘ v_j, j=1,…,k, instead of μ. This increases the bound for the degree in (<ref>) to
μ+1+2max_j v_j
with (v_j) of (5.4.9). Now, as we have seen in the discussion of the case (b) above, the constant s in Lemma 5.6.3 is μ, so that s= s(μ) ≤ b+1+2max_j v_j with μ=γ_0(b).
The total estimate for s in the initial Theorem 5.9.3 is now
s ≤ b+1+2max_j v_j + 4 max_j u_j
with (u_j) of (5.5.14) and (v_j) of (5.4.9).
Recall from p. 243 that d_P_0(a) = e^(ρ-ρ_0)(log a) for a∈ A, and that for m∈ M_10
d_P_0(m) = | Ad(m) |__0| ^1/2.
Here _0 is the nilpotent component in the Langlands decomposition of P_0=M_0A_0N_0; in our case P_0=P_F is standard and _F = ∑_λ∈Δ^+∖Δ^+_F_λ (2.3.15); by (2.4.10),
d_P_0(ma) = exp( 1/2 Ad (log a) |__0),
and we can write explicitly Ad (H) |__0 = ∑_λ∈Δ^+∖Δ^+_F m_λλ(H) for H∈.
We are at our final destination: s is u, and we can trace it back.
In Lemma 5.6.3 we have u=d_P_0^-1∘μ∘ d_P_0 with μ∈ U__10, so that u = μ.
When used in Proposition 5.8.3, formula (5.8.9) estimates G_i,λ,m_βλ_ R, s. Notations of the cited Lemma 5.6.6 are a bit different:
| G_i (λ: m: t) | ≤ C e^λ_ Rσ(m) + βλ_ R |t| (1+λ)^s
(1+|t|)^s (1+σ(m))^s Ξ_0(m) ε_0(m)^2 ( 1-ε_0(m)^2 )^-(s+1) e^-2min(t);
the norm G_i,λ,m_βλ_ R, s as a function of t∈^q is defined by (5.7.7) (see the proof of Proposition 5.8.3).
Now, the second formula in Lemma 5.6.4 which we apply with μ=γ_0(b) reduces to the first one
| Φ_0(λ:m;μ) | ≤ C e^λ_ Rσ(m) (1+λ)^s (1+σ(m))^s Ξ_0(m)
with μ=γ_0(b)ψ_H_i:jpμ_H_i:jp. Together with the definition of Φ_0 by (5.4.12)
Φ_0(λ:m) = [ φ(λ:m; v_1∘ d_P_0); …; φ(λ:m; v_k∘ d_P_0) ]
this makes us apply Lemma 5.6.3, that is, (<ref>), with μ = γ_0(b)ψ_H_i:jpμ_H_i:jp v_k which is of degree ≤ 1 + b + 2max{ v_j: 1≤ j≤ k}.
Concentrate now on the degrees of v_j. These operators are introduced by the formula (5.4.9): v_j∈𝔔_0=U(𝔰_10)^K_0 are such that γ__10/(v_j)=u_j; the elements u_j∈ U() are homogeneous and _0-invariant. The homomorphism γ__10/ is defined by the formula (2.6.53) and Theorem 2.6.7. By the proof of Theorem 2.6.7 (p. 93-94) v_j can be chosen of the same degree as u_j.
Now by (5.4.1-5.4.5) u_j are in ℌ, a graded subspace of U() such that U() is the direct sum of ℌ and U()I_^+, where I_ is the space of -invariant polynomials in U() and I_^+ is the subset of polynomials in I_ of positive degree. This situation is described by general theory in <cit.>. In particular, <cit.> states that the multiplication map I_⊗ℌ→ U() is an isomorphism, and the Poincaré series of ℌ is given by
P_ℌ(t) = ∏_1≤ i≤ l (1+t+… +t^d_i-1),
where d_i= p_i are the degrees of polynomials generating I_. There are exactly l of them, see <cit.>; moreover, by (4.15.34) we know that ∑_i=1^l (d_i-1) is the number of reflections in . This is exactly the number of indivisible roots in Δ^+ (see for example Corollary 4.15.16, and note that s_α = s_2α if both are roots). This gives us an estimate
∑_i=1^l (d_i-1) ≤ d.
By (<ref>), the left hand side is also the highest degree of polynomials in ℌ, whence we conclude that v_j≤ d.
The reasoning for u_j is similar, just done in a more abstract setting; the bound is d as well.
We obtain finally:
s
≤ b + 6d + 1.
In Theorem <ref> we can choose ι≤ 6d+1 ≤ 6(n-l)+1.
This is just <cit.> applied with b=𝕀, observing that Σ_r^+≤Σ^+≤ n-l.
We would like to say a bit more about the index k=[:_0]. In terms of d (the number of indivisible roots), it can grow exponentially: if _0={𝕀} then k=|| and is, for example, (d+1)! if the root system is A_d. In our applications _0 will be the parabolic subgroup generated by all simple roots but one; however, in this case k can also grow exponentially. An example is given again by the system A_d. Suppose that d=2p is even and remove the p-th root; the remaining system is then reducible and _0 is the direct product of A_p and A_p-1. We have k = (2p+1)!/((p+1)! p!), and up to a constant this is 4^p/√(p). Our final estimate by 6d+1 is thus indeed much lower.
Indeed, by Stirling's formula, (2p+1)!/((p+1)! p!) ∼ 2^2p+1/√(π p) as p→∞.
Estimates of κ.
In Lemma 5.6.6, β is defined as max_1≤ i≤ q |H_i|, where (H_i) is a basis of _0∩, which extends to a dual basis of with respect to the simple roots (α_i).
In the proof of Proposition 5.8.3, after the formula (5.8.15), the constants c and η>0 are chosen so that ^+_0(η) is nonempty, 0<c<1 and for every t∈^q
|t_1 H_1+… +t_q H_q| ≥ ct.
Then one sets
κ = 2cη/3q β (2k+1).
All subsequent results relying on this Proposition use in fact the same value of κ.
Let us analyse it.
By (5.6.3), ^+_0 = { H∈_0: α_i(H)>0, 1≤ i≤ q}, and by (5.8.1),
_0^+(η) = { H∈_0: τ_0(H) > η|H| },
where τ_0(H) = min_1≤ i≤ q |α_i(H)| by (5.6.3).
It follows that κ depends on the root system only. In other words, for a given group it can be chosen once and be valid for any choice of H_0 or ζ.
§ UNIFORM ESTIMATES OF KERNELS
This is the central part of our article. We start with a general lemma concerning functions on a general group G in class ℋ, and then we derive from it estimates for the kernels of a class of functions of Laplace operators, for G semisimple.
Let G be a group of class ℋ and R a radial positive function on ^*.
Then
sup_a∈^ρ (log a)∫_^*R(λ)φ_λ(a)(λ)^-1d λ≤ C ∫_^* R(λ) (1+|λ|)^6d+n-l/2+1 dλ,
where the constant C>0 depends on the group only.
Moreover,
lim sup_a∈ |log a|→∞^ρ (log a)∫_^*R(λ)φ_λ(a)(λ)^-1d λ≤ C ∫_^* R(λ) (1+|λ|)^n-l/2 dλ.
It will be clear from the proof that one can replace the estimates by
C ∫_^* R(λ) (1+|λ|)^n-l+1 |(λ)|^-1 dλ and
C ∫_^* R(λ) (1+|λ|)^n-l/2 |(λ)|^-1 dλ
respectively, which is more precise around 0 (in ^*).
Step 1.
First, we deal with the simplest case Σ_s^+=1.
Throughout this step we assume that Σ_s^+=α. Then Σ^+=α or Σ^+=α,2α, and Σ=±α or Σ=±α,± 2α.
Since the argument will be similar and easier when Σ^+=α, in the following proof, we only discuss the case Σ^+=α,2α.
Under the assumption above we have
^+=H∈: α(H)> 0, =H∈: α(H)≥ 0.
With the fact s_α=s_-α=s_2α we know that =𝕀, s_α and Λ=α^+. We rewrite the Harish-Chandra series of φ_λ as follows: for all regular λ∈^* and all H∈^+,
φ_λ(exp H)^ρ(H)
= (λ)^iλ(H)∑_m=0^∞Γ_mα(λ) ^-mα(H)
+ (s_αλ)^i s_αλ(H)∑_m=0^∞Γ_mα(s_αλ) ^-mα(H).
By <cit.>, there exist constants C>0, p≥
0 such that |Γ_mα(λ)| ≤ C m^p for all λ∈^*.
We now need an estimate for Γ_mα. By definition, Γ_0≡1 and for each m≥1
m^2α^2-2miα, λΓ_mα(λ)
= 2m_α∑_k=1^m/2Γ_m-2kα(λ) mα+ρ-2kα,α-iα, λ
+ 2m_2α∑_k=1^m/4Γ_m-4kα(λ) mα+ρ-4kα,α-i2α, λ.
We can calculate
ρ_α := ⟨ρ,α⟩/|α|^2 = 1/2 m_α + m_2α;
moreover,
m^2α^2-2miα, λ≥maxm^2α^2,2mα,λ.
This allows us to estimate Γ_mα(λ) as follows:
Γ_mα(λ) ≤ 2m_α∑_k=1^m/2Γ_m-2kα(λ)m+ρ_α-2k/m^2+1/2m
+ 2m_2α∑_k=1^m/4Γ_m-4kα(λ)m+ρ_α-4k/m^2+1/m
≤ 2m_αρ_α/m^2+3/2m∑_k=1^m/2Γ_m-2kα(λ)
+ 2m_2αρ_α/m^2+2/m∑_k=1^m/4Γ_m-4kα(λ).
With Γ_0≡1 and Γ_α≡0, this implies already that for every m there is a constant C_m≥1, independent of λ, such that |Γ_mα(λ)|≤ C_m. It is clear that every C_m depends on m_α and m_2α only.
Set C_α^0=max{ C_m: 0≤ m≤ρ_α} and C_α = max{ C_α^0, (5m_α + 3m_2α)/2}.
In particular, we have C_α≥5/2 since m_α≥1.
We will show now by induction that |Γ_mα(λ)|≤ C_α^m for all m and all regular λ.
The assumption is true for m≤ρ_α.
For m>ρ_α, suppose the assumption holds for all l<m, then
Γ_m α(λ) ≤ 2m_α ( 1/m + 3/2m) ∑_k=1^m/2Γ_m-2kα(λ)
+ 2m_2α ( 1/m + 2/m) ∑_k=1^m/4Γ_m-4kα(λ)
≤ 5m_α/m∑_k=1^m/2 C_α^m-2k
+ 6m_2α/m∑_k=1^m/4 C_α^m-4k
≤ 5m_α/m m/2 C_α^m-2
+ 6m_2α/mm/4 C_α^m-4
≤( 5m_α/2 + 3m_2α/2) C_α^m-1≤ C_α^m.
Then we can estimate spherical functions as follows:
φ_λ(exp H)^ρ(H) ≤(λ)∑_m=0^∞^-mα(H)Γ_mα(s_αλ)+Γ_mα(λ)
≤ 2 (λ)∑_m=0^∞^-mα(H) m^p,
the series converging for any H∈^+.
Set
^+_1=H∈^+: α(H)> 1.
We obtain that for any regular λ and H∈_1^+,
φ_λ(exp H)^ρ(H)≤ C'(λ),
with
C' = 2 ∑_m=0^∞ e^-m m^p.
For any H∈\_1^+=H∈: α(H)≤ 1, in virtue of φ_λ_∞=φ_λ(1)=1,
φ_λ(exp H)^ρ(H)≤^ρ(H)· 1
≤^1/2m_α+m_2α =: C”
< ∞.
Thus, for every H∈ and regular λ, we have |φ_λ(exp H)|^ρ(H)≤ C
max(1,|(λ)|),
with C=max(C', C”).
The set ^*∖^*' of irregular points has measure zero in ^* and does not influence integration.
Together with (<ref>),
we now obtain
sup_a∈∫_^*R(λ)φ_λ(a)^ρ(log a)(λ)^-1d λ≤ C ∫_^*R(λ)1+λ^n-l/2dλ,
addressing both assertions of the theorem.
Step 2.
We induct on Σ_s^+.
If Σ_s^+=1, by the discussion in Step 1, the result follows.
Assume now the theorem holds
for all Lie groups of class ℋ whose simple system satisfies Σ_s^+≤ r-1.
We proceed to consider the case Σ_s^+= r.
We denote by Z_ the centre of (which may be nontrivial), and set =∩ Z_.
As a subspace of ,
=⋂_α∈Σ^+α=⋂_α∈Σ_s^+ α.
For each positive root α we can define the quotient linear functional α̅: /→ by
α̅(H̅):=α(H), H̅ = H+∈/ .
It is clear that p(H̅):=max_α∈Σ^+_s α̅(H̅) is a norm on /.
Moreover, it is equivalent to the quotient norm since / is finite-dimensional.
It follows that
we can find a constant ζ>0 such that p(H̅)>ζH̅ for any H∉.
Recall that the simple system is Σ_s^+=α_1,⋯,α_r.
For each H∉ there exists α_k∈Σ_s^+ such that p(H̅)=α̅_k(H̅) > ζH̅.
By definition of the quotient norm, this implies that there exists Y∈ such that α_k(H)=α_k(H+Y)>ζH+Y.
Moreover, if H∈ then α_k(H)=α_k(H+Y)> 0 and also H+Y ∈.
Define _k^+=X∈: α_k(X)> ζX, then H=H+Y-Y∈_k^+ + and we have just shown that
⊂⋃_k=1^r (^+_k+)∪.
We choose next a basis H̅_1,…, H̅_r of / dual to α̅_1, …, α̅_r and pick representatives H_k∈ of H̅_k such that α_k(H_k) > ζH_k. This ensures in particular that α_j(H_k)=0 if j≠ k, and H_k∈_k^+⊂.
Step 3.
For any H∈, α∈Σ^+ and Y∈_α, we get α(H)Y=H,Y=0.
It implies ρ(H)=0 and δ(a)=^-2ρ(H)=1.
With the fact φ_λ_∞ = 1 we have
sup_a∈exp∫_^*R(λ)φ_λ(a)^ρ(log a)(λ)^-1d λ≤∫_^*R(λ)1+λ^n-l/2 dλ .
Next, estimates on ^+_k+ are the same as on ^+_k, which is verified as follows.
By <cit.> we have for any Y∈ and H∈,
φ_λ(exp H)
=φ_λ(exp(-Y+Y+H))
=∫_K ^(-iλ+ρ)(A(kexp Y))^(iλ+ρ)(A(kexp(Y+H)))dk
=∫_K ^(-iλ+ρ)(A(exp Yk))^(iλ+ρ)(A(kexp(Y+H)))dk
=∫_K ^(-iλ+ρ)(Y)^(iλ+ρ)(A(kexp(Y+H)))dk
=^-iλ(Y)∫_K ^(iλ+ρ)(A(kexp(Y+H)))dk
= ^-iλ(Y)φ_λ(exp(Y+H)).
Thus we obtain
φ_λ(exp(H+Y))^ρ(H+Y)
=φ_λ(exp H)^ρ(H),
that is,
sup_H∈^+_k+ |φ_λ(exp H)|^ρ(H)
= sup_H∈^+_k |φ_λ(exp H)|^ρ(H).
Step 4.
For each k∈{1,…,r}, set F_k = {α_j: j≠ k}. This is exactly the set of simple roots of G vanishing on H_k. Let P_k=P_F_k be the parabolic subgroup associated with F_k, as described in Section <ref>. Denote by G_k=M_1F_k the associated subgroup of P_k. The simple system on the Lie algebra _k of G_k is exactly F_k and trivially F_k=r-1. We write for convenience Σ_k^+:=Σ_F_k^+, Σ_k:= Σ_F_k, _k:=_F_k, _k:=_F_k, τ_k:=τ_F_k and ρ_k:= ρ_F_k= 1/2∑_α∈Σ_k^+ m_αα. Then the conical region (<ref>) corresponding to H_k is
A^+(H_k:ζ)=a∈: τ_k(log a)> ζ |log a|=a∈: α_k(log a) > ζ |log a|.
The sets (_k^+) can be represented now in the following form:
_k^+ =log x: x∈ A^+(H_k: ζ).
According to Notation <ref> and to Theorem <ref>, for every k there exist constants C_k(ζ), ι_k, κ_k such that for any a∈ A^+(H_k: ζ)
φ_λ(a)^ρ(log a) ≤^ρ_k(log a)ψ_λ(a)+C_k(ζ) 1+λ^ι_k^-κ_k log a
= _k^-1∑_s∈(sλ)/_k(sλ)θ_sλ(a)^ρ_k(log a)+C_k(ζ)1+λ^ι_k^-κ_k log a;
by Corollary <ref>, we can assume ι_k = 6d+1.
Note also that ζ is the same for all k.
We denote then
I_k'(a) = ∫_^*R(λ)_k^-1∑_s∈(sλ)/_k(sλ)θ_sλ(a)^ρ_k(log a)(λ)^-1d λ,
I_k”(a) = C_k(ζ) ∫_^*R(λ)1+λ^ι_k^-κ_k |log a|(λ)^-1d λ
and
I_k(a) = I_k'(a)+ I_k”(a).
By the inequality above, for every a∈ A^+(H_k: ζ) we have
∫_^*R(λ)φ_λ(a)^ρ(log a)(λ)^-1d λ≤ I_k'(a)+ I_k”(a).
To estimate the main part I_k(a)', recall first that |(sλ)|=|(λ)| for each s∈, so that
I_k'(a)
≤∫_^*R(λ)_k^-1∑_s∈(sλ)/_k(sλ)θ_sλ(a)^ρ_k(log a)(λ)^-1 dλ
= ∫_^*R(λ)_k^-1∑_s∈_k(sλ)^-1θ_sλ(a)^ρ_k(log a) dλ.
Next, every s∈ is an orthogonal transformation of , which implies
R(λ)=R(λ)=R(sλ)=R(sλ).
Changing the variables, we can now estimate the integral I_k'(a) as follows:
I_k'(a)
≤ |_k|^-1∑_s∈∫_^*R(sλ) _k(sλ)^-1θ_sλ(a)^ρ_k(log a) dλ
= 1/|_k|∑_s∈^ρ_k(log a)∫_^*R(λ) _k(λ)^-1θ_λ(a) dλ
= ||/|_k| ^ρ_k(log a)∫_^*R(λ)θ_λ(a)_k(λ)^-1dλ.
The simple roots of G_k with respect to are {α_j: j≠ k} and there are r-1 of them. The number d_k of indivisible roots is smaller than d. We can thus apply the inductive hypothesis to get
sup_a∈ I_k'(a)
≤ C_k∫_^*R(λ)1+λ^6d_k+1+n_k-l/2 dλ
and
lim sup_|log a|→∞ I_k'(a)
≤ C_k∫_^*R(λ)1+λ^n_k-l/2 dλ,
with a constant C_k depending on the group G_k only.
The remainder term I_k”(a) is bounded by
I_k”(a) ≤ C_k(ζ)c_G_k∫_^*R(λ)(1+|λ|)^6d+1+(n-l)/2 d λ ^-κ_k |log a|
;
it contributes to the uniform norm but not to the limit superior as |log a|→∞.
Thus, keeping the largest powers only,
sup_a∈ A^+(H_k: ζ) I_k(a)
≤ C_k' ∫_^*R(λ)1+|λ|^6d+n-l/2+1 dλ
and
lim sup_a∈ A^+(H_k: ζ) |log a|→∞ I_k(a)
≤ C_k'∫_^*R(λ)1+λ^n-l/2 dλ
with a constant C_k'≥1.
For any a∈, let X=log a. If X∈, the inequality (<ref>) is obtained in Step 3. If X∉, then by the covering (<ref>) of the positive chamber there exists k∈{1,…, r} such that X∈_k^+ +. Let us write X=H+Y, where H∈_k^+ and Y∈. With the relation (<ref>), we obtain
sup_X∈_k^+ +∫_^*R(λ)φ_λ(exp X)^ρ(X)(λ)^-1d λ
= sup_H∈_k^+∫_^*R(λ)φ_λ(exp H)^ρ(H)(λ)^-1d λ
≤ C∫_^*R(λ)1+λ^z_k + n-l/2 dλ
With (<ref>), it remains now to collect (<ref>), (<ref>) and the inequality above to get
the statement of the theorem, with C=max{ C'_k: 1≤ k≤ r}.
For every H∈ we have one of the following possibilities:
* H is regular, and all simple roots are nonzero on it. Then at every induction step it falls into one of the regions ^+(H_k:ζ). This continues until only one simple root is left, in which case we are in Step 1 and, for H∈^+_1, the bound is C|(λ)|.
* Otherwise, α_j(H)=0 for all j in some nonempty F⊂Δ^+_s. Then at a certain point of the induction we fall into Step 3 and get 1 as the bound.
The remainders are of order (1+|λ|)^ι e^-κ |H| in both cases.
That is, we get no decay in H, which is somewhat surprising. Probably this is because we do not use any differentiation conditions.
From this moment we suppose that G is a semisimple connected Lie group with finite center and F a Borel function on ^+.
We proceed
to obtain uniform estimates for the kernel of F(L), and by consequence pointwise bounds for the kernel of F(Δ_ρ). The following lemma shows that it is sufficient to estimate the values on the subset ⊂ G only.
Let k_ be defined as in Section <ref>, then
k__∞=sup_x∈ Gk_(x)≤sup_a∈k_(a).
Every x∈ G can be decomposed as x=k_1 a k_2 where a∈ and k_1, k_2∈ K.
In virtue of (<ref>) and knowing that k_ is K-biinvariant, we obtain
k_(x)
=δ^-1/2(k_1 ak_2)k_(k_1ak_2)
=δ^-1/2(ak_2)k_(a)
=δ^-1/2(ak_2)δ^1/2(a)δ^-1/2(a)k_(a)
= ^ρ(H(ak_2)-log a)k_(a).
By <cit.> (see also <cit.>) we know that
ρ(H(ak))≤ρ(log a) for all a∈, k∈ K.
Then we get
^ρ(H(ak_2)-log a)≤ 1
and
sup_x∈ Gk_(x)≤sup_a∈k_(a).
This completes the proof.
Recall that x^+ denotes the component in the polar decomposition of x∈ G.
Let G be a semisimple connected Lie group with finite center and F a Borel function on ^+ such that
I_F = ∫_^*F(λ^2) (1+|λ|)^6d+n-l+1 dλ < ∞.
Then k_ is uniformly bounded on G, and
k__∞
=sup_x∈ Gδ^-1/2(x)∫_^* F(λ^2) (λ)^-2φ_λ(x) dλ≤ C I_F,
where C is a constant depending on G only.
Moreover,
lim sup_|log x^+|→∞ |k_(x)| ≤ C ∫_^* |F(|λ|^2)| (1+|λ|)^n-l dλ.
By Lemma <ref>, it is sufficient to estimate |k_(a)|
for a∈.
Since G is semisimple it is also in class ℋ; we take
R(λ)=F(λ^2)1+λ^n-l/2
in Lemma <ref> to obtain, using (<ref>), that
sup_a∈k_(a) ≤sup_a∈∫_^*F(λ^2)^ρ(log a)φ_λ(a)(λ)^-2 dλ
≤sup_a∈∫_^* R(λ) ^ρ(log a)φ_λ(a)(λ)^-1 dλ
≤ C∫_^* R(λ) (1+|λ|)^6d+1+n-l/2d λ
= C∫_^* |F(|λ|^2)| 1+|λ|^6d+n-l+1d λ.
The second inequality is obtained similarly.
In the statement of the theorem, one can clearly replace (<ref>) by
I_F = ∫_^*F(λ^2) (1+|λ|)^7(n-l)+1 dλ < ∞.
Let G be a semisimple connected Lie group with finite center and F a Borel function on ^+ verifying (<ref>) or (<ref>).
Then for every x∈ G
|k_(x)| ≤ CI_F e^ -ρ( H(x) ) ,
where C is a constant depending on G only.
Moreover,
lim sup_|log x^+|→∞ |k_(x)| e^ρ( H(x) ) ≤ C ∫_^* |F(|λ|^2)| (1+|λ|)^n-l dλ.
It follows from Theorem <ref> that the operator F(L) is bounded from L^1(G,m_r), with respect to the right Haar measure, to L^∞(G). This occurs frequently, as soon as the integral (<ref>) converges, without any smoothness or positivity assumptions on the symbol.
One can also note that from L^1(G,m_l) to L^∞(G) these operators are never bounded, unless F is zero (this is a general fact about left convolution operators).
This can be seen as follows. For any x we have
(k*f)(x) = ∫ k(y) f(y^-1x) dy
= ∫ k(xz) f(z^-1) dz
= ∫ k(xz^-1) f(z) δ(z)^-1 dz
= ∫ k(xz^-1) δ(xz^-1) δ(x^-1) f(z) dz.
The norm of the linear functional v_x: f↦ (k*f)(x) is
v_x = ess sup_z k(xz^-1) δ(z^-1) = δ(x^-1) ess sup_z k(xz^-1) δ(xz^-1)
= δ(x^-1) ess sup_y k(y) δ(y);
as soon as k≢0, we have kδ_∞>0 and
F(L)_L^1→ L^∞ = sup_x,f |(k*f)(x)| /f_1 = sup_x v_x = +∞.
From the known estimate <cit.>
of spherical functions
|φ_λ(e^H)| ≤ C (1+H)^d e^-ρ(H), for all H∈
it is easy to obtain estimates for the kernel of the type |k_F(L)(e^H)| ≲ (1+H)^d and |k_F(Δ_ρ)(e^H)| ≲ (1+H)^d e^-ρ(H). Our result is that it is possible to remove this polynomial factor.
In the analysis of the Laplace-Beltrami operator Δ, a factor of this kind is often insignificant, but it becomes essential in connection with the operator L.
§ LOWER BOUNDS FOR OSCILLATING FUNCTIONS IN RANK ONE
§.§.§ Higher ranks
Fix H_k∈ such that α_k(H_k)=1 and α_j(H_k)=0 if j≠ k. In the notations of Section <ref>, we have ρ_0(H_k)=0, and moreover, H_k is in the center of the Lie algebra 𝔪_10. It follows that θ_λ(exp (u H_k)) = e^iuλ(H_k) for every u∈. By Theorem <ref>,
|φ_λ(e^uH_k) - _0^-1∑_s∈(sλ)/_0(sλ) e^iusλ(H_k)|
< C(1+λ)^ι^-κ |uH_k|
with some constants ι, C, κ.
Simple roots of _10 are α_j, j≠ k, so that _0 is the subgroup of generated by s_α, where α is in the linear span of α_j, j≠ k.
Note that there might be roots of the type ∑ l_j α_j with l_k≠0 but also l_j≠0 for some other j; this will influence the exponential but also the c-function.
§.§.§ Rank one
Estimates of Section 4 involve the absolute value of the function F and may seem unsatisfactory for oscillating functions of the type F(x) = exp(it√(x)) ψ(√(x)), where one would hope to see a decrease in t as t→∞. However, they are optimal even in this case, if G has rank one. We show this in the present section. In higher rank the behaviour would be different, but it will be considered elsewhere.
Let us assume now that G is a semisimple Lie group of real rank one.
Both and ^* can be viewed then as the real line and Σ_s^+=1. Let α be the only positive root; we can assume that it acts at H∈≃ as simple multiplication, α(H) = α H.
For all H∈^+ and all regular λ∈, we have the Harish-Chandra decomposition (<ref>) of φ_λ(exp H)^ρ(H).
By <cit.>, there exist constants C_α>0, p>0 such that |Γ_mα(λ)| ≤ C_α m^p for all λ∈^*. For H such that α(H)>1, we can then estimate
φ_λ(exp H)^ρ(H)- (λ)^iλ(H)- (s_αλ)^is_αλ(H)
≤ C_α( |(λ)| + |(s_αλ)| ) ∑_m=1^∞ m^p ^-mα(H)
< C |(λ)| e^-α(H).
Since s_αλ= -λ for all λ∈^*, we have
k_(exp H) = ∫_ F(λ^2) φ_λ(exp H)^ρ(H) (λ)^-2d λ
≥∫_ F(λ^2) (λ)^-2 (λ)^iλ(H) + (-λ)^-iλ(H) d λ
- C ∫_F(λ^2)^-α(H) | (λ)|^-1 d λ.
Now we set F(x)=^it√(x)ψ(√(x)) where t>0 and ψ is a continuous function on such that
J_ψ = ∫_0^∞ |ψ(x)| (1+x)^n+6 dx < ∞.
This corresponds to (<ref>) in rank one when d=l=1.
Denote by k_t the kernel of the operator F(L), then under the assumptions above on H we have k_t(exp H)≥I_1(t,H)-I_2(H) where
I_1(t, H)= ∫_^itλψ(λ) (λ)^-2 (λ)^iλ(H) + (-λ)^-iλ(H) d λ,
I_2(H) = C ^-α(H)∫_ψ(λ) | (λ)|^-1 d λ.
For the asymptotics of k_t with large H, we only need to consider I_1(t,H), as I_2(H) vanishes as H→ +∞ under the condition (<ref>). Let us write H=t+a with a∈. Using the conjugation property (<ref>) of the -function, we transform I_1 as follows:
1/2 I_1(t,t+a)
=∫_^itλψ(λ) (λ)^-2 (λ)^iλ (t+a) d λ
=∫_0^∞^itxψ(x)(x)^-2(x)^i(t+a)x+(-x)^-i(t+a)x d x
= 2∫_0^∞ψ(x)(x)^-2^itx(x)^i(t+a)x d x.
After elementary calculations this takes the form
= ∫_0^∞ψ(x)(x)^-2[ (x) ( (1+cos(2tx)) cos(ax) - sin(2tx) sin(ax)
+ isin(2tx) cos(ax) -i (1-cos(2tx)) sin(ax) )
- (x) ( sin(2tx) cos(ax) + (1+cos(2tx)) sin(ax)
+ i (1-cos(2tx)) cos(ax) + i sin(2tx) sin(ax) ) ] d x,
which tends at t→+∞ to
∫_0^∞ψ(x)(x)^-2[ ( (x) cos(ax) - (x) sin(ax) )
-i ( (x) sin(ax) + (x) cos(ax) ) ] d x
= ∫_0^∞ψ(x)(x)^-2[ ( (x) e^iax) - i( (x) e^iax) ] d x
=∫_0^∞ψ(x)(x)^-2(x) e^-iax d x
=∫_0^∞ψ(x)(x)^-1 e^-iax d x .
This is the Fourier transform at a of the function ψ̃ which is equal to ψ(x)(x)^-1 if x≥0 and 0 otherwise. Note that ψ̃∈ L^1 since ψ satisfies (<ref>).
For ψ≢0, this Fourier transform (ψ̃) is a nonzero continuous function vanishing at infinity, so its norm ν=(ψ̃)_∞>0 is attained at a point a_0∈.
For t≥α^-1-a_0, we have α(H)=α(a_0+t)≥ 1, and the estimate above is applicable. Altogether, this implies that
lim inf_t→ +∞k_t_∞≥lim_t→ +∞k_t(exp (t+a_0))≥lim_t→ +∞(I_1(t,t+a_0)-I_2(t+a_0))=ν >0.
In particular, this applies to the most common localization function ψ(x)=1+x^2^-κ where κ is a constant such that (<ref>) holds.
To summarize the above, we proved the following theorem:
Let G be a semisimple Lie group of real rank one. Denote by k_t the kernel of ^it√(L)ψ(√(L)) where t>0 and ψ is a continuous function on satisfying (<ref>).
Then k_t_∞≤ CJ_ψ for all t, and lim inf_t→ +∞k_t_∞≥ν where ν=(ψ̃)_∞>0 and
ψ̃(x) =
ψ(x)(x)^-1 if x≥ 0,
0 if x< 0.
In particular, if ψ(x)=1+x^2^-κ, then the theorem applies if κ>(n+7)/2.
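For the reader's convenience, we spell out this elementary verification: the integral J_ψ = ∫_0^∞ (1+x^2)^-κ (1+x)^n+6 dx converges exactly when the integrand decays faster than x^-1 at infinity, that is, when n+6-2κ < -1, which is the condition κ > (n+7)/2.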
§ UNIFORM NORMS OF OSCILLATING FUNCTIONS
* In rank one: Tataru <cit.>: the symbol is sin(λ t)/λ · (λ^2+β^2)^-ρ/2+is so that the operator is
sin(t√(|Δ_ρ|))/√(|Δ_ρ|) · (Δ_ρ+β^2)^-ρ/2+is; on the hyperbolic space (ℓ=1), the uniform bound is c(sinh t)^-ρ.
* Anker, Pierfelice (on Damek-Ricci): D^-τD̃^τ-σ e^itD where D=√(|Δ|-ρ^2), D̃=√(|Δ|+ρ̃^2-ρ^2), ρ̃>ρ, σ∈_+, τ∈[0,3/2). That is, the operator is
(|Δ|_ρ)^-τ/2 (|Δ|_ρ+ρ̃^2)^(τ-σ)/2 e^it√(|Δ|_ρ)
<cit.>
* Higher rank: Hassani <cit.>: |Δ|^-σ/2 e^it√(|Δ|); the bounds are for |t|≥1:
|k_0(x)| ≤ C |t|^-l/2φ_0(x) (3+|x|)^a if a∈ 2 and l<a (integral over |λ|≤1);
|k_∞(x)| ≤ C |t|^-dϕ_0(x) (3+|x|)^d if σ >n (integral over |λ|≥1).
* Anker and Zhang <cit.>: |Δ|^-σ/2 e^it√(|Δ|), d=ℓ+∑_α∈Σ^+ m_α≥3 (but ℓ may be 1). Boundary point: σ = (d+1)/2. Since d=2n-l, this value is n-l/2+1/2.
Theorem 3.7: for large t and |x|≥|t|/2, the bound is |t|^-N_1 (1+|x^+|)^N_2 e^-ρ(x^+) for every N_1∈ and N_2>N_1+2(d+1)+ (1/2) (max(d,D) -l).
Theorem 3.3: For |x/t|<C_Σ<1/2, |ω̃^σ,0_t(x)| ≤ |t|^-D/2 (1+|x|)^(D-l)/2ϕ_0(x).
The maximum on [0,+∞) of the function f(x) = x^a e^-bx (with a,b>0) is (a/(be))^a, attained at x=a/b, since f(a/b) = (a/b)^a e^-a. In the expression t^-N_1 x^N_2 e^-ρ x (assuming x≥ t>0) set x=t^γ, supposing γ≥1; then we get
t^γ N_2-N_1 e^-ργ t
whose maximum on (0,+∞) is attained at t=(γ N_2-N_1)/(ργ) and equal to (a/(be))^a with a=γ N_2-N_1, b=ργ.
An explicit Particle-In-Cell solver in the DISPATCH framework
Rosseland Centre for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, NO-0315 Oslo, Norway
[email protected]
Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, NO-0315 Oslo, Norway
Centre for Star and Planet Formation, Niels Bohr Institute, University of Copenhagen, Øster Voldgade 5-7, 1350 Copenhagen, Denmark
Simulating solar flares, which involve large-scale dynamics and small-scale magnetic reconnection, poses significant computational challenges.
This study aims to develop an explicit Particle-In-Cell (PIC) solver within the DISPATCH framework to model small-scale kinetic processes in a solar corona setting. This study is the first in a series with the ultimate goal of developing a hybrid PIC-MHD solver to simulate solar flares.
The PIC solver, inspired by the PhotonPlasma code, solves the Vlasov-Maxwell equations in a collisionless regime using explicit time-staggering and spatial-staggering techniques. Validation included unit tests, plasma frequency recovery, two-stream instability, and current sheet dynamics.
Validation tests confirmed the solver's accuracy and robustness in modeling plasma dynamics and electromagnetic fields.
The integration of the explicit PIC solver into the DISPATCH framework is the first step towards bridging the gap between large and small scale dynamics, providing a robust platform for future solar physics research.
Toward Realistic Solar Flare Models
M. Haahr
1,2
B. V. Gudiksen1,2
Å. Nordlund 1,3
September 9, 2024
===========================================================================================
§ INTRODUCTION
Simulating solar flares, encompassing both the macroscopic evolution and the microscopic details of current sheets that form during magnetic reconnection events, remains an unsolved computational challenge. According to recent work
<cit.>, no computational tool currently bridges the gap between the large-scale dynamics of solar flares and the intricate processes within magnetic reconnection regions.
Solar flares are generally accepted to result from the release of magnetic energy in magnetic reconnection regions <cit.>. Within these regions, some particles are accelerated to non-thermal, relativistic speeds. These high-energy particles exchange energy with the surrounding plasma. The non-local effects of these particles influence plasma behavior on large scales, which in turn affects the small scale dynamics. This interplay between large and small scales is critical for understanding the initiation and development of solar flares.
Kinetic approaches, particularly the Particle-In-Cell (PIC) method <cit.>, have been extensively applied to study reconnection at microscales. Despite their utility, these studies often resort to simplified, idealized configurations, such as the 2D Harris sheet model <cit.>, due to computational constraints. Given that real-world solar flares can extend over vast distances, employing PIC methods exclusively becomes impractical. Although there have been efforts to combine Magnetohydrodynamics (MHD) and PIC techniques <cit.>, challenges persist in accurately capturing the multiscale nature of solar flares. A hybrid modeling approach that can seamlessly integrate different temporal and spatial scales is thus paramount for realistic simulations.
Handling the wide range of temporal and spatial scales in solar flare simulations, while maintaining computational performance, is a significant challenge. Developing a sophisticated multi-scale, multiphysics solver from scratch would be daunting. Fortunately, the DISPATCH framework <cit.> provides a robust foundation for handling the scales required to simulate solar flares. DISPATCH organizes the simulation domain into "patches," each updated semi-autonomously. This localized data management enables each patch to function as a unique type of solver within a simulation, facilitating simulations with multiphysics solvers. DISPATCH has already proven capable of modeling complex, multiscale environments, such as the entire solar convection zone <cit.>.
This paper marks the initial phase in developing a hybrid solver within the DISPATCH framework by integrating a PIC solver inspired by the PhotonPlasma code <cit.>. We detail the process of embedding the PIC solver into DISPATCH, noting the essential modifications made from the original PhotonPlasma code to ensure compatibility and effectiveness within the DISPATCH architecture. The PIC solver is tailored for future integration with MHD simulations. Through this integration, we aim to bridge the gap between large-scale flare dynamics and the detailed kinetic processes occurring within reconnection zones, thereby contributing to a more comprehensive understanding of solar flare mechanisms.
§ METHODS
§.§ Governing Equations
The PIC solver implemented in this study solves the Vlasov-Maxwell system of equations.
For the sake of reproducibility and clarity, we outline only the most important equations here. For an exhaustive description, readers are referred to the original description of the PhotonPlasma code <cit.>. In our notation, we use bold letters to indicate vectors. The Vlasov-Maxwell system is represented as follows:
∂𝐟^s/∂ t + 𝐮·∂𝐟^s/∂𝐱 + q^s/m^s (𝐄 + k_F 𝐮×𝐁) ·∂𝐟^s/∂ (𝐮γ) = C
Here, 𝐟 is the distribution function, which is a function of time, velocity (𝐮), and space (𝐱). s denotes particle species (e.g., electrons, protons), γ = 1/√(1 - (u/c)^2) the relativistic Lorentz factor and C the collision operator. Finally, 𝐄 and 𝐁 are the electric and magnetic fields respectively. For this study, we focus on the collisionless regime, setting C=0.
The electric and magnetic fields' evolution follow the Maxwell equations:
∂𝐄/∂ t = k_E/k_B∇×𝐁 - 4π k_E 𝐉
∂𝐁/∂ t = - 1/k_F∇×𝐄,
∇·𝐁 = 0,
∇·𝐄 = k_E 4 πρ_c,
where 𝐉 and ρ_c represent the current density and charge density, respectively.
The constants k_F, k_E, and k_B ensure the equations' consistency across different unit systems, as detailed by <cit.>. We show the values corresponding to SI, CGS, and HL units in table <ref>. The use of this unit-agnostic formulation is commented on in section <ref>.
The PIC solver approximates the Vlasov equation (eq:Vlasov) by sampling the phase space with "macro particles", each representing a collective of real particles, as discussed in works such as <cit.> and <cit.>. Macro particles are assigned a weight that represents the number of real particles they sample. The macro particles are then treated as individual particles, with their velocities influenced by the Lorentz force:
d (𝐮_p γ)/dt = q_s/m_s (𝐄 + k_F 𝐮_p ×𝐁),
where their positions evolve as per:
d 𝐱_𝐩/dt = 𝐮_p.
Here 𝐮_p and 𝐱_p is the velocity and position of each individual macro particle. q_s and m_s is the charge and mass of the particle species.
Interpolating electric and magnetic fields to macro particle positions require assigning specific shapes to these particles, affecting field interpolation. Following PhotonPlasma's methodology, we have implemented various shape functions, including Nearest-Grid-Point (NGP), Cloud-in-Cell (CIC), Triangular-Shaped-Cloud (TSC), and Piecewise-Cubic-Spline (PCS), noting that NGP can introduce instabilities <cit.>. We use 'CIC' interpolation and 2nd order derivatives as the default in our solver.
Determining properties such as charge density from particle distributions uses similar shape function, and we refer to this as 'particle deposition' in this article. The interpolation and deposition shape functions can be defined independently as simulation input parameters.
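To illustrate the weighting involved, the following minimal Python sketch (illustrative only; the function and variable names are ours, and the production solver is written in Fortran within DISPATCH) computes one-dimensional CIC weights and uses them to interpolate a mesh field to a particle position; in 2D and 3D the per-axis weights are simply multiplied:

    import numpy as np

    def cic_weights_1d(x, dx):
        # Cloud-in-Cell: a particle at x contributes linearly to its two nearest points.
        s = x / dx                     # position in cell units
        i_left = int(np.floor(s))      # index of the left contributing point
        frac = s - i_left              # fractional offset in [0, 1)
        return i_left, (1.0 - frac, frac)

    def interpolate_1d(field, x, dx):
        # Interpolate a 1D mesh quantity (e.g. one field component) to the particle.
        i, (w0, w1) = cic_weights_1d(x, dx)
        return w0 * field[i] + w1 * field[i + 1]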
§.§ Transparent Scaling
§.§.§ Scaling
Scaling of simulation parameters plays a critical role in ensuring the stability and optimizing the performance of simulations. Although frequently mentioned in scientific literature, the detailed methodologies for scaling are often not fully elaborated, posing challenges to reproducibility. In response to this, we aim to clearly outline the scaling techniques employed in our study.
Within the DISPATCH framework, we have incorporated a scaling module to handle scaling for SI, CGS, and HL unit systems. This module relies on three primary parameters for scaling: mass density, length, and time. The scaling of all subsequent units stems from these foundational parameters.
Often, variables of interest such as the proton mass, the elementary charge, and the speed of light, are set to 1 in code units. Along with this, pressure, density, and field strength are usually given in code units as well, making it difficult to recover the 'real' physical units. For ease of physical interpretation and to ensure that input parameters remain intuitive, we have based all scaling on real physical units. As such, the code is designed to accept input parameters in familiar terms, including common length scales, magnetic field strengths, number densities, and temperatures.
§.§.§ Fudging
A common practice in PIC simulations is the "fudging" of physical constants to adjust time and length scales appropriately. This process, although critical for the fine-tuning of simulations, is frequently glossed over in the literature, leaving a gap in understanding regarding the specific adjustments made. Our goal is to demystify this aspect by ensuring complete transparency in our scaling process.
In our simulations, we permit the modification of three key constants: the speed of light, electron mass, and elementary charge. The modification of the speed of light is facilitated through changes to the vacuum permeability, μ_0, or vacuum permittivity, ϵ_0. Specifically, in CGS and HL units where μ_0 is dimensionless, alterations to the speed of light are by adjusting ϵ_0.
§.§.§ Unit-Agnostic Equations
The Vlasov-Maxwell system of equations varies depending on the unit system used. Often, the unit system is not explicitly stated and the equations are written in 'natural units' where factors such as 4π are omitted. Gaussian and Heaviside-Lorentz (HL) units are examples of 'natural units', and both are often referred to as CGS. Although quantities carry identical-looking units in Gaussian and HL systems, their actual values differ. For instance, a magnetic field strength of 50G in Gaussian units corresponds to approximately 14.1G in HL units, differing by a factor √(4π). While an experienced reader may deduce the underlying system, this can be problematic for inexperienced readers and can make it challenging to reproduce results if the exact unit system is not specified.
'Fudging' the physical constants also alters the equations. When ambiguous fudging is combined with an ambiguous unit system, it presents a complex puzzle for the reader to solve.
To assist the reader, we have written all equations in a unit-agnostic form. Constants are included in the equations, and these constants vary depending on the unit system and the 'fudging' used, as shown in Table <ref>. This approach allows us to use the same equations in both 'real' units and code units, simplifying implementation and improving the clarity of our code.
§.§ Numerical Approach
§.§.§ DISPATCH
A fundamental understanding of the DISPATCH framework is needed to understand some of our choices made when implementing the PIC code into DISPATCH. While a detailed description of DISPATCH can be found in <cit.>, we here summarize the key aspects relevant to our PIC solver integration.
DISPATCH organizes the simulation domain into distinct sections referred to as "patches". These patches are updated semi-autonomously, interacting with adjacent patches primarily to update ghost zones. This architecture offers substantial benefits, specifically the ability for each patch to proceed with its own timestep. This feature significantly boosts the potential for simulation efficiency and enables the simultaneous resolution of diverse timescales within a single experiment.
The framework's design, focusing on localized data management, inherently limits the applicability of global convergence schemes like implicit methods. Furthermore, DISPATCH imposes specific restrictions on the communication between patches, designed to manage the complexity of data interactions efficiently.
Adapting to DISPATCH's focus on localized computations, our implementation utilizes an explicit PIC solver, staggered in both time and space. This choice is compatible with DISPATCH's decentralized data handling paradigm, ensuring computational efficiency and precision while navigating the framework's structural nuances.
§.§.§ Discretization
For the numerical stability and reliability of our explicit computational scheme, we incorporate a time-staggering method akin to the leap-frog integration technique, as visually represented in fig:time_stagger. The leap-frog scheme (<cit.> section 9.6, <cit.> p. 56) is renowned for its efficacy in maintaining stability during oscillatory motion— a common occurrence within PIC simulations. The electric field is staggered by a full timestep, Δ t in the past. Magnetic field, particle velocities, and current density are staggered backwards by 0.5Δ t. Particle position is centered in time.
Spatially, we stagger mesh variables in accordance with a Yee lattice setup (<cit.>), to align with DISPATCH standards. This spatial organization involves positioning magnetic fields on the cell faces, electric fields and current densities are staggered along cell edges. Consistent with DISPATCH conventions, mass density, momentum, and energy metrics are centered within each cell. Uniquely, charge density is allocated to the lower corner of each cell, as shown in fig:space_stagger.
This arrangement of variables in time and space minimizes the interpolation needed, thereby improving the simulation's computational efficiency and accuracy.
In our simulation, both electric and magnetic fields are interpolated from the mesh to each particle's position. Interpolating fields to particle positions involves considerations of both time and space. As outlined in fig:time_stagger, the magnetic field requires time interpolation between the timestep i, and timestep i+1. However, with varying timesteps this introduces the potential for this time interpolation to become desynchronized. Within the DISPATCH framework, each patch dynamically sets its timestep based on the local maximum velocity to comply with the Courant condition <cit.>. This approach is applied to all patches in the simulation, including the PIC solver patches. Due to the PIC solver's focus on electromagnetic phenomena and the need to accurately simulate the propagation of electromagnetic waves at the speed of light, the "maximum velocity" in PIC patches defaults to the speed of light. This condition results in a uniform timestep across these patches. Consequently, time interpolation for the magnetic field is simplified to averaging B^i and B^i+1, as outlined in fig:time_stagger.
Spatially, after addressing time interpolation, fields are then interpolated to the exact spatial locations of particles. This spatial interpolation relies on the shape functions introduced in Section <ref>.
Particle deposition into current density, mass density, and other mesh variables happens after the particles have been updated, as described in section <ref>. This process ensures that changes in particle distribution are accurately reflected in the mesh. Consequently, electromagnetic field calculations can incorporate the most current state of the plasma. We optionally allow the shape function for deposition to differ from the one used for interpolation, but as default the shape functions are identical.
§.§.§ Solving Maxwell's Equations
The update cycle for each patch begins with updating electric and magnetic fields. The integration of Maxwell's equations in our solver uses an explicit method, sequentially updating the electric and magnetic fields.
The updating process initiates by updating E^i to E^i+1, (fig:time_stagger), using eq:Ampere_law. This calculation uses the magnetic field, B^i,
alongside the current density, J^i. The magnetic field is then updated in line with eq:Faraday_law, using the just-updated electric field.
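As a schematic illustration of this ordering, a minimal Python sketch of the explicit field update might read as follows; it assumes a periodic patch, simple centred differences, and names of our own choosing, whereas the actual solver applies DISPATCH's staggered mesh operators to eq:Ampere_law and eq:Faraday_law:

    import numpy as np

    def curl(Fx, Fy, Fz, dx, dy, dz):
        # Centred-difference curl of a periodic vector field (axes: 0=x, 1=y, 2=z).
        dFz_dy = (np.roll(Fz, -1, 1) - np.roll(Fz, 1, 1)) / (2 * dy)
        dFy_dz = (np.roll(Fy, -1, 2) - np.roll(Fy, 1, 2)) / (2 * dz)
        dFx_dz = (np.roll(Fx, -1, 2) - np.roll(Fx, 1, 2)) / (2 * dz)
        dFz_dx = (np.roll(Fz, -1, 0) - np.roll(Fz, 1, 0)) / (2 * dx)
        dFy_dx = (np.roll(Fy, -1, 0) - np.roll(Fy, 1, 0)) / (2 * dx)
        dFx_dy = (np.roll(Fx, -1, 1) - np.roll(Fx, 1, 1)) / (2 * dy)
        return dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy

    def update_fields(E, B, J, dt, ds, k_E, k_B, k_F):
        # Advance E by a full step (Ampere's law), then B (Faraday's law),
        # using the just-updated electric field, as described in the text.
        curlB = curl(*B, *ds)
        E_new = tuple(e + dt * (k_E / k_B * cb - 4 * np.pi * k_E * j)
                      for e, cb, j in zip(E, curlB, J))
        curlE = curl(*E_new, *ds)
        B_new = tuple(b - dt / k_F * ce for b, ce in zip(B, curlE))
        return E_new, B_new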
Accumulation of errors over time can require the use of 'cleaners' for both the electric and magnetic fields. Within the DISPATCH framework, divergence cleaners for the magnetic field are employed to uphold eq:div_B in the guard zones, where time-interpolation in general does not conserve div(B) exactly. A similar cleaning strategy is applied to the electric field to ensure it meets the requirements of eq:div_E. This cleaning procedure begins by identifying the divergence error, ϵ, through the calculation of the electric field's divergence, from which the charge density is then subtracted:
ϵ = ∇·𝐄 - k_E 4 πρ_c.
A subsequent Poisson filtering step addresses only the errors local to each patch to compute
Φ_ϵ from
ΔΦ_ϵ = ϵ ,
after which the electric field is updated to a 'cleaned' state:
𝐄_clean = 𝐄 - ∇Φ_ϵ.
The implementation allows users to decide the frequency of the electric field cleaning step and whether to use it at all.
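A minimal sketch of the cleaning step is given below, assuming for simplicity a periodic patch so that the Poisson equation can be solved spectrally; the production code instead applies a local Poisson filter confined to each patch, and all names are illustrative:

    import numpy as np

    def clean_E(E, rho_c, ds, k_E):
        # Remove the part of E inconsistent with Gauss's law on this patch.
        dx, dy, dz = ds
        # Divergence error: eps = div(E) - 4*pi*k_E*rho_c
        div = sum((np.roll(F, -1, ax) - np.roll(F, 1, ax)) / (2 * d)
                  for F, ax, d in zip(E, (0, 1, 2), ds))
        eps = div - 4 * np.pi * k_E * rho_c

        # Solve Laplacian(phi) = eps with an FFT (periodic assumption).
        kx, ky, kz = np.meshgrid(
            2 * np.pi * np.fft.fftfreq(eps.shape[0], dx),
            2 * np.pi * np.fft.fftfreq(eps.shape[1], dy),
            2 * np.pi * np.fft.fftfreq(eps.shape[2], dz), indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                 # avoid division by zero; the mean is left untouched
        phi_hat = -np.fft.fftn(eps) / k2
        phi_hat[0, 0, 0] = 0.0
        phi = np.real(np.fft.ifftn(phi_hat))

        # E_clean = E - grad(phi)
        grads = [(np.roll(phi, -1, ax) - np.roll(phi, 1, ax)) / (2 * d)
                 for ax, d in zip((0, 1, 2), ds)]
        return tuple(F - g for F, g in zip(E, grads))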
§.§.§ Particle Movement
Next in the update cycle, particle velocities are updated, affecting both particles in the "inner" domain and those within the ghost zones. As highlighted earlier, the electric and magnetic fields are interpolated to the particles' positions before their velocities are updated.
The update of velocities uses the Lorentz force equation (eq:Lorentz_foce). <cit.> compares several different explicit approaches to implement this numerically. We have chosen to use the Vay particle pusher algorithm <cit.> due to its ability to correctly 'cancel' field contributions when both electric and magnetic fields act on particles. Following the velocity updates, we calculate the energy for each macro particle. Then, their positions are updated according to eq:particle_position.
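For illustration, a compact Python transcription of this velocity and position update is sketched below. It follows our reading of the Vay scheme, so readers should consult <cit.> for the authoritative formulation; the k_F factor, the units, and all names are our own assumptions:

    import numpy as np

    def vay_push(u, E, B, q_over_m, dt, c, k_F=1.0):
        # One velocity update u^{n-1/2} -> u^{n+1/2}, where u = gamma*v, after Vay (2008).
        gamma_old = np.sqrt(1.0 + np.dot(u, u) / c**2)
        v_old = u / gamma_old

        eps = q_over_m * dt            # full impulse factor q*dt/m
        tau = 0.5 * eps * k_F * B      # half magnetic rotation vector
        # Full electric impulse plus half rotation with the old velocity.
        u_prime = u + eps * E + np.cross(v_old, tau)

        gamma_prime2 = 1.0 + np.dot(u_prime, u_prime) / c**2
        u_star = np.dot(u_prime, tau) / c
        sigma = gamma_prime2 - np.dot(tau, tau)
        gamma_new = np.sqrt(0.5 * (sigma + np.sqrt(sigma**2
                            + 4.0 * (np.dot(tau, tau) + u_star**2))))

        t = tau / gamma_new
        s = 1.0 / (1.0 + np.dot(t, t))
        return s * (u_prime + np.dot(u_prime, t) * t + np.cross(u_prime, t))

    def push_position(x, u, dt, c):
        # Advance the position with the velocity v = u/gamma (eq:particle_position).
        gamma = np.sqrt(1.0 + np.dot(u, u) / c**2)
        return x + dt * u / gamma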
With updated positions and velocities, the simulation is prepared for the next and final phase – deposition of particle data back to the mesh. This stage involves computing the current density, which is essential for updating the electric field in the next timestep. Additionally, other mesh variables, such as mass density and bulk momentum, are computed from particle deposition.
§.§.§ Current Density Calculation
Current density calculation is a critical step in ensuring the accuracy of PIC simulations. Often, PIC codes employ a 'charge conserving' method. For example, the PhotonPlasma uses an extension of the charge conserving method developed by <cit.>. This method involves calculating the divergence of the current density by examining how the deposition weights for each macro particle change between timesteps. Following the determination of the divergence, the actual current density is obtained through a prefix sum operation.
In DISPATCH, such prefix sum operation would require communication across patches to align ghost-values with neighbor patches. The current communication scheme does not allow this, and we therefore opted for a simpler and more direct current density calculation:
𝐉 = ∑_s q^s ∑_i w_i 𝐯_𝐢 W(x_p - x_c),
where W(x_p - x_c) represents the deposition weight function, w_i the weight of the macro particle, 𝐯_𝐢 the velocity of the macro particle, and q_s the charge of the particle species. This method calculates current density directly from the macro particles' properties.
While not exactly charge conserving down to each macroparticle's gradual contribution, the method is nevertheless charge conserving on average, with errors expected mainly on cell scales, where E-field cleaning (Eqs. <ref>-<ref>) can easily take care of making the electric field exactly consistent with the actual charge distribution.
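A schematic one-dimensional version of this deposition (CIC weights, a single species, one velocity component; the normalization by cell volume and all names are our assumptions) reads:

    import numpy as np

    def deposit_current_1d(xp, vp, wp, q, dx, n_cells):
        # Direct deposition J = q * sum_i w_i v_i W(x_p - x_c), CIC weights, periodic wrap.
        J = np.zeros(n_cells)
        for x, v, w in zip(xp, vp, wp):
            s = x / dx
            i = int(np.floor(s)) % n_cells
            f = s - np.floor(s)
            J[i] += q * w * v * (1.0 - f)
            J[(i + 1) % n_cells] += q * w * v * f
        return J / dx   # assumed normalization of the summed contribution per cell volume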
§.§ Particle Sampling
§.§.§ Initial Condition Sampling
The initial sampling of particles within our simulation is based on mesh variables such as mass density, current density, bulk velocity, and temperature. These mesh variables, critical for defining initial conditions, are aligned with those used in MHD simulations, to streamline PIC-MHD integration efforts.
To distribute particles evenly at the start, we segment each cell into a predefined number of sectors. The number of sectors, n_sec, varies with the dimensionality of the simulation:
* In 3D, n_sec = ⌊ n_pc^1/3⌋,
* In 2D, n_sec = ⌊ n_pc^1/2⌋,
* In 1D, n_sec = n_pc,
where n_pc denotes the number of particles per cell.
For example, in a 2D setup with 10 particles per cell, n_sec = 3, leading to each cell being divided into 3 × 3 = 9 sectors. One particle is placed randomly within each sector, with any additional particles randomly located within the entire cell, as depicted in fig:position_sampling.
After establishing positions, particles' velocities are sampled from a Maxwellian distribution. The initial temperature for this distribution is either provided directly or derived from the internal or total energy, depending on the initial conditions.
Additionally, we incorporate bulk and current velocities into the velocity assignment for particles, similar to the approach described by Daldorf et al. (2014):
𝐯_𝐞𝐥𝐞𝐜 = 𝐮_𝐛𝐮𝐥𝐤 + m_i/q_e𝐉/ρ,
𝐯_𝐩𝐫𝐨𝐭𝐨𝐧 = 𝐮_𝐛𝐮𝐥𝐤 + m_e/q_i𝐉/ρ,
where 𝐮_𝐛𝐮𝐥𝐤 represents the bulk velocity, 𝐉 is the current density derived from ∇×𝐁, ρ is the mass density from grid, and m_i, q_i, m_e, q_e are the masses and charges of protons and electrons, respectively.
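A condensed Python sketch of this initialization, shown in 2D for the positions and with illustrative names and interfaces, could look like:

    import numpy as np

    def sample_cell_2d(n_pc, rng=np.random.default_rng()):
        # Return n_pc positions in a unit cell: one per sector plus random extras.
        n_sec = int(np.floor(np.sqrt(n_pc)))
        pos = []
        for ix in range(n_sec):
            for iy in range(n_sec):
                pos.append(((ix + rng.random()) / n_sec, (iy + rng.random()) / n_sec))
        while len(pos) < n_pc:
            pos.append((rng.random(), rng.random()))   # extras anywhere in the cell
        return np.array(pos)

    def sample_velocities(n, T, mass, k_boltz, u_bulk, J, rho, m_other, q_s,
                          rng=np.random.default_rng()):
        # Maxwellian thermal velocities plus the bulk and current drifts given above;
        # k_boltz is the Boltzmann constant (not the unit-system factor k_B).
        v_th = np.sqrt(k_boltz * T / mass)             # thermal speed per component
        v = rng.normal(0.0, v_th, size=(n, 3))
        drift = u_bulk + (m_other / q_s) * J / rho     # e.g. m_i/q_e for electrons
        return v + drift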
§.§.§ Resampling
Particles within the inner domain of a patch can freely move between cells. Over time, this freedom can lead to significant discrepancies in the number of particles per cell, with some cells having too many particles and others too few. Similarly, particle distribution discrepancies can occur at the patch level. To maintain a roughly constant number of particles per cell and ensure workload balance across patches, we've implemented a resampling strategy that involves "splitting" particles in underpopulated cells and "merging" particles in overcrowded ones.
Splitting, or up-sampling, is straightforward. The heaviest particle in a cell is divided into two particles, each retaining the original velocity but carrying half the weight. To prevent creating identical particles, their positions are slightly offset. This offset is randomly determined in each dimension from a Maxwell distribution with σ = 0.1, with one particle receiving half the offset subtracted and the other half the offset added. This process can then be repeated until the desired number of particles per cell is achieved.
Merging, or down-sampling, presents more complexity, as it can potentially violate conservation laws for mass, energy, and momentum. Drawing on strategies discussed by <cit.>, we've adopted the 'leveling' approach from their work and introduced a 'close-phase' strategy. For the latter, we evaluate particle velocities within the same cell using the Euclidean distance metric, merging the two particles closest in velocity space into one. This new particle's energy is then compared to the total energy of the original pair. If the energy discrepancy falls within an acceptable tolerance, the merge is finalized; otherwise, the particle is discarded. This process repeats until the target particle count is achieved or all potential merges are explored without success due to excessive energy differences.
The frequency and activation of resampling are left to user discretion, offering flexibility in simulation management.
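The 'close-phase' merging step described above can be sketched as follows; the momentum-conserving weighting, the brute-force pair search, and the relative energy tolerance are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def merge_closest_pair(w, v, tol=1e-3):
    """Try to merge the two particles closest in velocity space.

    The candidate particle keeps the total weight and momentum; if its kinetic
    energy deviates from that of the original pair by more than `tol`
    (relative), the candidate is discarded and the merge rejected.
    """
    n = len(w)
    if n < 2:
        return w, v, False
    best, pair = np.inf, None
    for i in range(n):                       # brute-force closest pair search
        for j in range(i + 1, n):
            d2 = np.sum((v[i] - v[j]) ** 2)
            if d2 < best:
                best, pair = d2, (i, j)
    i, j = pair
    w_new = w[i] + w[j]
    v_new = (w[i] * v[i] + w[j] * v[j]) / w_new          # conserves momentum
    e_old = 0.5 * (w[i] * np.sum(v[i]**2) + w[j] * np.sum(v[j]**2))
    e_new = 0.5 * w_new * np.sum(v_new**2)
    if abs(e_new - e_old) > tol * e_old:                 # energy check
        return w, v, False                               # reject this merge
    keep = [k for k in range(n) if k not in (i, j)]
    return np.append(w[keep], w_new), np.vstack([v[keep], v_new]), True

# Repeat until the target count is reached; a fuller version would move on to
# the next-closest pair after a rejection instead of stopping.
rng = np.random.default_rng(0)
w, v = np.ones(40), rng.normal(size=(40, 3))
while len(w) > 32:
    w, v, ok = merge_closest_pair(w, v)
    if not ok:
        break
```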
§.§ Optimization
Optimization is crucial for our goal of developing a comprehensive and realistic solar flare model across multiple scales. We've introduced several optimizations to boost the performance of our simulation framework, preparing for future enhancements.
Our simulation adopts a Particle-in-Patch (PIP) approach, similar to DISPATCH's strategy of updating patches instead of individual cells. This method optimizes cache usage and enables effective vectorization of computations. In our PIC solver, each patch hosts several arrays of particle information, facilitating vectorization. Though these arrays are too large for cache, making them less than ideal for traditional computing, their structure is ideal for GPU execution. GPU acceleration remains a key development goal, given its potential to significantly enhance simulation performance.
A fundamental part of our optimization strategy involves sorting particles to improve memory access patterns. When particles spatially close to each other are also adjacent in memory, the efficiency of interpolation and deposition operations increases markedly. We store particle positions in two arrays: an integer array for cell location and a floating array for position within the cell. This dual-array setup simplifies sorting, allowing for integer-based comparisons that improve performance and numerical precision.
The choice of sorting algorithm also impacts performance. We have implemented both insertion-sort and quick-sort algorithms <cit.>. Given the small expected changes in particle ordering between timesteps, insertion sort seemed the natural choice. However, initial results indicate better performance with a complete re-sort using quick-sort, even when sorting every timestep.
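The dual-array idea can be illustrated with plain arrays; the row-major key construction and the extra per-particle arrays shown here are assumptions made for this example.

```python
import numpy as np

# Dual-array positions: integer cell coordinates plus a float offset within the cell.
rng = np.random.default_rng(3)
n = 10
cell = rng.integers(0, 4, size=(n, 3))   # integer cell coordinates
frac = rng.random((n, 3))                # position inside the cell, in [0, 1)
vel = rng.normal(size=(n, 3))            # any other per-particle property arrays

# Sort key built from integer comparisons only (x fastest, z slowest here).
nx, ny = 4, 4
key = (cell[:, 2] * ny + cell[:, 1]) * nx + cell[:, 0]
order = np.argsort(key)                  # complete re-sort each timestep

cell, frac, vel = cell[order], frac[order], vel[order]
```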
Interpolation and deposition are, especially for higher-order schemes, the most demanding processes in our PIC solver, due to inefficient memory access patterns when accessing cells in higher dimensions. Sorting particles addresses this issue to some extent. In our effort to optimize the interpolation routine, we found a more efficient method than interpolating for each particle during the particle pusher step. Instead, interpolating field values for all particles prior to the particle pusher step and storing these values in temporary arrays performed better.
§ CODE VALIDATION
To validate the PIC solver integrated within the DISPATCH framework, we have devised a comprehensive suite of test cases that progressively increase in complexity. Initially, we conduct a series of unit tests aimed at examining specific equations governing particle motion and field evolution in isolation, i.e., without any feedback between particles and fields. Subsequently, we introduce complexity by incorporating particle-field feedback mechanisms in both 1D and 2D test scenarios. All tests described herein use CGS units for consistency, and we did not activate resampling, in order to preserve the integrity of conserved variables.
§.§ Unit tests
§.§.§ Constant Particle Motion
This test, designed to exclusively evaluate eq:particle_position, initiates with 18 particles, each with distinct velocities across every direction. Both electric and magnetic fields are nullified, preventing any particle-induced field alterations. The experiment is conducted within a 3D domain, comprising a grid of 3x3x3 patches, each containing 8x8x8 cells, to examine the solver's three-dimensional computational integrity. The particle trajectories are tracked to detect any discrepancies as they traverse patch boundaries. fig:unit_test illustrates the outcome of this test, affirming that all observed errors remain within the bounds of numerical precision.
§.§.§ Constant Particle Acceleration
This test specifically aims to validate eq:Lorentz_foce in the absence of a magnetic field, focusing on the effect of an electric field on particle acceleration. We initialize the experiment with seven particles: six moving along the Cartesian axes (X, Y, and Z) in the positive and negative directions, and a seventh with zero initial velocity. To evaluate the solver's accuracy in calculating particle acceleration due to an electric field, we conduct three separate runs. In each run, a constant electric field is applied along one of the axes.
The setup for this test mirrors the configuration used in the previous section, utilizing a 3D grid of 3x3x3 patches, where each patch consists of 8x8x8 cells. In each of the three runs, we analyze the change in proper velocity of the particle moving along the axis aligned with the electric field. The primary objective is to confirm that the solver accurately captures the expected velocity change between snapshots, with a focus on the incorporation of relativistic effects. All observed changes in velocity fall within the realm of numerical precision for both electrons and protons as shown in fig:const_acc_unit.
§.§.§ E cross B drift
The E cross B drift test extends our exploration of eq:Lorentz_foce to scenarios involving both electric and magnetic fields positioned perpendicularly to each other. This orthogonal arrangement results in a drift velocity expressed as v_drift = c E_y/B. Specifically, with an electric field oriented in the y-direction (E_y) and a magnetic field in the z-direction (B_z), the theoretical positions of particles can be determined as follows:
x = x_0 + v_sign· r_L ·sin(ω· t) + c E_y/B_z t,
y = y_0 + v_sign· q_sign· r_L · (cos (ω· t) - 1),
where r_L denotes the Larmor radius within the reference frame where E_y = 0 — namely, the frame moving at the drift velocity v_drift = c E_y/B_z. The variable v_sign represents the sign of the initial velocity in this specific frame, while q_sign indicates the charge sign of the particle, and ω denotes the gyro frequency.
fig:ExB_drfit presents the results from this test. Notably, the figure illustrates that the drift velocity error for electrons stays within the bounds of numerical precision up to t=40, equating to around 40,000 timesteps.
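For reference, the expected drift can also be reproduced outside the solver with a few lines of a standard (non-relativistic) Boris push in normalized units with c = 1; this stand-alone check only illustrates the test and is not the solver's implementation.

```python
import numpy as np

def boris_step(x, v, E, B, q_m, dt, c=1.0):
    """One non-relativistic Boris push: half E-kick, B-rotation, half E-kick."""
    v_minus = v + 0.5 * q_m * E * dt
    t = 0.5 * q_m * dt / c * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * E * dt
    return x + v_new * dt, v_new

# Normalized test: B = z_hat, E = 0.1 y_hat, so the expected drift is 0.1 x_hat.
E = np.array([0.0, 0.1, 0.0])
B = np.array([0.0, 0.0, 1.0])
x, v = np.zeros(3), np.array([0.3, 0.0, 0.0])
dt, n_steps = 0.01, 40000
xs = np.empty((n_steps, 3))
for i in range(n_steps):
    x, v = boris_step(x, v, E, B, q_m=1.0, dt=dt)
    xs[i] = x
v_drift_measured = (xs[-1, 0] - xs[0, 0]) / ((n_steps - 1) * dt)
print(v_drift_measured)   # ~0.1, i.e. c*E_y/B_z in these units
```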
§.§.§ EM-wave in Vacuum
This test assesses the solver's implementation of eq:Ampere_law and eq:Faraday_law in a vacuum scenario with zero current density (𝐉=0). Executed within a 1D setup, the experiment examines the propagation of an electromagnetic wave along a single axis. Separate runs were conducted for each of the three spatial directions for a comprehensive validation. Here, we detail the results for the z-direction. The theoretical model of the wave is described by the following set of equations:
Bx = - B_0 cos(k z - ω t)
By = B_0 cos(k z - ω t)
Bz = 0
Ex = B_0 cos(k z - ω t)
Ey = B_0 cos(k z - ω t)
Ez = 0
where k represents the wavenumber and ω the frequency of the electromagnetic wave.
We initialize with k = 2 π· m_0/L, with L indicating the domain's extent and m_0 the designated number of wavelengths within this domain. The frequency, ω, is derived using ω = k · c.
The domain's span is set to 1000 cm, and m_0 = 2. Scales are adjusted so that the domain's length corresponds to 1 in code units, and similarly, time is scaled to set the speed of light, c, to 1 in these units. Various cell counts, including 250, 500, and 1000, were tested to gauge the solver's accuracy across different resolutions. fig:EM_wave displays the relative errors encountered in these tests, with derivatives of 2nd, 4th, and 6th orders being examined. Errors predominantly arise from phase shifts and amplitude variations, with phase-shift discrepancies dominating in higher-order cases, while amplitude errors dominate in the 2nd order outcomes.
Error convergence for each order of derivation was critically analyzed. The 2nd order tests showed a 'linear' reduction in errors on a log-log scale, with a slope approximating -2.2, aligning closely with the expected -2.0 slope indicative of its second-order nature. In contrast, 4th and 6th order tests revealed a diminished slope of -0.65, hinting at marginal gains from enhanced resolution. This outcome suggests that the 2nd order time integration of the leapfrog scheme emerges as a significant constraint, limiting the benefits of higher-order spatial resolutions.
Avoiding indexing errors near patch boundaries requires increasing the number of guard zones when using higher-order derivatives, which incrementally increases the computational workload. Considering also that a change from 4th- to 6th-order derivatives does not markedly improve precision while significantly inflating the computational demands, we conclude that a lop-sided increase of only the spatial order is not worthwhile.
§.§ Recovery of Plasma frequency
The plasma frequency is pivotal in characterizing plasma dynamics. Accurately capturing this frequency in simulations is crucial for validating the scalability and feedback mechanisms within our solver. We explored this through Fourier analysis in a controlled 2D setup, aiming to recover the plasma frequency.
The experimental setup involved initializing electrons and protons at a uniform density of 10^9 cm^-3 within a 5 cm× 5 cm domain. The domain is segmented into 5x5 patches with 10x10 cells in each for a total of 50x50 grid points. We initialized each species with a thermal velocity corresponding to a temperature of 1 MK. With these parameters the cell width is approximately one Debye length.
We scaled mass density to unity, time such that ω_e = 2 π, and did not scale lengths.
As depicted in fig:plasma_freq, the Fourier analysis reveals distinct peaks corresponding precisely to the plasma frequency. Time has been scaled such that the theoretical plasma period is exactly 1 'code Hz'. The peak is most notable in the field component orthogonal to the testing plane. This test demonstrates the solver's ability to recover the correct plasma frequency.
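The diagnostic itself amounts to locating the dominant Fourier peak of a field time series and comparing it with the theoretical electron plasma frequency; in the sketch below the synthetic signal is only a stand-in for the simulation output, and the CGS constants are assumptions of this example.

```python
import numpy as np

# Electron plasma frequency in CGS units.
n_e, e, m_e = 1e9, 4.803e-10, 9.109e-28            # cm^-3, statC, g
omega_e = np.sqrt(4.0 * np.pi * n_e * e**2 / m_e)  # rad/s

# Stand-in for a field component sampled at a grid point during the run.
dt, n_t = 1e-11, 32768
t = np.arange(n_t) * dt
rng = np.random.default_rng(2)
signal = np.cos(omega_e * t) + 0.1 * rng.normal(size=n_t)   # synthetic data

# Fourier analysis: locate the dominant peak and compare with omega_e.
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_t, d=dt)                          # Hz
omega_peak = 2.0 * np.pi * freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(omega_peak / omega_e)                                 # ~1.0
```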
§.§ Two-Stream Instability
Investigating the two-stream instability provides a rigorous assessment of the solver's capability to simulate plasma interactions and instability dynamics. Several different versions of the two-stream experiment exist and have been tested <cit.>. Our setup closely resembles <cit.>, but differs notably in its scaling parameters. We initialize two counter-streaming electron populations against a backdrop of stationary ions. Electrons and protons are initialized with Maxwellian velocities at temperature T, and the electrons additionally receive a drift velocity of ± V_0.
Analyzing the cold case (T → 0) through the dispersion relation:
0 = 1 - ω_e^2/2( 1/(ω - k V_0)^2 + 1/(ω + k V_0)^2) - ω_p^2/ω^2,
yields the fastest-growing wave mode at k_max≈√(3/8)ω_e/V_0, with growth rate γ≈ 0.35355 ω_e, when the contribution from the ions is neglected. An analysis of the warm case shows that the growth rate and mode size change by less than 1% for v_e ≤ 0.05V_0, which informed our temperature selection for modeling realistic cold-case plasma conditions.
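For reference, the quoted values of k_max and γ follow directly from the electron-only part of the cold dispersion relation; the short stand-alone check below (ions and thermal spread neglected) reproduces them numerically.

```python
import numpy as np

def growth_rate(k, V0=1.0, omega_e=1.0):
    """Cold two-beam growth rate (stationary ions and thermal spread neglected).

    From 1 = (omega_e^2/2) [1/(w - k V0)^2 + 1/(w + k V0)^2], the unstable
    branch satisfies w^2 = [2 k^2 V0^2 + we^2 - we*sqrt(we^2 + 8 k^2 V0^2)]/2 < 0.
    """
    kv2 = (k * V0) ** 2
    x = 0.5 * (2.0 * kv2 + omega_e**2 - omega_e * np.sqrt(omega_e**2 + 8.0 * kv2))
    return np.sqrt(np.maximum(-x, 0.0))

k = np.linspace(1e-3, 1.2, 2000)        # in units of omega_e / V0
gamma = growth_rate(k)
i = np.argmax(gamma)
print(k[i], gamma[i])                   # ~0.612 (= sqrt(3/8)) and ~0.354 omega_e
```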
The computational domain spans 4.105 d_e, where d_e = c/ω_e is the electron skin depth. This domain size was chosen to accommodate two wavelengths of the fastest-growing mode. We conduct a series of simulations with varying cell counts (n=[64,128,256,512]) and particles per cell (ppc=[64,128,256,512,1024]). For n=64 we used 4 patches with 16 cells each. For the other cell counts we used 32 cells per patch and [4,8,16] patches, respectively.
Physical variables were kept unscaled, and we normalized length such that d_e=1 in code units and ω_e=1 for time, rendering the speed of light unity in code units as well. We chose a drift velocity of V_0=0.2c, which sets the temperature T≈ 592,990 K for v_e = 0.05 V_0. Additionally, we used a number density of 10^10 cm^-3 and scaled mass, such that the mass density is 1 in code units.
The simulations proceeded without field cleaners or particle resampling until t=40. fig:two_stream_evo illustrates the evolution of the linear phase and the initial emergence of the nonlinear phase, showing the formation of dual structures in phase space as anticipated from the experiment's design.
fig:two_stream_growth shows the electric field's growth rate for the dominant wavemode, illustrating consistency with the anticipated growth rates derived from linear theory. This alignment is consistent across various experimental configurations, underscoring the solver's robustness in accurately modeling the intricate processes underlying two-stream instabilities under a spectrum of simulation conditions.
§.§ Current Sheet
The final test examines how the physics behaves under different parameter-scaling scenarios. We chose a current sheet setup, as such a configuration is inherently unstable and should quickly highlight any differences in evolution. Current sheet behavior is an open research topic, with many authors investigating current sheets in space and solar contexts with varying degrees of realism (e.g. <cit.>, <cit.>, <cit.>, <cit.>). For our initial validation, we have chosen a simpler setup, inspired by <cit.>. The magnetic field configuration is defined by two hyperbolic tangent functions:
B_x(y) =
- B_0 tanh( (y + 3.6)/(0.5 L) )   if y < 0,
+ B_0 tanh( (y - 3.6)/(0.5 L) )   if y ≥ 0,
where B_0 = 50 G represents the background magnetic field strength and L the initial width of the current sheets. We set L=r_p where r_p is the proton gyroradius. The simulation initializes electrons and protons with Maxwellian distributions at T=1 MK and uniform density n=10^9 cm^-3 within a domain sized [25.6,12.8]r_p. Periodic boundary conditions are applied along both the x and y axes. An initial velocity perturbation:
v_y = v_0 ·sin( 2 π/L_x x ),
with v_0 = 0.001 c, stimulates the onset of reconnection.
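A sketch of this initial condition on a uniform grid is given below; the centring of the y axis on zero, the interpretation of the offsets ±3.6 as being in units of r_p, and the thermal-gyroradius convention are assumptions made for illustration.

```python
import numpy as np

# CGS parameters from the text.
B0, T, n = 50.0, 1.0e6, 1.0e9             # G, K, cm^-3
kB, m_p, e, c = 1.380649e-16, 1.673e-24, 4.803e-10, 2.998e10
v_th = np.sqrt(kB * T / m_p)              # one common convention for the thermal speed
r_p = m_p * v_th * c / (e * B0)           # proton gyroradius, sets L = r_p
L = r_p

# Domain [25.6, 12.8] r_p; here the y axis is assumed centred on zero.
nx, ny = 256, 128
x = np.linspace(0.0, 25.6 * r_p, nx)
y = np.linspace(-6.4 * r_p, 6.4 * r_p, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# Double current sheet: zero crossings at y = -3.6 r_p and y = +3.6 r_p.
Bx = np.where(Y < 0.0,
              -B0 * np.tanh((Y + 3.6 * r_p) / (0.5 * L)),
               B0 * np.tanh((Y - 3.6 * r_p) / (0.5 * L)))

# Velocity perturbation seeding reconnection.
v0 = 1.0e-3 * c
vy = v0 * np.sin(2.0 * np.pi * X / (25.6 * r_p))
```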
Experimentation varied over cell counts ([128x64, 256x128, 512x256, 1024x512, 2048x1024]) and particles per cell ( = [32, 64, 128, 256]). For = [128x64, 256x128] we used 16x16 cells per patch and [8x4,16x8] patches. For = [512x256, 1024x512, 2048x1024] we used 32x32 cells per patch and [16x8, 32x16, 64x32] patches.
Four distinct 'fudging' strategies were applied to probe their influence on simulation behavior:
* Pure Runs: Physical variables remained unaltered, providing a control scenario for comparison.
* m400 Runs: Electron mass was scaled to set the mass ratio m_p / m_e = 400.
* c10 Runs: Speed of light was reduced tenfold, achieved by augmenting ϵ_0 by 100.
* c10m400: Combined the adjustments from m400 and c10 runs for cumulative effect analysis.
Modifying the elementary charge primarily reduces the gap between microscopic and macroscopic scales <cit.>. In a pure PIC simulation, changing the elementary charge only affects the length and time scales. When scaled to code units, there is no difference between runs with and without elementary charge scaling. This is unlike scaling the electron mass or the speed of light, which explicitly changes the ratios between parameters of interest, even in code units. Therefore, we did not make any adjustments to the elementary charge.
We kept scaling consistent across scenarios for analytical clarity. We scaled length such that r_p = 1, time such that plasma frequency ω_e = 2 π, and initial mass density so ρ_0 = 1 in code units for the non-fudged run.
The current density evolution depicted in fig:J_comparison illustrates the effects of electron mass and light speed adjustments on plasmoid dynamics. Particularly, the c10 and c10m400 runs showcase a prevalence of smaller-scale plasmoids, contrasting with the more homogeneous current dissipation in pure and m400 scenarios.
Figure <ref> illustrates convergence towards zero relative total energy error with increasing resolution, in alignment with theoretical expectations for a second order solver. A slope of -2.2 was found, similar to our findings from the EM-wave test.
The influence of the particle count per cell on electric field energy stabilization is investigated in fig:electric_energy, affirming the solver's stability under cell sizes below 10 Debye Lengths. Theoretically, Δ x < 3.4 λ_D must be satisfied to avoid self-induced instabilities <cit.>. The theoretical analysis assumes that the number of particles inside the Debye radius is large. In our experiment, we show that the solver is stable for as little as 2 particles per Debye area for 2D simulations.
This comprehensive suite of tests validates the solver's robustness in modeling the intricate dynamics of current sheet formation and magnetic reconnection, setting a solid foundation for expanding investigations into 3D simulations.
§ DISCUSSIONS AND CONCLUSION
§.§ Speed of light and related issues
The integration of the explicit PIC solver within the DISPATCH framework marks our first step forward towards simulating the complex plasma dynamics of solar flares. This integration also highlights certain limitations inherent to the nature of explicit solvers, and how this plays out relative to the DISPATCH code framework.
A primary challenge lies in the limitation of the timestep size by the speed of light, a consequence of the need to accurately model electromagnetic field propagation. This constraint, while potentially limiting in other, less extreme contexts, aligns well with the solver's intended application to scenarios involving relativistic particle acceleration, such as observed in solar flares. One can hardly claim to model such scenarios realistically without accepting that the full range of velocities must be handled.
Moreover, relativistic conditions offer other advantages, such as effectively reducing the impact of the difference of mass between electrons and ions, with both species moving at essentially the same velocities, despite large difference in mass.
The explicit solver's reliance on uniform timesteps might seem to diverge from DISPATCH's otherwise innovative approach of employing local time stepping. To some extent, this is an unavoidable consequence of relativistic dynamics: when the information speed is essentially constant, there is no advantage to be gained from local variations in time step. The framework still allows for coping with large differences in time step arising from differences in spatial resolution, whether from fixed or adaptive mesh refinement.
§.§ Exposing the consequences of `fudging'
As previously mentioned, the 'fudging' of physical constants is a key component in many PIC codes. However, the descriptions of these adjustments are often unclear. With our 'transparent scaling,' we address this issue by clearly outlining the modifications and their implications.
Some examples of the impact of fudging are examined in Section <ref>. We demonstrate how varying the speed of light and the mass of the electron alters the fundamental plasma scales. Specifically, reducing the speed of light by a factor of 10 increases the allowable timestep by the same factor. However, simultaneously, the relevant scales are reduced by approximately a factor of 10, and thus the benefits offered by this fudging might be partially or fully offset. As illustrated by fig:J_comparison, the results with and without fudging may differ dramatically, and thus the methodology requires very careful vetting of assumptions and consequences, as made possible by the clear distinction made here between fudging on the one hand, and scaling to code units and choice of unit system on the other hand.
§.§ Conclusion
We have successfully introduced an explicit PIC solver into the DISPATCH framework, adapting and building upon the PhotonPlasma code's foundations.
Our comprehensive validation process, encompassing both straightforward unit tests and more intricate simulations such as the double current sheet test, has rigorously confirmed the PIC solver's robustness and precision. These efforts underscore its capacity to simulate the nuanced interplay of particle dynamics and electromagnetic fields, offering a robust platform for cutting-edge research in solar and plasma physics.
The integration into the DISPATCH code framework not only broadens the solver's applicability across diverse simulation scenarios but also lays the groundwork for a hybrid PIC-MHD implementation. Such an implementation will greatly extend our ability to model solar flares as well as other complex astrophysical phenomena with unprecedented realism, by using the cheaper MHD representation where possible, switching to the much more expensive PIC representation only where needed.
We would like to thank Troels Haugbølle for his invaluable discussions and insights regarding the PhotonPlasma code and plasma physics. We also wish to acknowledge Jan Truelsen for his assistance and contributions to the theoretical foundations of this work, particularly insight into the warm case analysis of the two-stream instability test. Additionally, we thank Andrius Popovas and Mikolaj Szydlarski for their guidance and support throughout the code development process.
This research was supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622.
|
http://arxiv.org/abs/2409.02226v1 | 20240903185337 | Cosmic topology. Part Ic. Limits on lens spaces from circle searches | [
"Samanta Saha",
"Craig J. Copi",
"Glenn D. Starkman",
"Stefano Anselmi",
"Javier Carrón Duque",
"Mikel Martin Barandiaran",
"Yashar Akrami",
"Fernando Cornet-Gomez",
"Andrew H. Jaffe",
"Arthur Kosowsky",
"Deyan P. Mihaylov",
"Thiago S. Pereira",
"Amirhossein Samandar",
"Andrius Tamosiunas"
] | astro-ph.CO | [
"astro-ph.CO",
"gr-qc",
"hep-ph",
"hep-th"
] |
Cosmic topology. Part Ic. Limits on lens spaces from circle searches
=====================================================================
§ INTRODUCTION
According to the general theory of relativity, the local geometry of spacetime is a solution of the Einstein field equations, a set of coupled, non-linear, local partial differential equations <cit.>.
Assuming that appropriate spatial slices of that spacetime and its contents are homogeneous and isotropic, the local four-geometry will be described by one of the three Friedmann-Lemaître-Robertson-Walker (FLRW) metrics, i.e., one of the three isotropic, constant-curvature geometries—flat (zero spatial curvature) Euclidean geometry E^3, positive spatial curvature S^3, and negative spatial curvature H^3—with a time-dependent scale factor.
The metric, however, characterizes only the local geometry—one must independently specify the global structure of the spacetime.
It is conventionally assumed that the spacetime manifold is time cross the covering space[
Though it can be confusing, we will adopt the usual custom of referring both to the local flat, positively curved, and negatively curved geometries and their covering spaces as, respectively, E^3, S^3, and H^3.
]
of one of those three homogeneous, isotropic geometries—i.e., infinite flat 3-space (the covering space of E^3), the 3-sphere S^3, or the 3-dimensional pseudosphere H^3.
However, while these covering spaces are the manifolds that globally preserve the isometry group of each local geometry, each homogeneous and isotropic 3-geometry admits many possible topologies, with at least one real parameter to specify the manifold with that topology.
There are thus many possible 3-manifolds other than the three covering spaces that can accommodate FLRW cosmology.
They are distinguished from the covering spaces in lacking globally the full isotropy[
Except for the projective space S^3/ℤ_2≅ L(2,1), which is the only multiply-connected FLRW manifold that is both globally homogeneous and isotropic.
]
(and usually parity and homogeneity) of their local geometries.
This global isotropy breaking, parity breaking, or homogeneity breaking has the potential to be reflected in the properties of fluctuations around the FLRW background, and thus in the statistical properties of cosmological observables. In recent work <cit.>, we explored how symmetry breaking in non-trivial topologies affects cosmic microwave background (CMB) polarization correlation matrices.
In this paper, we consider the possible spacetimes with S^3 spatial geometry, and place new limits on one class of allowed S^3 topologies—lens spaces—as a function of the curvature scale.
While 3-space on large scales is reasonably homogeneous and isotropic, there is evidence from CMB temperature fluctuations of multiple “large-angle” anomalies.
Together, these amount to evidence in excess of 5σ equivalent significance against statistical isotropy <cit.>, mostly on scales larger than the horizon size at the time of the last scattering of the CMB photons (see, e.g., Planck:2013lks,Schwarz:2015cma,Planck:2015igc,Planck:2019evm,Abdalla:2022yfr for reviews.)
One of the very few potential physical explanations for this large-scale anisotropy is non-trivial spatial topology, though no specific manifold has yet been identified that explains the observed isotropy violation.
The possibility of non-trivial spatial topology has been considered since at least as far back as Einstein's initial S^3
cosmological model <cit.> at which time de Sitter remarked <cit.> that the projective 3-sphere, which has the same local geometry but half the 3-volume, was to be preferred.
Ever since, there have been cosmologists working to develop observational tests of cosmic topology.
Several approaches have been considered, including cosmic crystallography (the search for topological “clones”) <cit.>, CMB
matched circle pairs (a.k.a. “circles in the sky”) <cit.> which is essentially the search for topological clones in CMB maps, and CMB Bayesian likelihood comparison <cit.>.
More recently, the general set of eigenmodes, the correlation matrices, and the detectability of the orientable Euclidean manifolds have been studied in COMPACT:2022gbl, COMPACT:2022nsu, COMPACT:2023rkp, COMPACT:2024cud. Similarly, various machine learning techniques have been explored as tools for detecting signatures of non-trivial topology in harmonic space <cit.>.
While, in principle, Bayesian likelihood comparison is the most powerful technique because it uses all the available data, it suffers the disadvantage that the likelihood must be computed for each allowed topology of each allowed 3-geometry, for every value of the real parameters that specify the 3-manifolds of a given topology, and for every distinguishable position and orientation of the observer within the manifold <cit.>. Moreover, that scan over the space of possible 3-manifolds should, at least in principle, be done at the same time as the scan over the space of parameters that specify the background cosmology.
With a countable infinity of possible topologies, up to six additional parameters that specify the manifold (e.g., the lengths of the sides of a torus in E^3) and up to six parameters specifying the position and orientation of the observer within a manifold that breaks isotropy and homogeneity, in addition to the seven cosmological parameters, it is no surprise that the full Bayesian likelihood search has yet to be attempted.
On the other hand, a search using the circles-in-the-sky method is agnostic to the cosmological parameters, agnostic to the local 3-geometry—so long as anisotropies in the 3-geometry are sufficiently small—and agnostic to the values of the topological parameters within the domain of validity of the search.
In a topologically non-trivial spatial 3-manifold, points in the covering space of the local geometry are identified if they are related by any element of some discrete group of spatial transformations (a discrete, freely acting, subgroup of the isometry group of the local geometry).
We call any such pair of identified points in the covering space “clones” (or sometimes “topological clones”). It is important to realize that these clones have no independent existence, but they are often a convenient way to visualize or calculate the consequences of non-trivial topology.
The circles signature rests on two fundamental observations.
First, the CMB photons have all been traveling through the Universe since very nearly the same time, i.e., since recombination of the primordial plasma, and therefore the CMB that any observer detects comes from a sphere centered on them—their last-scattering surface (LSS).
Second, for every one of our clones that is closer to us than the diameter of the LSS, our LSS intersects with “their” LSS and the intersection of those two spheres is a circle.
This circle is a locus of points visible to us (i.e., to ourselves and our clone) in two distinct directions on the sky.[
This description pretends that the LSS has zero thickness. The actual LSS has a finite thickness, and so the self-intersection of the LSS is a finite-volume circular tube with a complicated cross-sectional profile.]
So long as that circle is large enough, we would be able to identify the tight correlation of CMB temperature fluctuations around the two matched circles as statistically anomalous, and so detect non-trivial topology.
This search for matched-circle pairs was performed in full generality on the Wilkinson Microwave Anisotropy Probe (WMAP) full-sky temperature map and no statistically significant matches were found <cit.>.
The search was repeated on Planck 2013 maps with identical results <cit.>, but not reported.
A limited search was conducted by the Planck team, again with negative outcome <cit.>.
In all cases, the smallest circle that could be ruled out depended on the required false negative and positive rates.
However, both are steep functions of the radius of the circle for small circle size.[
A more subtle question is how well to trust limits on those circles that are small enough to lie entirely or mostly within the usual foreground masks that are applied to full-sky CMB temperature maps.
We reserve such questions for a future paper revisiting the statistical details of the circle searches.]
The negative conclusion of these searches is that the length of the shortest path from us to our nearest clone, i.e., the shortest path around the Universe through us, must be greater than a fraction f_O of the diameter d_LSS of our last-scattering surface.
The reported value of f_O is 0.985 at 95% confidence level.
The task remains to translate this generic limit on the distance to our nearest clone into a constraint on model parameters.
In COMPACT:2022nsu, this was done for orientable manifolds admitting homogeneous Euclidean geometry E^3; this will be complemented in a future paper on non-orientable E^3 manifolds.
Here, we begin the same task for manifolds admitting a homogeneous S^3 local geometry by considering the lens spaces L(p,q): the quotient spaces of the 3-sphere by the cyclic group ℤ_p.
A future paper will consider the other S^3 manifolds,
and still other papers will in turn consider the six other Thurston geometries <cit.> as each presents its own particular challenges.
This work is by no means the first attempt to constrain topologies of S^3.
A detailed construction and complete classification of all 3-dimensional spherical manifolds was given by Gausmann et al. <cit.>.
They also discussed the likelihood of detectability of spherical topologies by crystallographic methods, as a function of cosmological parameters.
Gomero et al. <cit.> considered which hyperbolic and spherical manifolds were excluded by observations, however, this was before the considerable progress made using WMAP data and then Planck data, and in particular they could not include constraints from matched-circle searches
<cit.>.
In the same period, Uzan et al. <cit.> studied CMB anisotropies in S^3 manifolds, focusing on the lens spaces L(p,1).
Suppression of low-ℓ anisotropies in inhomogeneous lens spaces L(p,q), especially with p=8, was studied by Aurich et al. <cit.>, followed by exploration of the specific L(p, q=p/2 - 1) with p ≡ 0 (mod 4) and prism spaces <cit.>.
In Aurich:2012sp, the authors surveyed lens spaces with p ≤ 72 and concluded that L(p,q) with q≈ 0.28p and q≈ 0.38p display strong suppression of CMB fluctuations on angular scales θ≥ 60^∘ compared with the covering space.
It is important to note that, for all p and all allowed values of qp>1, L(p,q) is statistically inhomogeneous, i.e., the statistical properties of CMB anisotropies (and other observables) depend on the CMB observer's position.
This is because translation invariance is broken by the requisite boundary conditions—for example, the length of the shortest closed geodesic varies with location.
The q=1 lens spaces are the rare exception.
This means that any exploration of constraints on lens spaces must vary not only p, q, and the Ricci scalar, but also the location of the observer. Similarly both isotropy and parity are violated statistically; and so the orientation of the observer and the handedness of their coordinate system matter.
In this paper, we systematically explore constraints on L(p,q), based on the fact that matched-circle pairs in the CMB temperature sky have not been detected.
This, and a new analysis of clone separations in L(p,q), will allow us to considerably strengthen the previous limits on these spaces obtained in Gomero:2001gq.
This paper is organized as follows. We provide a brief review of the S^3 geometry and lens spaces L(p,q) in <ref>.
The background for the application of circle searches to lens spaces is given in <ref>.
To connect the circle searches to observational constraints requires understanding of S^3 cosmological models, which is presented in <ref>.
In <ref>, we relate the non-detection of CMB matched-circle pairs to constraints on the parameters of the lens spaces and gives a strong condition on the detectability of the lens spaces as possible topologies of the Universe.
We summarize the paper and conclude in <ref>.
The GitHub repository associated with this study is publicly available at <https://github.com/CompactCollaboration>. Codes will be deposited there as publicly usable versions become available.
§ 3-SPHERE AND LENS SPACES
We begin with a short introduction to S^3 and, in particular, the lens spaces.
Some details not pertinent to limits from circle searches are included to provide a more complete introduction and a foundation for future studies.
There are various ways to discuss the 3-sphere, S^3.
We will describe it in terms of its natural embedding in 4-dimensional Euclidean space E^4 as the set of points {(x_0, x_1, x_2, x_3) | x_0^2+x_1^2+x_2^2+x_3^2=R_c^2}.
For simplicity we will typically work in units of the curvature scale, i.e., we set R_c=1.
The 3-sphere S^3 is both this simply-connected space (i.e., any closed loop on this 3-sphere can be smoothly contracted to a point) and the geometry induced on this 3-sphere.
In this representation, it is manifest that the isometry group of S^3 is O(4)—the rotations and reflections in four dimensions.
The topologically non-trivial manifolds with S^3 geometry are quotients of the 3-sphere by any freely acting discrete subgroup of the full isometry group.[
A group is freely acting if the only group element that takes any point (on the 3-sphere) to itself is the identity.
]
For S^3 the freely acting discrete subgroups consist of only rotations, i.e., they are all discrete subgroups of SO(4), none of the freely acting isometries are parity-reversing.
More simply put, one covers the 3-sphere with a finite number of identical tiles that are related to one another by (a finite set of) SO(4) elements, such that no point on the edge of one tile touches the identical point on a neighboring tile, and the tiles share only edges.
There are a countably infinite number of such discrete subgroups of SO(4).
Threlfall and Seifert <cit.> gave the first complete classification of these spherical 3-manifolds.
Another classification method, using quaternions, can be found in Thurston1982ThreeDM.
In the cosmological context, a detailed and complete classification of all the spherical manifolds can be found in Gausmann:2001aa.
In this paper, we focus on lens spaces, quotient spaces of S^3 of the form S^3/ℤ_p, where ℤ_p is the cyclic group of order p (considered as a subgroup of SO(4)).
There are multiple distinct actions of _p on S^3 that give distinct spaces labeled by a second integer parameter q, where p and q are relatively prime and 0<q<p <cit.>. The group of each of these actions has p elements, R^j_pq (with j ∈{0,…,p-1 }), acting on a point x⃗∈ S^3 by
x⃗' = (R_pq)^j x⃗≡R_pq^j x⃗ .
One particularly useful representation of R^j_pq is as a rotation in E^4 separated into rotations in two orthogonal planes,
R^j_pq=
[ cos(2π j/p) -sin(2π j/p) 0 0; sin(2π j/p) cos(2π j/p) 0 0; 0 0 cos(2π jq/p) -sin(2π jq/p); 0 0 sin(2π jq/p) cos(2π jq/p); ] .
This contains a rotation by (2π/p)j in the x_0-x_1 plane simultaneously with a rotation in the x_2-x_3 plane by (2π/p)(jq mod p).
Note that R^j_pq keeps both x_0^2+x_1^2 and x_2^2+x_3^2 unchanged—this will prove crucial for understanding the limit we will place on L(p,q).
Clearly R_pq^p=R_pq^0≡ 1 (the identity),
while R_pq≡R^1_pq is a generator of the group.[
It would seem that one could generalize (<ref>) by replacing j/p by q/p in the x_0-x_1 block and jq/p by jq'/p' in the x_2-x_3 block for a wider variety of integers p,p',q and q'.
However, one can easily show that all combinations other than those given by (<ref>) are either not freely acting or equivalent to one of those already considered.
]
An object at a location x⃗^(0) would thus have p-1 distinct clones,
x⃗^(j) = R^j_pqx⃗^(0) , j∈{1,…,p-1} .
For each choice of the value of p, it appears at first sight that there are up to p-1 distinct q values defining different lens spaces L(p,q) for q∈{1, …, p-1}.
Note that R_p0 is not freely acting, and that, if q>p, R_pq = R_p(q mod p), so we can indeed limit the analysis to 0<q<p.
In truth, not all of these values are allowed—some are not freely acting—and not all of them are distinct.
However, let us first understand the role of q.
To do so, consider the pattern of clones yielded by (<ref>) given the representation of R_pq in (<ref>).
We can label the clones of any point by the two integers j and j'≡ jq (mod p); for a fixed p and q there will be p-1 clones of the initial point.
Taking p=7 as an example, for q=1, the rotation R_71 takes (j,j') to ((j+1) mod 7, (j'+1) mod 7). Starting at (j,j')=(0,0), repeated applications of R_71 give the (j,j') sequence
(0,0)→(1,1)→(2,2)→(3,3)→(4,4)→(5,5)→(6,6)→(0,0).
This represents the 6 clones of a point labeled by (0,0) in L(7,1).
Similarly, for q=2, the rotation R_72 takes (j,j')→((j+1) mod 7, (j'+2) mod 7),
so starting at (0,0) repeated applications of R_72 takes
(0,0)→(1,2)→(2,4)→(3,6)→(4,1)→(5,3)→(6,5)→(0,0).
This represents the 6 clones of a point labeled by (0,0) in L(7,2).
An important question for cosmology is “For a given value of p which values of q are physically distinct?”
Roughly, two topological spaces S_1 and S_2 have “similar shapes” if they are homeomorphic.
More precisely, a function h: S_1→ S_2 is a homeomorphism if it is continuous, one-to-one and onto, and its inverse h^-1 is also continuous.
If such a function exists then the spaces S_1 and S_2 are homeomorphic.
Two lens spaces L(p,q) and L(p',q') are known to be homeomorphic if and only if p = p' and either q = ± q' (mod p) or qq' = ± 1 (mod p).
Topologically, L(p,q) and L(p,q') are equivalent if they are homeomorphic, resulting in catalogs of lens spaces only including one of each class.
However, a homeomorphism does not preserve all physical properties.
Thus, although spaces may “have the same shape”, they may still be physically distinguishable by observers in those spaces.
As one example, L(7,2) and L(7,5) are homeomorphic since 2 = -5 (mod 7).
Notice that R_75 takes (0, 0) to (1, 5) which is equivalent to (1, -2).
Written in this more suggestive manner, if one starts at (0,0) repeated application of R_75 takes
(0,0)→(1,-2)→(2,-4)→(3,-6)→(4,-1)→(5,-3)→(6,-5)→(0,0).
Comparing to (<ref>) we see that these two spaces have opposite “handedness,” in the sense that while j steps in the same direction for both spaces, j' steps in opposite directions.
Thus L(7,2) and L(7,5) have distinguishable clone patterns despite being homeomorphic.
More generally, this is true for all L(p,q) and L(p, p-q): they are homeomorphic topological spaces but have distinguishable clone patterns.
Note that for these pairs of topologies the distances between all pairs of clones will remain the same.
Continuing with p=7, L(7,2) and L(7,3) are also homeomorphic since 2×3 = 6 = -1 (mod 7).
Once again these spaces can be shown to have different clone patterns though the homeomorphism between the two spaces is less obvious.
Despite this, the distance between all pairs of clones will also remain the same.
Similarly, it can be shown that the pattern of clones seen by an observer in L(7,2) is identical to that seen by some observer in L(7,4) and the pattern of clones seen by an observer in L(7,3) is identical to that seen by some observer in L(7,5).[
For example, the pattern of clones for L(7,2) is given by (<ref>), while the pattern for L(7,4) is
(0, 0) → (1, 4) → (2, 1) → (3, 5) → (4, 2) → (5, 6) → (6, 3) → (0, 0).
We see that these contain all the same clones (tuples of j and j') if we swap j and j', i.e., if we swap (x_0, x_1) and (x_2, x_3). (The order in the sequence is irrelevant.) This swapping can be accomplished by the orthogonal transformation
O = [ 0_2× 2 I_2× 2; I_2× 2 0_2× 2 ], where I_2× 2 is the 2×2 identity.
Therefore, L(7,2) and L(7,4) have the same clone patterns.]
Wrapping up the example, this means that there are three physically distinct lens spaces for p=7: L(7,1) and two additional ones that can be chosen to be L(7,2) and L(7,3). L(7,4), L(7,5) and L(7,6) are each physically equivalent to one of these three.
The behavior of the p=7 example is generic.
L(p,q) is homeomorphic to L(p,p-q), but they do not share the same clone pattern.
There is also at most one other 0 < q'< ⌊ p/2 ⌋ such that L(p,q') is homeomorphic to L(p,q) and with its clone pattern identical to that of L(p,p-q).
The list of L(p,q) that are distinct for cosmological purposes is thus longer than the list of L(p,q) that are topologically distinct: for every topologically distinct L(p,q) with 0 < q < ⌊ p/2 ⌋, physically one must also consider L(p,p-q).
If there is another homeomorphic L(p,q') with q' < ⌊ p/2 ⌋, then it is physically equivalent to L(p,p-q).[
Topologists are also interested in spaces that are homotopically equivalent, a weaker condition than homeomorphic: all lens spaces that are homeomorphic are also homotopically equivalent, but the converse is not true.
The two lens spaces L(p,q) and L(p', q') are homotopically equivalent if and only if p= p' and qq' = ± n^2 (mod p) for some n∈ℤ.
For example, L(11,2) and L(11,3) are homotopically equivalent since 2× 3 = 6 = -4^2 (mod 11), but they are not homeomorphic.
Note that p=11 is the smallest p with both q and q' larger than 1 for which there is a homotopically equivalent but not homeomorphic pair of lens spaces.
Homotopically equivalent lens spaces that are not homeomorphic are physically distinguishable with different clone pair separations.
]
While this holds in general, the focus of this work is on limits from circle searches which only depend on the interclone distances and not on the pattern of the clones.
Therefore, in this work we can restrict ourselves to 0 < q < ⌊ p/2 ⌋ since the homeomorphic partners L(p,p-q) of each lens space have the same interclone distances.
§ CIRCLE SEARCH FOR LENS SPACES
Constraints on the non-trivial topology of the Universe can be addressed by the existing circle-in-the-sky signature searches based on CMB temperature data.
As noted above, for the lens space L(p,q), the covering space contains p copies of each observer, i.e., the covering space can be viewed as being tiled with each tile having a clone of each observer.
Studies originally based on the WMAP <cit.> and later on higher-resolution maps from the Planck satellite <cit.> confirmed that there are no matched-circle pairs in the CMB sky maps.
This non-detection of circles in the CMB sky can be used to constrain the lens-space parameters p and q.
A lens space can conservatively be ruled out if all observers would see matched circle pairs.
For the inhomogeneous lens spaces (those with q>1) this requires comparing to the distance of every observer's nearest clone.
The distance (on the unit S^3) between an observer at x⃗^(0) and the position of one of their clones x⃗^(j) given by (<ref>) is[
The distance between any pair of clones x⃗^(i) and x⃗^(j) is
d^(p,q)_ij
= cos^-1(x⃗^(i)·x⃗^(j))
= cos^-1[s cos(2π (i-j)/p) + (1 - s) cos(2π (i-j) q/p) ] .
]
d^(p,q)_j(s) ≡ d^(p,q)_0j(s) = cos^-1[ s cos(2π j/p) + (1 - s) cos(2π j q/p) ] ,
for s ≡ (x^(0)_0)^2 + (x^(0)_1)^2.
Notice that for the homogeneous lens space L(p, 1) the distance to all clones is the same for all observers
d^(p,1)_j(s) = cos^-1[ s cos(2π j/p) + (1 - s) cos(2π j /p) ]
= 2π j/p.
On the other hand, since the lens spaces L(p, q≠ 1) are globally inhomogeneous, the pattern of clones depends on the observer location.[
The shape of the Dirichlet domain—the set of all points closer to a given observer than to any clone of that observer—also depends on the observer location for q≠1.]
Thus to apply the circle search limits to a topology we must search over all observers and find the maximum distance to their nearest clone.
In other words, we must determine the maximum-minimum distance
d^(p,q)_max = max_0≤ s ≤ 1( min_0<j<p d^(p,q)_j(s) ) .
We immediately see that in the homogeneous lens spaces
d^(p,1)_max = 2π/p.
For the inhomogeneous lens spaces (q>1) we compute d^(p,q)_max numerically.[
Beginning from an equally spaced set of s∈[0, 1], we find for each s the clone j to which the distance is a minimum, determine the s from this set that has a maximum of these minimum distances, then repeat the process for a finer grid bracketing this s until the bracket is smaller than a specified width.]
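A compact numerical sketch of this search is given below; the grid size, the number of refinement rounds, and the example values of (p,q) are illustrative choices.

```python
import numpy as np
from math import gcd

def d_j(s, j, p, q):
    """Distance (in units of R_c) from an observer with parameter s to clone j."""
    arg = s * np.cos(2 * np.pi * j / p) + (1 - s) * np.cos(2 * np.pi * j * q / p)
    return np.arccos(np.clip(arg, -1.0, 1.0))

def nearest_clone(s, p, q):
    j = np.arange(1, p)
    return np.min(d_j(s, j[:, None], p, q), axis=0)

def d_max(p, q, n_grid=201, n_refine=6):
    """Max over observers (s in [0,1]) of the distance to their nearest clone."""
    lo, hi = 0.0, 1.0
    for _ in range(n_refine):
        s = np.linspace(lo, hi, n_grid)
        f = nearest_clone(s, p, q)
        i = int(np.argmax(f))
        lo, hi = s[max(i - 1, 0)], s[min(i + 1, n_grid - 1)]
    return float(f[i])

# Homogeneous case reproduces 2*pi/p; inhomogeneous cases lie below 2*pi*0.761/sqrt(p).
print(d_max(7, 1), 2 * np.pi / 7)
print(max(d_max(61, q) for q in range(2, 31) if gcd(61, q) == 1))
print(2 * np.pi * 0.761 / np.sqrt(61))
```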
The maximum-minimum distance for all lens spaces L(p,q) with p < 8192 is shown in <ref>.
The smallest distance (lower, green, dashed line) occurs for the homogeneous lens spaces L(p,1) from (<ref>).
For q>1 the distances appear to approach a maximum value with a shallow fall-off with respect to p.
Empirically, it is found to be well approximated as
d^(p,q)_max < 2 πα/√(p),
with α≃ 0.761. This choice for α is valid for all p < 8192, but appears to be increasing very little if at all by the time p reaches this value.
This bound is shown as the upper, orange, solid line in the figure.
The scaling of this upper limit on d^(p,q)_max as p^-1/2 is a direct consequence of our earlier observation that R^j_pq keeps both x_0^2+x_1^2 and x_2^2+x_3^2 unchanged.
This implies that the p clones of any point populate a 2-torus in ^4 (that of course lies on the S^3).
If they are randomly distributed, then their average separation is proportional to p^-1/2.
An analytic calculation of α seems possible.
A first estimate is obtained by noting that the mean separation of p points uniformly distributed on the maximum-area 2-torus submanifold of the 3-sphere is d=√(8π/p), assuming that p is large enough for the areas of discs of this diameter to be well-approximated by π d^2/4. This leads to an estimate of α≃√(2/π)≃0.798, which is within a few percent of the value derived numerically.
The figure also contains other noticeable band-like structure.
At first glance, since the lower bound is given by d^(p,1)_max it may be thought that the bands represent other fixed values of q.
This is not the case.
An illustrative example for q=11 is provided in the figure as blue dots.
Though the structure is intriguing, and more prominent as one zooms into the figure, it has no bearing on the limits presented in this work and will not be explored further here.
§ COSMOLOGICAL SETTINGS
In order to use (<ref>) to constrain the topology of S^3 manifolds from cosmological data, we recall certain facts about a positively curved FLRW universe.
The spherical FLRW geometry is characterized by the locally homogeneous and isotropic FLRW metric
ds^2 = -c^2 dt^2 + a(t)^2 (dχ^2 + sin^2χ dΩ^2) .
Here, t is the cosmic time, a(t) is the scale factor, χ is the comoving radial distance in units of the curvature radius R_c for the 3-sphere, and dΩ^2 = dθ^2 + sin^2θ dφ^2 is the infinitesimal solid angle.
The first Friedmann equation in this geometry for a universe filled with homogeneous dust of density ρ (and zero pressure) and with cosmological constant Λ is
H(t)^2= 8π G ρ(t)/3 - c^2/a(t)^2 R^2_c + Λ c^2/3 ,
where H(t) = ȧ(t)/a(t) is the Hubble expansion rate, G is Newton's constant, and we have displayed R_c explicitly.
It is convenient to rewrite this in terms of density parameters today: Ω_m for matter, Ω_Λ for the cosmological constant, and Ω_K as an effective density parameter for curvature, all defined by
Ω_m≡8π G ρ_0/3 H_0^2, Ω_Λ≡Λ c^2/3H_0^2, Ω_K ≡ -c^2/R_c^2 H_0^2 .
Here all quantities are written in terms of their values today at time t_0: H_0 ≡ H(t_0), ρ_0 ≡ρ(t_0), and we have chosen a_0 ≡ a(t_0) = 1.
Noting that ρ(t) = ρ_0 / a(t)^3 for nonrelativistic matter we can, as usual, rewrite <ref> as
H^2 = ( ȧ/a)^2 = H_0^2 ( Ω_m a^-3 + Ω_K a^-2 + Ω_Λ) .
Notice that from the definition of Ω_K the current physical curvature radius is
R_c^phys≡ a_0 R_c = c/H_0 √(|Ω_K|) .
The comoving distance between an observer and a point at redshift z can be found by using a = 1/(1+z) and integrating along a radial null geodesic
dχ = c dt/a = c da/(a ȧ) = -c (a/ȧ) dz.
Finally, using <ref>, the comoving distance
in units of R_c is given by
χ(z) = ∫ dχ = √(|Ω_K|)∫_0^z dx/√(Ω_m(1+x)^3+ Ω_K(1+x)^2+Ω_Λ) .
As we use CMB data to constrain topology we are interested in the diameter of the LSS, which we take to be at z_LS=1090.
The comoving diameter of the LSS in units of R_c is thus
d_LSS = 2 χ(z_LS) .
The value of d_LSS given in (<ref>) depends on cosmological parameters Ω_m and Ω_K.[Notice that the Friedmann equation (<ref>) evaluated today is Ω_m + Ω_Λ + Ω_K = 1, so only two of these densities are independent.]
The best-fit cosmological model from the CMB has a degeneracy between Ω_m and Ω_K.
To approximate this degeneracy we note from Planck 2018 <cit.>, Fig. 29, that the constraints on Ω_m as a function of Ω_K from primary CMB anisotropies (i.e., without lensing and baryon acoustic oscillations) are quite tight, allowing us to find an empirical relationship from the central contour in the Ω_m-Ω_K plane,
Ω_m≃ 0.314 - 3.71Ω_K.
Based on this relation, d_LSS versus Ω_K in units of R_c is calculated and shown in <ref>.
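For concreteness, this calculation can be sketched in a few lines using simple trapezoidal integration and the empirical relation above; the redshift of last scattering and the integration grid are the only other inputs.

```python
import numpy as np

def d_lss(omega_k, z_ls=1090.0, n=200001):
    """Comoving LSS diameter in units of the curvature radius R_c (closed models)."""
    omega_m = 0.314 - 3.71 * omega_k          # empirical relation from the text
    omega_l = 1.0 - omega_m - omega_k         # Friedmann constraint today
    x = np.linspace(0.0, z_ls, n)
    f = 1.0 / np.sqrt(omega_m * (1 + x)**3 + omega_k * (1 + x)**2 + omega_l)
    chi = np.sqrt(abs(omega_k)) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return 2.0 * chi

for ok in (-0.001, -0.005, -0.01):
    print(ok, d_lss(ok))      # d_LSS in curvature units grows with |Omega_K|
```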
However, it should also be noted that both the location of the central contour and the tightness of the fit to (<ref>) were obtained in the context of the specific power spectra adopted by Planck, which may very well require modification in the context of non-trivial topology, especially on large scales.
§ COSMOLOGICAL CONSTRAINTS ON DETECTABILITY OF LENS SPACES
To combine the previous two sections, we use the condition that all observers in a lens space will see circles in the sky when
d^(p,q)_max < f_O d_LSS.
f_O d_LSS is the observational lower limit, from the CMB, on the length of the shortest closed spatial geodesic.
As discussed in <ref>, f_O≃ 0.985 at 95% confidence level, but is not much smaller at much higher confidence <cit.>.
Coupled with the upper limit on d^(p,q)_max from (<ref>), this can be used to rule out lens spaces with too small distances to the nearest clone.
Every observer in the lens space L(p,q) will see circles if
2πα/√(p) < f_O d_LSS.
Solving for p and inverting the logic, the only lens spaces consistent with the non-detection of matched circles in the sky have
p < 4π^2 α^2/(f_O^2 d_LSS^2) ≡ p^*.
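Given a value of d_LSS in curvature units (for instance from the sketch above), evaluating this bound is immediate; the value α ≃ 0.761 and the choice f_O = 1 follow the text, and the example d_LSS values are placeholders.

```python
import numpy as np

alpha, f_O = 0.761, 1.0

def p_star(d_lss):
    """Largest p consistent with no matched circles, from p < 4 pi^2 alpha^2 / (f_O d_LSS)^2."""
    return int(np.floor(4.0 * np.pi**2 * alpha**2 / (f_O * d_lss)**2))

# Illustrative d_LSS values (units of R_c), e.g. as returned by the integral above.
for d in (0.2, 0.4, 0.64):
    print(d, p_star(d))
```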
We shall take f_O=1 for illustrative purposes in our figures, since our topology limits are already subject to some uncertainty due to their dependence on cosmological parameters.
The value of d_LSS depends on cosmological parameters (e.g., <ref>) and can be reduced to a function of Ω_K using the approximation in <ref>.
The resulting dependence is shown in the left panel (a) of <ref>.
It thus follows that the bound set by p^* is also a function of Ω_K.
The parameter space of lens spaces as a function of Ω_K is shown in the right panel (b) of <ref> with the bound from (<ref>) as a consequence of the absence of matched circles in the CMB.
The white region represents the region of the parameter space (Ω_K, p) excluded for any valid choice of q, while the shaded region represents the allowed parameters for some q.
Recall that precise limits on (p,q) depend on the specific values of cosmological parameters.
We note that the parameter space allowed by the lack of matched circles, although much better constrained, is still large. These spaces are inhomogeneous for q>1 and could therefore be observed or constrained by their effect on cosmological observables such as the CMB anisotropies; we will study their exact effect in an upcoming paper.
As noted in the introduction, observational constraints on lens spaces had been considered in Gomero:2001gq prior to searches for circles in the sky.
When applied to the LSS, that work provides an alternative and q-dependent upper bound on allowed lens spaces.
They required
2π q/p > f_O d_LSS ,
which translates into the upper bound
p < 2π q/(f_O d_LSS) ≡ p^*_q.
The tightest constraint on lens spaces is determined from a combination of these two bounds.
No observer will see circles when
min( 2πα/√(p), 2π q/p) > f_O d_LSS,
where the first distance corresponds to the bound p^* and the second to p^*_q.
From this we see that p^* in (<ref>) provides a stricter bound than p^*_q in (<ref>) when
q > α√(p).
The connections between the maximum-minimum distance and cosmological data, as embodied in the curvature through Ω_K, for each lens space L(p,q) are summarized in <ref>.
The color represents the minimum value of d_LSS for which all observers in L(p,q) would see circles.
For any value of d_LSS less than this, some observers would not see circles.
For any value of d_LSS larger than this, all observers would see circles.
The black line separates the space into regions where the bound from Gomero:2001gq in (<ref>) on the distance is tighter (below the line) and the bound from this work in (<ref>) is tighter (above the line), with the transition happening when q=α√(p).
This result is independent of cosmology.
Additionally, the color can also be interpreted as giving a lower limit on the value of Ω_K (given in square brackets) for each lens space consistent with the non-detection of pairs of matched circles in the CMB sky.
The quoted cosmology-dependent value of Ω_K is computed by requiring d_max^(p,q) = (Ω_K)
for the fit (<ref>) to Planck 2018 cosmological data, as shown in panel (a) of <ref>.
Again we stress that the limit (<ref>) is independent of the specific values of cosmological parameters (and the details of power spectrum), whereas the Ω_K limit displayed in Fig. <ref> is not.
§ CONCLUSIONS
The local geometry of the Universe is nearly flat, which means that it is consistent with a small, positive (or negative), isotropic 3-curvature <cit.>.
However, positive curvature does not imply that the topology is necessarily that of the covering space of spherical geometry, S^3.
The 3-sphere admits a countable infinity of other topologies, among them the lens spaces L(p,q), which are quotients of S^3 by ℤ_p, the cyclic group of order p.
The integer parameter q (with 0<q<p) indexes different realizations of such quotients (cf., Eq. (<ref>)), though not all values of q in this range give manifolds, and not all values that do give manifolds are distinct, either topologically or physically.
All manifolds with S^3 local geometry, and in particular all lens spaces, are compact, and the larger p is, the smaller the volume of the space for fixed curvature radius R_c.
Meanwhile, independent analyses of both WMAP and Planck temperature data have shown us that the shortest closed distance around the Universe through our location is greater than f_O=98.5% of the diameter of the last-scattering surface, d_LSS
(as remarked above, we take f_O=1 rather than 0.985 for simplicity).
For fixed values of R_c, this places an upper limit on p and more specifically restricts the values of (p,q) according to (<ref>).
We have shown that this limit on (p,q) is considerably more stringent than the previous limit (<ref>) for most values of (p,q).
In Fig. <ref>, we have presented this limit, giving cosmology-independent values of the maximum-minimum clone separation in units of R_c, d_max^(p,q), and (in square brackets) the corresponding cosmology-dependent lower limits on Ω_K (equivalently, the maximum allowed |Ω_K|) given the empirical relation (<ref>) obtained from the primary CMB anisotropies measured by Planck <cit.>.
Future work will extend these limits to the other S^3 topologies: the prism manifolds (a.k.a. dihedral spaces), and the tetrahedral, octahedral, and icosahedral spaces.
We will also present the spin-0 and spin-2 eigenmodes of spherical geometries, and the correlation matrices of density and CMB fluctuations of all types, allowing us to predict the statistical properties of the CMB and of other cosmological observables in all S^3 manifolds.
S.S., C.J.C., and G.D.S. thank D. Singer for extended conversations about the geometry and topology of S^3 manifolds.
We thank J. Weeks for several valuable conversations.
This work made use of the High-Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University. Y.A. acknowledges support by the Spanish Research Agency (Agencia Estatal de Investigación)'s grant RYC2020-030193-I/AEI/10.13039/501100011033, by the European Social Fund (Fondo Social Europeo) through the Ramón y Cajal program within the State Plan for Scientific and Technical Research and Innovation (Plan Estatal de Investigación Científica y Técnica y de Innovación) 2017-2020, and by the Spanish Research Agency through the grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S funded by MCIN/AEI/10.13039/501100011033. T.S.P. acknowledges financial support from the Brazilian National
Council for Scientific and Technological Development (CNPq) under grants 312869/2021-5
and 88881.709790/2022-0.
C.J.C., G.D.S., A.K., and D.P.M. acknowledge partial support from NASA ATP grant RES240737; G.D.S. and Y.A. from the Simons Foundation; G.D.S. and A.S. from DOE grant DESC0009946; G.D.S., Y.A., and A.H.J. from the Royal Society (UK); and A.H.J. from STFC in the UK.
A.T. is supported by the Richard S. Morrison Fellowship.
G.D.S. and Y.A. thank the INFN (Sezione di Padova), and G.D.S., S.A., D.P.M., and A.T. thank the IFT for hospitality where part of this work was accomplished.
| http://arxiv.org/abs/2409.02300v1 | 20240903212713 | TreeTOp: Topology Optimization using Constructive Solid Geometry Trees | ["Rahul Kumar Padhy", "Pramod Thombre", "Krishnan Suresh", "Aaditya Chandrasekhar"] | cs.CE | ["cs.CE", "cs.NA", "math.NA"] |
TreeTOp: Topology Optimization using Constructive Solid Geometry Trees
Rahul Kumar Padhy, Pramod Thombre, Krishnan Suresh, Aaditya Chandrasekhar
=========================================================================
§ ABSTRACT
Feature-mapping methods for topology optimization (FMTO) facilitate direct geometry extraction by leveraging high-level geometric descriptions of the designs. However, FMTO often relies solely on Boolean unions, which can restrict the design space. This work proposes an FMTO framework leveraging an expanded set of Boolean operations, namely, union, intersection, and subtraction. The optimization process entails determining the primitives and the optimal Boolean operation tree. In particular, the framework leverages a recently proposed unified Boolean operation approach. This approach presents a continuous and differentiable function that interpolates the Boolean operations, enabling gradient-based optimization. The proposed methodology is agnostic to the specific primitive parametrization and is showcased through various numerical examples.
§ INTRODUCTION
Topology optimization (TO) has emerged as a powerful computational tool for achieving optimal material distribution within a design domain, subject to constraints <cit.>. The maturity of TO methods is evident in their widespread industrial application, particularly facilitated by commercial software.
While various TO methods exist, density-based approaches are widely adopted <cit.>. Popular density-based approaches discretize the design space into finite elements and optimize a fictitious material density within each element to generate organic, free-form designs <cit.>. Consider, for instance, the design domain and boundary conditions in <ref>(a). An optimal design that maximizes stiffness, subject to a volume constraint, using density-based TO is illustrated in <ref>(b). While offering design freedom, the resulting designs can be challenging to interpret <cit.> and modify <cit.>. Furthermore, the density-based TO designs often require extensive post-processing <cit.> that leads to a deviation between the final design and the optimal solution <cit.>. This deviation becomes more pronounced when the structure is manufactured using stock material, with components of fixed shape but variable dimensions <cit.>.
To address these limitations, alternative techniques, referred to as feature mapping-based topology optimization (FMTO) <cit.>, have emerged. These approaches utilize high-level geometric descriptors (or primitives), to represent the design. For the design domain and boundary conditions illustrated in <ref>(a), an optimized design obtained using FMTO is presented in <ref>(b). By parameterizing the design using primitives, FMTO facilitates the upfront enforcement of manufacturing rules, and also aids in design interpretation <cit.>. However, FMTO methods often rely solely on Boolean unions, which can limit design flexibility and fail to exploit the complex geometric operations available in modern computer-aided designing (CAD) systems <cit.>. While some FMTO methods incorporate Boolean operations other than union, the constructive solid geometry (CSG) tree is typically predefined <cit.>. The optimization process is then limited to reordering operations or splitting features to introduce new branches <cit.>.
§.§ Contributions
This work introduces an FMTO framework with the following contributions:
* Expanded Boolean Operations: We extend traditional feature-mapping methods, typically limited to Boolean unions, to incorporate subtraction and intersection operations, significantly expanding the design space. Specifically, we employ a unified Boolean operation formulation <cit.>, where an interpolation function represents various Boolean operations. This formulation enables continuous transitions between different Boolean operations, thus facilitating gradient-based optimization.
* Concurrent Optimization: Unlike prior research <cit.> that focuses on optimizing only primitive parameters with fixed operators, our framework optimizes both primitive parameters and the Boolean operators simultaneously.
Since we rely on the unified Boolean operation presented in <cit.>, we also inherit the following features:
* Mitigating Vanishing Gradients: The utilization of unified Boolean operations, facilitated by a continuous and monotonic interpolation scheme, enables gradient-based optimization while mitigating the issue of vanishing gradients.
* Primitive Parameterization Flexibility: The proposed method is agnostic to the specific primitive parameterization, accommodating a wide range of geometric primitives.
§ RELATED WORK
As mentioned earlier, FMTO parameterizes the topology via high-level geometric features <cit.> or primitives that are mapped onto a mesh for analysis <cit.>. FMTO is driven by the need to integrate features into free-form designs, control specific structural dimensions, utilize stock materials, and generate a geometric representation that can be interpreted by CAD systems <cit.>. Some of the earliest work in this area combines free-form topology optimization with embedded geometric primitives <cit.>; for a detailed review, see <cit.>.
We focus on methods that represent the design exclusively using geometric primitives <cit.>. In particular, we discuss four popular approaches that represent the design as a union of primitives capable of translating, scaling, and modifying their shapes <cit.>:
* Moving morphable components/voids (MMC/MMV) method: The MMC/MMV method uses geometric primitives, such as B-spline-shaped holes or components, to represent the design <cit.>. This approach allows for control of the design's geometry by explicitly manipulating the boundaries of these primitives <cit.>.
* Geometry projection (GP) method: The GP method <cit.> uses geometric primitives, such as bars <cit.>, supershapes <cit.> and plates <cit.>, to optimize structural designs by projecting these primitives onto a fixed finite element mesh. This approach has been applied in various contexts, including 3D topology optimization <cit.>, multi-material design <cit.>, and the optimization of unit cells for lattice materials <cit.>.
* Method of Moving Morphable Bars (MMB): The MMB uses round-ended bars as primitives, allowing them to overlap and modify their shape and position within the design <cit.>.
* Moving Node Approach (MNA): Finally, the MNA uses polynomial functions to project geometric primitives, representing the building blocks of the design as mass nodes <cit.>.
The above approaches rely on the Boolean union of primitives <cit.>, which can restrict design flexibility. We propose a generalized framework in (<ref>) to enhance design flexibility. <Ref> covers the framework's key components: primitive parametrization (<ref>), projection of primitives onto the density field (<ref>), the use of an expanded set of Boolean operations (union, intersection, subtraction) on the density fields (<ref>), and optimization strategy (<ref>). Several examples demonstrating the proposed method are presented in <ref>. Finally, <ref> concludes this work.
§ PROPOSED METHOD
§.§ Overview
This study focuses on gradient-based topology optimization, which minimizes structural compliance under a volume constraint. We assume that the design domain, loads, and restraints have been prescribed. While the framework is agnostic to the type of primitives used, we will assume that the primitives are polygons for ease of implementation. The CSG tree is assumed to be a perfectly balanced binary tree with a specified depth. Our objective is to obtain an optimal configuration of the polygons at the leaf nodes and Boolean operations at all non-leaf nodes that, when applied, results in the optimized design; see <ref>.
§.§ Primitive Parametrization
We populate the leaf nodes of a perfectly balanced binary CSG with polygonal primitives. In particular, a tree with a (given) depth of n_d (discussed in <ref>) would correspond to n_p = 2^n_d primitives at the leaf nodes. The primitives are parameterized using the method proposed in <cit.> where:
* Each polygon primitive p^(i) is associated with a reference point (c_x^(i), c_y^(i)), and is the intersection of S (S ≥ 3) (straight-line) half-spaces, resulting in a polygon with {3, …, S} sides; S = 6 in <ref>(a).
* Each half-space h_j^(i) is initially oriented at an angle α̃^(i) = [0, π/3, 2π/3, …] (see <ref> (a)) and has an offset distance d_j^(i) > 0 from the reference point (see <ref> (b)).
* To vary the overall orientation of the polygon, we allow for the rotation of all of its half-spaces by an angle θ^(i), resulting in the final orientation angle α_j^(i) = θ^(i) + α̃_j^(i) for the j^th half-space (see <ref>(c)).
In summary, each polygon p^(i) is parameterized as:
p^(i) = {c_x^(i), c_y^(i), θ^(i), d_1^(i), …, d_S^(i)}, i = 1,…, n_p
Note that, by construction, each polygon is non-empty and can have between 3 and S sides <cit.>.
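As a concrete illustration (a minimal Python sketch, not the reference implementation; equally spaced initial angles α̃_j = 2πj/S are assumed, consistent with the S = 6 example above), one primitive can be stored as the parameter set of (<ref>), with the final half-space orientations recovered as α_j = θ + α̃_j:

import numpy as np

def make_primitive(cx, cy, theta, offsets):
    """Polygon primitive p = {c_x, c_y, theta, d_1, ..., d_S}, with offsets d_j > 0."""
    return {"cx": cx, "cy": cy, "theta": theta, "d": np.asarray(offsets, dtype=float)}

def half_space_angles(theta, S):
    """Final orientations alpha_j = theta + alpha_tilde_j, with alpha_tilde_j = 2*pi*j/S."""
    return theta + 2.0 * np.pi * np.arange(S) / S

# Example: one hexagonal primitive (S = 6) centred at the origin.
prim = make_primitive(0.0, 0.0, np.pi / 12, [0.3] * 6)
print(half_space_angles(prim["theta"], 6))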
§.§ Geometry Projection
The aim of geometry projection is to map primitives, defined by the polygon's parameters (<ref>) onto a density field defined over a mesh <cit.>. To maintain differentiability, we first map each primitive (<ref>(a)) to a signed distance field (SDF), where the value of the SDF at any point (x, y) is defined as the shortest signed-distance to the primitive’s boundary (inside being negative and outside being positive). We begin by defining the SDF of each half-space as (<ref>(b)):
ϕ̂_j^(i)(x, y) = (x - c_x^(i))cos(α_j^(i)) + (y - c_y^(i))sin(α_j^(i)) - d_j^(i)
Since the SDF value of a point inside the polygon with respect to a half-space is negative, the SDF of the entire polygon at a given point is obtained by computing the maximum of the SDFs from each half-space at that point (<ref>(c)). To ensure differentiability, we use the LogSumExp (LSE) approximation of maximum <cit.>. This yields the SDF of the i^th polygon:
ϕ^(i)(x, y) = (l_0/t) log( ∑_j=1^S exp( (t/l_0) ϕ̂_j^(i)(x, y) ) )
where, t is the LSE scaling factor (discussed later in <ref>) and l_0 is the length of the diagonal of the domain bounding box.
Next, we compute the density field of the polygon ρ̃(·), from ϕ(·) using the Sigmoid function; see <ref>(d):
ρ̃^(i)(x,y) = 1/( 1 + exp( (γ/l_0) ϕ^(i)(x,y) ) )
Observe that negative SDF values (representing the polygon's interior) are mapped to a density of one (solid), while positive values (representing the polygon's exterior) are mapped to zero (void). SDF values near zero (representing the polygon's boundary), are projected to intermediate density values ρ̃∈ (0,1), with the sharpness of transition controlled by the hyperparameter γ.
Finally, we impose a threshold filter on ρ̃ to drive intermediate densities towards 1/0 as <cit.>:
ρ(ρ̃) = [ tanh(β/2) + tanh(β (ρ̃ - 1/2)) ] / [ 2 tanh(β/2) ]
where the parameter β controls the sharpness of the threshold function. The obtained primitive densities are subsequently combined using the Boolean operations (as detailed in the following section) to obtain the design density.
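For illustration, the projection chain above can be sketched in a few lines of NumPy (an illustrative sketch rather than the reference implementation; the hyperparameter values t, γ, and β are placeholders, and the sigmoid sign is chosen so that the interior, where the SDF is negative, maps to a density of one):

import numpy as np

def primitive_density(xy, cx, cy, theta, d, l0, t=20.0, gamma=150.0, beta=8.0):
    """Project one polygon primitive onto a density field at the points xy of shape (N, 2).

    Steps: half-space SDFs -> LogSumExp maximum (smooth polygon SDF) ->
    sigmoid projection -> tanh threshold filter.
    """
    S = d.size
    alpha = theta + 2.0 * np.pi * np.arange(S) / S                        # half-space orientations
    x, y = xy[:, 0:1], xy[:, 1:2]
    phi_hat = (x - cx) * np.cos(alpha) + (y - cy) * np.sin(alpha) - d     # signed distances
    phi = (l0 / t) * np.log(np.sum(np.exp((t / l0) * phi_hat), axis=1))   # smooth maximum
    rho_tilde = 1.0 / (1.0 + np.exp((gamma / l0) * phi))                  # interior (phi < 0) -> 1
    rho = (np.tanh(beta / 2) + np.tanh(beta * (rho_tilde - 0.5))) / (2 * np.tanh(beta / 2))
    return rho

# Example: density of a hexagonal primitive evaluated on a coarse 5 x 5 grid.
g = np.linspace(-1.0, 1.0, 5)
pts = np.array([[xi, yi] for yi in g for xi in g])
rho = primitive_density(pts, 0.0, 0.0, 0.0, d=np.full(6, 0.4), l0=2.0 * np.sqrt(2.0))
print(rho.reshape(5, 5).round(2))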
§.§ Unified Boolean Operations
The final design density is obtained by applying Boolean operators (<ref>) to the primitive density fields (operands) defined in the previous section. A successful Boolean operations approach suitable for gradient-based optimization (<ref>) should possess the following properties:
* Differentiability with respect to operands (density fields).
* Differentiability with respect to the Boolean operators (union (∪), intersection(∩), difference(↦), negative difference(); see <ref>).
FMTO methods that demonstrate differentiability with respect to the operands <cit.> have been extensively employed. Further, the union and intersection operators are often approximated using smooth maximum/minimum functions <cit.>. Alternative techniques, such as the smooth blending operator <cit.> and R-functions <cit.>, have been proposed, exhibiting differentiability with respect to the operands and individual operators. However, a smooth transition between the operators is difficult to achieve. Furthermore, while a unified Boolean approach based on the modified R-function <cit.> was introduced to facilitate differentiable transitions between operators, it failed to satisfy key Boolean operator axioms, resulting in unexpected behavior <cit.>.
To address these issues, we adopt a unified Boolean operation approach proposed in <cit.>. Let 𝒳 and 𝒴 represent two primitives (polygons), with their corresponding density functions ρ_x and ρ_y. Then, a generalized Boolean operation between the two primitives is defined as (<ref>):
ℬ(ρ_x, ρ_y ; b) = (b_1 + b_2) ρ_x + (b_1 + b_3) ρ_y + (b_0 - b_1 - b_2 - b_3) ρ_x ρ_y
where b = {b_0, b_1, b_2, b_3} are interpolating parameters. The interpolating parameters are constrained by 0 ≤ b_i ≤ 1 and ∑_i=0^3 b_i = 1. When b is a one-hot vector, we recover the standard Boolean operators (<ref>):
Observe that <ref> continuously interpolates between individual operators. For the two polygons in <ref>, the continuous interpolation between the intersection and union operators is illustrated in <ref>, while a generic operator state is illustrated in <ref>.
Furthermore, the function ℬ is differentiable with respect to both the operands (ρ_x, ρ_y) and the operators (b_i). For example, the derivative w.r.t operand ρ_x is:
∂ℬ/∂ρ_x = (b_1 + b_2) + (b_0 - b_1 - b_2 - b_3)ρ_y
while the derivative with respect to operator coefficient b_1 is
∂ℬ/∂ b_1 = ρ_x + ρ_y - ρ_xρ_y
Finally, with the interpolated operators defined at all non-leaf nodes and the primitive densities at the leaf nodes, we can evaluate the CSG tree bottom-up to obtain the density field ρ(x,y) of the evolving design at any instance.
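A minimal sketch of this evaluation (illustrative only; the operator weights b are taken as given here, whereas in the optimization they are produced from free design variables, as described in the following sections) is:

import numpy as np

def boolean_op(rho_x, rho_y, b):
    """Unified Boolean operation B(rho_x, rho_y; b) with weights b = (b0, b1, b2, b3).

    A one-hot b recovers: b0 -> intersection, b1 -> union,
    b2 -> difference (x minus y), b3 -> negative difference (y minus x).
    """
    b0, b1, b2, b3 = b
    return (b1 + b2) * rho_x + (b1 + b3) * rho_y + (b0 - b1 - b2 - b3) * rho_x * rho_y

def evaluate_csg(leaf_densities, operator_weights):
    """Evaluate a perfectly balanced binary CSG tree bottom-up.

    leaf_densities: list of 2**n_d primitive density arrays (the leaves).
    operator_weights: list of levels (leaves -> root), each a list of b vectors.
    Returns the density field at the root node.
    """
    level = list(leaf_densities)
    for ops in operator_weights:
        level = [boolean_op(level[2 * i], level[2 * i + 1], ops[i])
                 for i in range(len(level) // 2)]
    return level[0]

# Example with n_d = 2: a union and a difference at the first level, an intersection at the root.
leaves = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
          np.array([1.0, 1.0]), np.array([1.0, 0.0])]
ops = [[(0, 1, 0, 0), (0, 0, 1, 0)],   # level 1: union, difference
       [(1, 0, 0, 0)]]                 # root: intersection
print(evaluate_csg(leaves, ops))       # -> [0. 1.]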
§.§ Finite element analysis
For finite element analysis, we use here a structured mesh with bilinear quad elements. Having obtained the design density field ρ(x,y), let ρ_e be the density at the center of element e. The corresponding Young's modulus E_e is obtained using the Solid Isotropic Material Penalization (SIMP) model <cit.> as:
E_e = E_min + (E_0 - E_min)(ρ_e)^p
where E_0 is Young's modulus of the solid material, E_min is a small constant added to prevent a singular global stiffness matrix, and p is the SIMP penalty. One can evaluate the element stiffness matrix as
[K_e] = E_e ∫_Ω_e [B]^T[D_0][B] d Ω_e
where [B] is the strain matrix, and [D_0] is the constitutive matrix with a Young’s modulus of unity and an assumed Poisson's ratio (see Table <ref>). This is followed by the assembly of the global stiffness matrix K. Finally, with the imposed nodal force vector f, we solve the state equation for the nodal displacements u = K^-1f.
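For reference, the SIMP interpolation above is a one-line mapping from element densities to element stiffness (a sketch with placeholder values for E_0, E_min, and the penalty p):

import numpy as np

def simp_modulus(rho_e, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation E_e = E_min + (E_0 - E_min) * rho_e**p."""
    return Emin + (E0 - Emin) * np.asarray(rho_e, dtype=float) ** p

print(simp_modulus([0.0, 0.5, 1.0]))   # ~ [1e-09, 0.125, 1.0]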
§.§ Optimization
We now summarize various aspects of the optimization problem:
Objective: We consider here a compliance minimization problem, where the compliance is computed as J = f^T u.
Volume constraint: The design is subjected to a total volume constraint. With v_f^* being the maximum allowed volume fraction and v_e being the element areas, the volume constraint g_v is defined as:
g_v ≡ [ ∑_e=1^N_e ρ(x_e) v_e ] / [ v_f^* ∑_e=1^N_e v_e ] - 1 ≤ 0
Optimization variables: The optimization variables are as follows:
* Recall that the design is defined by polygon parameters, including center coordinates, angular offset, and plane distances, resulting in n_p(S+3) free parameters (see Section <ref>). Additionally, the design requires n_b = 2^n_d - 1 boolean operations (b⃗ = b^(1), …, b^(n_b)) for a tree of depth n_d.
* For optimization, we define an augmented normalized design vector z = [z_c_x, z_c_y, z_θ, z_d, z_b] that lie in [0,1]. With c_x, c_y, θ, d being the lower bound and c_x, c_y, θ, d being the upper bound on the parameters of the polygons, we can retrieve the unnormalized x-center as c_x ←c_x + (c_x - c_x) z_c_x and so on. The design variable z_b^(i), corresponding to the i^th boolean operator is transformed into one-hot encoding using a softmax function <cit.>.
* All design variables z are uniform-randomly initialized (using ) with a default seed value of 2.
TO problem: The final TO problem is posed as:
zminimize J = f^T u(z)
subject to K(z)u = f
g_v (z) ≤ 0
0 ≤ z_i ≤ 1 ∀ i
Optimization method: We employ the method of moving asymptotes (MMA) <cit.> to perform the design updates. Specifically, we use a Python implementation with all default parameters corresponding to the version of MMA presented in <cit.>. The choices of the MMA move limit, the step tolerance, the Karush-Kuhn-Tucker (KKT) tolerances, etc are specified later under numerical experiments.
§.§ Sensitivity Analysis
Sensitivity computation, a crucial aspect of gradient-based optimization, involves determining the derivatives of the objective function and constraints with respect to the optimization parameters. Traditionally, this is conducted manually, which can be labor-intensive and error-prone. However, by utilizing the automatic differentiation (AD) <cit.> capabilities of frameworks such as JAX <cit.>, this step can be fully automated, ensuring accurate and efficient computation. In practical terms, we only need to define the forward expressions, and the derivatives of the objective and volume constraint with respect to the optimization variables is automatically computed with machine precision. Finally, we summarize the algorithm for the proposed framework in <ref>.
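As a small self-contained check of this approach (a sketch using JAX, the library employed in this work; the numerical values are arbitrary test values), automatic differentiation reproduces the analytical derivatives of the unified Boolean operation quoted in (<ref>) and (<ref>):

import jax
import jax.numpy as jnp

def boolean_op(rho_x, rho_y, b):
    """Unified Boolean operation B(rho_x, rho_y; b), with b = (b0, b1, b2, b3)."""
    return ((b[1] + b[2]) * rho_x + (b[1] + b[3]) * rho_y
            + (b[0] - b[1] - b[2] - b[3]) * rho_x * rho_y)

rho_x, rho_y = 0.7, 0.2
b = jnp.array([0.1, 0.4, 0.3, 0.2])

# dB/drho_x from AD versus the analytical (b1 + b2) + (b0 - b1 - b2 - b3) * rho_y
print(jax.grad(boolean_op, argnums=0)(rho_x, rho_y, b),
      b[1] + b[2] + (b[0] - b[1] - b[2] - b[3]) * rho_y)

# dB/db1 from AD versus the analytical rho_x + rho_y - rho_x * rho_y
print(jax.grad(boolean_op, argnums=2)(rho_x, rho_y, b)[1],
      rho_x + rho_y - rho_x * rho_y)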
§ NUMERICAL EXPERIMENTS
In this section, we conduct several experiments to illustrate the proposed framework. The default parameters in the experiments are summarized in Table <ref>. All experiments are conducted on a MacBook M2 Air, using the JAX library <cit.> in Python. Additionally, all design parameters z are uniform randomly initialized.
§.§ Validation
We compare the designs from our proposed framework with those obtained using the SIMP-based optimization <cit.>. Using the default parameters outlined in <ref>, and with n_d=4, we optimize both the Boolean operators and polygon parameters for an MBB beam (<ref>(a)). In the SIMP implementation, the density field is optimized with a filtering radius of 1.3, targeting a volume fraction of v_f^* = 0.5 (<ref>(b)). For the proposed method, we display the final design in <ref>(c). We observe that both methods yield similar designs and performance. For illustration, we present the complete resulting CSG tree in <ref>.
Additionally, the convergence is illustrated in <ref>. The resulting density fields at the root nodes for the 10^th, 50^th, 75^th, and final (111^th) iterations are shown as insets. In comparison, the SIMP-based implementation took 126 iterations to converge. In our framework, the percentage of computational time is as follows: 0.6% for geometry projection, 8.1% for CSG tree evaluation, and 91.3% for FEA and sensitivity analysis; each iteration takes 1.6 seconds.
§.§ Effect of Tree Depth
In this experiment, we examine the effect of the CSG tree depth n_d on the performance of the design. We revisit the MBB beam in <ref>(a), with v_f^* = 0.5, and investigate the impact of different values of n_d. The CSG tree for n_d =2 and n_d =3 are illustrated in <ref>, while the final designs for n_d =4,5,6,7 are illustrated in <ref>. The performance improves as n_d increases; no significant improvements were observed beyond the depth of n_d = 4. Based on similar experiments, we recommend using a depth n_d ≥ 4; we use a depth of n_d = 6 for the remainder of the experiments.
§.§ Flexibility of Framework
In this experiment, we demonstrate the framework's ability to obtain optimal designs with specific Boolean structures. We again consider the MBB problem (<ref>(a)) with v_f^* = 0.5. First, we set the root node operator as a difference operator, resulting in the design shown in <ref>(a). Observe that the optimization can sometimes result in empty nodes. These nodes and their descendants are detected and pruned <cit.>. Next, we only allowed union operators; the resulting topology is illustrated in <ref>(b).
§.§ Mesh dependency
Next, we study the effect of the FE mesh on the computed topology using the MBB problem with v_f^*=0.5 (<ref>(a)), while keeping all other parameters at their default values. <Ref> presents the topologies for varying mesh sizes: 60 × 30, 80 × 40, and 100 × 50 elements. No significant difference in performance was observed across these mesh sizes. However, we note that the designs exhibit less noise as the number of mesh elements increases. The boundary exhibits small features for a coarse mesh, since the latter fails to capture the impact of small features on the performance. However, we have observed that these undesirable features tend to reduce as the mesh size increases.
§.§ Effect of Initialization
Recall all design variables z are uniform-randomly initialized with a default seed value of 2. We now investigate the influence of the seed value on the optimal designs. While all topology optimization techniques are inherently sensitive to the initial design <cit.>, it has been observed that this dependency is particularly pronounced in feature-mapping techniques <cit.>. In this specific example, we consider the design of an MBB beam (<ref>(a)) with 100 × 50 mesh elements. The resulting designs and their corresponding performances for various initial seeds are compared in <ref>. Observe that while we obtain diverse designs, the performances are similar. This suggests that, as expected, the landscape is highly non-convex with numerous local solutions.
§.§ Pareto
An essential consideration during the design phase is the exploration of the Pareto front; evaluating the trade-offs associated with various design choices. Let us consider the mid cantilever beam problem illustrated in <ref>(a). With 100 × 50 mesh elements, n_d = 6 and S = 6, we investigate the trade-off between the structure's compliance and volume fraction. <ref>(b) illustrates the resulting designs (and the intermediate densities at the first depth) at various volume fractions. As anticipated, we observe an increase in compliance as the allowable volume fraction is decreased. We did not impose explicit symmetry requirements on the CSG tree.
§ CONCLUSION
We introduce an FMTO framework that concurrently optimizes both the primitive parameters and the Boolean operators (union, intersection, and subtraction). Notably, by leveraging a unified Boolean operation approach, the framework improves upon existing FMTO methods that typically rely on predefined CSG trees and/or are limited to optimizing only primitive parameters operated with unions. The efficacy of the proposed method was demonstrated through various numerical examples.
The generated tree structure appears unnatural and dissimilar to human-designed trees. This presents an opportunity to combine the proposed method with machine learning-based frameworks <cit.> to mimic human-designed CSG trees. The method also suffers from some of the shortcomings inherent to FMTO methods. For example, the optimizer is highly susceptible to being trapped in local optima; it is also sensitive to initialization. Additionally, the optimizer may fail to converge to an optimal solution when fewer primitives are used <cit.>.
Nevertheless, several avenues for future research remain. For instance, manufacturing constraints such as symmetry and feature-size control must be incorporated. One promising direction is to incorporate CNC operations into the TO process, ensuring that designs are both optimal and manufacturable <cit.>. Further, optimizing the primitive type and depth of the tree in conjunction with the operators and primitive parameters requires investigation <cit.>.
§ ACKNOWLEDGMENTS
The University of Wisconsin, Madison Graduate School supported this work.
§ COMPLIANCE WITH ETHICAL STANDARDS
The authors declare that they have no conflict of interest.
§ REPLICATION OF RESULTS
The Python code is available at https://github.com/UW-ERSL/TreeTOp
| http://arxiv.org/abs/2409.03529v1 | 20240905133853 | Irregular moons possibly injected from the outer solar system by a stellar flyby | ["Susanne Pfalzner", "Amith Govind", "Frank Wagner"] | astro-ph.EP | ["astro-ph.EP", "astro-ph.GA", "astro-ph.SR"] | |
http://arxiv.org/abs/2409.03665v1 | 20240905161803 | Quantum reservoir computing on random regular graphs | ["Moein N. Ivaki", "Achilleas Lazarides", "Tapio Ala-Nissila"] | quant-ph | ["quant-ph", "cond-mat.dis-nn", "cond-mat.str-el"] |
Quantum Technology Finland Center of Excellence, Department of Applied Physics, Aalto University, P.O. Box 11000, FI-00076 Aalto, Finland
Interdisciplinary Centre for Mathematical Modelling and Department of Mathematical Sciences, Loughborough University, Loughborough, Leicestershire LE11 3TU, United Kingdom
§ ABSTRACT
Quantum reservoir computing (QRC) is a low-complexity learning paradigm that combines the inherent dynamics of input-driven many-body quantum systems with classical learning techniques for nonlinear temporal data processing. Optimizing the QRC process and computing device is a complex task due to the dependence of many-body quantum systems on various factors. To explore this, we introduce a strongly interacting spin model on random regular graphs as the quantum component and investigate the interplay between static disorder, interactions, and graph connectivity, revealing their critical impact on quantum memory capacity and learnability accuracy. We tackle linear quantum and nonlinear classical tasks, and identify optimal learning and memory regimes through studying information localization, dynamical quantum correlations, and the many-body structure of the disordered Hamiltonian. In particular, we uncover the role of previously overlooked network connectivity and demonstrate how the presence of quantum correlations can significantly enhance the learning performance. Our findings thus provide guidelines for the optimal design of disordered analog quantum learning platforms.
Quantum reservoir computing on random regular graphs
Tapio Ala-Nissila
====================================================
Introduction.
Quantum machine learning leverages principles of quantum computing to enhance and accelerate traditional machine learning algorithms, offering potential breakthroughs in data processing and complex problem solving beyond classical capabilities <cit.>. The high-dimensional complex space of quantum states and the rich dynamics of quantum channels offer possibilities for designing low-complexity and high-performing platforms for quantum information processing <cit.>. Quantum reservoir computing (QRC) has recently emerged as a promising approach toward temporal data processing without the need for often inefficient and inaccurate gradient optimization <cit.>. QRC leverages the inherent dynamics of disordered and dissipative quantum systems to learn, predict, and classify various linear and nonlinear temporal tasks, both quantum and classical, including those inspired by human brain functions <cit.>. Essentially, QRC generalizes the classical reservoir learning frameworks such as chaotic liquid state machines and echo-state networks, which are known to significantly reduce the optimization complexity of conventional recurrent neural networks <cit.>. In QRC, a discrete or continuous stream of inputs interacts with a quantum system (the “reservoir”) and the system is then allowed to undergo quantum dynamical evolution described by a completely positive and trace-preserving quantum map <cit.>. After some time, determined by physical parameters, a set of measurements in the reservoir are post-processed using classical learning techniques. This is then the computing device. One should compare this approach with variational quantum algorithms on noisy systems <cit.>, which can suffer from notorious barren plateaus. In these cases, the optimization complexity diverges rapidly as the number of optimization parameters grows, causing learning to eventually fail and limiting scalability <cit.>. On the contrary, QRC systems can benefit from noise and dissipation and are much easier to scale up <cit.>.
Utilizing disordered, interacting quantum many-body systems as computing reservoirs <cit.> requires identifying the system setups and parameters which optimize the learning process. To this end, here we study the dependence of learnability performance on the underlying model, in particular, on the connectivity of the spin model and the strength of the interactions and disorder. While previous theoretical studies have focused either on one-dimensional or on fully connected models for the quantum reservoir, we study spin models defined on random regular graphs (RRGs), and find that the graph degree together with quantum correlations can significantly impact learnability. Notably, in the fully connected limit we observe that some advantages diminish. This is because in this limit the reservoir becomes “too effective" at spreading the information contained in the inputs non-locally while the measurements that are used to feed the classical post-processing are local. Interestingly, it has been shown that for certain spin models, in this densely-connected limit the level statistics demonstrate a suppression of quantum chaos <cit.>, which might be relevant to our findings.
Model and dynamics. We consider the following Hamiltonian of interacting quantum spins on a RRG:
ℋ=∑_ij,αJ^α_ijσ^α_i σ^α_j +∑_i,αh_i^ασ^α_i,
where σ^α_i is the Pauli spin-1/2 operator at the vertex i and α∈{x,z} determines the spin direction. The coupling J^α_ij=J^αA_ij, where A_ij are the matrix elements of the adjacency matrix of the graph. The degree k of a vertex (site) is defined as the number of edges connected to that vertex, and a RRG is a graph where all vertices have the same degree while the edges are otherwise randomly assigned. Such model Hamiltonian can represent different types of systems, including ion traps, as well as nuclear and electronic spins <cit.>. We set h^α_i=h^α+δ^α_i with δ^α_i∈ [-Δ^α,Δ^α] and fix J^z=1, h^z=0, h^x=1, Δ^z=0.2. The last term ensures the absence of extra Hamiltonian symmetries. All energy and time-scales are given in terms of J^z and the model is generally non-integrable (including the case Δ^x=J^x=0 for k>2).
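For a small system, this Hamiltonian can be assembled by brute force; the sketch below (illustrative only, using NetworkX to draw the random regular graph — a tooling choice of this example, not necessarily of the original study) follows the parameter conventions quoted above:

import numpy as np
import networkx as nx

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i, N):
    """Embed a single-site operator at vertex i of an N-spin register."""
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def rrg_hamiltonian(N, k, Jz=1.0, Jx=0.0, hx=1.0, hz=0.0, Dx=0.0, Dz=0.2, seed=0):
    """H = sum_<ij> (Jz sz_i sz_j + Jx sx_i sx_j) + sum_i (h_i^x sx_i + h_i^z sz_i)
    on a random k-regular graph, with h_i^a = h^a + delta_i^a and delta_i^a in [-D^a, D^a]."""
    rng = np.random.default_rng(seed)
    G = nx.random_regular_graph(k, N, seed=seed)
    H = np.zeros((2**N, 2**N))
    for i, j in G.edges():
        H += Jz * site_op(sz, i, N) @ site_op(sz, j, N)
        H += Jx * site_op(sx, i, N) @ site_op(sx, j, N)
    for i in range(N):
        H += (hx + rng.uniform(-Dx, Dx)) * site_op(sx, i, N)
        H += (hz + rng.uniform(-Dz, Dz)) * site_op(sz, i, N)
    return H

H = rrg_hamiltonian(N=6, k=3, Jx=0.5, Dx=2.0, seed=1)
print(H.shape, np.allclose(H, H.T))   # (64, 64) True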
We separate the total system into the auxiliary system 𝒮 used for data input and the reservoir 𝒮' (see Fig. <ref>). Computation is carried out in steps, as follows: Input data is encoded into the density matrix ρ_𝒮 for 𝒮, and the entire system is set to ρ_𝒮𝒮'→ρ _𝒮⊗ρ _𝒮', where ρ_𝒮'= Tr_𝒮[ρ_𝒮𝒮']; thus the ancillary qubits are set to the inputs, and the reservoir is left in a reduced state. The entire system is then allowed to evolve unitarily for the time-scale J^zΔ t, and the process is then repeated. Thus after n input steps, the state of ρ_𝒮𝒮' is
ρ_𝒮𝒮' (n Δ t) = 𝒰(Δ t) ρ_𝒮,n⊗ρ_𝒮'[(n-1) Δ t ] 𝒰^†(Δ t),
with 𝒰(Δ t)=e^-iℋΔ t, ρ_𝒮,n encodes the n-th input and ρ_𝒮'[(n-1) Δ t ] is the reservoir state after evolving for Δ t after the n-1-th input step. The quantum map describing this dynamics is strictly contractive, ensuring fading-memory and convergence in an optimal dynamical regime <cit.>. This resembles the Stinespring representation of a quantum channel, where the evolution of a physical open quantum system can be described as partial trace of a unitary operation on a composite system in a dilated Hilbert space <cit.>. Alternatively, as a result of consecutive input injections and resetting, this is equivalent to an incoherent process on the auxiliary system, which can be approximated by local dissipators of rank dim(𝒮) <cit.>.
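Operationally, one step of this map is a partial trace over the ancillae, a tensor product with the freshly encoded input, and a unitary evolution. A NumPy sketch (illustrative only: a single input qubit, two reservoir qubits, and a random Hermitian placeholder instead of the Hamiltonian of (<ref>)) reads:

import numpy as np
from scipy.linalg import expm

def trace_out_aux(rho, dS, dR):
    """Trace out the auxiliary (first) factor of a state on H_S (x) H_S'."""
    return np.einsum('iris->rs', rho.reshape(dS, dR, dS, dR))

def reservoir_step(rho_SR, rho_in, U, dS, dR):
    """One injection step: reset the ancilla to the new input, then evolve unitarily."""
    rho_R = trace_out_aux(rho_SR, dS, dR)        # reservoir keeps its memory
    return U @ np.kron(rho_in, rho_R) @ U.conj().T

# Example: 1 ancilla qubit + 2 reservoir qubits, U = exp(-i H dt) with a random H.
rng = np.random.default_rng(0)
dS, dR = 2, 4
A = rng.normal(size=(dS * dR, dS * dR)) + 1j * rng.normal(size=(dS * dR, dS * dR))
U = expm(-1j * (A + A.conj().T) * 1.0)           # placeholder Hamiltonian, J*dt = 1
rho = np.kron(np.diag([1.0, 0.0]), np.eye(dR) / dR)
for eta in [0.0, 1.0, 1.0, 0.0]:                 # a short binary input stream
    rho = reservoir_step(rho, np.diag([1.0 - eta, eta]), U, dS, dR)
print(np.trace(rho).real.round(6))               # trace is preserved: 1.0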
Training, Learning and Readout. The input-output relation of a quantum reservoir supplemented with a classical learning layer can be summarized in a functional form as {y_n}=ℱ( {ρ_𝒮,n}, ρ_𝒮𝒮',n, 𝒲). Here, {y_n} indicates the set of predictions obtained after classical post-processing. These predictions are derived by minimizing an error measure with respect to a sequence of desired targets y_n through a learning process. During training, the measurements of local expectation values ⟨σ^α_i (nΔ t)⟩ = Tr[ρ_𝒮'(nΔ t) σ^α_i], and two-point correlation functions, ⟨σ^α_i(nΔ t) σ^α_j(nΔ t) ⟩, where i,j ∈𝒮', are used as “features". These features are combined linearly to fit the desired targets y_n, and the optimal weights 𝒲 are determined by this fitting process. Importantly, we only consider measurements in the computational basis α=z and do not apply time-multiplexing techniques <cit.>, which would drastically increase the number of required measurements. Moreover, we do not address the back-action of projective measurements <cit.>. However, by virtue of the fading-memory and echo-state properties of a suitable reservoir, a full temporal re-initialization of the system is unnecessary; only the recent state of the system is relevant for making accurate predictions <cit.>. To quantify the information retrieval accuracy, we employ the Pearson correlation coefficient 𝒞_n = cov^2(y_n, y̅_n) / [ var(y_n) var(y̅_n) ] <cit.>, where y̅_n denotes the predicted value at step n. The coefficient 𝒞_n is bounded between 1 and 0, indicating complete or no linear correlation between the predicted and target values, respectively. We also estimate prediction accuracy using the averaged mean-squared error, defined as MSE=∑_n^N_L(y_n-y̅_n)^2/N_L, where N_L is the length of the input data.
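Schematically, the classical read-out is then a regularized linear fit on the measured features; the following sketch (synthetic features standing in for the measured ⟨σ^z_i⟩ and ⟨σ^z_i σ^z_j⟩, and a hand-rolled ridge solver) illustrates the post-processing and the capacity measure 𝒞_n:

import numpy as np

def ridge_fit(X, y, lam=1e-4):
    """Ridge-regression weights W minimizing ||X W - y||^2 + lam ||W||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def capacity(y_true, y_pred):
    """Squared Pearson coefficient C = cov^2(y, y_pred) / (var(y) var(y_pred))."""
    c = np.cov(y_true, y_pred)
    return c[0, 1] ** 2 / (c[0, 0] * c[1, 1])

# Synthetic stand-in: 21 features (= N_S'(N_S'+1)/2 for 6 reservoir spins) per time step.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 21))                               # measured features, one row per step
y = X @ rng.normal(size=21) + 0.05 * rng.normal(size=1000)    # targets, e.g. delayed inputs
W = ridge_fit(X[:800], y[:800])                               # training steps
print(round(capacity(y[800:], X[800:] @ W), 3))               # evaluation steps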
Given this discussion, it is clear that the dynamical properties of the reservoir play a pivotal role in learning. In particular, localization behavior of quantum Hamiltonian models defined on RRGs is directly affected by the connectivity, i.e., the graph degree k <cit.>. Connectivity can also influence the integrability and chaotic properties in certain models <cit.>. In earlier work it was found that, akin to some classical computation models and for tasks which require both sufficient memory and degree of nonlinearity, QRC systems achieve optimal performance at the edge of chaos, or more generally at the vicinity of a (quantum) phase transition <cit.>. Here we will study whether this is also the case in our model.
Quantum Tomography and Memory. As our first example, we study a linear quantum task. Consider the following family of bipartite input quantum states, known as Werner states <cit.>
ρ_ W(η,d,t) = [ (d-1+η(t))/(d-1) ] 𝕀/d^2 - [ η(t)/(d-1) ] 𝒱/d ,
given as a mixture of a swap operator 𝒱 and a maximally mixed state 𝕀, with the mixing parameter η. The swap operator is defined as 𝒱(|Ψ⟩⊗|Φ⟩)=|Φ⟩⊗|Ψ⟩, exchanging the states of a bipartite quantum state. d is the dimension of the input and here indicates the number of ancillary qubits. We consider two-qubit input states and can write ρ_ W(η',t)= 1/4 (1-η'(t)) 𝕀 + η'(t) ρ_ B, with ρ_ B=|Φ⟩⟨Φ| and |Φ⟩=(|↑↓⟩-|↓↑⟩)/√(2) a singlet Bell state <cit.>. With 0≤η'≤1, ρ_ W is entangled for η'>1/(d+1) and separable otherwise <cit.>. For a given unitary 𝒰, this family of bipartite quantum states (by definition) satisfies ρ_ W=𝒰⊗𝒰 ρ_ W 𝒰^†⊗𝒰^†; a property which is of practical interest in quantum steering and communication protocols <cit.>. A large family of quantum states, including both Werner and isotropic states, can be expressed in a similar manner, where only a single mixing parameter can characterize the state uniquely. A high-fidelity temporal learning of the mixing parameter η(t) is thus a proxy for learning the dynamical evolution of quantum correlations of input states. Generalization of the described learning scheme to higher-dimensional inputs is straightforward and only requires the ability to encode such states. To evaluate the linear memory capacity, we set the learning target to recovering previous inputs, y_n,τ≡ y(nΔ t-τΔ t)=η' (nΔ t-τΔ t). The total memory capacity for delayed reconstruction of previous inputs is defined as 𝒞_T=∑_n,τ𝒞_n,τ, with τ≥0 an integer specifying the delay time and 𝒞_n,τ the Pearson correlation coefficient for y_n,τ. We note that the total memory is also bounded and we have 𝒞_n,τ→ 0 when τ→∞. For a fixed finite delay time, we can normalize the averaged total memory and write 𝒞_T=𝒞_T/τ_ max.
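For two qubits this family is simple to construct explicitly; the sketch below (illustrative) builds ρ_W(η') from the singlet projector and checks the separability threshold η' = 1/3 with the partial-transpose (PPT) criterion:

import numpy as np

def werner_two_qubit(eta):
    """rho_W = (1 - eta) I/4 + eta |Psi^-><Psi^-|, with the singlet |Psi^->."""
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)   # (|01> - |10>)/sqrt(2)
    return (1.0 - eta) * np.eye(4) / 4.0 + eta * np.outer(psi, psi)

def is_entangled(rho):
    """Peres-Horodecki (PPT) criterion, necessary and sufficient for two qubits."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)   # partial transpose
    return np.linalg.eigvalsh(pt).min() < -1e-12

for eta in [0.2, 1.0 / 3.0, 0.5, 0.9]:
    rho = werner_two_qubit(eta)
    print(f"eta'={eta:.2f}  trace={np.trace(rho):.2f}  entangled={is_entangled(rho)}")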
Optimal Learning Regimes. Figure <ref> displays a learning diagram for 𝒞_T and MSE of the predicted values in the Δ^x-J^x plane. The optimal learning regime for our model occurs at around the boundary of chaotic-localized phase transitions. Notably, for a given interaction timescale J^zΔ t and for a graph with a fixed degree k, the addition of interactions σ^x_i σ^x_j in the disordered regimes is advantageous to both memory and also short-term predication accuracy. As we will numerically establish later, this term represents an entangling and delocalizing interaction. The behavior of memory capacity as a function of the 𝒮-𝒮' interaction timescale J^zΔ t is shown for selected points in Fig. <ref>. In certain dynamical regimes, there can be a window where the largest memory performance is achieved, in accordance with previous studies <cit.>. As disorder increases, only high-degree reservoirs exhibit rapid initial growth of memory, followed by saturation in the long-time limit. Interestingly, in this set, the graphs with k=2,3 exhibit anomalously slow behavior, which reflects the slow propagation of information in the reservoir in these regimes. Adding the ordering interactions recovers the fast growth of the quantum memory capacity and allows the system to reach the optimal possible performance for all degrees; we attribute this to their delocalizing effect.
The main features discussed above are also evident in Fig. <ref>, where we additionally observe that, for the specified time intervals, higher connectivity elevates the memory capacity and improves the short-term prediction errors. However, the most optimal learning regime, in terms of both memory and accuracy, is achieved by tuning moderate interactions and computing on graphs with an intermediate k/N ratio. While reservoirs defined on higher degree graphs can evade localization in the presence of stronger disorder, it becomes exceedingly difficult to extract the non-locally hidden inputs information through only (quasi-) local measurements when k→ N-1. In such cases, retrieving the inputs information in a chosen measurement basis possibly requires measuring higher-order correlation functions of the form ⟨σ^α_iσ^α_j⋯σ^α_N⟩, which should be useful as extra learnable features in the training stage. Changes in the quantum chaotic behaviour in the highly-connected limit <cit.> may also be responsible for the drop in learning performance.
Correlation and Entanglement. The performance of quantum reservoirs can be related to fundamental and physical measures, such as degree of “quantumness" <cit.>, information scrambling and dynamical correlations <cit.>. Here we study an alternative version of the correlation operator introduced in Refs. <cit.>, and define the auxiliary system-reservoir correlation as χ(t)=ρ_𝒮𝒮'(0)-ρ_𝒮𝒮'(t), with ρ_𝒮𝒮'(0)=ρ_𝒮(0)⊗ρ_𝒮'(0) and ρ_𝒮𝒮'(t)=𝒰(t) ρ_𝒮𝒮'(0) 𝒰^†(t). We numerically calculate ‖χ(t)‖, with ‖·‖ the Hilbert-Schmidt norm. This probe measures the degree of total correlation introduced by the dynamics between the initially unentangled 𝒮 and 𝒮', and here it can also be interpreted simply as a distance measure. We set ρ_𝒮=ρ_W with some random η' and average over different realization of initial states and disordered Hamiltonians. As shown in Fig. <ref>(a), initially ‖χ(0)‖=0, indicating no correlation between auxiliary and reservoir degrees of freedom. As time passes by, ‖χ(t)‖ displays an initial growth within the time-scale τ∝ 1/Δ^x for Δ^x/J^z>1. Notably, with an optimal disorder level, composite system can avoid localization while leveraging chaotic dynamics for rapid and enhanced developments of correlations. While being sensitive to certain forms of information localization and the spectral properties of the underlying Hamiltonian, this measure lacks the ability to distinguish between quantum and classical correlations.
A more useful measure of the dynamical quantum correlation build-up is the mixed-state entanglement between the auxiliary system 𝒮 and the reservoir 𝒮'. We characterize this here by the logarithmic negativity defined as ℰ_SS'=log_2‖ρ^T_𝒮_𝒮𝒮'‖, where T_𝒮 indicates partial transpose with respect to 𝒮 and ‖·‖ is the trace norm <cit.>. Finite logarithmic negativity quantifies entanglement cost and entanglement of distillation <cit.>. The dynamical behavior of ℰ_SS' for different disorder strengths is shown in Fig. <ref>(b). Slow logarithmic growth of entanglement can be attributed to (the onset of) localization, which should support area-law scaling with the system size, as opposed to the usual volume-law scaling in chaotic quantum systems <cit.>. As can be seen, adding the ordering interactions σ^x_iσ^x_j in the large disorder regime recovers the fast growth and produces strong entanglement at short times. The presence of quantum correlations can in turn improve the memory capacity and learnability accuracy of disordered reservoirs, as observed in the previous section.
Classical Logical Multitasking. To showcase the ability of our spin reservoir in performing nonlinear tasks, we now consider classical logical multitasking <cit.>. Given two independent sequences of binary inputs, the network tries to simultaneously learn how to 𝙰𝙽𝙳, 𝙾𝚁 and 𝚇𝙾𝚁 them. We set the state of each input spin to ρ_n = (1-η_n)|↑⟩⟨↑| + η_n|↓⟩⟨↓| with η_n ∈{0,1} encoding the input bits. Figure <ref> displays the accuracy of the learned operations as the disorder and connectivity are varied. The 𝚇𝙾𝚁 operation is the only operation that cannot be separated linearly in the two-dimensional input space and shows higher sensitivities to tuning the strength of disorder. However, adding moderate interactions recovers the maximal performance in most regimes, consistent with earlier observations. This supports the expectation that excessive local disorder, in the absence of quantum interactions, can quickly undermine nonlinear information processing capabilities <cit.>. Remarkably, in this case the critical disorder strength for the accuracy of the 𝚇𝙾𝚁 to fall just below ≈ 0.7 behaves almost linearly as a function of the graph degree, offering a practical tool to control the performance of QRC systems.
Summary and Discussion. We have introduced a many-body spin reservoir defined on RRGs and evaluated its learning capabilities for various tasks. Our findings demonstrate that, in an optimal dynamical regime, given in terms of disorder, interactions, and connectivity, our model captures the key properties of a high-performing quantum reservoir without requiring time-multiplexing. Our work paves the way towards designing practical analog QRC platforms by linking their performance to fundamental physical and geometrical properties. In experimental realizations, utilizing spatial-multiplexing for small quantum systems is a practical option <cit.>. Possibly, a few random measurements can provide enough information to perform a given learning task with high fidelity <cit.>. Additionally, it would be informative to study how different (symmetry) classes of unitary operations <cit.> and non-trivial electronic topology <cit.> affect the learning performance. Finally, while here we focused on supervised learning, it is worth exploring the potential for unsupervised approaches for classification tasks.
Acknowledgements. We wish to thank Alexander Balanov, Juan Sebastian Totero Gongora, Sergey Savielev, and Alexandre Zagoskin for useful discussions. We acknowledge the support of the UK Research and Innovation (UKRI) Horizon Europe guarantee scheme for the QRC-4-ESP project under grant number 101129663 and the Academy of Finland through its QTF Center of Excellence program (Project No. 312298).
§ APPENDIX A: DETAILS OF CALCULATIONS
Here, we provide some details of our numerical calculations. As usual <cit.>, a number of initial steps are discarded to eliminate transient effects. Specifically, we discard N_ transient=600-800 steps. We use N_ train=1000-2000 steps for training, where each “training steps” refers to a point in time where measurements are used to update the model's parameters by comparing the predicted outputs to the target values. Finally, N_ test=100-200 steps are reserved for testing the performance of the trained model. The training data is stored in a matrix of size N_O× N_ train, where the entry (i,j) represents the i-th observable measured at the j-th training step. Here, N_O=N_𝒮'×(N_𝒮'+1)/2, with N_O being the total number of measured observables and N_𝒮' the number of spins in the reservoir. The reported results are averaged over 50-200 independent realizations of random Hamiltonians and graphs, and the averages are taken over the results of the evaluation stage. We only consider connected graphs, i.e., graphs without isolated subgraphs.
To emulate encoding errors and avoid overfitting in the training process regarding the memory task studied in the main manuscript, we add small initial noise to inputs by setting η'(t)→η'(t)±δ^η with δ^η∈ [0, 0.02], and rescale properly. For the supervised learning, we utilize the ridge linear regression with appropriate regularization strength. For the classification of the logical multitasking, we use support vector machines with a nonlinear kernel <cit.>. In this case, the accuracy is calculated by simply matching the binary predictions with the inputs.
§ APPENDIX B: ADDITIONAL RESULTS
In this section we present supplementary data for the previously unexplored regions of the learning diagrams related to the memory task. Figure <ref> shows the memory capacity for (Δ^x, J^x)=(0,0), i.e., in the absence of disorder and interactions. Surprisingly, the performance declines as the graph degree increases. While we cannot provide a definitive explanation for this behavior, it may be attributed to several factors, including the specific form of the initial state, the locality of measurements, finite-size effects, and most notably, the possibility that the system is in a (dynamical) critical state in this limit. We note that this behavior is not related to integrability, as the model cannot be mapped to a free-fermionic system due to random or all-to-all connectivities. This can be further supported by calculating the gap ratio r_n= min[δ_n,δ_n+1]/ max[δ_n,δ_n+1], where n labels the sorted eigenvalues and δ_n=E_n+1-E_n. The mean level spacing averaged over different disordered Hamiltonians and graph realizations ⟨ r⟩ is ≈ 0.39 and ≈ 0.53, for localized and ergodic phases, respectively <cit.>. As shown in Fig. <ref>(b), as disorder increases, ⟨ r⟩ decreases from its expected thermal value. Notably, for (Δ^x, J^x)=(0,0), the gap ratio remains close to its ergodic value for all degrees. Additionally, introducing interactions deep within the localized regimes pushes the gap ratio higher, moving it closer to its ergodic value. However, a thorough investigation of this behavior would require finite-size scaling, which is beyond the scope of the current study.
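The gap-ratio diagnostic used here is straightforward to reproduce; the sketch below (illustrative, benchmarking against a GOE random matrix and uncorrelated Poissonian levels rather than the spin Hamiltonian itself) recovers the two reference values quoted above:

import numpy as np

def mean_gap_ratio(energies):
    """Mean of r_n = min(d_n, d_{n+1}) / max(d_n, d_{n+1}) over sorted level spacings."""
    d = np.diff(np.sort(energies))
    d = d[d > 0]
    return float(np.mean(np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])))

rng = np.random.default_rng(0)
A = rng.normal(size=(1500, 1500))
goe_levels = np.linalg.eigvalsh(A + A.T)                 # chaotic/ergodic reference
poisson_levels = np.cumsum(rng.exponential(size=3000))   # localized/integrable reference
print(round(mean_gap_ratio(goe_levels), 2),              # ~ 0.53
      round(mean_gap_ratio(poisson_levels), 2))          # ~ 0.39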
Furthermore, we calculate the logarithmic negativity for the extreme cases with k=2 and k=N-1. For k=2 and Δ^x≠0, one observes a lower amount of entanglement generation between auxiliary system and the reservoir compared to the the case k=3 studied in the main text, as one might have expected. With k=7 and at finite disorder, the dynamical quantum correlations grow faster and reach higher values at finite times, compared to the former case. However, in the absence of disorder and interactions, the negativity exhibits an anomalous dynamical behavior. We have also verified that the qualitative behavior is not sensitive to the strength of the symmetry breaking term Δ^z, which breaks the Ising ℤ_2 symmetry ∏_i σ^x_i. The long-time saturation value of ℰ_SS'(t) is controlled solely by the system size, and thus the memory for rather long 𝒮-𝒮' interaction time-scale ∝ J^zΔ t is expected to be comparable in all cases, provided that the system is not strictly localized. One should note that while having a strong entanglement is necessary for optimal learning <cit.>, it does not guarantee it. This is reflected in the small Δ^x region of the learning diagrams of the main text. While not crucial to the results presented in the main manuscript, the observed behavior within this parameter regime and in the limit of fully connected graphs is believed to be mostly of dynamical nature, warranting further studies. In this context, it would be insightful to investigate whether some of the observed anomalies can be accounted for by the findings presented in Ref. <cit.>.
| http://arxiv.org/abs/2409.02986v1 | 20240904180001 | X-ray polarisation in AGN circumnuclear media. Polarisation framework and 2D torus models | ["Bert Vander Meulen", "Peter Camps", "Djordje Savic", "Maarten Baes", "Giorgio Matt", "Marko Stalevski"] | astro-ph.HE | ["astro-ph.HE"] |
1 Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium ([email protected])
2 Institut d’Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août 19c, 4000 Liège, Belgium
3 Astronomical Observatory, Volgina 7, 11060 Belgrade, Serbia
4 Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, via della Vasca Navale 84, 00146 Roma, Italy
Cold gas and dust reprocess the central X-ray emission of active galactic nuclei (AGN), producing characteristic spectro-polarimetric features in the X-ray band. The recent launch of IXPE allows for observations of this X-ray polarisation signal, which encodes unique information on the parsec-scale circumnuclear medium of obscured AGN. However, the models for interpreting these polarimetric data are under-explored and do not reach the same level of sophistication as the corresponding spectral models.
We aim at closing the gap between the spectral and spectro-polarimetric modelling of AGN circumnuclear media in the X-ray band by providing the tools for simulating X-ray polarisation in complex geometries of cold gas alongside X-ray spectra.
We lay out the framework for X-ray polarisation in 3D radiative transfer simulations and provide an implementation to the 3D radiative transfer code SKIRT, focussing on (de)polarisation due to scattering and fluorescent re-emission. As an application, we explored the spectro-polarimetric properties of a 2D toroidal reprocessor of cold gas, modelling the circumnuclear medium of AGN.
For the 2D torus model, we find a complex behaviour of the polarisation angle with photon energy, which we interpret as a balance between the reprocessed photon flux originating from different sky regions, with a direct link to the torus geometry. We calculated a large grid of AGN torus models and demonstrated how spatially resolved X-ray polarisation maps could form a useful tool for interpreting the geometrical information that is encoded in IXPE observations. With this work, we release high-resolution AGN torus templates that simultaneously describe X-ray spectra and spectro-polarimetry for observational data fitting with XSPEC.
The SKIRT code can now model X-ray polarisation simultaneously with X-ray spectra and provide synthetic spectro-polarimetric observations for complex 3D circumnuclear media, with all features of the established SKIRT framework available.
X-ray polarisation in AGN circumnuclear media. Polarisation framework and 2D torus models
Bert Vander Meulen^1, Peter Camps^1, Đorđe Savić^2,3, Maarten Baes^1, Giorgio Matt^4, Marko Stalevski^1,3
Received May 17, 2024; accepted July 29, 2024
===================================================================================================================================================================================================================================
§ INTRODUCTION
Active galactic nuclei (AGN) are the compact central regions of massive galaxies whose excessive brightness across the electromagnetic spectrum is powered by the accretion of gas and dust onto a supermassive black hole (SMBH) <cit.>. AGN are the most luminous persistent sources in the Universe and play a crucial role in galaxy evolution by pumping energy and momentum into the interstellar gas and launching powerful jets and outflows. AGN feedback is one of the most important and least understood ingredients of galaxy evolution theories, with several observed correlations indicating that AGN and their host galaxy co-evolve and regulate each other’s growth <cit.>.
According to the unified AGN structure model, the observed dichotomy in AGN types is explained by a large-scale toroidal structure of gas and dust in the equatorial plane, which causes line-of-sight obscuration, depending on the observer's viewing angle <cit.>. This `torus' of gas and dust is then responsible for the extinction at optical and UV wavelengths, the thermal dust re-emission in the infrared, and reprocessing in the X-ray band. Furthermore, it explains the absence of broad optical lines in obscured AGN and the appearance of these lines in polarised light <cit.>. This circumnuclear torus medium could further be important as an accretion reservoir fuelling the active SMBH or as a direct probe on AGN feedback <cit.>.
Lately, a new picture has emerged of the dust structure in local active galaxies, challenging the classical `dusty torus' paradigm. Spectral modelling suggests that the circumnuclear medium has a more complex three-dimensional structure with clumps and filaments <cit.>, which is further supported by observations of AGN variability <cit.>. Furthermore, high-angular resolution imaging in the mid-infrared has shown that the circumnuclear medium is also extended in the polar direction, as opposed to a purely equatorial torus <cit.>.
The AGN circumnuclear medium could be further explored through X-ray observations, as most AGN spectra show a strong X-ray component, which is produced by Compton up-scattering in a corona of hot electrons close to the SMBH <cit.>. These coronal X-rays are then reprocessed by the circumnuclear material, producing characteristic spectral features that form a powerful probe regarding the distribution of gas and dust in local AGN <cit.>. X-rays have a high penetrating power, and therefore they shed light on the most obscured episodes of SMBH accretion. Indeed, a large population of obscured AGN is only revealed through X-ray observations, with spectra that are shaped by reprocessing in the AGN torus <cit.>. As many of these sources cannot be spatially resolved, they contribute to the cosmic X-ray background <cit.>.
The most prominent features of X-ray reprocessing by the AGN torus are the narrow Fe Kα line at 6.4 keV <cit.> and the Compton reflection hump peaking at about 30 keV <cit.>. These features directly probe the cold gas and dust surrounding AGN and are commonly used to constrain the geometry of the reprocessing medium <cit.>. Indeed, recent radiative transfer studies have demonstrated that X-ray spectra of obscured AGN carry detailed information on the 3D distribution of obscuring gas and dust, tracing clumpy structures <cit.> and polar extended material <cit.> in the circumnuclear medium, even outside of the line of sight.
Two additional observables can be extracted from the X-ray emission emerging from obscured AGN when dedicated polarisation instrumentation is available. The polarisation angle and polarisation degree encode complementary information on the circumnuclear medium, which can be used to constrain the geometry of the torus and its orientation relative to the host galaxy. The recent launch of the Imaging X-ray Polarimetry Explorer <cit.> introduced X-ray polarimetry as a new tool to study AGN in the 2-8 keV band, with five radio-quiet AGN that have been observed in the last two years: MCG-05-23-16 <cit.>, the Circinus galaxy <cit.>, NGC4151 <cit.>, IC4329A <cit.>, and NGC1068 <cit.>. The next generation of X-ray polarisation missions, with the X-ray Polarimeter Satellite (XPoSat) <cit.> and the enhanced X-ray Timing and Polarization mission (eXTP) <cit.>, further indicate a promising future for X-ray polarimetry.
The recent developments in observational X-ray polarimetry motivate the need for more advanced polarisation models that are based on 3D radiative transfer simulations. However, spectro-polarimetric X-ray models for the AGN torus are under-explored and do not reach the same level of sophistication as the corresponding X-ray spectral models <cit.>.
Historically, one of the first X-ray polarisation models was the wedge torus model by <cit.>, which has been used for the interpretation of the 772 ks of IXPE data on Circinus AGN <cit.> and for the exploration of the binary geometry of GRS 1915+105 <cit.>. However, this model allows for little geometrical flexibility.
The STOKES code <cit.> on the other hand offers a more flexible radiative transfer framework, and it has been used to make polarisation predictions for a range of 3D torus geometries <cit.>. STOKES applies a Monte Carlo technique to model scattering-induced polarisation in the X-ray, UV, optical, and infrared bands. Recently, the STOKES code has been used to model X-ray polarisation in a parsec-scale equatorial torus, assuming neutral and partially ionised gas <cit.>, which has been applied to the 1.15 Ms IXPE observation of NGC1068 <cit.>.
Most recently, the MONACO code <cit.> has been used to predict the X-ray polarisation signal of Circinus AGN, which was compared to the IXPE observational data <cit.>. MONACO, which builds on the Geant4 simulation toolkit <cit.>, implements X-ray polarisation in cold neutral material, which has been applied to post-process the 3D hydrodynamical torus simulations by <cit.> modelling the circumnuclear gas in the Circinus galaxy.
With this work, we aim to close the gap between the X-ray spectral modelling and X-ray spectro-polarimetric modelling of AGN circumnuclear media by setting up 3D radiative transfer simulations that simultaneously predict X-ray polarisation and X-ray spectra. For this, we focus on the well-established SKIRT code <cit.> so that our simulations are highly efficient in terms of computational runtime, allowing complex 3D models to be explored in a short time. Furthermore, the SKIRT code offers an unmatched geometrical flexibility for setting up radiative transfer simulations in full 3D, which has now become available to X-ray polarisation modelling. This work introduces the necessary tools for modelling X-ray polarisation in a general 3D context and presents an implementation to the SKIRT code, which is publicly available. As a first application, we explored the spectro-polarimetric properties of a simple 2D torus model. Future work will focus on more complex 3D geometries beyond the classical AGN torus.
The goal of this work is to lay out the X-ray polarisation framework for 3D radiative transfer codes and investigate the X-ray polarisation observables of a classical AGN torus model. The outline for this paper is as follows: In Sect. <ref>, we introduce the polarisation framework. In Sect. <ref>, we provide an implementation to the 3D radiative transfer code SKIRT. In Sect. <ref>, we study the polarisation observables of a 2D torus model and present the corresponding torus templates for observational data fitting. We discuss our findings in Sect. <ref> and summarise our results in Sect. <ref>.
§ X-RAY POLARISATION FRAMEWORK
§.§ Stokes vector formalism
In Monte Carlo radiative transfer simulations, the polarisation state of the (discretised) radiation field is most conveniently characterised in terms of the four Stokes parameters <cit.>. Together, these parameters form a Stokes vector S, which is traced and updated throughout the simulated transfer medium for each `photon packet':
S = [ I; Q; U; V ].
Stokes parameter I represents the total intensity of the photon packet, while Q and U describe linear polarisation, and V describes circular polarisation. The four Stokes parameters encode all polarisation information of the radiation field (except for its phase), so that the linear polarisation degree P_L can be reconstructed as
P_L = √(Q^2 + U^2)/I,
and the linear polarisation angle γ can be found as
γ = 1/2arctan_2(U/Q),
with arctan_2 being the inverse tangent function that preserves the quadrant, also noted as atan2(U, Q).
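As a minimal illustration (a stand-alone Python sketch, not part of the SKIRT code base), the polarisation observables can be recovered from the Stokes parameters as follows:

import numpy as np

def polarisation_observables(I, Q, U):
    # linear polarisation degree and angle (in radians) from Stokes I, Q, U
    P_L = np.sqrt(Q**2 + U**2) / I
    gamma = 0.5 * np.arctan2(U, Q)   # arctan_2 preserves the quadrant
    return P_L, gamma

# example: a photon packet polarised at 30 degrees with P_L = 0.5
I, P_L, gamma = 1.0, 0.5, np.radians(30.0)
Q, U = I * P_L * np.cos(2 * gamma), I * P_L * np.sin(2 * gamma)
print(polarisation_observables(I, Q, U))   # recovers (0.5, 0.5236 rad)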
Stokes vectors are defined relative to a reference direction n that is perpendicular to the photon propagation direction k. Positive Q-values describe linear polarisation in the reference direction, while negative Q-values describe linear polarisation perpendicular to this reference direction. Equivalently, Stokes U describes linear polarisation along orthogonal axes rotated over 45^∘ with respect to the reference direction. One has
Q = I P_Lcos2γ,
U = I P_Lsin2γ.
For the remainder of this work, we focus on linear polarisation, as IXPE observations do not capture the Stokes V-component. Nevertheless, our polarisation framework includes a functional implementation of circular polarisation (which is not discussed).
When the reference direction n is rotated by an angle φ about k to a new reference direction n_rot, the Stokes parameters transform as
S_rot = R(φ) S,
with R(φ) being the rotation matrix, which is given as
R(φ) = [ 1 0 0 0; 0 cos2φ sin2φ 0; 0 -sin2φ cos2φ 0; 0 0 0 1 ],
mixing the Q and U parameters. Combining Eq. (<ref>), (<ref>) and (<ref>), we obtain
Q_rot = I P_Lcos2(γ - φ),
U_rot = I P_Lsin2(γ - φ),
so that we have P_L_rot=P_L and γ_rot=γ - φ. This rotation transformation is important to describe scattering interactions relative to a reference direction that lays in the scattering plane and to record Stokes vectors relative to the north direction of the observer frame (following the IAU conventions).
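A short numerical check of this transformation (again an illustrative Python sketch) confirms that rotating the reference direction leaves the polarised flux unchanged and shifts the polarisation angle by −φ:

import numpy as np

def rotate_stokes(S, phi):
    # rotate the Stokes reference direction by an angle phi (in radians)
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    R = np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]])
    return R @ S

S = np.array([1.0, 0.3, 0.2, 0.0])                    # a partially polarised state
S_rot = rotate_stokes(S, np.radians(40.0))
print(np.hypot(S[1], S[2]), np.hypot(S_rot[1], S_rot[2]))   # identical polarised flux
print(np.degrees(0.5 * np.arctan2(S[2], S[1]) - 0.5 * np.arctan2(S_rot[2], S_rot[1])))   # 40 degrees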
In polarised radiative transfer, photon interactions generally depend on the polarisation state, meaning that the processes are described in terms of the Stokes parameters of that photon. In addition, scattering can modify the polarisation state of a photon, which can be described as a matrix multiplication on S with the corresponding Müller scattering matrix M. In this work, Müller matrices M(θ, x) are a function of the scattering angle θ and the incoming photon energy x = E/m_ec^2. Considering that the reference direction of the incoming photon should first be rotated to an intermediate reference direction n_rot in the scattering plane, the Stokes vector S' after scattering can be obtained as
S' = M(θ,x) R(φ) S,
where φ is the angle by which the initial reference direction n must be rotated about k to end up in the scattering plane as n_rot (see Fig. <ref>). Finally, we note that Stokes vector S' refers to a new reference direction n', which is just the intermediate reference direction n_rot, rotated over the scattering angle θ in the scattering plane (i.e. a rotation about the scattering plane normal k×n_rot), to assure that the new reference direction n' is perpendicular to the new propagation direction k'. The 3D geometry of a scattering interaction is shown in Fig. <ref>, with unit direction vectors, unit reference vectors, and rotation angles indicated. For more details, we refer to <cit.>. Applying Rodrigues' rotation formula, we obtain an expression for n' as
n' = cosφcosθ n + sinφcosθ (k×n) - sinθ k.
Various normalisation conventions are in use for the Müller matrices M (e.g. different conventions can be found in <cit.>, <cit.> and <cit.>). However, polarisation observables such as P_L and γ do not depend on this absolute normalisation factor, as they are calculated from Stokes parameter ratios, causing the normalisation factor to cancel out. The absolute polarised fluxes Q and U can be recovered from the normalised total flux I (similar to Eq. (<ref>)), which is discussed in Sect. <ref>.
§.§ X-ray radiative processes
In this work, we focus on radiative transfer simulations in cold atomic gas, modelling the circumnuclear medium that is causing most of the X-ray extinction in obscured AGN. Furthermore, this material is responsible for the distinct X-ray reflection features observed in Compton-thick AGN, such as the narrow Fe Kα line at 6.4 keV and the Compton reflection hump at about 30 keV. In particular, we consider photo-absorption by neutral atomic gas with self-consistent fluorescent line emission and bound-electron scattering.
Photo-absorption does not depend on the polarisation state of the incoming photon. Therefore, it can be implemented using the standard <cit.> cross-sections and a custom abundance table. Subsequently, the absorbed photon energy can be re-emitted as a fluorescent line photon, with a probability given by the fluorescent yield of that atom <cit.>. These line photons are unpolarised, regardless of the initial polarisation state of the absorbed photon <cit.>. Fluorescence thus resets the Stokes vector, and the sequence of photo-absorption followed by fluorescent re-emission effectively acts as a depolarisation process.
In cold atomic gas, X-ray photons are scattered by the electrons that are bound to the neutral gas atoms, which can be described in terms of atomic form factors and incoherent scattering functions <cit.>. However, this bound-electron scattering can be reasonably well approximated as Compton scattering on Z free electrons per neutral gas atom, which we assume in this work. For <cit.> solar gas abundances, this corresponds to 1.21 free electrons per H-atom, which changes for other abundance tables. This free-electron approximation, applied to X-ray polarisation in a general 3D context, is the focus of this work, and we refer to future work for a treatment of polarised bound-electron scattering <cit.>.
In the next section, we describe the details of polarised Compton scattering (approximating bound-electron scattering), which is polarisation dependent and updates the polarisation state of the incoming photon. In particular, we focus on the total scattering cross-section and the scattering phase function for polarised photons and the Müller matrix for updating the Stokes parameters after a scattering interaction. This framework could also be used to model media of free electrons only, in addition to the cold gas that is the focus here.
§.§ Polarised Compton scattering
Compton scattering describes the inelastic scattering of high-energy photons on free electrons, which is often used to approximate bound-electron scattering in cold atomic gas <cit.>. The Müller matrix for Compton scattering is given by <cit.>, using the same sign convention as the IAU <cit.>:
M(θ, x) ∝[ S_11(θ, x) S_12(θ, x) 0 0; S_12(θ, x) S_22(θ, x) 0 0; 0 0 S_33(θ, x) 0; 0 0 0 S_44(θ, x) ],
with θ being the scattering angle and x = E/m_ec^2 the photon energy scaled to the electron rest energy. Moreover, we assume an isotropic distribution for the electron spin directions, which causes the non-diagonal matrix elements in the fourth row and fourth column to be zero <cit.>. The non-zero matrix elements of M(θ, x) are
S_11(θ, x) = C^3(θ, x) + C(θ, x) -C^2(θ, x)sin^2θ,
S_12(θ, x) = -C^2(θ, x)sin^2θ,
S_22(θ, x) = C^2(θ, x) (1+cos^2θ),
S_33(θ, x) = 2C^2(θ, x)cosθ,
S_44(θ, x) = (C^3(θ, x) + C(θ, x)) cosθ,
with C(θ, x) being the Compton factor:
C(θ, x) = (1+x (1-cosθ))^-1.
In the low energy limit, C(θ, x)→ 1, so that Eq. (<ref>) converges to the Müller matrix for non-relativistic Thomson scattering.
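For reference, the Compton factor and the non-zero Müller matrix elements can be evaluated with a few lines of code; the sketch below (Python, illustrative only) shows that the elements stay close to their Thomson limits in the IXPE band:

import numpy as np

ME_KEV = 510.999   # electron rest energy in keV

def compton_factor(theta, E_keV):
    x = E_keV / ME_KEV
    return 1.0 / (1.0 + x * (1.0 - np.cos(theta)))

def muller_elements(theta, E_keV):
    # non-zero elements S11, S12, S22, S33, S44 of the Compton Mueller matrix
    C = compton_factor(theta, E_keV)
    sin2 = np.sin(theta)**2
    S11 = C**3 + C - C**2 * sin2
    S12 = -C**2 * sin2
    S22 = C**2 * (1.0 + np.cos(theta)**2)
    S33 = 2.0 * C**2 * np.cos(theta)
    S44 = (C**3 + C) * np.cos(theta)
    return S11, S12, S22, S33, S44

print(muller_elements(np.radians(90.0), 4.0))
# at 4 keV and 90 degrees the values are close to the Thomson limits (1, -1, 1, 0, 0)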
Matrix element S_11(θ, x) relates the scattered intensity I' to the incoming intensity I in case of unpolarised photons and is proportional to the <cit.> differential cross-section for unpolarised Compton scattering. For (partially) polarised radiation, S_12(θ, x) introduces an azimuthal modulation to the Klein-Nishina formula, which depends explicitly on the polarisation state of the incoming photon. Together, S_11(θ, x) and S_12(θ, x) make up the differential cross-section for polarised Compton scattering:
dσ/dΩ(θ, φ, x, S) = 3σ_T/16π[S_11(θ, x) + S_12(θ, x) P_Lcos2(φ-γ)],
with σ_T≈ 6.65 × 10^-25 cm^2 being the Thomson cross-section and φ the azimuthal scattering angle relative to the polarisation reference direction n (so that φ-γ is the azimuthal scattering angle relative to the incoming polarisation direction).
For fully polarised photons (P_L=1), Eq. (<ref>) reduces to
dσ/dΩ(θ, φ, x, S)|_P_L=1 = 3 σ_T/16π[ C^3 + C -2 C^2sin^2θcos^2(φ-γ)],
with C ≡ C(θ, x). In the IXPE range, the angular dependence of C(θ, x) is weak, and Eq. (<ref>) is roughly symmetric around the incoming polarisation direction, as the factor sinθ cos(φ-γ) is just the cosine of the angle between the outgoing direction k' and the incoming polarisation direction. For a full derivation from quantum electrodynamics, we refer to <cit.>.
By integrating Eq. (<ref>) over the unit sphere, we obtain the total cross-section for polarised Compton scattering as
σ(x) = 3 σ_T/4[ (1+x)/(1+2x)^2 + 2/x^2 + ( 1/(2x) - (x+1)/x^3 ) ln(2x+1) ],
which is the exact same expression as the total cross-section for unpolarised Compton scattering. This cross-section does not depend on the polarisation state, meaning that polarisation effects do not influence the Compton scattering efficiency. Furthermore, this cross-section is roughly constant over the IXPE energy range, decreasing from 0.99 σ_T at 2 keV to 0.97 σ_T at 8 keV.
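A direct numerical evaluation of this expression (a stand-alone Python sketch, not SKIRT code) reproduces the quoted values at the edges of the IXPE band:

import numpy as np

def sigma_compton(E_keV):
    # total Compton cross-section in units of the Thomson cross-section
    x = E_keV / 510.999
    return 0.75 * ((1 + x) / (1 + 2 * x)**2 + 2 / x**2
                   + (1 / (2 * x) - (1 + x) / x**3) * np.log(1 + 2 * x))

print(sigma_compton(2.0), sigma_compton(8.0))   # ~0.99 and ~0.97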
Normalising Eq. (<ref>) on the unit sphere, we obtain the phase function for polarised Compton scattering, shown in Fig. <ref>:
Φ(θ, φ, x, S) = 3σ_T/16πσ(x)[S_11(θ, x) + S_12(θ, x) P_Lcos2(φ-γ)]
= Ψ(θ, x) - 3σ_T C^2(θ, x)/16πσ(x)sin^2θ P_Lcos2(φ-γ).
This phase function Φ(θ, φ, x, S) is the sum of the standard phase function for unpolarised Compton scattering Ψ(θ, x)∝S_11(θ, x) and an azimuthal modulation that is characteristic for polarised Compton scattering. The azimuthal cosine modulation introduces a bias towards scattering into the plane perpendicular to the polarisation direction (φ= γ + 90^∘ and φ= γ + 270^∘), with an amplitude that is roughly proportional to sin^2θ× P_L. Therefore, the modulation is most pronounced at θ=90^∘, where it maximally reduces the probability for scattering into the polarisation direction (φ= γ or φ= γ + 180^∘). As expected, the modulation strength is linearly proportional to the polarisation degree P_L, as only polarised photons experience the modulation effect.
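As a consistency check (an illustrative Python sketch, independent of the SKIRT implementation), the phase function can be verified to integrate to unity over the unit sphere for a representative incoming polarisation state:

import numpy as np

ME_KEV = 510.999

def sigma_compton(E_keV):
    # total Compton cross-section in units of the Thomson cross-section
    x = E_keV / ME_KEV
    return 0.75 * ((1 + x) / (1 + 2 * x)**2 + 2 / x**2
                   + (1 / (2 * x) - (1 + x) / x**3) * np.log(1 + 2 * x))

def phase_function(theta, phi, E_keV, P_L, gamma):
    # polarised Compton phase function (sigma_T cancels against the cross-section)
    C = 1.0 / (1.0 + (E_keV / ME_KEV) * (1.0 - np.cos(theta)))
    S11 = C**3 + C - C**2 * np.sin(theta)**2
    S12 = -C**2 * np.sin(theta)**2
    return 3.0 / (16.0 * np.pi * sigma_compton(E_keV)) * (S11 + S12 * P_L * np.cos(2 * (phi - gamma)))

# midpoint integration over the unit sphere should give unity
th = (np.arange(400) + 0.5) * np.pi / 400
ph = (np.arange(800) + 0.5) * 2.0 * np.pi / 800
TH, PH = np.meshgrid(th, ph, indexing='ij')
dOmega = np.sin(TH) * (np.pi / 400) * (2.0 * np.pi / 800)
print(np.sum(phase_function(TH, PH, 4.0, 0.5, 0.0) * dOmega))   # ~1.0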
The left panel of Fig. <ref> shows the Compton phase function at 2 keV (solid line) and 8 keV (dashed line), illustrating how the azimuthal modulation does not depend on the photon energy in the IXPE range. The only difference is the slightly stronger preference for forward scattering at higher energies, which is regulated by the polarisation-independent phase function term Ψ(θ, x), as is the case for unpolarised Compton scattering. The right panel of Fig. <ref> visualises the scattering phase function for three (incoming) polarisation states, illustrating how the phase function for unpolarised photons (P_L=0) is symmetric around the incoming photon direction k, while for fully polarised photons (P_L=1), the phase function is almost perfectly symmetric around the incoming polarisation direction. For partially polarised photons, the scattering phase function is truly asymmetric, as illustrated for P_L=0.5 on the right panel of Fig. <ref>.
After a Compton scattering interaction, the Stokes parameters of the scattered photon are updated as
[ I'; Q'; U'; V' ]∝[ S_11(θ, x) I + S_12(θ, x) (cos2φ Q+sin2φ U); S_12(θ, x) I + S_22(θ, x) (cos2φ Q+sin2φ U); S_33(θ, x) (-sin2φ Q+cos2φ U); S_44(θ, x) V ],
combining Eq. (<ref>), (<ref>), and (<ref>). In addition, the photon energy is reduced to E'=C(θ, x) × E, and the updated Stokes vector S' is normalised so that I'=C(θ, x) × I, to comply with the conservation of four-momentum for the photon-electron pair.
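The full update step can be sketched compactly in code; the following Python fragment (illustrative only, the actual SKIRT implementation is in C++) chains the reference-frame rotation, the Müller matrix, and the Compton renormalisation:

import numpy as np

ME_KEV = 510.999

def compton_scatter_stokes(S, theta, phi, E_keV):
    # update a Stokes vector for Compton scattering over (theta, phi),
    # with phi measured from the current reference direction to the scattering plane
    C = 1.0 / (1.0 + (E_keV / ME_KEV) * (1.0 - np.cos(theta)))
    c2, s2 = np.cos(2 * phi), np.sin(2 * phi)
    R = np.array([[1, 0, 0, 0], [0, c2, s2, 0], [0, -s2, c2, 0], [0, 0, 0, 1]])
    sin2, cos_t = np.sin(theta)**2, np.cos(theta)
    M = np.array([[C**3 + C - C**2 * sin2, -C**2 * sin2, 0, 0],
                  [-C**2 * sin2, C**2 * (1 + cos_t**2), 0, 0],
                  [0, 0, 2 * C**2 * cos_t, 0],
                  [0, 0, 0, (C**3 + C) * cos_t]])
    Sp = M @ (R @ S)
    Sp *= C * S[0] / Sp[0]        # renormalise so that I' = C * I
    return Sp, C * E_keV          # scattered Stokes vector and photon energy

# an unpolarised 4 keV photon scattered over 90 degrees emerges almost fully polarised,
# perpendicular to the scattering plane (Q' < 0 relative to the in-plane reference direction)
print(compton_scatter_stokes(np.array([1.0, 0.0, 0.0, 0.0]), np.pi / 2, 0.0, 4.0))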
Through Compton scattering, the radiation field obtains a linear polarisation degree P'_L, which depends on the initial polarisation state (P_L and γ) and the scattering direction (θ and φ):
P'_L = √(( S_12 + S_22 P_Lcos2(γ-φ))^2 + (S_33 P_Lsin2(γ-φ))^2)/S_11 + S_12P_Lcos2(γ-φ),
combining Eq. (<ref>), (<ref>) and (<ref>), with S_ij≡S_ij(θ, x) being the Müller matrix elements given by Eq. (<ref>). For various incident polarisation degrees P_L, the resulting polarisation degree P'_L is shown in Fig. <ref>, illustrating how any polarisation degree between 0 and 1 can be obtained, depending on the specific scattering direction. Regardless of the initial polarisation state, scattered photons are maximally polarised (P'_L=1) for θ=90^∘, while forward scattering and backscattering leave the polarisation degree unchanged. The resulting polarisation degree is shown at 2 keV (solid line) and 8 keV (dashed line), demonstrating how the energy dependence of P'_L is negligible in the IXPE range.
Equivalently, the linear polarisation angle obtained through Compton scattering is
γ' = 1/2arctan_2(S_33(θ, x) P_Lsin2(γ-φ)/S_12(θ, x) + S_22(θ, x) P_Lcos2(γ-φ)),
combining Eq. (<ref>), (<ref>) and (<ref>). However, we recall that γ' as given by Eq. (<ref>) refers to a different reference direction for each pair of scattering angles (θ, φ) (see Eq. (<ref>)). Therefore, these polarisation angles cannot be directly compared, except for some special symmetric cases such as the scenario discussed in Sect. <ref>. For unpolarised photons (P_L=0), we recover the standard result that scattering induces polarisation perpendicular to the scattering plane (γ'=90^∘, with n' in the scattering plane).
§ SKIRT IMPLEMENTATION
§.§ The radiative transfer code SKIRT
For the remainder of this work, we focus on radiative transfer simulations with the radiative transfer code SKIRT <cit.>. SKIRT is a state-of-the-art Monte Carlo radiative transfer code, developed and maintained at Ghent University, which implements a Monte Carlo photon life cycle emulating absorption, scattering, and re-emission in complex 3D transfer media <cit.>. SKIRT simulations in full 3D are facilitated by the implementation of various acceleration techniques <cit.>, advanced grids for discretising the transfer medium <cit.>, and a hybrid parallelisation scheme which combines multi-threading with multi-processing <cit.>. The SKIRT code is open-source, well-documented[<https://skirt.ugent.be>], and publicly available online[<https://github.com/SKIRT/SKIRT9>], with tutorials for users and developers.
SKIRT models absorption, scattering, and re-emission in dusty astrophysical systems <cit.>, which includes emission from stochastically heated dust grains <cit.>, polarisation due to scattering on spherical dust grains <cit.>, and polarised emission from aligned spheroidal dust grains <cit.>. Beyond dust, SKIRT models scattering on free electrons <cit.>, absorption and emission at the 21 cm line of neutral hydrogen <cit.>, non-LTE line radiative transfer in the (sub)mm and infrared <cit.>, and resonant line scattering of H Lyα photons <cit.>. Recently, the SKIRT code was extended into the X-ray regime, to model Compton scattering on free electrons, photo-absorption and fluorescence by cold atomic gas, scattering on bound electrons, and extinction by dust <cit.>. Furthermore, the kinematics of moving sources and moving transfer media are self-consistently incorporated into the SKIRT radiative transfer calculations, so that the effect of bulk velocities and velocity dispersions can be properly modelled <cit.>.
The SKIRT code features a large suite of model geometries, radiation sources, medium characterisations, instruments, and probes <cit.>, in addition to interfaces for post-processing hydrodynamical simulations <cit.>. SKIRT can calculate self-consistent fluxes, images, spectra and polarisation maps from mm to X-ray wavelengths, with recent applications in galaxies <cit.>, active galactic nuclei <cit.>, and other fields of astrophysics <cit.>.
§.§ X-ray polarisation implementation
The polarisation framework for tracing Stokes vectors in SKIRT has been implemented by <cit.>, while the X-ray physical processes in cold neutral media (without polarisation support) have been implemented by <cit.>. As photo-absorption and fluorescence do not depend on the polarisation state of the incoming photon, the original implementation is retained for these processes, with the important distinction that the Stokes vector is reset after fluorescent re-emission (i.e. the Q', U', and V' parameters are set to zero; see Sect. <ref>).
In this work, bound-electron scattering in cold atomic gas is approximated as free-electron scattering on Z free electrons per neutral gas atom, as discussed in Sect. <ref>. For a gas column with column density N_H, the optical depth for scattering is thus
τ(E) = (∑_Z a_Z Z) · N_H·σ(E) ,
with a_Z being the number density of element Z relative to H i, and σ(E) the Compton scattering cross-section given by Eq. (<ref>).
As this optical depth does not depend on the polarisation state of the incoming photon, sampling for scattering positions is conceptually identical to the free-electron scattering implementation as presented in <cit.>. Given the abundance table { a_Z}, one can easily generate a random interaction point in the forward direction, following for example <cit.>.
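As an illustration (a stand-alone Python sketch; the electron-per-hydrogen value is the one quoted above and depends on the adopted abundance table), the scattering optical depth of a gas column can be evaluated as follows:

import numpy as np

def sigma_compton_cm2(E_keV):
    # total Compton cross-section in cm^2
    x = E_keV / 510.999
    sigma_T = 6.6524587e-25
    return 0.75 * sigma_T * ((1 + x) / (1 + 2 * x)**2 + 2 / x**2
                             + (1 / (2 * x) - (1 + x) / x**3) * np.log(1 + 2 * x))

ELECTRONS_PER_H = 1.21   # sum over Z of a_Z * Z for the adopted solar abundances

def tau_scattering(N_H, E_keV):
    # Compton-scattering optical depth of a gas column with column density N_H [cm^-2]
    return ELECTRONS_PER_H * N_H * sigma_compton_cm2(E_keV)

print(tau_scattering(1e24, 4.0))   # ~0.8 for a Compton-thick column at 4 keV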
Once the scattering location is set, a pair of scattering angles (θ, φ) can be sampled from the phase function Eq. (<ref>) using the conditional distribution method. Because the azimuthal modulation term averages to zero over φ, it does not contribute to the marginal probability distribution for θ, and random scattering angles θ can therefore be generated from the same univariate distribution p(θ; x) as for unpolarised Compton scattering:
p(θ; x) = 3 σ_T S_11(θ, x)sinθ/8 σ(x).
This distribution for θ is a complex function of the incoming photon energy, but can be sampled efficiently using a variation on Khan's technique described by <cit.>. Once a scattering angle θ is generated, we obtain the conditional distribution for the azimuthal scattering angle φ as
p_θ(φ; x, S) = 1/2π( 1 + S_12(θ, x)/S_11(θ, x)P_Lcos2(φ-γ)),
which explicitly depends on the incoming polarisation state. A random scattering angle φ can be sampled from Eq. (<ref>) through numerical inversion as
χ = ∫_0^φ p_θ(φ'; x, S) dφ'
= 1/2π( φ + S_12(θ, x)/S_11(θ, x)P_Lsinφcos(φ-2γ)),
with χ being a uniform deviate between 0 and 1.
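The complete sampling of (θ, φ) can be sketched as follows (Python with NumPy and SciPy, illustrative only; for simplicity, θ is drawn here by plain rejection sampling against a constant envelope rather than the Khan-type technique used in SKIRT, and φ is obtained by numerically inverting the cumulative distribution above):

import numpy as np
from scipy.optimize import brentq

ME_KEV = 510.999

def S11_S12(theta, E_keV):
    C = 1.0 / (1.0 + (E_keV / ME_KEV) * (1.0 - np.cos(theta)))
    sin2 = np.sin(theta)**2
    return C**3 + C - C**2 * sin2, -C**2 * sin2

def sample_theta(E_keV, rng):
    # rejection sampling of theta from p(theta) proportional to S11(theta, E) sin(theta),
    # using a constant envelope of 2 (valid since S11 <= 2 and sin(theta) <= 1)
    while True:
        theta = rng.uniform(0.0, np.pi)
        if rng.uniform(0.0, 2.0) <= S11_S12(theta, E_keV)[0] * np.sin(theta):
            return theta

def sample_phi(theta, E_keV, P_L, gamma, rng):
    # numerical inversion of the cumulative conditional distribution for phi
    S11, S12 = S11_S12(theta, E_keV)
    chi = rng.uniform()
    F = lambda p: (p + (S12 / S11) * P_L * np.sin(p) * np.cos(p - 2.0 * gamma)) / (2.0 * np.pi) - chi
    return brentq(F, 0.0, 2.0 * np.pi)

rng = np.random.default_rng(42)
theta = sample_theta(4.0, rng)
phi = sample_phi(theta, 4.0, 0.5, 0.0, rng)
print(np.degrees(theta), np.degrees(phi))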
Once the scattering angles θ and φ are generated, the photon energy of the interacting photon is updated to E'=C(θ, x) × E and the Stokes parameters are updated as described by Eq. (<ref>) (normalised so that I'=C(θ, x) × I, as discussed in Sect. <ref>). Hereafter, the scattered photon continues its way through the transfer medium, in a direction k' that can be calculated from the original direction k and the scattering angles (θ, φ).
Eventually, all photon packets are recorded by (a set of) SKIRT instruments (corresponding to preset observer directions), using the peel-off technique as described in <cit.>. We recall that the Stokes vectors of recorded photons are first rotated to correspond to the observer's north direction (following the IAU conventions; see Sect. <ref>), before they are binned to produce Stokes spectra and polarisation maps.
§.§ Implementation verification
§.§.§ Simulation setup
To verify the X-ray polarisation implementation in SKIRT, we set up dedicated SKIRT simulations, specifically designed to recover the details of individual photon-electron interactions. The simulation setup is shown in Fig. <ref> and consists of a collimated beam illuminating a cloud of free electrons at the origin. The 4 keV source photons are emitted in the positive z-direction and are linearly polarised along the x-axis, with P_L=0.5. Without loss of generality, we can choose n = e_x as the reference direction (with e_x being the Cartesian unit vector in the positive x-direction), so that γ = 0 by definition. The central electron cloud is small compared to its distance to the source and has a column density of 10^19 cm^-2, making the flux contribution of photons that scatter more than once negligible (i.e. <0.001% of the one-time-scattered flux). The distance to the observer is D, and the spherical coordinates of the observer direction k' are just the scattering angle θ and the azimuthal scattering angle φ.
For any observer direction k', the scattered photon flux can be obtained with SKIRT, and because of the peel-off detection technique, this is possible for any exact (θ, φ)-direction, with no spurious blurring due to averaging over some finite solid angle around k' <cit.>. Furthermore, the `smart' photon detectors in SKIRT can recover the individual flux contributions of direct (i.e. non-interacting) and reprocessed photons <cit.>, in addition to recording the total Stokes I, Q, U, and V spectra. For all (non-forward) directions, the total observed flux is equal to the scattered flux component, which is virtually equal to the one-time-scattered flux (as further interactions are negligible at N_e = 10^19 cm^-2).
As a first sanity check, we calculated the ratio of the observed direct spectrum over the input spectrum for the forward direction (θ=0), recovering the expected result of exp[-N_e·σ(E)], with σ(E) being the scattering cross-section given by Eq. (<ref>). In the next subsections, we use the SKIRT simulation output to recover the phase function (Sect. <ref>), the polarisation degree (Sect. <ref>), and the polarisation angle (Sect. <ref>), which are then verified against the analytical formula obtained in Sect. <ref>.
§.§.§ Phase function
The scattering phase function Φ for polarised photons (P_L>0) depends explicitly on the azimuthal scattering angle φ, which is not the case for unpolarised Compton scattering (see Eq. (<ref>)). Using the SKIRT output for the simulation setup described in Sect. <ref>, we can infer the scattering phase function Φ_SKIRT as implemented in SKIRT by relating the (one-time) scattered photon flux density I(θ, φ) in a direction ' to the fraction of the specific beam luminosity L that interacted inside the electron cloud:
Φ_SKIRT(θ, φ; E, S)= I(θ, φ; E')· C(θ, E)^2 · D^2/(1-exp[-N_e·σ(E)])· L(E),
with I(θ, φ) being the photon flux measured at E' = C(θ, E) × E, as Compton scattering is inelastic, and L and D being known input parameters. We note that the factor C(θ, E)^2 appears in the numerator of Eq. (<ref>) as the interval dE around E corresponds to the interval dE'=C(θ, E)^2×dE around E' after scattering.
Fig. <ref> shows the scattering phase function Φ_SKIRT, which was inferred from the SKIRT results for the simulation setup described in Sect. <ref>. Comparing this Φ_SKIRT to the theoretical phase function given by Eq. (<ref>), we find an excellent agreement for all scattering angles (θ, φ), recovering the prominent azimuthal modulation that governs the scattering physics for polarised photons. Using Eq. (<ref>), we verified the SKIRT results for a range of polarisation states and photon energies E (especially beyond the IXPE range, where the energy dependence becomes more pronounced), assuring a correct implementation of polarisation-dependent Compton scattering in SKIRT.
§.§.§ Polarisation degree
The Stokes I(θ, φ), Q(θ, φ), and U(θ, φ) spectra calculated with SKIRT can be used to obtain the polarisation degree for the setup described in Sect. <ref> by applying Eq. (<ref>). Fig. <ref> shows the inferred polarisation degree as a function of the scattering angle, together with the theoretical formula for P_L', given by Eq. (<ref>). As the polarisation degree induced through Compton scattering is virtually symmetric around θ=90^∘ (see Fig. <ref>), we focus on scattering angles θ≤ 90^∘ only. We find an excellent agreement between the SKIRT results and the theoretical formula Eq. (<ref>), recovering the complex behaviour of P_L' with θ and φ. We find that forward scattering (θ= 0) does not change the polarisation degree, while P_L'=1 when θ=90^∘, as predicted by Fig. <ref>.
§.§.§ Polarisation angle
Similarly, the linear polarisation angle γ' can be calculated from the Stokes spectra obtained with SKIRT by applying Eq. (<ref>). The inferred polarisation angle γ'_SKIRT is then defined relative to the north direction of the observer frame, as discussed in Sect. <ref>. This γ'_SKIRT can be directly compared to the theoretical formula as given by Eq. (<ref>), as for this particular setup (Sect. <ref>), Eq. (<ref>) refers to a reference direction n' that is also the observer north direction[Using Eq. (<ref>) with k = e_z and n = e_x (see Sect. <ref>), n' is just equal to the spherical unit vector e_θ. The observer north direction is -e_θ, but Stokes vectors are invariant under rotations over 180^∘ (see Eq. (<ref>)).].
The linear polarisation angle inferred from the SKIRT output is shown in Fig. <ref>, for various scattering angles. For forward scattering and backscattering (θ= 0 and 180^∘, respectively), the observed γ' is just the projected[Projected on the sky as seen from the observer location.] incident polarisation direction, as the polarisation direction is retained by scattering at θ = 0^∘ or θ = 180^∘ (see Eq. (<ref>) and (<ref>)). For intermediate scattering angles, the observed polarisation angle lies between γ'=90^∘ (i.e. perpendicular to the scattering plane) and the specific angle corresponding to the projected incoming polarisation direction. For φ = 90^∘ or 270^∘, the projected incident polarisation direction is zero, and therefore, the observed polarisation direction is just perpendicular to the scattering plane (γ'=90^∘), identical to the case of unpolarised incident photons (see Sect. <ref>).
We find an excellent agreement between the SKIRT results and the theoretical formula Eq. (<ref>) for γ', recovering the complex behaviour of the polarisation angle with the scattering direction. Together with the results of Sect. <ref> and Sect. <ref>, this assures us that the Compton scattering formulae are implemented correctly in SKIRT.
§ TORUS MODELS
§.§ Model setup
As a first application of the new X-ray polarisation capabilities of the SKIRT code, we explored the spectro-polarimetric properties of a 2D toroidal reprocessor of cold gas, modelling the parsec-scale circumnuclear medium of AGN. This medium is expected to reprocess the primary X-ray emission of the central X-ray corona, producing a distinct polarisation signal which can be used to constrain the geometry of the reprocessor in heavily obscured AGN <cit.>.
We adopted a wedge torus geometry of uniform-density gas, centred on an isotropic point source representing the X-ray corona (see Fig. <ref>). This torus geometry is identical to the torus geometries of the BNTorus <cit.> and borus <cit.> spectral models and is also known as a `flared disc'. The X-ray radiative transfer problem is scale invariant, and therefore the model geometry is fully defined by the torus covering factor CF = cosθ and the equatorial hydrogen column density N_H, which are indicated on Fig. <ref>. For the wedge torus geometry, all obscured sightlines (cos i < CF) have the same line-of-sight N_H, while unobscured sightlines have N_H = 0.
We considered photo-absorption, fluorescence, and scattering by cold neutral gas as described in Sect. <ref>, and adopted <cit.> solar gas abundances. We assumed a standard power law spectrum for the central X-ray source, characterised by a photon index Γ and an exponential cut-off energy E_cut. The primary X-ray photons were assumed to be unpolarised (P_L=0), so that the emerging polarisation signal could be entirely attributed to reprocessing in the AGN torus. Indeed, IXPE observations of unobscured AGN indicate that the coronal polarisation levels are low <cit.>, so that the initial polarisation state of the coronal X-ray photons is easily washed out by multiple scattering in the AGN torus <cit.>.
We set up radiative transfer simulations in the wedge torus geometry, using the SKIRT code with X-ray polarisation enabled[SKIRT version 9 git commit .]. First, we specified the spatial grid on which the torus medium was discretised. In this case, the axial symmetry of the torus could be exploited to build a 2D grid, which would speed up the radiative transfer simulations significantly. Furthermore, the density distribution in the azimuthal plane could be gridded using a 2D polar grid with angular bins coinciding with the torus edges, so that the gridded distribution would equal the exact model distribution, with no discretisation effects.
Stokes spectra were calculated over the 0.3 to 200 keV range, adopting 1000 logarithmic wavelength bins, which corresponds to a spectral resolution of R=154. In SKIRT, X-ray radiative transfer interactions were modelled up to 500 keV, as the X-ray reflection spectrum and the corresponding polarisation signal are produced by Compton down-scattering at energies beyond 200 keV <cit.>.
§.§ Radiative transfer results
The SKIRT radiative transfer results are shown in Fig. <ref> for one particular realisation of the wedge torus model (CF=0.45 and cos i = 0.4), with different panels representing different values for the column density, ranging from logN_H=23.4 (Compton-thin, top left) to logN_H=25.0 (Compton-thick, bottom right). The adopted source parameters are representative of the nearby AGN in the Circinus galaxy (Γ = 1.8, E_cut=200 keV, and L_2-10=2.8 × 10^42 erg s^-1, observed at a distance of 4.2 Mpc). We focus on the SKIRT results in the 2-8 keV IXPE range.
The Stokes I, Q, and U spectra are shown at the top of each panel. The observed Stokes U parameter is always zero, which was expected as the north direction of the observer frame was chosen to just coincide with the projected symmetry axis of the torus (while further considering that a 2D system cannot exhibit features that are asymmetric relative to its projected symmetry axis). In a realistic observational context, the observed system will be rotated about the line of sight of the observer, mixing the Q and U parameters as described by Eq. (<ref>). This rotation-induced mixing of the Stokes parameters can be observed, which could be used to infer the orientation of the principal axis of the AGN system, forming a powerful probe on the (spatially unresolved) circumnuclear medium <cit.>.
From these Stokes spectra, the linear polarisation degree P_L and the linear polarisation angle γ were calculated as a function of the photon energy, using Eq. (<ref>) and (<ref>). As Stokes U is always zero, γ can only be 0^∘ or 90^∘. Therefore, one could visualise the observed polarisation direction with positive P_L values denoting polarisation in the north direction (γ=0^∘) and negative P_L values denoting polarisation in the horizontal direction (γ=90^∘) (see P_L, in purple, at the bottom of each panel in Fig. <ref>). As the north direction of the observer frame was chosen to just coincide with the projected torus axis, γ=0^∘ and γ=90^∘ correspond to polarisation parallel and perpendicular to the torus symmetry axis, respectively.
The radiative transfer results shown in Fig. <ref> demonstrate the complex behaviour of the polarisation observables of a 2D torus model in the 2-8 keV range. Most prominently, we note that the Stokes Q spectra have a fixed negative sign up to a certain photon energy E_flip (indicated in green on Fig. <ref>), beyond which Q becomes positive. With Stokes U being zero, this means that for E<E_flip, the observed emission is polarised perpendicular to the torus symmetry axis (i.e. γ=90^∘), while at higher energies, the polarisation is parallel to this symmetry axis (i.e. γ=0^∘).
The exact photon energy E_flip where the polarisation angle γ flips from 90^∘ to 0^∘ is a function of the opacity of the torus medium, increasing from 2.0 keV at logN_H=23.4 to 6.6 keV at logN_H=24.8[For logN_H=25.0, E_flip = 10.4 keV, outside of the IXPE range.]. Furthermore, the effect of the torus opacity is also observed in the logN_H=24.8 panel of Fig. <ref>, where the polarisation angle γ flips back to 90^∘ at 7.1 keV (i.e. at the Fe K absorption edge), where the torus opacity rises discontinuously. A second flip from γ=90^∘ to γ=0^∘ is then observed at 8.8 keV. This behaviour of the polarisation angle with photon energy is discussed in Sect. <ref>.
Away from E_flip, Stokes Q is relatively featureless and roughly follows the spectral shape of the reflected continuum. In particular, the polarised flux (i.e. Stokes Q) does not contain any fluorescent lines, as fluorescent line photons are emitted at P_L=0 (see Sect. <ref>). However, as the total flux (i.e. Stokes I) contains strong fluorescent lines, the linear polarisation degree is heavily diluted at the fluorescent line energies, which produces line features where P_L approaches zero (see Fig. <ref>). Similarly, at low N_H, the polarisation signal is heavily diluted by the direct flux of the (unpolarised) primary X-ray source.
§.§ Model grid
We calculated a grid of AGN torus models for observational data fitting based on the wedge torus geometry described in Sect. <ref> by varying N_H between 10^22 and 10^26 cm^-2, the photon index Γ between 1 and 3, and the exponential cut-off energy log E_cut between 1.5 and 2.9 (corresponding to 30 keV and 800 keV, respectively). We considered torus covering factors between 0.05 and 0.95, observed at inclinations between 0^∘ and 90^∘, with all parameters being sampled as described in Table <ref>. This leads to 203280 unique torus model realisations, forming a parameter space that is covering most of the obscured AGN observed in the local Universe <cit.>. For each parameter combination, the Stokes I, Q, and U spectra were calculated from 0.3 to 200 keV, with a spectral resolution of R=154 (see Sect. <ref>).
For all model realisations, the number of simulated photon packets was kept at 10^8, resulting in output spectra with limited Monte Carlo noise. The simulation run time of a single model realisation depends mostly on the covering factor and the column density of the torus, as more material requires more interactions to be modelled. In addition, the spectral hardness of the source has a small effect, as hard X-ray photons experience more scattering events before being absorbed. For a standard source spectrum with Γ = 1.8 and E_cut=200 keV, individual simulations take between 0.9 and 33.1 minutes on a modest 2.2 GHz 16-core node, demonstrating the computational efficiency of the SKIRT code. By using the high-performance computing infrastructure at the Flemish Supercomputer Centre, it was possible to run all 203280 torus model realisations in just 55 hours.
The radiative transfer results were converted to an XSPEC <cit.> table model named xskirtorsmooth, which is publicly released with this work[<https://github.com/BertVdM/xskirtor>]. This model can be used for observational data fitting within XSPEC as
model polrot × atable{xskirtorsmooth.mod}.
In addition to the five free model parameters listed in Table <ref>, the table model requires a redshift and a luminosity normalisation[Following the same convention as the cutoffpl model in XSPEC, the norm parameter is defined as the constant factor K in the unobscured flux density of the primary source: F_E(E) = K E^-Γexp(-E/E_cut), in units of counts/s/keV/cm^2. The norm parameter is approximately equal to the unobscured flux density at 1 keV.], providing the physical scaling of the model spectra. polrot then sets the roll angle θ of the torus system around the line of sight of the observer, as the table model corresponds to an observer that has its north direction coinciding with the projected torus symmetry axis. This brings the number of free model parameters to eight. The xskirtorsmooth model can be further combined with other XSPEC models, such as a tbabs component to model galactic foreground extinction <cit.>.
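For completeness, a minimal PyXspec sketch of this setup is given below; the model expression is the one quoted above, while the file location, the use of PyXspec rather than the interactive XSPEC prompt, and the parameter indices are illustrative assumptions:

from xspec import Model

# load the polarisation rotation times the spectro-polarimetric torus table
# (the table file is assumed to sit in the working directory)
m = Model("polrot*atable{xskirtorsmooth.mod}")
m.show()   # lists the eight free parameters discussed in the text
# individual parameters can then be set or frozen as usual, e.g. (index depends on the table):
# m(1).values = 30.0   # roll angle of the projected torus axis on the sky, in degrees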
The xskirtorsmooth model predicts both X-ray spectra and X-ray spectro-polarimetry, enabling the simultaneous fitting of spectral and polarimetric observations in the X-ray band. As both the spectral coverage and the spectral resolution of the model templates are largely exceeding the capabilities of modern X-ray polarisation observatories, they can be applied for the interpretation of observational IXPE, XPoSat, and eXTP data, proposal writing, and the definition of future missions.
The corresponding spectral model (modelling Stokes I) is well suited for fitting CCD-based X-ray spectra as obtained with modern X-ray observatories such as XMM-Newton, Chandra, NuSTAR, Swift, INTEGRAL, AstroSat, and more. Furthermore, we provided an additional XSPEC table model for the 1.5 to 15 keV subrange, with an adaptive spectral resolution achieving Δ E = 0.5 eV around the strongest fluorescent lines, for fitting the high-resolution X-ray spectra that are being obtained with the XRISM/Resolve <cit.> and will be obtained with the Athena/X-IFU <cit.>.
§ DISCUSSION
§.§ Polarisation angle as a function of the photon energy
In Sect. <ref>, we described how the reprocessed torus emission is polarised perpendicular to the torus symmetry axis (γ=90^∘) at low energies, while it is parallel to the torus symmetry axis (γ=0^∘) at higher energies. This behaviour of the polarisation angle γ with photon energy is observed for a wide range of torus parameter combinations (see Sect. <ref>), with a transition energy E_flip that scales with the torus opacity. To understand how this behaviour is related to the torus geometry, we can inspect the spatially resolved Stokes surface brightness maps, which can be calculated with SKIRT for each parameter combination.
As a demonstration, we focus on the logN_H=23.8 torus model realisation that was discussed in Sect. <ref>, for which the Stokes spectra are shown in the top right panel of Fig. <ref>. The corresponding Stokes Q (left), Stokes U (middle), and Stokes I (right) surface brightness maps are shown in Fig <ref>, with the top row representing the 2 keV to E_flip energy band, and the bottom row representing the E_flip to 8 keV band, with E_flip = 2.9 keV. In addition, the total surface brightness map (Stokes I, on the right) is overlaid with a linear polarisation map, visualising the polarisation degree and polarisation angle as derived from the Stokes Q and Stokes U surface brightness maps.
Looking at the top right panel of Fig. <ref>, we find that for E<E_flip, the total observed flux is dominated by reprocessed photons that scattered off the illuminated (unobscured) backside of the torus. For E>E_flip on the other hand (bottom right), source photons are able to penetrate the obscuring front side of the torus, so that the total observed flux is dominated by direct source emission. However, this unpolarised direct flux merely dilutes the observed polarisation signal, without influencing the polarisation angle γ that is eventually observed. Therefore, we can focus on the reprocessed flux only, finding that for E>E_flip, scattered photons originate from a more extended region on the illuminated backside of the torus, while they can also escape through the obscuring front side of the torus, as evident from the bottom right panel of Fig. <ref>.
Using the smart photon detectors in SKIRT (see Sect. <ref>), we can confirm that the reprocessed torus emission is dominated by one-time-scattered photons, which significantly simplifies the interpretation of the linear polarisation maps shown in Fig. <ref>. As the primary source photons are unpolarised (see Sect. <ref>), Compton scattering induces polarisation that is exactly perpendicular to the scattering plane (see Eq. (<ref>)). This means that the projected polarisation pattern is always circular (i.e. in each pixel, the observed polarisation direction is perpendicular to the direction towards the central pixel), as shown in Fig. <ref>. Indeed, the Stokes Q surface brightness is positive in the east and west quadrants (describing polarisation in the vertical direction) and negative in the north and south quadrants (describing polarisation in the horizontal direction). Similarly, Stokes U is positive in the north-east and south-west quadrants and negative in the north-west and south-east quadrants (see Fig. <ref>). The polarisation direction of individual reprocessed photons thus depends on the specific sky region of the final scattering interaction inside the torus (in this specific case, dominated by single scattering).
In addition, we find that the observed polarisation degree also depends on the sky region, with P_L being high in the east and west quadrants and low in the north and south quadrants (see Fig. <ref>, where P_L is visualised by the length of the polarisation map segments). This behaviour is a direct result of the projected inclined torus geometry: In the north and south quadrants, the distribution of last scattering angles (i.e. to reach the observer) forms a narrow peak centred on 130^∘ and 50^∘, respectively. For both quadrants, this corresponds to a distribution of polarisation degrees having P_L<0.3 for 60% of all photons and P_L<0.5 for 90% of all photons, as predicted by Eq. (<ref>). On the other hand, the distribution of scattering angles is much broader in the east and west quadrants and peaks at 90^∘, which corresponds to higher average polarisation degrees (see Fig. <ref>). Indeed, in these side quadrants, 50% of all photons have P_L>0.6.
Summarising, the net (i.e. spatially integrated) polarisation direction that is eventually observed, is the result of the precise balance between the polarised flux originating from the different sky regions, with both the total flux and the polarisation degree of each sky region being closely related to the torus geometry. As the projected torus geometry is perfectly symmetric around the observer north direction, the Stokes U surface brightness maps are perfectly antisymmetric around this axis (see Fig. <ref>), so that the spatially integrated Stokes U fluxes are always zero, as discussed in Sect. <ref>. The Stokes Q surface brightness maps on the other hand do not exhibit such trivial symmetry, so that the spatially integrated polarisation angle γ depends on the balance between the polarised flux in the east and west quadrants (where Stokes Q>0 and γ=0^∘) compared to the north and south quadrants (where Stokes Q<0 and γ=90^∘).
We can now explain the behaviour of the polarisation angle with photon energy for this particular setup: At low energies (E<E_flip), the reprocessed torus emission is purely dominated by photons that scattered off a small region on the illuminated backside of the torus. This backside region mostly covers the northern sky quadrant where Stokes Q is negative, so that the spatially integrated polarisation signal is perpendicular to the torus symmetry axis (γ=90^∘). At higher energies (E>E_flip), the reprocessed torus emission originates from a more extended region on the torus backside and also escapes through the obscuring front side of the torus, so that all four sky quadrants are covered. At E_flip, the polarised flux is still dominated by the unobscured torus backside, but now the positive Q contributions on the torus backside (close to the torus front edge, as shown in red) start to dominate over the negative Q contributions (shown in blue; see the bottom left panel of Fig. <ref>). Indeed, while most of the reprocessed flux is still contained within the northern sky quadrant, we find that the polarisation degree is significantly higher in the east and west quadrants, so that the polarised flux is actually dominated by the contribution of these side quadrants, and the net polarisation is parallel to the torus axis (γ=0^∘). At even higher energies, the polarised flux is dominated by photons escaping through the obscuring front side of the torus, mostly from the east and west sky regions, so that Q>0 and γ=0^∘.
The flux balance between the northern sky region (Q<0) and the east and west sky regions (Q>0) is determined by the level of obscuration of the latter regions. This explains why the transition energy E_flip scales with the torus opacity (see Fig. <ref>), as higher torus column densities require higher photon energies (with more penetrating power) to escape from the east and west sky regions. This is a direct result of the specific torus geometry, and we conclude that spatially resolved Stokes surface brightness maps form a powerful tool to study the geometrical effects that are encoded in spectro-polarimetric observations. The SKIRT code allows for calculating these surface brightness maps at a high signal-to-noise, in limited computational time.
§.§ Inclination - covering factor contours
Inspired by Sect. <ref>, where we found an interesting behaviour of the polarisation observables at cos i ≲CF, this section discusses the wedge torus results of Sect. <ref> as a function of the cos i and CF model parameters. This analysis should also be useful for the interpretation of observational data, to constrain torus properties from broadband polarimetry. Fig. <ref> shows the total polarisation degree (over the 2-8 keV band) as a function of cos i and CF, with different panels representing different values for the torus column density (increasing from logN_H=23 to logN_H=25).
We find that the total polarisation degree is virtually zero (P_L<0.3%) for unobscured sightlines (cos i > CF), where the polarisation signal of the AGN torus is mostly diluted by the direct source emission[This is when assuming unpolarised primary emission (Sect. <ref>). Alternatively, the torus signal is `diluted' by the polarisation signal of the X-ray corona <cit.>.]. The polarisation degree reaches a maximum at cos i ≲CF, where the illuminated backside of the torus can be observed without significant obscuration, similar to Sect. <ref>. Furthermore, the total polarisation degree is observed to increase from 1% to 30% when logN_H increases from 23 to 25, as more torus material results in more scattering interactions inducing a stronger polarisation signal, while also the unpolarised direct flux component is more obscured.
Finally, two distinct polarisation maxima can be observed in each panel of Fig. <ref>, separated by a region where P_L approaches zero. One local maximum is located at cos i ≲CF = 0.7 to 0.9 (top right corner of each panel, with γ=90^∘), while the other local maximum is located at cos i ≲CF = 0.2 to 0.4 (bottom left corner of each panel, with γ=0^∘). Similar to Sect. <ref>, we calculated the spatially resolved Stokes surface brightness maps to study this behaviour, which are shown in Fig. <ref>. We find that both of these polarisation maxima are related to scattering on the illuminated backside of the torus, similar to Sect. <ref>.
At cos i ≲CF = 0.7 to 0.9 (i.e. tori that are almost entirely closed), the illuminated backside of the torus mostly covers the northern sky quadrant (where Stokes Q<0), so that the net polarisation signal is perpendicular to the torus axis (γ=90^∘). For cos i ≲CF = 0.2 to 0.4 on the other hand (i.e. thin, disc-like tori), the illuminated backside mostly covers the east and west quadrants (where Stokes Q>0), so that the polarisation is parallel to the torus axis (γ=0^∘). For parameter combinations in between these two maxima, the torus backside covers all three sky quadrants, so that positive and negative Stokes Q contributions (partially) cancel out, eventually reaching P_L=0 at the border region that separates the two local maxima.
§.§ Polarisation angle flip for different sightlines
In Sect. <ref>, we presented Stokes spectra for one sightline only: cos i ≲CF, which offers an unobscured view on the illuminated backside of the torus. The reason for this specific choice is that we plan to compare these results to simulations that include a polar component in follow-up work, showing similar spectra[Indeed, polar-extended dusty gas could act as an illuminated reflector that is visible through unobscured sightlines, which would require less fine-tuning of the observer inclination (forming a natural solution for strong reflection spectra).]. However, because of this very specific viewing angle, the trends that were found for CF=0.45 and cos i =0.4 in Sect. <ref> might differ from more general trends at arbitrary obscured sightlines, which we investigate in this section. In particular, we focus on the photon energy E_flip where the polarisation angle γ flips from 90^∘ to 0^∘, as a function of the torus column density.
Using the radiative transfer results of the torus model grid that was presented in Sect. <ref>, we measured E_flip for a broad range of torus parameter combinations, focussing on the same source parameters as Sect. <ref> (Γ = 1.8 and E_cut=200 keV), but a higher torus covering factor (CF=0.85) to allow for more obscured sightlines (cos i < CF). In Fig. <ref>, the polarisation flip energy E_flip is shown as a function of the torus column density for different (obscured) observer inclinations, generalising the trend that was found in Sect. <ref>: A linear relation between logN_H and logE_flip is found for all viewing angles, which is broken at the Fe K absorption edge at 7.1 keV. Indeed, similar to Sect. <ref>, E_flip is found to correlate with the torus opacity, which increases discontinuously at the Fe K edge for a fixed column density.
For CF=0.85, the polarisation flip happens when the torus medium becomes sufficiently transparent, so that the reprocessed flux escaping through the front side of the torus (mostly Q>0) starts to dominate over the flux related to scattering on the torus backside (Q<0). As the former region is obscured by the torus while the latter region is not, the ratio between these two components naturally scales with the torus opacity, explaining the trend with logN_H (see Fig. <ref>). In addition, as the flux of the torus backside decreases with increasing inclination (which is a projection effect), the transmitted flux (having Q>0) starts to dominate at lower E_flip for higher inclinations (i.e. lower cos i), explaining the observed trend with cos i in Fig. <ref>.
Finally, we note how the details of the polarisation angle flip depend on the specific combination of CF and cos i. For CF=0.85, we found a balance between transmission through the torus front side and scattering on the torus backside for all cos i, which is different from the scenario discussed in Sect. <ref>, where we found a balance between positive and negative Stokes Q contributions on the torus backside for CF=0.45. For smaller covering factors, the positive Stokes Q contributions on the torus backside could be dominating over the entire IXPE range (so that no polarisation flip is observed), while for some CF and cos i combinations, two flips could occur. These intricacies, and their link to the torus geometry, can be studied in great detail with SKIRT, in particular based on Stokes surface brightness maps, which can be calculated in little computational time.
§.§ XSPEC torus model
With this work, we release an AGN torus model that describes both X-ray spectra and X-ray polarisation for observational data fitting with XSPEC (see Sect. <ref>). This xskirtorsmooth model represents a smooth toroidal reprocessor of cold gas, positioned in the equatorial plane, as described in Sect. <ref> (see Fig. <ref>). While similar smooth torus models have been very successful in modelling the observational X-ray spectra of obscured AGN <cit.>, these models do not incorporate geometrical complexities such as clumpy or filamentary substructures, or polar-extended dusty gas, which might be omnipresent in local AGN (Sect. <ref>). Therefore, the xskirtorsmooth model should be used as a tool to gain insights into the representative properties of the circumnuclear medium (such as its covering factor or average column density), rather than as a definitive description of the AGN torus geometry, when applied to observational data.
The xskirtorsmooth model is provided in two flavours: a coupled configuration which provides the direct and reprocessed flux components as a single table, and a decoupled configuration which allows for varying the line-of-sight column density independently from the equatorial column density[In fact, all model parameters could be varied independently (given that there would be a physical motivation to do so).] <cit.>. As a final remark, we note that the exponential cut-off energy of the primary X-ray source (Sect. <ref>) has a noticeable effect in the 2-8 keV range, even when E_cut>100 keV[For example, for E_cut = 100 keV, the effect of the exponential cutoff on the X-ray spectrum at 8 keV is 8%.]. Therefore, the logE_cut model parameter (see Table <ref>) is relevant beyond the modelling of hard X-ray spectra and should not be neglected at lower X-ray energies such as the IXPE range.
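For reference, loading such a table follows the standard XSPEC additive-table syntax; a minimal PyXspec sketch might look as follows (the spectrum and table file names below are placeholders, not the released file names):

from xspec import AllData, Model

AllData("obscured_agn.pha")                       # load an observed spectrum (placeholder name)
m = Model("phabs*atable{xskirtorsmooth.fits}")    # Galactic absorption times the torus table
m.show()                                          # inspect the table parameters (column density, covering factor, inclination, ...)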
§ SUMMARY AND OUTLOOK
In this work, we presented a general framework for modelling X-ray polarisation in 3D radiative transfer simulations of cold gas and dust, to model the spectro-polarimetric signal that is produced by X-ray reprocessing in AGN circumnuclear media. We discussed how radiative transfer processes depend on the polarisation state of the incoming photon and how the polarisation state is updated by Compton scattering and fluorescence (Sect. <ref>). We described how polarised X-ray radiative transfer can be implemented using a Monte Carlo method and provided an implementation to the 3D radiative transfer code SKIRT, which is publicly available online[<https://skirt.ugent.be>] (Sect. <ref>).
As a first application, we focussed on a 2D torus geometry in Sect. <ref>, to demonstrate the new X-ray polarisation capabilities of the SKIRT code without going into the details of 3D structure and its effect on spectro-polarimetric observables. However, we note that the current SKIRT implementation works in full 3D already, so that more complex models <cit.> can be run with X-ray polarisation enabled (i.e. without any further modifications to the code). In future work, we will focus on these models with a truly 3D structure beyond the classical torus.
For the 2D wedge torus model (Sect. <ref>), we calculated Stokes spectra at a high signal-to-noise (Fig. <ref>) and computed the linear polarisation angle and polarisation degree as a function of photon energy. We found that the polarisation angle flips from 90^∘ to 0^∘ at a specific energy inside the IXPE range (Sect. <ref>), which we interpreted as a balance between the reprocessed flux originating from different regions of the torus, with a direct link to the torus geometry (Sect. <ref>). Furthermore, we found that the polarisation degree reaches a maximum at (obscured) sightlines having cos i ≲CF (Sect. <ref>), where the torus backside can be observed without significant obscuration. However, depending on the torus covering factor, this polarisation maximum can be parallel or perpendicular to the torus axis (see Fig. <ref>). Finally, we found that the specific photon energy E_flip where the polarisation angle γ flips from 90^∘ to 0^∘ scales with the torus opacity (Sect. <ref>), as the polarisation flip is related to a balance between torus regions with different levels of obscuration. These intricacies, and their link to the torus geometry, were studied in great detail using the Stokes surface brightness maps, which SKIRT can calculate in a short amount of computational time.
With this work, we release spectro-polarimetric templates for fitting observational data of obscured AGN based on the torus model grid presented in Sect. <ref> (and discussed in Sect. <ref>). This X-ray torus model is provided as an XSPEC table named xskirtorsmooth, which can simultaneously describe X-ray spectra and spectro-polarimetry over the 0.3 to 200 keV range, with a spectral resolution of R=154 (Sect. <ref>). We provided an additional high-resolution XSPEC table model with an adaptive energy resolution over the 1.5 to 15 keV subrange, for fitting the microcalorimeter X-ray spectra obtained with XRISM/Resolve and Athena/X-IFU. All tables are publicly available online[<https://github.com/BertVdM/xskirtor>].
The SKIRT code can now model X-ray polarisation in AGN circumnuclear media and predict the spectro-polarimetric X-ray signal of complex 3D models, with all features of the established SKIRT framework available. SKIRT is highly optimised in terms of computational efficiency, allowing for complex 3D models to be explored in a short timeframe. Furthermore, the SKIRT code offers an unmatched geometrical flexibility for setting up simulations in full 3D, which has now become available to X-ray polarisation modelling. Finally, SKIRT can calculate polarisation maps at a high signal-to-noise (see Fig. <ref> and <ref>), which forms a powerful tool to study the geometrical effects that are encoded in spectro-polarimetric observations. SKIRT can calculate fluxes, images, spectra and polarisation maps from mm to X-ray wavelengths, and the community is warmly invited to use the code in any way they see fit.
B. V. acknowledges support by the Fund for Scientific Research Flanders (FWO-Vlaanderen, project 11H2121N). M. S. acknowledges support by the Science Fund of the Republic of Serbia, PROMIS 6060916, BOWIE and by the Ministry of Education, Science and Technological Development of the Republic of Serbia through the contract No. 451-03-9/2022-14/200002. G. M. acknowledges financial support from Italian MUR under grant PNRR-M4C2-I1.1-PRIN 2022-PE9-An X-ray view of compact objects in polarized light -F53D23001230006-Finanziato dall'U.E.-NextGenerationEU. We wish to thank K. A. Arnaud for support with the XSPEC package.
|
http://arxiv.org/abs/2409.02250v1 | 20240903192248 | Elastic screening of pseudo gauge fields in graphene | [
"Christophe De Beule",
"Robin Smeyers",
"Wilson Nieto Luna",
"E. J. Mele",
"Lucian Covaci"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
Department of Physics and NANOlab Center of Excellence, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp, Belgium
Department of Physics and NANOlab Center of Excellence, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp, Belgium
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
Department of Physics and NANOlab Center of Excellence, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp, Belgium
§ ABSTRACT
Lattice deformations in graphene couple to the low-energy electronic degrees of freedom as pseudo scalar and gauge fields. Using molecular dynamics simulations, we show that the optical component of the displacement field, i.e., the relative motion of different sublattices, contributes at the same order as the acoustic component and effectively screens the pseudo gauge fields. In particular, we consider twisted bilayer graphene and corrugated monolayer graphene. In both cases, optical lattice displacements significantly reduce the overall magnitude of the pseudo magnetic fields. For corrugated graphene, we further show that the corresponding electronic bands are significantly modified by the screened pseudo magnetic field. Previous studies based on continuum elasticity, which ignores this effect, have therefore systematically overestimated the strength of the strain-induced pseudo magnetic field. Our results have important consequences for the interpretation of experiments and the design of straintronic applications.
Elastic screening of pseudo gauge fields in graphene
Lucian Covaci
September 9, 2024
====================================================
It is well known that lattice deformations in graphene couple to the low-energy electronic degrees of freedom as effective scalar and gauge fields <cit.>. Intuitively, this can be understood from local symmetry breaking which shifts the Dirac cones near charge neutrality both in energy and momentum, respectively. Indeed, the microscopic 𝒞_3z symmetry of pristine graphene pins the two Dirac points at the zone corners (valleys) of the Brillouin zone <cit.>. Atomic displacements that break this symmetry, i.e., shear strain, therefore result in a spatially-varying shift of the Dirac point. This is the action of a pseudo vector potential with opposite sign in the two valleys which preserves time-reversal symmetry. For specific strain configurations, the corresponding pseudo magnetic fields give rise to pseudo Landau levels <cit.> with field strengths that can exceed several hundreds of Tesla <cit.>. Moreover, similar pseudo gauge fields also arise in strained 2D semiconductors <cit.>, as well as in 3D topological semimetals <cit.>. In fact, pseudo gauge fields were first considered in semiconductors in the 1980s <cit.>. In the context of graphene, they were first discussed in carbon nanotubes where the curvature of the tube gives rise to a pseudo gauge field that results in a band gap for nanotubes that should otherwise be metallic <cit.>.
In most electronic continuum theories, the pseudo gauge field is derived from deformations obtained from continuum elasticity, which accounts only for acoustic displacements. Since graphene has two sublattices [see Fig. <ref>(a)], there are both center-of-mass (acoustic) and relative (optical) displacements, and both are important in the long-wavelength limit. In this paper, we show that the optical displacement field contributes at the same order of magnitude to the pseudo vector potential as the acoustic shear strains. This finding resolves a long-standing conundrum from earlier works that combine molecular dynamics (MD) simulations and electronic tight-binding models, for which the pseudo magnetic field is found to be much smaller than predicted by elastic theory <cit.>.
We demonstrate our theory for two graphene systems: twisted bilayer graphene and corrugated graphene, see Fig. <ref>(c). The displacement fields are computed from MD simulations with the lammps code <cit.>. In particular, for twisted bilayer graphene near the magic angle we show that optical displacements induced by lattice relaxation reduce the pseudo magnetic field by almost one order of magnitude.
Pseudo gauge fields in graphene revisited.—Consider a sheet of graphene subject to atomic displacement fields that vary slowly relative to the lattice. These may be induced by external stresses or lattice relaxation, e.g., in a graphene moiré. In the long-wavelength limit, we define fields u_σ( r) and h_σ( r) for sublattice σ = A,B [see Fig.<ref>(a)] projected on and normal to the nominal graphene xy plane, respectively. The long wavelength acoustic and optical displacement fields are defined as
u = ( u_A + u_B ) / 2, h = ( h_A + h_B ) / 2,
v = u_A - u_B, w = h_A - h_B.
Atomic displacements modulate the electronic hopping amplitude, which couples to the low-energy Dirac electrons as effective scalar and gauge fields <cit.>. We revisit this theory, starting from a tight-binding description for the p_z electrons of graphene in the nearest-neighbor approximation. We show that there is an important contribution to the pseudo gauge field from optical displacements [v and w in Eq. (<ref>)] that was not considered in previous works and which cannot be obtained from continuum elasticity <cit.>. Here we do not consider the pseudo scalar field or deformation potential <cit.> which to leading order only depends on acoustic displacements as it enters through on-site potentials. Moreover, the deformation potential gives rise to electron-hole puddles and is strongly screened <cit.>.
In the presence of atomic displacements, the Hamiltonian for p_z electrons in graphene can be written as
H = -∑_ r∑_n=1^3 t_n( r) c_A^†( r) c_B[ r + δ_n( r) ] + h.c.,
where the first sum runs over cells r and the second one over nearest neighbors. Here c_σ^†( r) [c_σ( r)] are electron creation (annihilation) operators. The position of A atoms is given by r + u_A( r) + h_A( r) ẑ and δ_n( r) = δ_n^0 + u_B( r + δ_n^0) - u_A( r) + [ h_B( r + δ_n^0) - h_A( r) ] ẑ are the nearest-neighbor bond vectors with δ_n^0 those of pristine graphene, see Fig. <ref>(a). Taking the continuum limit, one obtains an effective low-energy Hamiltonian <cit.>
H_eff = ħ v_F ∑_τ∫ d^2 r ψ_τ^†( r) [ -i ∇ + eτ/ħ A( r) ] ·σψ_τ( r),
with ħ v_F = √(3)t_0a/2, σ = (τσ_x, σ_y), and field operators ψ_τ( r) = [ ψ_τ A( r), ψ_τ B( r) ]^t where τ = ± 1 is the valley index. See Supplementary Material (SM) for details [See Supplementary Material at [insert link] for more details on the derivation of the continuum model and the pseudo gauge field, the lammps calculations for twisted bilayer graphene and corrugated graphene, as well as continuum elasticity for the corrugated graphene]. Here we take the zigzag direction along the x axis. In this case, the pseudo vector potential is defined as
A_x( r) - i A_y( r) = -1/ev_F∑_n=1^3 δ t_n( r) e^i K ·δ_n^0,
where δ t_n = t_n - t_0 is the change in hopping with t_0 ≈ 2.8 eV and K_+ = 4π/(3a) x̂ is the zone corner [see Fig. <ref>(b)] with a ≈ 2.46 Å the graphene lattice constant <cit.>. In lowest order of displacements and their gradients,
δ_n( r)- δ_n^0 ≃( δ_n^0 ·∇) ( u+ h ẑ) - ( v + w ẑ),
where the first and second term give acoustic and optical contributions, respectively.
From Eqs. (<ref>) and (<ref>),
A = √(3)ħβ/2ea[ [ u_yy - u_xx; u_xy + u_yx ] + 2 √(3)/aẑ×( v + w ∇ h ) ],
with β = -(a/√(3)t_0) ∂ t/∂ d|_0 ≈ 3 <cit.> and u_ij = [∂_i u_j + ∂_j u_i + ( ∂_i h ) ( ∂_j h ) ]/2
the strain tensor for acoustic displacements. Note this expression only holds for the orientation of the unstrained graphene shown in Fig. <ref>(a). See SM for the general case <cit.>. The first term in Eq. (<ref>) is the usual acoustic contribution (A_ac) while the second term (A_op) is new and gives a contribution from the relative motion between sublattices. This is one of the main results of our work. It is reminiscent of the renormalization of the electron-phonon coupling by optical phonons in pristine graphene <cit.>. In general, the optical component gives a pseudo magnetic field (PMF)
B_op = ẑ·( ∇× A_op) = 3 ħβ/ea^2∇·( v + w ∇ h ).
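As an illustration of the expressions for A and B above, a minimal numerical sketch is given below. It assumes the acoustic and optical fields have been interpolated onto a regular Cartesian grid; the function names, SI units, and grid conventions are our own choices, not part of any released code.

import numpy as np

A_GR = 2.46e-10          # graphene lattice constant a [m]
BETA = 3.37              # beta = -(a / sqrt(3) t0) dt/dd
HBAR_OVER_E = 6.582e-16  # hbar/e [V s]

def pseudo_vector_potential(u, v, h, w, dx, dy):
    """Acoustic and optical pseudo vector potentials for zigzag along x.

    u, v : (2, Ny, Nx) acoustic/optical in-plane displacements [m]
    h, w : (Ny, Nx)    acoustic/optical out-of-plane displacements [m]
    dx, dy : grid spacings [m]; arrays are indexed as (y, x).
    """
    dux_dy, dux_dx = np.gradient(u[0], dy, dx)
    duy_dy, duy_dx = np.gradient(u[1], dy, dx)
    dh_dy, dh_dx = np.gradient(h, dy, dx)
    # acoustic strain tensor, including the out-of-plane contribution
    u_xx = dux_dx + 0.5 * dh_dx**2
    u_yy = duy_dy + 0.5 * dh_dy**2
    u_xy = 0.5 * (dux_dy + duy_dx + dh_dx * dh_dy)
    pref = np.sqrt(3.0) * HBAR_OVER_E * BETA / (2.0 * A_GR)
    A_ac = pref * np.array([u_yy - u_xx, 2.0 * u_xy])
    # optical part: (2 sqrt(3)/a) * z_hat x (v + w grad h)
    fx = v[0] + w * dh_dx
    fy = v[1] + w * dh_dy
    A_op = pref * (2.0 * np.sqrt(3.0) / A_GR) * np.array([-fy, fx])
    return A_ac, A_op

def pseudo_magnetic_field(A, dx, dy):
    """B_z = dA_y/dx - dA_x/dy, in Tesla for SI inputs."""
    dAx_dy, _ = np.gradient(A[0], dy, dx)
    _, dAy_dx = np.gradient(A[1], dy, dx)
    return dAy_dx - dAx_dy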
To demonstrate our theory, we performed lammps molecular dynamics simulations for two graphene systems: moiré graphene and corrugated graphene. In all cases, we find that A_op acts to reduce the overall magnitude of the PMF.
Graphene moirés.—We first consider twisted bilayer graphene (TBG) <cit.>, a twist moiré formed by stacking two layers of graphene with a relative twist, see Fig. <ref>(c). In moirés, the atomic stacking between layers varies spatially. Certain stackings are favorable, and the system relaxes to minimize the total elastic and adhesion energy. This gives rise to atomic displacements and concomitant pseudo gauge fields <cit.>.
As before, one defines displacement fields projected on the xy plane, u_l and v_l with l = 1,2 the layer index, and similar for out-of-plane displacements. These displacements are calculated for a relaxed structure with lammps using the REBO intralayer <cit.> and DRIP interlayer potential <cit.>. As expected for a twist moiré, the acoustic in-plane displacement field is almost entirely solenoidal, consistent with previous theory <cit.> and experiment <cit.>. On the other hand, the optical field is mostly irrotational yielding a finite optical PMF. See SM for details on the displacement fields <cit.>. The acoustic, optical, and total PMF are shown in Fig. <ref>(a). We only show the PMF for one layer since the D_6 symmetry of the moiré yields A_2(x,y) = diag(-1,1) A_1(-x,y) from 𝒞_2y rotation symmetry. Moreover, the PMF is odd under r ↦ - r due to 𝒞_2z symmetry. Consequently, we have B_2(x,y) = B_1(x,-y). We find that the acoustic and optical PMFs have similar shapes but opposite signs such that B_tot = B_ac + B_op is about five times smaller than B_ac. We quantify this by plotting the root mean square (RMS) as a function of twist angle in Fig. <ref>(b). The magnitudes of both B_ac and B_tot are nearly constant above the magic angle <cit.> and the ratio √(<B_ac^2>/<B_tot^2>)≈ 4.8 for twists from 1^∘ to 4^∘. We also show the PMF from Eq. (<ref>) in Fig. <ref>(c), using the atomic positions from lammps and intralayer hopping
t(d) = V_ppπexp[ - β( √(3) d / a - 1 ) ],
that only depends on the interbond distance d with V_ppπ = 2.8 eV and β = 3.37 <cit.>. This is a good approximation because the nearest neighbors lie approximately in the same plane such that the overlap integral is dominated by V_ppπ.
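For concreteness, this bond-length dependence can be evaluated with a few lines of Python (a sketch with our own variable names; distances in Angstrom):

import numpy as np

A_GR = 2.46                  # lattice constant a [Angstrom]
T0, BETA = 2.8, 3.37         # V_pppi [eV] and decay parameter

def hopping(d):
    """Nearest-neighbour hopping t(d) = V_pppi * exp[-beta (sqrt(3) d / a - 1)]."""
    return T0 * np.exp(-BETA * (np.sqrt(3.0) * d / A_GR - 1.0))

def delta_hopping(d):
    """Change delta t = t(d) - t0 entering the pseudo vector potential."""
    return hopping(d) - T0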
Our results show that optical displacements, i.e., the relative motion of different sublattices, effectively screen the PMF in twisted graphene, yielding much lower values as predicted by continuum elasticity <cit.>. Physically, we find that the relative displacements act to restore the microscopic 𝒞_3z symmetry by reducing changes in the bond length.
Corrugated graphene.—As a second example, we consider monolayer graphene subjected to a long-wavelength corrugation, see Fig. <ref>(c). This setup may be realized by engineering a suitable substrate <cit.> or through a buckling transition as was observed for graphene on NbSe_2 <cit.>. In particular, we consider a periodic corrugation with C_3v symmetry <cit.> commensurate with graphene and given by h_sub( r) = h_0 ∑_n=1^3 cos( g_n · r + θ) where g_1 = 4π / ( √(3) L) ( 0, 1 ) and g_2,3 = 4π / ( √(3) L ) ( ∓√(3)/2, -1/2 ). These are the three shortest nonzero reciprocal vectors of the corrugation related by 𝒞_3z rotations. Here h_0 and θ control the amplitude and shape of the corrugation and L = Na is the superlattice constant with integer N≫1.
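A short script generating this height profile on a grid (purely illustrative; the function name is ours) reads:

import numpy as np

def h_substrate(x, y, h0, L, theta):
    """C_3v height modulation h_sub(r) = h0 * sum_n cos(g_n . r + theta)."""
    g = (4.0 * np.pi / (np.sqrt(3.0) * L)) * np.array(
        [[0.0, 1.0],
         [-np.sqrt(3.0) / 2.0, -0.5],
         [np.sqrt(3.0) / 2.0, -0.5]])
    h = np.zeros_like(np.asarray(x), dtype=float)
    for gx, gy in g:
        h += np.cos(gx * x + gy * y + theta)
    return h0 * h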
The corrugated substrate is modeled in lammps with a dense honeycomb lattice to ensure commensuration and mimic a smooth substrate. Interactions in the graphene are modeled with REBO <cit.> and the interaction with the substrate by a 12-6 Lennard-Jones potential <cit.>. The resulting in-plane displacements are analyzed using a Helmholtz decomposition, u = ∑_ g( α_ gẑ× g + β_ g g ) e^i g · r / (ig^2) and similar for v. Here the transverse and longitudinal coefficients α_ g and β_ g are c-numbers corresponding to the curl and divergence, respectively. These coefficients are constrained by symmetry <cit.> and together with h_ g and w_ g there are four complex coefficients for each reciprocal star. For example, for θ = 0 (modulo π/3) the corrugation has C_6v symmetry. In this case, 𝒞_2z implies u( r) = - u(- r) and h( r) = h(- r) but v( r) = v(- r) and w( r) = -w(- r) since a 𝒞_2z rotation exchanges the sublattices.
As a first approximation, one can use continuum elasticity for the acoustic in-plane displacement field. In the limit where graphene is pinned to the substrate, we have h=h_sub <cit.> and the only nonzero in-plane coefficients are β_1 = (1 - 3ν) π^2 h_0^2 e^-2iθ / (3L^2), β_2 = (3-ν) π^2 h_0^2/ (3L^2), and β_3 = 2π^2 h_0^2 e^2iθ/ (3L^2) where the subscript indexes the star and ν is the Poisson ratio of graphene. From lammps, we also find that volumetric components are dominant for both u and v. While some rotational components are symmetry-allowed, they are at least one order of magnitude smaller <cit.>. Hence, unlike for twist moirés, u is mostly irrotational for corrugations.
As was the case for TBG, optical displacements are significant even in the regime where continuum elasticity accurately describes the acoustic displacements (h_0/L < 0.02). While generally v is at least one order of magnitude smaller than u, the former contributes to the pseudo gauge field at zeroth order and the latter via the strain tensor, see Eq. (<ref>). Hence the acoustic part is suppressed by a factor a/L such that ultimately both fields contribute at the same order. In Fig. <ref>(a) we show the PMFs for a corrugation with h_0 = 5 Å and θ = 15^∘. We see that the total PMF is more concentrated and its magnitude is halved compared to B_ac. This is further illustrated in Fig. <ref>(b) where the RMS of the PMFs is plotted as a function of h_0/L. Note that for h_0/L < 0.02, the PMF from Eq. (<ref>) matches Eq. (<ref>) only if we include optical displacements. For large amplitudes h_0/L > 0.02, there are higher-order corrections which further reduce the magnitude of the PMF.
The suppression of the pseudo gauge field due to optical displacements strongly modifies the electronic minibands near charge neutrality. In Fig. <ref>, we show the bands calculated with the continuum model together with the bands from the tight-binding model in Eq. (<ref>) using atomic positions from lammps with the hopping amplitude from Eq. (<ref>). As expected, optical contributions reduce the minigaps between these bands and increase the bandwidth, while the topology of the bands remains unchanged. The latter is given by the valley Chern number 𝒞 = ( 𝒞_+ - 𝒞_- ) / 2 since the magnetic point group in a single valley breaks time-reversal symmetry. For a C_3v corrugation, it is given by
C_3v(C_3) = < 𝒞_3z, ℳ_x 𝒯> where ℳ_x (x ↦ -x) is an in-plane mirror and 𝒯 is spinless time reversal. Importantly, the band manifold near charge neutrality is well reproduced by the continuum model only if we include optical contributions. However, the continuum model cannot reproduce the higher bands from tight binding. This discrepancy is likely due to higher-order corrections in the continuum theory such as a position-dependent Fermi velocity <cit.>. Moreover, using Eq. (<ref>) directly by taking its Fourier transform changes only slightly the continuum bands <cit.>.
The reduced band flattening and gaps from the elastically screened gauge field, has important consequences for the feasibility of symmetry-broken phases <cit.> and fractional Chern insulators <cit.> in periodically corrugated graphene. However, in the presence of an electric field normal to the xy plane, which couples to the height modulation <cit.>, one can still obtain isolated and flattened minibands. This results from the sublattice polarization induced by the PMF in real space <cit.> such that a scalar potential V( r) = V_0 h( r) / h_0 effectively acts as a staggered sublattice potential on the superlattice scale. This is shown in Fig. <ref>(c) where we plot the bandwidth W and gaps Δ of the highest valence band versus V_0 for h_0 = 6 Å and θ = 15^∘. As the gap opens at the Dirac point, the 2nd valence band closes and reopens to a topological phase with a minimum bandwidth of 20 meV for a field strength 40 mV/nm. Moreover, in this case the trace condition <cit.> is violated on the order of 10%, see Fig. <ref>(d). Hence, the reduced sublattice polarization from the screened PMF makes this system less favorable for hosting a fractional Chern insulator <cit.>.
Conclusions.—We developed a theory of pseudo gauge fields in graphene that takes into account contributions from both acoustic and optical displacement fields. The latter correspond to the relative motion of different sublattices. Using molecular dynamics simulations, we have shown that the optical displacements significantly modify the resulting pseudo magnetic field. Specifically, we applied our theory to moiré graphene and corrugated graphene. In all cases studied, optical contributions screen the acoustic contribution resulting in an overall reduction of the pseudo magnetic field. A simple explanation is given by that fact that the internal relaxation tends to restore the bond lengths to their pristine value.
Our theory elucidates the origin of discrepancies between continuum and tight-binding calculations that use microscopic theories to model lattice relaxation. It also introduces a novel way to engineer pseudo magnetic fields through the optical displacement, though this may be difficult to achieve in practice and requires microscopic theories that go beyond continuum elasticity. Furthermore, our theory will help to understand experiments that probe pseudo gauge fields. For example, in twisted bilayer graphene, elastic screening of pseudo magnetic fields may explain why initial predictions based on continuum elasticity <cit.> have not been confirmed experimentally.
We conclude that continuum elasticity, which only yields the acoustic displacement field, cannot fully describe pseudo gauge fields in graphene and most likely other low-dimensional materials with nonprimitive lattices, e.g., transition metal dichalcogenides. For displacements that are mostly in the nominal graphene plane, a reduction factor <cit.> may be used for qualitative results which reduces the strength of the pseudo magnetic field by a constant factor. However, the reduction factor is not universal and may vary depending on the amount of strain and microscopic details. Moreover, when out-of-plane displacements are significant, a reduction factor is inadequate even for qualitative results because both the magnitude and shape of the pseudo magnetic field are modified.
CDB and EJM are supported by the U.S. Department of Energy under Grant No. DE-FG02-84ER45118. RS, WNL and LC acknowledge support from Research Foundation-Flanders (FWO) research project No. G0A5921N. We thank Shaffique Adam and Mohammed M. Al Ezzi for interesting and helpful discussions. We further acknowledge Gayani N. Pallewela for sharing lammps data of twisted bilayer graphene for different molecular dynamics potentials.
Supplemental Material
§ ELECTRONIC THEORY
§.§ Continuum limit
The tight-binding Hamiltonian of graphene in the nearest-neighbor approximation in the presence of strain can be written as
H = -∑_ r∑_n=1^3 t_n( r) c_A^†( r) c_B[ r + δ_n( r) ] + h.c.,
where the first sum runs over cells r and the second one over nearest neighbors. Here t_n( r) > 0 is the hopping amplitude between nearest neighbors that is modulated by strain, and c_σ^†( r) [c_σ( r)] are creation (annihilation) operators for sublattice σ = A,B. The position of A atoms is given by r + u_A( r) + h_A( r) ẑ where u_A is the in-plane displacement, i.e, projected on the original graphene xy plane, and h_A is the out-of-plane displacement in the z direction, and similar for B atoms.
To study the low-energy physics near the K_± point, we take the continuum limit. This amounts to the replacement
c_σ( r) →√(A_c)∑_τψ_τσ( r) e^i K_τ· r,
where A_c = A/N is the unit cell area. Here, ψ_τσ^†( r) [ψ_τσ( r)] are field operators that create (annihilate) a fermion of sublattice σ at position r composed of small momentum components | q|a ≪ 1 near valley K_τ with τ = ± 1 the valley index, and which obeys the usual fermionic anticommutation relations. The effective Hamiltonian becomes
H_eff = -∑_τ∑_n=1^3 ∫ d^2 r t_n( r) e^i K_τ·δ_n( r)ψ_τ A^†( r) ψ_τ B[ r + δ_n( r) ] + h.c.
≈ -∑_τ∫ d^2 r ψ_τ A^†( r) ∑_n=1^3 e^i K_τ·δ_n^0[ t_0 δ_n^0 ·∇_ r + δ t_n( r) ] ψ_τ B( r) + h.c.,
where we let ∑_ r→ A_c^-1∫ d^2 r. In the second line, we expanded everything up to lowest order in gradients and displacements with t_n( r) = t_0 + δ t_n( r), where we take t_0 = 2.8 eV <cit.>. We further defined the nearest-neighbor bond vectors of pristine graphene δ_n^0. For example, for the orientation shown in Fig. <ref>, we have δ_1^0 = ( 0, a_0 ), δ_2^0 = a_0 ( -√(3)/2, -1/2 ), and δ_3^0 = a_0 ( √(3)/2, -1/2 ) where a_0 = 1.42 Å is the nearest-neighbor distance.
Moreover, we have assumed that intervalley coupling is negligible. This is justified in the limit L ≫ a where the strain field varies slowly with respect to the graphene lattice. In the following, we consider a general orientation of the graphene where φ is the angle between the zigzag direction and the x axis. For example, we have φ = 0 for zigzag orientation, shown in Fig. <ref>, and armchair orientation would correspond to φ = -π/2. We now take K_± = ± R(φ) (4π/3a, 0) with R(φ) the standard 2 × 2 rotation matrix for a counterclockwise rotation by an angle φ. One finds <cit.>
-t_0 ∑_n=1^3 e^i K_τ·δ_n^0δ_n^0 ·∇_ r = -i ħ v_F e^iτφ( τ∂_x - i ∂_y ),
with ħ v_F = √(3) t_0 a/2 and we define the pseudo vector potential A = (A_x, A_y) as
-∑_n=1^3 δ t_n( r) e^i K_τ·δ_n^0≡ e v_F e^iτφ[ A_x( r) - i τ A_y( r) ],
with -e the electron charge. Explicitly,
A( r) = R(φ)/2ev_F[ δ t_2 + δ t_3 - 2 δ t_1; √(3)( δ t_3 - δ t_2 ) ].
The effective low-energy Hamiltonian thus becomes
H_eff = ħ v_F ∑_τ∫ d^2 r ψ_τ^†( r) {[ -i ∇_ r + eτ/ħ A( r) ] · (τσ_x, σ_y) }ψ_τ( r),
with ψ_τ( r) = [ e^-iτφ/2ψ_τ A( r), e^iτφ/2ψ_τ B( r) ]^t.
To compute the change in hopping amplitude δ t_n due to strain, we first consider the change in the nearest-neighbor bond vectors. In lowest order of displacements and their gradients, we have
δ_n( r)- δ_n^0 = u_B( r + δ_n^0) - u_A( r) + [ h_B( r + δ_n^0) - h_A( r) ] ẑ
≈( δ_n^0 ·∇) [ u( r) + h( r) ẑ] - [ v( r) + w( r) ẑ],
where
u = ( u_A + u_B ) / 2, h = ( h_A + h_B ) / 2,
v = u_A - u_B, w = h_A - h_B,
are the center-of-mass (acoustic) and relative (optical) displacements, respectively. For example, in a classical microscopic theory of the in-plane phonon modes of a pristine graphene sheet, one can show that in the long-wavelength limit <cit.>
v( r) = ( κ - 1 ) a/2√(3) R(3φ) [ ∂_x u_y + ∂_y u_x; ∂_x u_x - ∂_y u_y ],
with κ∼ 1/3 the so-called reduction factor, whose precise value depends on microscopic details. We do not attempt to find a relation between the optical and acoustic displacements for the cases we consider, e.g., moiré graphene and corrugated graphene. Instead, our microscopic theory is given by a molecular dynamics simulation which yields the fields u_A,B and h_A,B.
Next we calculate the change in the hopping amplitude. To this end, we assume that t( d) = t(d) only depends on the bond distance d (two-center approximation) and expand them in lowest order of the displacements:
δ t_n = t( δ_n ) - t_0
≈. ∂ t/∂ d_i|_0 ( δ_n - δ_n^0 )_i + 1/2. ∂^2 t/∂ d_z^2|_0 ( δ_n - δ_n^0 )_z ( δ_n - δ_n^0 )_z
= -3β t_0/a^2[ δ_n^0 ·( δ_n - δ_n^0 ) + 1/2( δ_n - δ_n^0 )_z ( δ_n - δ_n^0 )_z ],
≡ -3β t_0/a^2[ δ_ni^0 δ_nj^0 u_ij - δ_ni^0( v_i + w ∂_i h ) + w^2/2],
where β = -(a/√(3)t_0) ∂ t/∂ d|_nn,0 ≈ 3 <cit.>, u_ij is the strain tensor, and we used that d_z = 0 in the absence of strain. We further used (i = x, y)
∂ t/∂ d_i = d_i/d∂ t/∂ d,
∂ t/∂ d_z = d_z/d∂ t/∂ d,
∂^2 t/∂ d_z^2 = 1/d∂ t/∂ d - d_z^2/d^3∂ t/∂ d + d_z^2/d^2∂ t^2/∂ d^2,
∂^2 t/∂ d_z ∂ d_i = d_i d_z/d^2∂^2 t/∂ d^2 - d_i d_z/d^3∂ t/∂ d.
Plugging the result for δ t_n into Eq. (<ref>), we obtain the pseudo vector potential
A = A_ac + A_op = √(3)ħβ/2ea[ R(3φ) [ u_yy - u_xx; u_xy + u_yx ] + 2 √(3)/aẑ×( v + w ∇ h ) ],
which is preserved under a global 𝒞_3z rotation, as both the graphene lattice and the valley are left unchanged. Note that we omitted a leading-order contribution proportional to
∑_n=1^3 e^i K_τ·δ_n ( r) ≈ i K_τ·∑_n=1^3 [ δ_n ( r) - δ_n^0 ] e^i K_τ·δ_n^0
≈ i K_τ·∑_n=1^3 e^i K_τ·δ_n^0[ ( δ_n^0 ·∇) u( r) - v( r) ]
= -2π e^iτφ/√(3)( ∂_x u_x - i τ∂_y u_x ),
since it corresponds to a gauge transformation which does not affect the long-wavelength physics <cit.>.
For completeness, we also consider the second nearest-neighbor hopping t_m'( r) between atoms of the same sublattice. This yields an additional term V( r) ψ_τ^†( r) ψ_τ( r) in the effective Hamiltonian of Eq. (<ref>) where V( r) is the deformation potential. In lowest order of the displacements and up to a constant energy shift, we find
V( r) = -∑_m=1^6 δ t_m'( r) e^i K_τ·η_m^0
= δ t_1' + δ t_2' + δ t_3'
= 3a/2. ∂ t/∂ d|_nnn,0( u_xx + u_yy),
where η_1^0 = a_1, η_2^0 = a_2, η_3^0 = - a_1 - a_2, η_4^0 = - a_1, η_5^0 = - a_2, and η_6^0 = a_1 + a_2 are the six second nearest-neighbor bond vectors of pristine graphene with a_1/2 = δ_1^0 - δ_2/3^0 the primitive lattice vectors. Here we also used the two-center approximation such that δ t' is independent on reversal of the bond vector. We can estimate the prefactor in front of tr(u) = u_xx + u_yy in Eq. (<ref>) with a simple model for the hopping amplitude,
t(d) = t_0 exp[ - β( d / a_0 - 1 ) ],
which yields a prefactor -3 √(3)β t(a) / 2. The deformation potential has full rotational symmetry and is finite even in the presence of microscopic 𝒞_3z such as for biaxial strain <cit.>. Moreover, it does not depend on the relative displacements in lowest order because it couples equal sublattices.
§.§ Tight binding
In this section, we describe the tight-binding method used to calculate the electronic properties. The advantage of this approach is that it allows us to construct a model directly from the atomic configurations obtained from the molecular dynamics simulations, fully taking into account the microscopic details of the lattice relaxation. The tight-binding Hamiltonian is given by
H = -∑_< i, j > t_ij c_i^† c_j + ∑_i ξ_i c_i^† c_i,
where the atomic sites are labeled by i and j and the first sum runs over nearest neighbors. The nearest-neighbor hopping amplitude t_ij is approximated as
t_ij = t_0 exp[- β( | r_i - r_j|/a_0 - 1 ) ],
where we use the values t_0 = 2.8 eV, a_0 = a / √(3) = 0.142 nm, and β = 3.37 <cit.>. This is expected to be a good approximation since the nearest-neighbors lie approximately in the same plane, which generally differs from the xy plane in the presence of corrugation. Hence, the relevant overlap integral is still given by V_ppπ = t_0.
The second term in Eq. (<ref>) is an on-site electrostatic potential, which can originate from an external electric field or the pseudo scalar field <cit.>.
§.§ Pseudo magnetic field on a discrete grid
We can further calculate the pseudo magnetic field (PMF) directly from the atomic positions, allowing us to determine the PMF that effectively enters the tight-binding calculations. For the zigzag orientation shown in Fig. <ref>, the vector potential is given by
A_x( r) - i A_y( r) = -1/e v_F∑_n=1^3 δ t_n( r) e^i K ·δ_n( r),
where δ t_n = t_ij - t_0 is the change in hopping energy, K_+ = 4 π / (3a) x̂ the zone corner of the graphene BZ and
δ_n( r) = δ_n^0 + u_B( r + δ_n^0) + h_B( r + δ_n^0) ẑ
- u_A( r) - h_A( r) ẑ,
the modified bond vector. Please note that the definition in Eq. (<ref>) differs from our previous definition since we also take into account changes in the phase factor. These only modify the PMF at next-to-leading order as the lowest order contribution from the phase factor corresponds to a gauge transformation. We prefer this definition for this section since all contributions to (<ref>) are automatically taken into account in tight-binding calculations, whereas in the continuum theory we only consider leading-order terms from an expansion in displacements and momentum <cit.>.
The PMF is given by B = ∇× A and calculated on a discrete (strained) atomic grid using Stokes' theorem. In two dimensions, this relates the surface integral of the curl of a vector field A to the contour integral around the boundary of the same field:
∬_Σ (∇× A) · dΣ = ∮_∂Σ A · d r.
If we consider B to be slowly varying within the graphene unit cell, the surface integral in (<ref>) becomes trivial:
∇× A( r) ≈1/S∮_∂Σ A · d r
= 1/S( ∫_ r + δ_1^ r + δ_2 + ∫_ r + δ_2^ r + δ_3 + ∫_ r + δ_3^ r + δ_1) A · d r
≈1/S[ A( r + δ_1) + A( r + δ_2) ] ·δ_2 - δ_1/2
+ 1/S[ A( r + δ_2) + A( r + δ_3) ] ·δ_3 - δ_2/2
+ 1/S[ A( r + δ_3) + A( r + δ_1) ] ·δ_1 - δ_3/2
= A( r + δ_1) ·δ_2 - δ_3/2S + A( r + δ_2) ·δ_3 - δ_1/2S
+ A( r + δ_3) ·δ_1 - δ_2/2S,
where we approximated the line integral along the triangle contour shown in Fig. <ref>. The enclosed area S is calculated by taking the cross product of any two sides,
S = |(δ_1 - δ_2) × (δ_3 - δ_2)|/2.
Finally, we note that this formula is only gauge invariant up to the same order of approximation. In particular, if we send A ↦ A + ∇χ and approximate ∇χ( r+δ_n) ≃∇χ( r) + ( δ_n ·∇) ∇χ( r) one can show that the final expression in Eq. (<ref>) remains unchanged.
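In practice, the discretized curl can be evaluated per A-site from the three strained bond vectors; a sketch (with our own function names) is:

import numpy as np

def pmf_on_triangle(A1, A2, A3, d1, d2, d3):
    """Pseudo magnetic field from Stokes' theorem on the nearest-neighbour triangle.

    A1, A2, A3 : in-plane pseudo vector potential evaluated at r + delta_1,2,3
    d1, d2, d3 : in-plane projections of the (strained) bond vectors delta_1,2,3
    """
    S = 0.5 * abs(np.cross(d1 - d2, d3 - d2))   # enclosed triangle area
    circ = (np.dot(A1, d2 - d3) + np.dot(A2, d3 - d1) + np.dot(A3, d1 - d2))
    return circ / (2.0 * S)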
§ MOIRÉ GRAPHENE
In this section, we discuss the displacement fields from lattice relaxation in twisted bilayer graphene (TBG) obtained from lammps simulations. In particular, we consider commensurate structures that have the periodicity of the moiré lattice with twist angles defined by <cit.>
cosϑ_m = 3m^2 + 3m + 1/2/3m^2 + 3m + 1.
Moreover, we place the twist center at the center of a graphene hexagon such that ϑ = 0 corresponds to AA stacking. These structures have point group D_6 = < 𝒞_6z, 𝒞_2x> where 𝒞_6z is a rotation by π/3 about the z axis and 𝒞_2x is a π rotation about the x axis <cit.>, as illustrated in Fig. <ref>(a).
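The commensurate angles are easily tabulated with a small helper (our own naming; the numerical example follows directly from the formula above):

import numpy as np

def commensurate_twist_angle(m):
    """Twist angle (degrees) of the commensurate TBG structure labelled by integer m."""
    c = (3.0 * m**2 + 3.0 * m + 0.5) / (3.0 * m**2 + 3.0 * m + 1.0)
    return np.degrees(np.arccos(c))

# commensurate_twist_angle(32) ~ 1.018 degrees, the structure discussed below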
We then define the displacement fields as
r_lσ = r_lσ^0 + u_lσ( r_lσ^0) + h_lσ( r_lσ^0) ẑ,
where l= 1,2 is the layer, σ = A, B the sublattice, and
r_lσ^0= R[(-1)^l+1ϑ/2] ρ_σ are the rigid coordinates in the absence of relaxation. Here ρ_σ = n_1 a_1 + n_2 a_2 + δ_σ are the atomic positions of monolayer graphene with n_1,n_2 ∈ Z and δ_σ the sublattice position in the graphene cell.
§.§ Displacement fields
Assuming the moiré periodicity is preserved after lattice relaxation, we define the smooth fields
u_lσ( r) = ∑_ g u_lσ, g e^i g · r,
and similar for out-of-plane displacements. Here g are moiré reciprocal vectors and u_ g = u_- g^* are complex Fourier components. In practice, the Fourier components are obtained by taking a discrete Fourier transform of the lammps data. We can now define the acoustic and optical displacement fields for each layer,
u_l = ( u_lA + u_lB ) / 2, h_l = ( h_lA + h_lB ) / 2,
v_l = u_lA - u_lB, w_l = h_lA - h_lB.
One can now make similar superpositions between layers in terms of homo and hetero displacements. For example, one distinguishes between out-of-plane buckling (homo) and breathing (hetero) displacements. The hetero displacements are given by
u = u_1 - u_2, h = h_1 - h_2,
v = v_1 - v_2, w = w_1 - w_2,
which are shown in Fig. <ref> for ϑ≈ 1.018^∘. We see that u is mostly solenoidal, i.e., ∇· u ≈ 0. It gives rise to local co-twisting near the AA stacking center (origin) and counter-twist near AB and BA stacking centers <cit.>. This reduces the size of AA regions and increases the size of AB and BA regions. Similarly, the acoustic out-of-plane hetero displacements, i.e., the interlayer distance, conforms to the in-plane stacking. The in-plane optical displacement field v has significant volumetric contributions and is over one order of magnitude smaller than u, while the out-of-plane field w is about six orders of magnitude smaller than h and can be safely neglected.
The in-plane components can be written using a Helmholtz decomposition. For example,
u_ g = α_ gẑ× g + β_ g g/ig^2,
for g = | g| ≠ 0 and where α_ g = α_- g^* and β_ g = β_- g^* are complex numbers. Note that u_ 0 is a constant relative shift between layers which does not affect the long-wavelength physics for small twists. These coefficients are related to the divergence and curl:
∇× u = ∑_ g i g × u_ g e^i g · r = ẑ∑_ gα_ g e^i g · r,
∇· u = ∑_ g i g · u_ g e^i g · r = ∑_ gβ_ g e^i g · r,
which are the rotational and in-plane volumetric components of the displacement gradient. From the lammps simulations, we find that u is dominated by real rotational coefficients while v is mostly given in terms of imaginary volumetric coefficients. In Fig. <ref>, we show these Fourier coefficients as a function of twist angle for the first six reciprocal stars. The scaling of the first Fourier component of the acoustic displacement field yields an estimate of V_1 / μ where V_1 is the first Fourier coefficient of the adhesion energy <cit.> and μ is the shear Lamé constant. From this, we estimate that relaxation in our molecular dynamics simulations is about 1.5 times stronger than in density-functional theory calculations using the local-stacking approximation <cit.>. We further find that Re( g × u_ g) [Im( g · u_ g)] can be fitted to a polynomial odd (even) in 1/θ.
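Given the Fourier components of a periodic in-plane field and the corresponding reciprocal vectors, the rotational and volumetric coefficients follow directly from the relations above (a sketch with our own array conventions):

import numpy as np

def helmholtz_coefficients(g_vecs, u_g):
    """alpha_g = i g x u_g (curl) and beta_g = i g . u_g (divergence).

    g_vecs : (N, 2) real reciprocal vectors, u_g : (N, 2) complex Fourier components.
    """
    alpha = 1j * (g_vecs[:, 0] * u_g[:, 1] - g_vecs[:, 1] * u_g[:, 0])
    beta = 1j * (g_vecs[:, 0] * u_g[:, 0] + g_vecs[:, 1] * u_g[:, 1])
    return alpha, beta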
All results obtained from lammps molecular dynamics simulations are consistent with the emergent D_6 symmetry of twisted bilayer graphene. To illustrate how symmetries constrain the displacement fields, consider an in-plane symmetry 𝒮. The displacement fields then satisfy 𝒮 u( r) = u(𝒮 r) and h( r) = h(𝒮 r). In reciprocal space, one then finds that α_ g and β_ g transform as a pseudoscalar and scalar, respectively. Explicitly,
h_𝒮 g = h_ g,
β_𝒮 g = β_ g,
α_𝒮 g = det(𝒮) α_ g.
The optical displacements transform similarly except for the fact that any transformation that interchanges the sublattices gives an extra minus sign. For example, in the presence of 𝒞_2z rotation symmetry we have u(- r) = - u( r) and h(- r) = h( r), while v(- r) = v( r) and w(- r) = -w( r). The symmetry-allowed coefficients for the first five reciprocal stars are listed in Table <ref>. Here each reciprocal star consists of six reciprocal vectors closed under 𝒞_6z rotations.
§.§ Pseudo magnetic fields
The valley-preserving symmetries of the emergent moiré lattice in small-angle twisted bilayer graphene form the dichromatic group 6'2'2 = < 𝒞_6z𝒯, 𝒞_2y> also denoted as D_6(D_3) = D_3 + (D_6 ∖ D_3 ) 𝒯 <cit.> where 𝒯 is spinless time-reversal symmetry with 𝒯^2 = 1. These symmetries yield the following constraints on the pseudo gauge fields,
A_1,2( r) = A_1,2(- r) + ∇χ,
A_1,2( r) = 𝒞_3z^-1 A_1,2(𝒞_3z r) + ∇χ,
A_1(x,y) = [ -1 0; 0 1 ] A_2(-x,y),
where the subscript is the layer index and χ is a scalar function. This implies that the pseudomagnetic field (PMF) satisfies
B_1,2( r) = -B_1,2(- r) = B_1,2(𝒞_3z r),
B_1(x,y) = -B_2(-x,y) = B_2(x,-y),
such that we only need to consider one layer. To verify these symmetries, we show the PMFs for both layers obtained from the acoustic and optical displacement fields in Fig. <ref> for two twist angles.
Similar as for the displacement fields, we can write the PMF as a Fourier series
B( r) = ∑_ g ≠ 0 B_ g e^i g· r,
where we restrict the sum since the uniform part vanishes for periodic displacement fields, giving a zero net pseudoflux. We show the Fourier components of the acoustic (B_ac) and total PMF (B_tot = B_ac + B_op) for the first three reciprocal stars in Fig. <ref> as a function of twist angle. We see that the PMF is well approximated by
B_1,2(x,y) ≃± 2 B_0 ∑_i=1^3 sin( g_i · r),
where g_1,2,3 are the shortest nonzero moiré reciprocal vectors related by 120^∘ rotations. Here |B_0| ≈ 3 T for twist angles in the range 1.5^∘ < ϑ < 4^∘ and where we used β = 3.37. The independence of the magnitude of the PMF for twist angles above the magic angle (ϑ≈ 1^∘) was also found in other studies that only considered the acoustic part <cit.>. Note that the form in Eq. (<ref>) satisfies the symmetry constraints in Eq. (<ref>) since the first star is symmetric under x ↦ -x in the presence of 𝒞_3z symmetry.
Finally, the root mean square is then given by
RMS = √(< B^2 >) = √(∑_ g |B_ g|^2),
which is shown in Fig. <ref> of the main text. If we define the magnetic length from the RMS, one finds ℓ_RMS > 9 nm for the twist angles under consideration. We also show the ratio √(<B_ac^2> / < B_tot^2 >) in Fig. <ref> as a function of twist angle. This ratio gives an estimate of the reduction factor <cit.> due to optical displacements.
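Both the first-star approximation above and the RMS are straightforward to evaluate numerically (a sketch; B_0 and the first-star reciprocal vectors are inputs):

import numpy as np

def pmf_first_star(x, y, B0, g_vecs, layer_sign=+1):
    """B_1,2(x, y) ~ +/- 2 B0 sum_i sin(g_i . r) restricted to the first star."""
    B = np.zeros_like(np.asarray(x), dtype=float)
    for gx, gy in g_vecs:
        B += np.sin(gx * x + gy * y)
    return layer_sign * 2.0 * B0 * B

def pmf_rms(B_g):
    """RMS = sqrt(sum_g |B_g|^2) for a zero-mean periodic field."""
    return np.sqrt(np.sum(np.abs(np.asarray(B_g))**2))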
§.§ Molecular dynamics
For the molecular dynamics simulation of twisted bilayer graphene we divide the interactions into interlayer and intralayer contributions. For the interlayer interaction we take the dihedral-angle-corrected registry-dependent potential (DRIP) benchmarked with EXX-RPA DFT calculations <cit.>, which has proven to be an improvement over the usual Kolmogorov-Crespi <cit.> potential. For the intralayer interactions we use the usual REBO potential <cit.> with a 2 Å cutoff to avoid interactions between atoms in different layers. For the geometric optimization we enforce periodic boundary conditions and use the "fire" minimization style which uses damped dynamics. This ensures that the system does not get stuck in a local energy minimum. As the stopping criterion, we take a force tolerance of 10^-4 eV/Å between subsequent steps.
§.§ Comparing different microscopic models
In Fig. <ref> we show the first-star transverse Fourier component of the pseudo vector potential of the first layer, ẑ·( A_1, g_1× g_1 ), and the RMS of the PMF as a function of twist angle for different MD potentials. In the main text we showed results for the interlayer DRIP <cit.> and intralayer AIREBO <cit.> potential. Here we also show results using the Kolmogorov-Crespi (KC) <cit.> interlayer potential instead of DRIP. Moreover, we also show data <cit.> for the intralayer REBO <cit.> and the same interlayer DRIP as before. Finally, we also show data for DRIP with the intralayer machine-learning potential GAP20 <cit.>.
We see that the specific microscopic details, corresponding here to different MD potentials, not only yield different total PMFs, but also different acoustic and optical contributions. Most strikingly, the results for DRIP with GAP20 yield a noticeably smaller reduction as compared to the other cases which have similar reduction factors. Even though DRIP with GAP20 gives smaller acoustic contributions, i.e., less strain compared to using REBO or AIREBO for intralayer interactions, the optical contributions are much smaller such that in the end the total PMF is larger for GAP20.
§ CORRUGATED GRAPHENE
In this section, we consider the structural and electronic properties of monolayer graphene subjected to a periodic corrugation. Specifically, we consider a triangular height modulation with C_3v symmetry, commensurate with the graphene lattice and defined by <cit.>
h_sub( r) = h_0 ∑_n=1^3 cos( g_n · r + θ),
with amplitude h_0 and where θ controls the shape. The superlattice is defined by the reciprocal vectors
g_1 = 4π/√(3)L[ 0; 1 ], g_2,3 = 4π/√(3)L[ ∓√(3)/2; -1/2 ],
and g_3 = - g_1 - g_2 where L = Na is the lattice constant of the height modulation. The corresponding lattice vectors l_1,2 are chosen such that g_i · l_j = 2 πδ_ij. Here we have taken the coordinate system shown in Fig. <ref> with the zigzag direction along the x axis.
The height profile from Eq. (<ref>) preserves C_3v = < 𝒞_3z, ℳ_x > symmetry on the superlattice scale where 𝒞_3z is a rotation by 120^∘ about the z axis and ℳ_x is a mirror x ↦ -x. Note that these are not the microscopic symmetries of the graphene lattice, which are broken by the corrugation. In general, the corrugation breaks 𝒞_2z rotation symmetry since this operation is equivalent to θ↦ -θ. Moreover,
in the long-wavelength limit L ≫ a we can restrict to θ∈ [0, π/3[. This follows from the fact that ±θ are 𝒞_2z partners, while θ↦θ + 2π/3 is equivalent to a translation y ↦ y + 2L/√(3). Hence, for the special case θ = 0 mod π/3, the point group of the superlattice becomes C_6v = < 𝒞_6z, ℳ_x >.
§.§ Symmetry constraints
To minimize the elastic energy, the corrugated graphene lattice will relax, giving rise to in-plane displacement fields. Since the corrugation is smooth and periodic on the atomic scale, the in-plane and out-of-plane displacement fields can be written in terms of Fourier series,
u_σ( r) = ∑_ g u_σ, g e^i g· r,
h_σ( r) = ∑_ g h_σ, g e^i g· r,
respectively, where σ = A, B is the sublattice index, g are reciprocal lattice vectors of the corrugation, and h_ g = h_- g^* and u_ g = u_- g^* are Fourier components. Here the uniform components (| g|=0) are set to zero as these only result in an overall translation of the graphene.
As described in the main text, we can then define acoustic and optical displacement fields. The displacement fields calculated with lammps for a corrugated substrate with h_0 = 5 Å and θ = 15^∘ are shown in Fig. <ref>. Since the relaxed structure is assumed to be adiabatically connected to the rigid corrugation, the displacements fields obey the same symmetries. In particular, due to 𝒞_3z symmetry and the reality of the fields, the displacements are characterized by three complex numbers α_m, β_m, and h_m for each star of reciprocal vectors, which is indexed by m. Here, we define a reciprocal star by six reciprocal vectors that are related by 𝒞_6z and we use a Helmholtz decomposition for the in-plane fields.
We show the symmetry-allowed values for a corrugation with C_3v symmetry in Table <ref> for the first five stars.
We also show the volumetric Fourier coefficients of the in-plane displacements, i.e., projected on the xy plane, in Fig. <ref> for θ = 15^∘ as a function of h_0/L for the first nine reciprocal stars.
§.§ Continuum elasticity
The long-wavelength acoustic displacements in graphene can be modeled using the continuum theory of elasticity <cit.>. Here one views the graphene as an elastically isotropic membrane. The elastic potential energy <cit.>
and substrate interaction are modeled as <cit.>
H_elas = 1/2∫ d^2 r [ λ u_ii u_ii + 2 μ u_ij u_ji + κ( ∇^2 h )^2 ],
H_sub = γ/2∫ d^2 r [ h( r) - h_sub( r) ]^2,
where λ and μ are in-plane Lamé constants, κ is the out-of-plane bending rigidity, and γ controls the interaction with the substrate. Here summation over repeated indices is implied and
u_ij( r) = 1/2( ∂ u_j/∂ r_i + ∂ u_i/∂ r_j + ∂ h/∂ r_i∂ h/∂ r_j),
is the strain tensor with i,j = x,y. We further assume that the graphene is pinned to the substrate such that h( r) = h_sub( r). This is justified in the limit L ≫( κ / γ)^1/4≈ 1 nm <cit.> where L is periodicity of the corrugation and the numerical value is for graphene on SiO_2 <cit.>. In this case, the interaction with the substrate dominates over the curvature term [last term of Eq. (<ref>)] since the latter scales as κ h^2 / L^4 while the former scales as γ h^2. In this limit, H_elas is a functional of the in-plane field only.
Under these assumptions, the elastic energy density
ℋ_elas = 1/A_c∫_cell d^2 r [ ( λ/2 + μ) ( u_xx^2 + u_yy^2 ) + λ u_xx u_yy + 2 μ u_xy^2 ] + constant,
where A_c is the area of the supercell defined by the periodic height modulation. For the triangular height profile, we have A_c = √(3) L^2 / 2. For convenience, we define the tensor
f_ij( r) ≡[ ∂_i h( r) ] [ ∂_j h( r) ] = ∑_ g f_ij g e^i g · r,
where
f_ij g = - ∑_ g' h_ g' h_ g - g' g_i' ( g_j - g_j'),
with f_ij,- g = f_ij g^*. The strain tensor thus becomes
u_ij( r) = 1/2∑_ g [ i ( g_i u_j g + g_j u_i g) + f_ij g] e^i g · r,
where we set u_i 0 = 0 since this amounts to a uniform translation. In terms of the Helmoltz decomposition,
u_xx g = g_x ( g_x β_ g - g_y α_ g)/g^2 + f_xx g,
u_yy g = g_y ( g_y β_ g + g_x α_ g)/g^2 + f_yy g,
u_xy g = 2 g_x g_y β_ g + ( g_x^2 - g_y^2 ) α_ g/g^2 + f_xy g,
with g = | g| and where α_ g and β_ g are the rotational and volumetric components of the in-plane displacement field, respectively. Plugging the Fourier expansions into the energy density ℋ_elas gives <cit.>
1/A_c∫_cell d^2 r u_ii^2 = 1/A_c∑_ g, g'∫ d^2 r ( i g_i u_i g + f_ii g/2) ( i g_i' u_i g' + f_ii g'/2) e^i ( g + g' ) · r
= ∑_ g | i g_i u_i g + f_ii g/2|^2,
1/A_c∫_cell d^2 r u_xx u_yy = ∑_ g ( i g_x u_x g + f_xx g/2) ( -i g_y u_y g^* + f_yy g^*/2)
= 1/2∑_ g [ ( i g_x u_x g + f_xx g/2) ( -i g_y u_y g^* + f_yy g^*/2) + c.c.],
1/A_c∫_cell d^2 r u_(xy)^2 = 1/4∑_ g ( i g_x u_y g + i g_y u_x g + f_xy g) ( -i g_x u_y g^* - i g_y u_x g^* + f_xy g^* ).
We obtain
ℋ_elas = ( λ/2 + μ) ∑_ g ( i g_x u_x g + f_xx g/2) ( -i g_x u_x g^* + f_xx g^*/2)
+ ( λ/2 + μ) ∑_ g ( i g_y u_y g + f_yy g/2) ( -i g_y u_y g^* + f_yy g^*/2)
+ λ/2∑_ g [ ( i g_x u_x g + f_xx g/2) ( -i g_y u_y g^* + f_yy g^*/2) + c.c.]
+ μ/2∑_ g ( i g_x u_y g + i g_y u_x g + f_xy g) ( -i g_x u_y g^* - i g_y u_x g^* + f_xy g^* ) + κ/2∑_ g | h_ g|^2 g^4.
Minimizing with respect to u_i g^* for nonzero g yields the solutions for the Fourier components u_i g in terms of f_ij g (and thus h_ g). We find
∂ℋ_elas/∂ u_x g^* = -i g_x [ ( λ + 2μ) ( i g_x u_x g + f_xx g/2) + λ( i g_y u_y g + f_yy g/2) ] - i μ g_y ( i g_x u_y g + i g_y u_x g + f_xy g),
∂ℋ_elas/∂ u_y g^* = -i g_y [ ( λ + 2 μ) ( i g_y u_y g + f_yy g/2) + λ( i g_x u_x g + f_xx g/2) ] - i μ g_x ( i g_x u_y g + i g_y u_x g + f_xy g),
and setting these equations equal to zero, gives
u_x g = i/2 ( λ + 2 μ) g^4{ f_xx g g_x [ g_x^2 ( λ + 2 μ) + g_y^2 ( 3 λ + 4 μ) ] + ( f_yy g g_x - 2 f_xy g g_y ) [ g_x^2 λ - g_y^2 ( λ + 2 μ) ] },
u_y g = i/2 ( λ + 2 μ) g^4{ f_yy g g_y [ g_y^2 ( λ + 2 μ) + g_x^2 ( 3 λ + 4 μ) ] + ( f_xx g g_y - 2 f_xy g g_x ) [ g_y^2 λ - g_x^2 ( λ + 2 μ) ] },
and
α_ g = i g × u_ g = ( f_xx g - f_yy g) g_x g_y + f_xy g( g_y^2 - g_x^2 )/g^2 = 1/g^2∑_ g' h_ g' h_ g- g'( g' · g ) ( ẑ· g' × g ),
β_ g = i g · u_ g = μℱ_ g/λ + 2 μ - f_xx g + f_yy g/2,
with ℱ_ g = ( g_x^2 f_yy g - 2 g_x g_y f_xy g + g_y^2 f_xx g) / g^2. We find that α_ g = 0 when h( r) is restricted to the first star. For completeness, we also give the explicit forms for f_ij g for the height profile from Eq. (<ref>):
f_ g_1 = 2π^2/3h_0^2 e^-2iθ/L^2[ 3 0; 0 -1 ],
f_ g_1+2 g 2 = 2π^2/3h_0^2/L^2[ -3 0; 0 1 ],
f_2 g_1 = 2π^2/3h_0^2 e^2iθ/L^2[ 0 0; 0 -2 ],
which transform as f_𝒮 g = 𝒮 f_ g𝒮^-1.
Using the relations
μ = E/2 ( 1 + ν ), λ = ν E/1 - ν^2,
for isotropic linear elastic two-dimensional materials, with E the Young modulus and ν the Poisson ratio of graphene, we obtain <cit.>
u_xx g + u_yy g = 1-ν/2ℱ_ g,
u_xx g - u_yy g = 1+ν/2g_y^2 - g_x^2/g^2ℱ_ g,
u_xy g + u_yx g = -1+ν/22 g_x g_y/g^2ℱ_ g,
for nonzero g and u_ij 0 = f_ij 0/2. These are the volumetric and shear strains, respectively. In this work we use the value ν = 0.165 from experiment <cit.>. Importantly, since |ℱ_| ∼ h^2 / L^2 we expect the linear theory to be valid only for L ≫ h.
For the triangular corrugation with C_3v symmetry, continuum elasticity gives a relaxed in-plane acoustic displacement field that is irrotational such that α_ g = 0 even though it is symmetry allowed in some stars. The volumetric components are finite only in the first three stars with
β_1 = β_ g_1 = ( 1 - 3 ν) π^2/3h_0^2/L^2 e^-2iθ,
β_2 = β_ g_1+2 g_2 = ( 3 - ν) π^2/3h_0^2/L^2,
β_3 = β_2 g_1 = 2π^2/3h_0^2/L^2 e^2iθ,
consistent with the symmetry analysis. Hence, continuum elasticity yields u = ∇ϕ with ϕ_ g = -β_ g/g^2.
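These closed-form coefficients are convenient as a cross-check against the lammps data; a direct transcription (with ν = 0.165 as default, function name ours) is:

import numpy as np

def volumetric_coefficients(h0, L, theta, nu=0.165):
    """Nonzero beta_1,2,3 predicted by continuum elasticity for the pinned corrugation."""
    pref = np.pi**2 * h0**2 / (3.0 * L**2)
    beta1 = (1.0 - 3.0 * nu) * pref * np.exp(-2j * theta)
    beta2 = (3.0 - nu) * pref
    beta3 = 2.0 * pref * np.exp(2j * theta)
    return beta1, beta2, beta3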
§.§ Electronic band structure
In the presence of a periodic scalar and gauge fields, the valley-projected electronic continuum Hamiltonian from Eq. (<ref>) can be diagonalized by Fourier transformation,
ψ_τ( r) = 1/√(A)∑_ k∑_ g e^i ( k - g ) · r c_τ, k - g,
where A is the system size and the sum over k is restricted to the superlattice Brillouin zone (SBZ) and c_τ, k- g^† (c_τ, k- g) are two-component creation (annihilation) operators that create a fermion in valley τ with momentum k - g. The Hamiltonian becomes
H = 1/A∑_τ∑_ k, k'∑_ g, g'∫ d^2 r c_τ, k'- g'^† e^-i( k' - g') · r{ħ v_F [ k - g + τ e/ħ A( r) ] ·( τσ_x, σ_y ) + V( r) σ_0 } e^i( k - g) · r c_τ, k- g.
Next, for any function f( r) with the periodicity of the superlattice, we have
∫ d^2 r e^-i( k' - g') · r f( r) e^i( k - g) · r = A δ_ k k' f_ g- g',
where we used that k - g with k in the SBZ and g a reciprocal vector of the superlattice, is a unique momentum decomposition. We further used ∫ d^2 r = ∑_ R∫_cell d^2 r with ∑_ R e^i k · R = N δ_ k 0 where the sum runs over superlattice cells. We then obtain
H = ∑_τ∑_ k∑_ g, g' c_τ, k- g'^†{ħ v_F [ ( k - g ) δ_ g g' + τ e/ħ A_ g- g'] ·( τσ_x, σ_y ) + V_ g- g'σ_0 } c_τ, k- g,
which can be diagonalized numerically by taking a sufficient number of g vectors for convergence. All results shown in this work were obtained with a cutoff | g| < 12k_0 where k_0 = 4π / 3L.
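The structure of this plane-wave Hamiltonian is summarized by the sketch below (our own function signature; the Fourier components of the gauge and scalar fields are supplied as callables that return zero outside the stored stars, and units must be chosen consistently):

import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def continuum_hamiltonian(k, g_list, A_of_g, V_of_g, hbar_vF, tau=+1):
    """Valley-projected plane-wave Hamiltonian H(k); a sketch, not the authors' code.

    k       : (2,) momentum in the superlattice BZ
    g_list  : (N, 2) superlattice reciprocal vectors within the cutoff
    A_of_g  : callable returning e*A_g/hbar (2-vector) for a given g-difference
    V_of_g  : callable returning the scalar Fourier component V_g
    """
    N = len(g_list)
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for i, gi in enumerate(g_list):        # row block: momentum k - g'
        for j, gj in enumerate(g_list):    # column block: momentum k - g
            block = np.zeros((2, 2), dtype=complex)
            if i == j:
                q = np.asarray(k) - gi
                block += hbar_vF * (tau * q[0] * SX + q[1] * SY)
            a = tau * np.asarray(A_of_g(gj - gi))      # tau * e A_{g-g'} / hbar
            block += hbar_vF * (tau * a[0] * SX + a[1] * SY)
            block += V_of_g(gj - gi) * np.eye(2)
            H[2 * i:2 * i + 2, 2 * j:2 * j + 2] = block
    return H

# bands at k: np.linalg.eigvalsh(continuum_hamiltonian(k, g_list, A_of_g, V_of_g, hbar_vF))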
For the triangular periodic height profile that is aligned with the zigzag direction, the electronic continuum theory has wallpaper group 14 (p3m1) with point group C_3v and superlattice translations. The symmetries of the valley-projected Hamiltonian are generated by 𝒞_3z and ℳ_x 𝒯 where ℳ_x is a mirror symmetry (x ↦ -x) and 𝒯 is spinless time-reversal symmetry with 𝒯^2 = 1. While ℳ_x and 𝒯 both interchange valleys, their combination leaves the valleys invariant. This yields the magnetic point group 3m' also denoted as C_3v(C_3) <cit.>.
In Fig. <ref>(a) we show the band structure calculated with the continuum model in the presence of a perpendicular electric field V( r) = V_0 h( r)/h_0 where h( r) is the relaxed height profile. Parameters are h_0 = 6 Å, θ = 15^∘, and V_0 = 71 meV corresponding to Fig. <ref>(c) and (d). We also show the Berry curvature and quantum metric of the first valence band in valley K_+.
§.§ Quantum geometry
We calculate the quantum geometry of an isolated Bloch band in a given valley with the gauge-invariant product. To this end, consider a square plaquette in the Brillouin zone of area δ^2 centered at k with corners: k_1 = k + (δ/2)(-1, -1), k_2 = k + (δ/2)(-1, 1), k_3 = k + (δ/2)(1, 1), and k_4 = k + (δ/2)(1, -1). The gauge-invariant product is then given by
∏_m=1^4 ⟨ u_ k_m | u_ k_m+1⟩,
where k_5 = k_1. It is straightforward to show that
tr( g_ k) = lim_δ→ 0δ^-2 Re( 1 - ∏_m=1^4 ⟨ u_ k_m | u_ k_m+1⟩),
Ω_ k = lim_δ→ 0δ^-2 Im ln( ∏_m=1^4 ⟨ u_ k_m | u_ k_m+1⟩),
where
g_ k^μν = Re( ⟨∂^μ u_ k | ∂^ν u_ k⟩) + ⟨ u_ k | ∂^μ u_ k⟩⟨ u_ k | ∂^ν u_ k⟩,
Ω_ k = -2 Im( ⟨∂_k_x u_ k | ∂_k_y u_ k⟩),
is the (single band) Fubini-Study quantum metric with tr( g_ k) = g_ k^xx + g_ k^yy and Berry curvature, respectively. They form the real and imaginary components of the (single band) quantum geometric tensor,
Q_ k^μν = ⟨∂^μ u_ k | ( 1 - | u_ k⟩⟨ u_ k | ) | ∂^ν u_ k⟩
= g_ k^μν - i/2 ϵ^μνΩ_ k.
One can further show that <cit.>
tr(g_ k) ≥ | Ω_ k |,
which is the trace inequality. The trace condition refers to the saturation of this bound and holds for Landau levels. One route of engineering flat bands that may host fractional Chern insulators at fractional filling of the band, is so-called Landau level mimicry <cit.>. For example, a flat Bloch band with unit Chern number that satisfies the trace condition emulates the lowest Landau level, whose exact ground state at one over odd integer filling in the presence of short-range interactions is given by a Laughlin-type wave function <cit.>.
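A compact implementation of the plaquette product is given below (a sketch; the overall sign of the extracted Berry curvature is tied to the corner ordering defined above and should be checked against a known limit):

import numpy as np

def plaquette_geometry(u1, u2, u3, u4, delta):
    """Discrete tr(g) and Berry curvature from one k-space plaquette.

    u1..u4 : Bloch eigenvectors at the corners k_1..k_4, delta : plaquette side.
    """
    P = (np.vdot(u1, u2) * np.vdot(u2, u3) *
         np.vdot(u3, u4) * np.vdot(u4, u1))
    trace_g = np.real(1.0 - P) / delta**2
    omega = np.angle(P) / delta**2   # sign convention follows the corner ordering
    return trace_g, omega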
§.§ Valley polarization
The band structure from tight-binding calculations always features Kramers' pairs of minibands that approximately belong to different valleys since the long-wavelength corrugation does not couple the valleys. We quantify this from the expectation value of the valley polarization P_v. The simplest choice for P_v is given in terms of the next-nearest neighbor Haldane hopping,
P_v = 1/i3√(3)∑_<< i,j > > e^3iϕ_ij c_i^† c_j,
where ϕ_ij is the angle of the next-nearest neighbor bond vector with the x axis. For pristine graphene, one can diagonalize P_v in momentum space:
P_v = ∑_ k g( k) c_ k^† c_ k,
with g( K_± + q) = ± 1 + 𝒪(q^2a^2). In the presence of potentials that vary slowly with respect to the graphene lattice constant a, the valleys remain approximately decoupled such that < P_v > ≈± 1 for valley K_±.
§.§ Molecular dynamics
For the lammps simulations, we consider a graphene sheet spanned by l_1,2 with L = | l_1,2 | = 59a ≈ 14.5 nm. In order to model a generic substrate that induces a smooth van der Waals force on the graphene sheet, we use a honeycomb lattice with lattice constant a/3. This ensures commensurability with the graphene and an interaction potential that is smooth on the graphene lattice scale due to the higher density of the substrate. We then apply the out-of-plane displacement field given by Eq. <ref> to the substrate for h_0 ranging from 0.05 nm to 0.6 nm in steps of 0.05 nm and for θ = 15^∘. Energy minimization was performed with the lammps code <cit.> using the FIRE minimization procedure, with a force tolerance of 10^-6 eV/Å as the stopping criterion. To describe the potential energy landscape of the system we use a combination of the AIREBO interatomic potential <cit.> for short-range interactions between carbon atoms in the graphene sheet with a cutoff of 2 Å, and a 12-6 Lennard-Jones potential for the interaction between the graphene and the substrate. The Lennard-Jones interaction was parametrized with an energy constant of 10 meV, a zero-crossing distance of 3.7 Å, and a cut-off radius of 10 Å.
|
http://arxiv.org/abs/2409.02214v1 | 20240903183620 | Checkpoint and Restart: An Energy Consumption Characterization in Clusters | [
"Marina Moran",
"Javier Balladini",
"Dolores Rexachs",
"Emilio Luque"
] | cs.DC | [
"cs.DC"
] |
Marina Moran, Javier Balladini, Dolores Rexachs and Emilio Luque
Facultad de Informática, Universidad Nacional del Comahue, Buenos Aires 1400, 8300 Neuquén, Argentina
[email protected],
Departamento de Arquitectura de Computadores y Sistemas Operativos, Universitat Autònoma de Barcelona, Campus UAB, Edifici Q, 08193 Bellaterra (Barcelona), España
Checkpoint and Restart: An Energy Consumption Characterization in Clusters
Marina Morán1 Javier Balladini1 Dolores Rexachs2 Emilio Luque2
September 9, 2024
==========================================================================
§ ABSTRACT
The fault tolerance method currently used in High Performance Computing (HPC) is rollback-recovery by means of checkpoints. This, like any other fault tolerance method, adds energy consumption on top of that of the application's execution. The objective of this work is to determine the factors that affect the energy consumption of the computing nodes of a homogeneous cluster when performing checkpoint and restart operations on SPMD (Single Program Multiple Data) applications. We have focused on the energy study of compute nodes, considering different configurations of hardware and software parameters. We studied the effect of the performance states (P-states) and power states (C-states) of processors, the application problem size, the checkpoint software (DMTCP) and the distributed file system (NFS) configuration. The analysis of the results allowed us to identify opportunities to reduce the energy consumption of checkpoint and restart operations.
§ INTRODUCTION
The final authenticated version is available online at https://doi.org/10.1007/978-3-030-20787-8_2
High Performance Computing (HPC) continues to increase its computing power while also increasing its energy consumption. Given the limitations that exist in supplying energy to this type of computer, it is necessary to understand the energy consumption behavior of these systems in order to find ways to limit and decrease it. In particular, for the exascale era, a maximum limit of 20 MW is estimated <cit.>.
The fault tolerance method most widely used in HPC today is rollback-recovery by means of checkpoints. This, like any other fault tolerance method, adds energy consumption to the execution of the application <cit.>. Because of this, it is important to understand and predict the energy behavior of fault tolerance methods in order to manage their impact on the total energy consumption during the execution of an application.
The objective of this work, which is an extension of <cit.>, is to determine the factors that affect the energy consumption of checkpoint and restart (C/R) operations on Single Program Multiple Data (SPMD) applications running on a homogeneous cluster, considering different configurations of hardware and software parameters. A cluster system has compute nodes, storage nodes, and at least one interconnection network. We have focused on the energy study of computing nodes, and we have extended the previous work by considering, on the one hand, a second experimental platform and, on the other hand, the storage node, in particular when studying the impact of checkpoint file compression. The energy consumption of the network is not considered in this article.
The contributions of this article are:
* A study of system factors (hardware and software) and application factors that impact the energy consumption produced by checkpoint and restart operations.
* The identification of opportunities to reduce the energy consumption of checkpoint and restart operations.
Section <ref> presents some related work, while in Section <ref> the factors that affect the energy consumption are identified. Section <ref> describes the experimental platform and the design of the experiments, whose results and analysis are presented in Section <ref>. Finally, the conclusions and future work can be found in Section <ref>.
§ RELATED WORK
<cit.> and <cit.> evaluate the energetic behavior of the coordinated and uncoordinated C/R with message logs. In <cit.> they also evaluate the parallel recovery and propose an analytic model to predict the behavior of these protocols at exascale. In <cit.> they use an analytic model to compare the execution time and the consumed energy of the replication and the coordinated C/R. <cit.> and <cit.> present an analytic model to estimate the optimal interval of a multilevel checkpoint in terms of energy consumption. They do not measure dissipated power but use values from other publications. In <cit.> they measure the power dissipated and the execution time of the high level operations involved in checkpointing (coordinated, uncoordinated and hierarchical), varying the number of cores involved. They do not use different processor frequencies, nor do they indicate whether the checkpoint is compressed or not. In <cit.>, <cit.> and <cit.>, frameworks for energy saving of C/R are presented. In <cit.>, many small I/O operations are replaced by a few large, single-core operations to make the checkpoint and restart more energy efficient. They use RAPL to measure and limit energy consumption. <cit.> propose dedicating a core to execute a replica of all the processes of the node in order to avoid re-execution from the last checkpoint, and analytically compare the energy consumption of this proposal with the traditional checkpoint. In <cit.> a runtime is designed that allows modifying the clock frequency and the number of processes that carry out the I/O operations of the C/R, in order to optimize the energy consumption. Another work that analyzes the impact of the dynamic scaling of frequency and voltage on the energy consumption of checkpoint operations is presented in <cit.>. They measure the power at the component level while writing the checkpoint locally and remotely and compare variations in remote storage: NFS using the kernel network stack and NFS using the IB RDMA interface. <cit.> evaluate the energy consumption of an application that uses compressed checkpoints. They show that when using compression, more energy is spent but time is saved, so that the complete execution of the application with all its checkpoints can benefit from an energy point of view.
Our work focuses on coordinated C/R at the system level. The dissipated power of the checkpoint and restart operations is measured with an external physical meter. We have not found papers that evaluate the impact of C states and NFS configurations on the energy consumption of C/R operations.
§ FACTORS THAT AFFECT THE ENERGY CONSUMPTION
Energy can be calculated as the product of power and time. Any factor that may affect one of these two parameters should therefore be identified, and its effect on energy consumption analyzed. These factors belong to different levels: Hardware, Application Software and System Software.
§.§ Hardware
The Advanced Configuration and Power Interface (ACPI) specification provides an open standard that allows the operating system to manage the power of the devices and the entire computing system <cit.>. It allows managing the energy behavior of the processor, the component that consumes the most energy in a computer system <cit.>. ACPI defines Processor Power States (Cx states), where C0 is the execution state, and C1...Cx are inactive states.
A processor that is in the C0 state will also be in a Performance State (Px states). The P0 state means execution at maximum performance and power demand. As the P-state number increases, performance and demanded power are reduced. Processors implement P states using the Dynamic Voltage and Frequency Scaling (DVFS) technique <cit.>. Reducing the supplied voltage reduces the energy consumption. However, the delay of the logic gates is increased, so it is necessary to reduce the clock frequency of the CPU so that the circuit works correctly. In certain multicore processors, each core is allowed to be in a different P state.
When there are no instructions to execute, the processor can be put in a C state greater than 0 to save energy. There are different C state levels, where each of the levels could turn off certain clocks, reduce certain voltages supplied to idle components, turn off the cache memory, etc. The higher the C state number, the lower the power demanded, but the higher the latency required to return to state C0 (execution status). Some processors allow the choice of a C state per core.
As both states, C and P, have an impact on power and time, it is necessary to evaluate their impact on energy consumption during C/R operations.
The GNU/Linux kernel supports frequency scaling through the CPUFreq (CPU Frequency scaling) subsystem. This subsystem includes the scaling governors and the scaling drivers. The different scaling governors represent different policies for the P states. The available scaling governors are: performance (this causes the highest frequency defined by the policy), powersave (this causes the lowest frequency defined by the policy), userspace (the user defines the frequency), schedutil (this uses CPU utilization data available from the CPU scheduler), ondemand (this uses CPU load as a CPU frequency selection metric) and conservative (same as ondemand, but it avoids changing the frequency significantly over short time intervals, which may not be suitable for systems with limited power supply capacity). The scaling drivers provide information to the scaling governors about the available P states and make the changes in those states [https://www.kernel.org/doc/html/v4.14/admin-guide/pm/cpufreq.html]. In this work we use the userspace and ondemand governors.
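As a point of reference, both the governor and the userspace target frequency are exposed through the CPUFreq sysfs interface; a minimal sketch (run with root privileges, paths per the kernel documentation, and the target frequency a placeholder) is:

from pathlib import Path

# Minimal sketch: select the userspace governor and pin a frequency via sysfs.
# scaling_available_frequencies is provided by acpi-cpufreq-style drivers; the
# 1199000 kHz value below is a placeholder target.
cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")
print((cpu0 / "scaling_available_frequencies").read_text().split())
(cpu0 / "scaling_governor").write_text("userspace")
(cpu0 / "scaling_setspeed").write_text("1199000")      # target frequency in kHz
print((cpu0 / "scaling_cur_freq").read_text().strip())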
§.§ Application software
Basically, a checkpoint consists of saving the state of an application so that, in case of failure, execution can be restarted from that saved point. The larger the problem size of the application, the longer the time required to save its state. This influence, at least on time, makes the problem size a factor that affects the energy consumption of C/R operations.
§.§ System software
There are two types of system software highly involved in C/R operations. On the one hand, the system that carries out these operations; on the other hand, since the checkpoint file needs to be protected in stable and remote storage, a distributed file system is necessary. In our case, the system software we use is Distributed MultiThreaded CheckPointing (DMTCP) and the Network File System (NFS). Both have configuration options that affect the execution time and/or power, and are therefore factors that affect the energy consumption of the C/R operations.
In the NFS case, folders can be mounted synchronously (sync option) or asynchronously (async option). If an NFS folder is mounted with the sync option, writes at that mount point cause the data to be completely flushed to the NFS server and written to persistent storage before control is returned to the client[https://linux.die.net/man/5/nfs]. Thus, the time of a write operation is affected by varying this configuration.
In the case of DMTCP, it is a tool that performs checkpoints transparently on a group of processes spread among many nodes and connected by sockets, as is the case with MPI (Message Passing Interface) applications. DMTCP is able to compress (using the gzip program) the state of a process to require less disk storage space and reduce the amount of data transmitted over the network (between the compute node and the storage node). The use or not of compression impacts the time and the power required, therefore it is another factor that affects energy consumption.
§ EXPERIMENTAL PLATFORMS AND DESIGN
§.§ Experimental platform
The experiments were carried out on two platforms. Platform 1, on
which most of the analysis of this work is carried out, is a cluster of computers with a 1 Gbps Ethernet network. Each node, both computing and storage, has 4 GiB of main memory, a SATA hard disk of 500 GB and 7200 rpm, and an Intel Core i5-750 processor, with a frequency range of 1.2 GHz to 2.66 GHz (with the Intel Turbo Boost [https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html] disabled), four cores (without multithreading), 8 MiB of cache and 95 W TDP. The clock frequency range goes from 1.199 GHz. to 2.667 GHz. Platform 2 is a compute node connected to a storage node with a 1 Gbps Ethernet network. The compute node has an Intel Xeon E5-2630 processor, a frequency range of 1.2 GHz to 2.801 GHz (with the Intel
Turbo Boost mechanism disabled), six cores (with multithreading disabled),
16 GiB of main memory, 15 MiB of cache and TDP of 95 W. It uses a
Debian 9 Stretch operating system. The storage computer has an Intel Core 2 Quad Q6600 processor, four cores (without multithreading), 8 GiB of main memory and 4 MiB of cache memory. The clock frequency range goes from 1.2 GHz. to 2.801 GHz.
The nodes of Platform 1 and the computing node of Platform 2 use the
GNU/Linux operating system Debian 8.2 Jessie (kernel version 3.16 of 64 bits), OpenMPI version 1.10.1 as an MPI message passing library, and the tool checkpoint DMTCP version 2.4.2, configured to compress the checkpoint files. The network file system used to make the remote writing of the checkpoint files is NFS v4 (Network File System).
For power measurements we use the PicoScope 2203 oscilloscope (whose
accuracy is 3%), the TA041 active differential probe, and the PP264
60 A AC/DC current clamp, all Pico Technology products. The electrical
signals captured by the two-channel oscilloscope are transmitted in
real time to a computer through a USB connection. The voltage is measured
using the TA041 probe that is connected to an input channel of the
oscilloscope. The current of the phase conductor, which provides energy
to the complete node (including the power source) is measured using
the current clamp PP264, which is connected to the other input channel
of the oscilloscope.
The selected application[https://computing.llnl.gov/tutorials/parallel_comp/#ExamplesHeat] for system characterization is an SPMD heat transfer application written in MPI that uses the float data type. This application describes, by means of an equation, the change of temperature over time on a plane, given an initial temperature distribution and certain boundary conditions.
§.§ Experimental design
Two compute nodes are used (unless otherwise specified) and each compute node writes to a dedicated storage node through an NFS configured in asynchronous mode (unless otherwise specified). The sampling rate used for both channels
of the oscilloscope was set at 1000 Hz. The power measurements correspond
to the power dissipated by the complete node including the source.
The tests were performed with the processor C states option active (unless otherwise specified).
For the measurements of the checkpoint and restart time we use the
option provided by DMTCP. To change the frequency of the processor,
the userspace governor is used. The same frequency is used in all cores at the same time.
Each experiment consists of launching the application with one process per core, letting it run during a warm-up period (20 seconds), performing a checkpoint manually, aborting the application from the DMTCP coordinator, and restarting the application from the command line with the script generated by DMTCP. The experiment is repeated three times for each frequency and problem size due to the low variability of the measurement instruments used.
§ EXPERIMENTS AND RESULTS ANALYSIS
In this section we present the experiments and analyze how the aforementioned factors influence the power, time, and energy consumption of C/R operations.
The first subsections (<ref> to <ref>) analyze the effect of clock frequency (see subsection <ref>) and problem size (see subsection <ref>) on Platforms 1 and 2. As there are no significant differences between the two platforms for these factors, we have carried out the study of the remaining factors, NFS configuration and compression of the checkpoint files (subsections <ref> and <ref>) only on Platform 1. We understand that these last factors are influenced mainly by the network, which is the same in both platforms.
§.§ Real power
In order to know the energy we need to know the average power dissipated of each operation. Fig. <ref> shows the real power on both Platforms, obtained with the measurements delivered by the oscilloscope, when an application is executed, a checkpoint is done, a fault is injected, and a restart is initiated. These power measurements are averaged over the duration of the checkpoint and restart operation. In the rest of the work, when referring to dissipated power, we refer to the average dissipated power.
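For clarity, a minimal sketch of how the average power and the energy of one operation follow from the two sampled channels is shown below; the waveforms and the checkpoint window are synthetic placeholders, not measured data.

import numpy as np

# Synthetic stand-ins for the 1000 Hz voltage and current channels.
fs = 1000.0
t = np.arange(0.0, 60.0, 1.0 / fs)
voltage = 220.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
current = 0.5 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)

p_inst = voltage * current                   # instantaneous power p(t) = v(t) i(t)

start, end = 20.0, 35.0                      # assumed checkpoint window (s)
window = (t >= start) & (t < end)
avg_power = p_inst[window].mean()            # average (real) power of the operation
energy = avg_power * (end - start)           # energy = average power x duration
print(f"P = {avg_power:.1f} W, E = {energy:.1f} J")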
Platform 2, unlike Platform 1, has two high phases and two low phases during the checkpoint. To know what happened in those phases we measured the CPU and network usage in both platforms, which is shown in Fig. <ref>. When observing the use of CPU, we see that the high power phases coincide with a higher CPU usage, and that low phases coincide with a low CPU usage. During a checkpoint, the CPU is used to compress. From this it follows that in Platform 2, DMTCP, at times, stops compressing. When observing the use of the network, we see that on both platforms the transmission begins a few seconds after the checkpoint has started and continues until the end of the checkpoint. We also note that in the first low phase of Platform 2, the transmission rate drops by half, causing inefficient use of the network. Studying these inefficiencies could be part of future work. In the case of the restart, the transmission rate remains stable throughout its duration, on both platforms.
§.§ Processor's P states
The impact of processor's P states on the C/R operations energy consumption was evaluated. The figures <ref> and <ref> show the average dissipated power (a) and time (b) for different frequencies of the processor, on both platforms. It is observed that as the clock frequency increases, the dissipated power increases and the time decreases. The functions obtained are strictly increasing for the case of dissipated power and strictly decreasing for the case of time. It is also observed that the checkpoint is more affected by clock frequency changes, both in the power dissipated and in time.
§.§ Processor's C states
During the writing or reading of a checkpoint file it is possible that the processor has idle moments and therefore transitions between C states can occur. These transitions can affect the power dissipation of the processor. To study its behavior, C/R operations were performed with the C states enabled and disabled, for several frequencies of the processor, on Platform 1 and 2. The results are shown in the figures <ref> and <ref>.
In both platforms it is observed that the power measurements with the C states enabled show greater variability, especially in the restart at Platform 1. It is also observed how the difference between the power dissipated with C states enabled and disabled increases with the increasing frequency of the processor. On Platform 1, this difference becomes approximately 9 % for the checkpoint (at the maximum frequency), and 10% for the restart (at the frequency 2.533 GHz). In Platform 2 these differences are smaller, 6% for the checkpoint, and 5% for the restart, in the frequency 2.6 GHz in both cases.
Energy consumption mostly benefits from the use of C states. On Platform 1, for the checkpoint case, the consumption is up to 13% higher when the C states are disabled, and in the restart case, up to 20% higher. On Platform 2 these differences are smaller, up to 5%, except for the 1.4 GHz frequency, where the differences are greater than 15%, both for checkpoint and restart.
In any case, the best option is to keep the C states enabled
since they reduce the energy consumption by up to 13% for the checkpoint,
and up to 20% for the restart, at certain CPU frequencies. Times showed no variation
when enabling or disabling the C states.
§.§ Problem size
The impact of the problem size on energy consumption of C/R operations was evaluated. The power dissipated and the time of the checkpoint and restart were measured, for different problem sizes, on both platforms.
Problem sizes do not exceed the main memory available in the compute node. In the figures <ref>(a) and <ref>(a) it is observed that the power dissipated almost does not vary when varying the problem size (the differences do not exceed 4% in any case). In the figures <ref>(b) and <ref>(b) it is observed that the time increases as the problem size increases, as expected.
§.§ NFS configuration
The impact on energy consumption of the NFS sync and async options was evaluated. Figure <ref> compares the dissipated power, time and energy consumed by a checkpoint stored on a network file system mounted with the sync and async options, for three different clock frequencies (minimum, average and maximum of the processor), on Platform 1. For the three frequencies, the dissipated power is greater and the execution time is shorter when the asynchronous configuration is used. The shorter time is explained by the asynchronous mode of operation, which does not need to wait for the data to be committed to the server before advancing. The greater dissipated power may be due to the fact that the asynchronous mode decreases the idle times of the processor, and therefore the energy-saving C states are not activated (see subsection <ref>). However, for the minimum and average frequency, these differences are small, resulting in a similar energy consumption, as shown in Fig. <ref>(c). The asynchronous mode consumes 1.5% more energy at the minimum frequency. It could be said that this NFS configuration does not affect the energy consumption when using this frequency. At the medium frequency, the asynchronous mode consumes 7% less energy.
If we now observe what happens at the maximum frequency, we see that the asynchronous mode consumes 25% less energy. Although the dissipated power is 37% higher, the time is 85% lower, and this means that at the maximum frequency, it is convenient to use the asynchronous mode for lower energy consumption.
§.§ Checkpoint file compression
The impact of checkpoint file compression on the energy consumption of the computation node was analyzed, and in this work we added the study of the storage node. The experiments were performed on a single computation node writing to a storage node, for three different clock frequencies (minimum, average and maximum available in the processor) and for the ondemand governor (see section <ref>), on Platform 1.
The figure <ref> shows the power and time of checkpoint and restart, with and without compression of the checkpoint files.
The results obtained show the following:
* The dissipated power, both in checkpoint and restart, is greater when using compression. This is due to the higher CPU usage that is required to run the compression program (gzip) that DMTCP uses.
* Without compression, checkpoint and restart times almost do not vary for different clock frequencies.
Fig. <ref> shows the energy consumption of computation and storage nodes. In the case of the storage node, the clock frequencies indicated are the frequencies used by the computing node. Because the application does not share the storage node with other applications, the energy considered for the storage node is calculated using the total dissipated power (base power plus dynamic power). In the computation node, the energy consumption of the checkpoint is always greater when using compression (up to 55% higher in the minimum frequency). In the storage node, the energy consumption of the checkpoint is always lower when using compression, except for the minimum frequency. The energy consumption of the restart is always lower when using compression (up to 20%), both in computing and storage node.
Let us now consider the total energy consumption of both nodes (computation and storage), shown in Fig. <ref>. For the checkpoint case, it is never advisable to compress, and even less so at the minimum frequency. However, when no compression is used, the minimum frequency is the most convenient, because it is the one with the lowest energy consumption. In any case, the energy consumption when no compression is used is similar for all frequencies studied, with differences that do not exceed 12%.
For the restart case, it is always convenient to compress. When using compression, the lowest energy consumption is obtained with the frequency 1.999 GHz. However, by not using compression, the lowest energy consumption is obtained with the ondemand governor. Studying this behavior can be part of future work.
Taking into account that, in general, more checkpoint operations than restart operations are carried out, it is advisable not to use compression to reduce energy consumption.
§ CONCLUSIONS AND FUTURE WORK
This work shows, from a series of experiments on a homogeneous cluster executing an SPMD application, how different factors of the system and the applications impact the energy consumption of coordinated C/R operations using DMTCP. The impact of the P and C states of the processor was studied. It was found that checkpoint operations are more sensitive to changes in P states than restart operations. Conversely, changes in the processor's C states affect the restart more. On the evaluated platforms, the use of C states allows energy savings (up to 15% for the checkpoint and 20% for the restart). An increase in the application problem size always results in an increase in energy consumption, due to its strong impact on the time taken by the operations. It was found that, at the maximum clock frequency, up to 25% energy savings were possible when using the asynchronous mode in the NFS configuration. The compression of the checkpoint files is beneficial for the restart. When considering only the computation node, up to 20% energy savings were registered when using compressed files. However, compression negatively impacts the checkpoint operation, with a 55% higher energy consumption when using the minimum clock frequency. Future work is expected to evaluate other factors that may affect the energy consumption of checkpoint and restart operations, such as the compression program used for checkpoint files, the parallel application programming model, and the fault tolerance tool used.
6
Amrizal2017
Amrizal, M. A., Takizawa, H.: Optimizing Energy Consumption on HPC Systems with a Multi-Level Checkpointing Mechanism. In International Conference on Networking, Architecture, and Storage (NAS). IEEE (2017).
Mills2013
Mills, B., Grant, R. E., Ferreira, K. B.: Evaluating energy savings for checkpoint/restart. In Proceedings of the 1st International Workshop on Energy Efficient Supercomputing. ACM (2013)
Mills2014
Mills, B., Znati, T., Melhem, R., Ferreira, K. B., Grant, R. E.: Energy consumption of resilience mechanisms in large scale systems. In: 122nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing. IEEE, (2014)
Meneses
Meneses, E., Sarood, O., Kalé, L. V.: Assessing energy efficiency of fault tolerance protocols for HPC systems. In 2012 IEEE 24th International Symposium on Computer Architecture and High Performance Computing (pp. 35-42). IEEE.
Diouri
Diouri, M., Glück, O., Lefevre, L., Cappello, F.: Energy considerations in checkpointing and fault tolerance protocols. In IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN 2012) (pp. 1-6). IEEE. (2012, June).
Shalf2010
Bergman, K., Borkar, S., Campbell, D., Carlson, W., Dally, W., Denneau, M., Karp, S.: Exascale computing study: Technology challenges in achieving exascale systems. Defense Advanced Research Projects Agency Information Processing Techniques Office (DARPA IPTO), Tech. Rep, (2008)
Dauwe2017
Dauwe, D., Jhaveri, R., Pasricha, S., Maciejewski, A. A., Siegel, H. J.: Optimizing checkpoint intervals for reduced energy use in exascale systems. In 2017 Eighth International Green and Sustainable Computing Conference (IGSC) (pp. 1-8). IEEE.(2017, October)
Rajachandrasekar2015
Chandrasekar, R. R., Venkatesh, A., Hamidouche, K., Panda, D. K.: Power-check: An energy-efficient checkpointing framework for HPC clusters. In 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (pp. 261-270). IEEE. (2015, May)
Cui2016
Cui, X., Znati, T., Melhem, R.: Adaptive and power-aware resilience for extreme-scale computing. In Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld) (pp. 671-679). IEEE.(2016, July)
Saito2013
Saito, T., Sato, K., Sato, H., Matsuoka, S.: Energy-aware I/O optimization for checkpoint and restart on a NAND flash memory system. In Proceedings of the 3rd Workshop on Fault-tolerance for HPC at extreme scale (pp. 41-48). ACM. (2013, June).
ibtesham2014
Ferreira, K. B., Ibtesham, D., DeBonis, D., Arnold, D.: Coarse-grained Energy Modeling of Rollback/Recovery Mechanisms (No. SAND2014-2159C). Sandia National Lab.(SNL-NM), Albuquerque, NM (United States) (2014).
Silveira2016
Silveira, D. S., Moro, G. B., Cruz, E. H. M., Navaux, P. O. A., Schnorr, L. M., and Bampi, S.:Energy Consumption Estimation in Parallel Applications: an Analysis in Real and Theoretical Models. In XVII Simposio em Sistemas Computacionais de Alto Desempenho, (pp. 134-145) (2016).
le2010dynamic
Le Sueur, E., Heiser, G.: Dynamic voltage and frequency scaling: The laws of diminishing returns. In Proceedings of the 2010 international conference on Power aware computing and systems (pp. 1-8).(2010, October).
DMTCP
Ansel, J., Arya, K., Cooperman, G.: DMTCP: Transparent checkpointing for cluster computations and the desktop. In 2009 IEEE International Symposium on Parallel & Distributed Processing (pp. 1-12). IEEE (2009, May).
Moran2018
Morán, M., Balladini, J., Rexachs, D., Luque E.: Factores que afectan el consumo energético de operaciones de checkpoint y restart en clusters. In: XIX Workshop Procesamiento Distribuido y
Paralelo (WPDP), XXIV Congreso Argentino de Ciencias de la Computación, pp. 63-72, CACIC 2018. ISBN 978-950-658-472-6.
Diouri2013
Diouri, M., Glück, O., Lefevre, L., Cappello, F.: ECOFIT: A framework to estimate energy consumption of fault tolerance protocols for HPC applications. In 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (pp. 522-529). IEEE (2013, May).
Bergman08exascalecomputing
Bergman, K., Borkar, S., Campbell, D., Carlson, W., Dally, W., Denneau, M., Karp, S.: Exascale computing study: Technology challenges in achieving exascale systems. Defense Advanced Research Projects Agency Information Processing Techniques Office (DARPA IPTO), Tech. Rep, 15. (2008).
acpi
ACPI - Advanced Configuration and Power Interface. <http://www.acpi.info>
|
http://arxiv.org/abs/2409.02823v1 | 20240904154259 | Design Contradictions: Help or Hindrance? | [
"Aron E. Owen",
"Jonathan C. Roberts"
] | cs.HC | [
"cs.HC"
] |
Design Contradictions: Help or Hindrance?
Aron E. Owen and Jonathan C. Roberts
===============================================================================
§ INTRODUCTION
The quest for innovation in data visualisation requires us to seek out and embrace new creative approaches continually. This process can positively impact creativity, leading to the development of novel ideas and designs. However, as we transition towards AI-driven design, it is essential to question whether these design contradictions can work positively with AI tools. AI systems, including large language models (LLMs) like OpenAI's GPT series, often need to catch up in this area. These systems rely heavily on algorithms that promote similarity and repetition, whereas true creativity thrives on divergence and novelty. This poster aims to initiate a crucial conversation about how we can drive AI systems to be more creative and generate new, innovative ideas.
We explore a specific example, the "Round Bar Chart", demonstrating how these contradictions can inspire creativity yet lead to suboptimal outputs when handled by generative AI. This example highlights the need for better strategies to harness AI tools' creative potential.
As we delve deeper into this subject, we are compelled to reassess traditional design methods and contemplate new approaches tailored to the era of AI-driven design. This prompts the question: can established techniques like the double diamond model be effectively applied, or do we need entirely new methods for design engineering? This research underscores the pressing need for the development of new design methods that can effectively harness generative AI to expedite the design of visualisations and foster the creation of new ideas, a pivotal step in the evolution of data visualisation. By understanding and addressing the limitations and possibilities of AI-driven design, we can pave the way for more innovative and effective visualisation solutions, a prospect that holds immense promise for the future of data visualisation.
§ RELATED WORK AND CONCEPTS
Understanding design contradictions is a collaborative endeavour, requiring the collective insights of researchers from various disciplines. Moreover, this is particularly crucial in generative AI systems, where design contradictions play a significant role. Multidisciplinary research is necessary and a testament to our academic community's collective intelligence and shared responsibility. When handling contradictions in Natural Language Processing (NLP), we can turn to the Stanford Typed Dependencies Representation and word embeddings for contradiction detection, which are not just theoretical constructs but practical tools that have revolutionised our understanding of syntactic and semantic relations within the text. These tools have been pivotal in identifying and managing contradictions within textual data, thereby enhancing NLP models' reliability and real-world applications <cit.>. Another aspect of contradictions lies in the data itself: there is a comprehensive guide to creating compelling visualisations, including techniques for handling conflicting data <cit.>, which demonstrates how contradictions may not solely affect generative AI but can also arise from the data itself. Furthermore, this work emphasises the importance of clear design principles and methodologies to avoid misinterpretations caused by data conflicts. Moreover, this is further explored when discussing various visualisation methods suitable for different data types, including those with inherent conflicts <cit.>.
The concept of cognitive load theory, which examines the effects of cognitive load on learning and problem-solving <cit.>, is crucial for understanding how users interact with AI-generated content, particularly when faced with contradictions. This understanding is extended by work on storytelling in visualisation, which highlights how narrative techniques can reduce cognitive load and enhance user comprehension of complex data <cit.>. We must also be aware of sound design principles like Tufte's seminal work, which outlines fundamental principles for creating clear and compelling visualisations, emphasising eliminating unnecessary complexity <cit.>. Building on these principles, <cit.> offers practical advice on visualisation to accurately and efficiently convey quantitative information. These principles are particularly relevant when addressing the design contradictions posed by generative AI outputs. The literature highlights significant efforts in understanding and addressing design contradictions in generative AI and visualisation design. The studies underscore the importance of clear design principles, effective contradiction detection, and the reduction of cognitive load to enhance user satisfaction. Ongoing interdisciplinary research is essential to develop more reliable AI models capable of handling these contradictions, ultimately improving AI-generated content's utility.
§ CASE STUDY - ROUND BAR CHART
Designers, always on the lookout for fresh and unconventional sources of inspiration, may find a spark in the world of sound engineering, specifically the circular visualisers used in audio applications. Furthermore, this intriguing concept raises a question: Can we apply the circular effect to the data of a traditional bar chart, creating a unique and engaging visualisation? A round bar chart is inherently contradictory because bar charts are designed to use rectangular bars to compare categorical data through their lengths. Making them round contradicts the primary purpose of the bar chart, which is to show differences in length clearly. However, let us investigate this use case further. Imagine attempting to represent sales data for different products using a round bar chart. This design, with its circular shape and bars radiating outwards, is undeniably eye-catching. Yet, it presents a host of challenges. The radial arrangement makes it difficult to compare bar lengths, and the potential for overlapping or converging bars adds visual clutter. The chart may also include unnecessary design elements that distract from the data, further complicating the visualisation.
Further analysis of the output, as seen in Fig. <ref>, demonstrates that the round bar chart fails to achieve the clarity of a traditional bar chart. The intended purpose of comparing categorical data through bar lengths is compromised. This analysis represents the hindrance aspect of our design contradiction.
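For comparison, such a layout is straightforward to produce directly with a standard plotting library; the following minimal sketch uses hypothetical sales figures rather than data from this work.

import numpy as np
import matplotlib.pyplot as plt

products = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon"]   # hypothetical data
sales = [23, 45, 12, 38, 30]

theta = np.linspace(0.0, 2 * np.pi, len(sales), endpoint=False)
width = 0.85 * 2 * np.pi / len(sales)

ax = plt.subplot(projection="polar")
ax.bar(theta, sales, width=width, bottom=0.0, alpha=0.7)
ax.set_xticks(theta)
ax.set_xticklabels(products)
ax.set_yticklabels([])            # radial scale hidden, as in many decorative variants
plt.title("Round bar chart of hypothetical sales")
plt.show()

Even in this idealised rendering, bar lengths must be judged against a radial scale that is awkward to read, which is exactly the hindrance discussed above.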
Despite the challenges, the round bar chart offers unique and exciting possibilities. Its circular design can be a visual feast, capturing attention in presentations, reports, or dashboards. It has a contemporary and innovative appearance, making it a perfect fit for design-forward or creative industries. It provides a compact representation that can fit into confined spaces, such as a corner of a dashboard or a compact infographic. A round bar chart can intuitively represent this cyclic nature, emphasising continuity and periodicity for data with a natural cyclic pattern (e.g., hours in a day, months in a year). The unique format can be used creatively to tell a story or present data in a novel way, engaging the audience more deeply than traditional charts. The distinctive visual form can make the data more memorable and impactful.
While round bar charts introduce challenges in data interpretation, they offer significant visual and aesthetic benefits. They can provide compelling data representations that capture attention, highlight cyclical patterns, and enhance storytelling when used thoughtfully. The key is to balance these visual benefits with the need for clarity and accuracy in data representation.
This case study asks: Do design contradictions help or hinder the creative process in visualisation? Can we leverage these contradictions to push the boundaries of design while maintaining the integrity of the data presented? This exploration invites further discussion on the potential of AI-driven creativity and the development of new design methodologies in data visualisation.
§ CONCLUSION
This paper ventures into a unique and unexplored area of research, delving into the design contradictions that large language models (LLMs) such as OpenAI's GPT face when generating visualisations. These 'design contradictions' are the inherent conflicts that arise when LLMs attempt to visualize data with contradictory elements like round bar charts. The models often resort to visual shortcuts that compromise the clarity and accuracy of the output, underscoring the need for more sophisticated handling of such contradictions. Our example vividly illustrates how LLMs grapple with these contradictions, often leading to cluttered and unclear outputs, such as bar charts with overlapping bars. These examples underscore the significant challenges in maintaining data integrity and clarity when faced with contradictory design requirements.
We warmly invite the research community to join us on this journey of continuous improvement. By working together, we can develop more robust LLMs capable of effectively handling design contradictions.
|
http://arxiv.org/abs/2409.02301v1 | 20240903212740 | Theory of charge stability diagrams in coupled quantum dot qubits | [
"Nathan L. Foulk",
"Sankar Das Sarma"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742-4111 USA
§ ABSTRACT
We predict large regions of the charge stability diagram using a multi-band and multi-electron configuration interaction model of a double quantum dot system.
We account for many-body interactions within each quantum dot using full configuration interaction and solve for single-particle density operators.
This allows charge states to be predicted more accurately than the extensively used classical capacitance model or the single-band Hubbard model.
The resulting single-particle mixed states then serve as inputs into an atomic orbital picture that allows for the explicit calculation of the underlying Hubbard model parameters by performing the appropriate integrals. This numerical approach allows for arbitrary choices of electrostatic potential and gate geometry.
A common assumption when calculating charge stability diagrams from the Hubbard model is that the charge stability diagrams are periodic, but we find that the tunnel couplings for valence electrons in dots with N=3 electrons are significantly enhanced when compared to single-electron dots.
This difference is apparent in the charge stability diagram for higher occupancy Coulomb diamonds.
We also quantitatively explore how the barrier gate strength and dot pitch impact this behavior.
Our work should help improve the future realistic modeling of semiconductor-dot-based quantum circuits.
Theory of charge stability diagrams in coupled quantum dot qubits
S. Das Sarma
September 9, 2024 – Version 1.0
=================================================================
§ INTRODUCTION
Quantum dot based semiconductor spin qubits <cit.> are a promising candidate for scalable quantum computing, benefitting from long coherence times <cit.>, fast gate operations <cit.>, and integration possibilities with the existing semiconductor industry <cit.>.
Using lithographically defined metal gates, quantum dots are formed in the 2D quantum well of a semiconductor heterostructure.
These dots are extremely small, on the order of 100 nm, and their small size provides much of the promise of this platform as one of the few qubit implementations that are serious candidates for a fully error-corrected and scalable quantum computer <cit.>, the scalability arising from compatibility with the vast semiconductor electronics industry.
Each dot is loaded with a predetermined number of electrons, and the qubits are represented by the spin states of these electrons.
Single spin manipulation can be performed with electrical or magnetic means and the Heisenberg exchange interaction allows for fast and predictable electrical two-spin control between nearest neighbors <cit.>.
The simplest qubit encoding is the Loss-Divincenzo qubit, where the two-level system is a single electron spin confined to a single quantum dot <cit.>.
Many other qubit encodings exist, including singlet-triplet qubits <cit.>, exchange-only qubits <cit.>, and the flopping-mode qubit <cit.> to name a few.
However, all encoding schemes require precise control over the electron number of each dot (although a restriction to one electron per dot is not essential for qubit operations) <cit.>.
Therefore, one must tune each device so that each dot contains the correct electron number, and this must be done each time the device is cooled and initialized.
In addition, this number of electrons must be preserved during the gate operations—a 1-electron dot cannot suddenly become a 3- or 5-electron dot due to local fluctuations.
A typical spin qubit device contains multiple dots arranged in a linear array using an alternating pattern of plunger gates and barrier gates.
An example of such a device is found in Fig. <ref>(b).
Each quantum dot is electrostatically defined under a plunger gate.
Plunger gate voltages are raised or lowered to control the electron number of each dot.
The two dots are separated by a distance known as the pitch.
Barrier gates (between the dots) are primarily used to modulate the wavefunction overlap between two dots.
The gate voltages on the plunger and barrier gates are the most important variables in determining the charge state of the dots.
Modern quantum dot devices are made of many more gates and components than just these two, but the plunger and barrier gates occupy a central role.
Any minimal model must include the role of these two gates, one controlling each dot and the other controlling the inter-dot coupling.
Charge stability diagrams are used to visualize and map out the different charge regimes in a double quantum dot, thus enabling the precise control over electron occupancy necessary for qubit operations.
The plunger gate voltages for two neighboring dots are varied, and the conductance is measured.
A nonzero conductance signifies that electrons are tunneling into the device or between dots.
Modern devices also employ nearby quantum dots for charge-sensing measurements, where the charge-sensor signal measures the charge states capacitively.
An example of a charge stability diagram is shown in Fig. <ref>a.
The diagram typically consists of sharp peaks of transitions and large plateaus of charge state stability.
These different plateaus demarcate different charge states for a double dot system. Each plateau, or cell, referred to as a “Coulomb diamond,” indicates fixed electron numbers.
These charge stability diagrams are essential for determining not only the initial gate voltages necessary for reliable charge initialization but also for mapping paths across the stability diagram, which are needed to carry out precise gate operations on the qubits.
Recently, charge stability diagrams have also been used as inputs to machine learning approaches for automated control of quantum dot arrays <cit.>.
The direct calculation of realistic charge stability diagrams given the underlying microscopics of a device is therefore a timely need in the quantum computing research community in order to cost-effectively produce qubit designs and to generate input data for machine learning algorithms for autonomous control.
As a complement to direct measurement, one can predict the charge boundaries of the stability diagram.
This can be done on varying levels of sophistication, the simplest of which is the classical capacitance model of the double dot system <cit.>.
The classical capacitance model uses the physical dimensions of the dots and their pitch to estimate the electrostatic parameters of the system, such as the charging energy of each dot and the Coulomb interaction between the dots.
This approach correctly identifies the charge regimes in the absence of quantum fluctuations.
In order to include the effects of quantum fluctuations—which are obviously significant for double quantum dot systems—one can use the Hubbard model with nonzero tunneling <cit.>.
The major Hubbard parameters that affect the stability diagram are the onsite Coulomb repulsion U, the interdot Coulomb repulsion V, and the tunnel coupling t.
The generalized Hubbard model reproduces the capacitance model exactly when t = 0.
When t > 0, however, the sharp corners of the diamonds become rounded, with larger tunnel constants leading to more rounding (see Fig. <ref>c).
The principal difficulty remains in choosing realistic Hubbard parameters for a given physical system.
Thus, the generalized Hubbard model, introduced in this context in Refs. <cit.>, considerably simplifies charge stability modeling provided that the microscopic parameters t, U, and V are known for the quantum dot system.
In the simplest case, given model confinement potentials and single electron occupancy, the estimation of these effective Hubbard parameters reduces to integrals of the single particle eigenstates <cit.>.
Such an approach is also implicitly based on charge stability diagram periodicity.
One chooses a single point (V_L,V_R) within the (N_L,N_R) = (1,1) cell, calculates the wavefunctions of each dot separately, and estimates the corresponding Hubbard parameters, consistent with a Hund-Mulliken atomic orbital picture.
Those parameters are then extended to the entire stability diagram in a way that produces perfectly periodic Coulomb diamonds.
Experimental stability diagrams superficially justify this approach, as their Coulomb diamonds are often close to periodic with regard to cell size and location.
However, the smoothness of the cell corners often varies drastically across the diagram <cit.>.
In order to improve this approach of charge stability diagram simulation, calculations featuring multiple electrons in each dot must be included so as to go beyond the (N_L,N_R) = (1,1) Coulomb diamond constraint.
We would expect the Hubbard parameters to depend on device-specific details, such as gate geometry, barrier gate level, electron occupancy, and both magnetic and electrostatic disorder.
Incorporating such device-specific details requires the use of extensive, system-dependent numerics, rather than analytic approaches.
Going beyond the (N_L,N_R) = (1,1) Coulomb diamond also addresses the practical needs of the spin qubit community, since a common practice is to load a quantum dot with three electrons rather than one (in principle, any odd number of electrons per dot suffices, but three seems to be the optimal number).
The spin state of the valence electron is then used as the qubit's two-level system <cit.>.
Calculating the Hubbard parameters of multi-electron dots is significantly more complex than single-electron dots.
The presence of other electrons perturbs the valence electron's wavefunction from the simple analytic expressions often used when assuming simplified potentials.
This increased electron number also complicates numerical treatments of the system, for the simple fact that simulating six-electron systems is very computationally expensive, as the relevant Hilbert space grows exponentially with the number of electrons.
An approach that leverages the computational efficiency of the atomic orbital projection method of previous work toward understanding N=3 quantum dots is therefore needed.
This is our goal in the current work utilizing the configuration interaction method, which is the standard technique in quantum chemistry for calculating the electronic structure of multielectron atoms and molecules.
We first present our approach of using full configuration interaction (FCI) to project an accurate representation of N=3 quantum dots in the effective Hubbard model space for simulating charge stability diagrams.
Separating the double dot system into two effective single dot potentials, we solve for the three electron wavefunction of each dot.
Using the natural orbitals of the one-electron reduced density matrix (1RDM) of each dot, we approximate the atomic orbitals and calculate the relevant Hubbard parameters.
We then present our results, emphasizing the novelties introduced by the many-body treatment, including enhanced valence tunnel coupling and an understanding of how barrier gate strength and device layout impact the system's projection onto the Hubbard model.
Finally, we discuss the implications of our findings for simulating charge stability diagrams and the quantum dot semiconductor spin qubits field at large.
Our explicit results assume the qubit platform to be based on quantum dots on the Si [100] surface, but the theory and the general method are applicable to all semiconductor quantum dot structures.
§ MODEL
We calculate charge stability diagrams of double quantum dot systems using a generalized Hubbard model.
Ĥ = μ_1 n_1 + μ_2 n_2 + ∑_i U/2 n_i (n_i - 1) + V n_1 n_2
+ ∑_m (t_m c^†_1m c_2m + H.c.)
where n_i is the total electron dot occupancy over all levels for dot i, and c^†_ij is the creation operator for the jth level for the ith dot.
Many terms can be included in a generalized Hubbard model, but the terms that dominate the charge stability diagram are t, U, and V, which are necessary in the minimal effective model (and should suffice in most situations) since interaction terms beyond nearest-neighbors are likely to be vanishingly small.
All other terms can be completely ignored, and the stability diagram will be almost exactly the same.
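To make the role of these parameters concrete, a minimal sketch of the t = 0 limit (which reproduces the capacitance-model diagram) is given below; the lever arm relating gate voltages to chemical potentials and all numerical values are placeholder assumptions, and nonzero t requires diagonalizing the full Hamiltonian instead.

import numpy as np
from itertools import product

U, V = 5.0, 2.0            # on-site and inter-dot charging energies (placeholders, meV)
alpha = 1.0                # assumed lever arm: mu_i = -alpha * Vg_i
max_n = 4                  # occupations searched per dot

def ground_occupation(vg1, vg2):
    mu1, mu2 = -alpha * vg1, -alpha * vg2
    best, e_best = (0, 0), np.inf
    for n1, n2 in product(range(max_n + 1), repeat=2):
        e = (mu1 * n1 + mu2 * n2
             + 0.5 * U * (n1 * (n1 - 1) + n2 * (n2 - 1))
             + V * n1 * n2)
        if e < e_best:
            best, e_best = (n1, n2), e
    return best

vgs = np.linspace(0.0, 14.0, 201)
total_charge = np.array([[sum(ground_occupation(v1, v2)) for v1 in vgs]
                         for v2 in vgs])
# The cell boundaries of 'total_charge' trace the charge-transition lines; with
# t > 0 the corners of these Coulomb diamonds become rounded.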
This Hamiltonian can be rewritten as
Ĥ = ∑_i,j F_ij c_i^† c_j + ∑_i,j,k,l G_ijkl c_i^† c^†_j c_k c_l.
Given the appropriate single particle wavefunctions that correspond to c_i^†|0⟩ and c_j^†|0⟩, where |0⟩ is the vacuum state, we can calculate F_ij
F_ij = ∫Ψ_i^*(𝐫) ĤΨ_j(𝐫) d 𝐫.
This can easily be extended to two-body terms as well:
G_ijkl = ∬Ψ_i^*(𝐫)Ψ_j^*(𝐫') ĤΨ_k(𝐫) Ψ_l(𝐫') d𝐫 d𝐫'.
However, the difficulty lies in extracting these mutually orthogonal, single-particle wavefunctions.
In reality, these single-particle states are entangled, and care must be taken to preserve their orthogonality.
We first calculate the many-body wavefunction.
We utilize full configuration interaction (FCI) <cit.> to calculate the multi-electron ground state.
These FCI calculations are performed within the effective mass approximation using the following Hamiltonian:
H(𝐫) = ∑_i ((𝐩_i + e𝐀(𝐫_i))^2/2m_e^* + V(𝐫_i) + μ_B 𝐒_𝐢·𝐁)
+ ∑_i,je^2/4πϵ1/|𝐫_i - 𝐫_j| .
where m^*_e is the effective mass for silicon in the in-plane directions, approximately m^*_e ≈ 0.19m_0, where m_0 is the electron's rest mass.
Additionally, electrostatic interactions are modulated by the increased permittivity for silicon, which is ϵ≈ 11.68 ϵ_0.
We utilize an analytic expression for the electrostatic potential in a quantum well from a square gate of radius a at a distance z between the gate and the well <cit.>
V_i(x,y) = V_g [g(x-a, y-a) + g(x-a, a -y)
+ g(a-x, y-a) + g(a - x, a -y)]
where
g(x,y) = 1/2πarctanxy/z√(x^2 + y^2 + z^2)
and V_g is the voltage applied to the gate electrode. An example of a possible gate architecture with its accompanying electrostatic potential is shown in Fig. <ref>.
For simplicity of demonstration, we will focus on square gates.
For all of our calculations, the plunger gates have a half-width of 25 nm and the barrier gate is 30×50 nm, as shown in Fig. <ref>a.
We emphasize that these choices are typical for the currently experimentally used Si-based quantum dot qubits, and our qualitative results are independent of these specific numerical choices for the confining potential.
Our approach can be implemented using any electrostatic potential.
Gates of different shapes, sizes, and pitches can be used, the only thing needed is a grid of the confining potential producing the dots.
Our choice of potential is arbitrary and could use any level of sophistication.
The potential used in this technique could easily be solved by carrying out the standard Schrodinger-Poisson self-consistent calculation using the relevant lithographic geometry.
However, we use the model potential confinement above for the dots instead of doing a numerical Schrodinger-Poisson approach because we are presenting a general theory instead of modeling specific devices.
We note as an aside that the Schrodinger-Poisson technique is essentially a one-particle problem, and is not particularly computationally demanding compared with the FCI technique we employ for extracting the Hubbard parameters as discussed below.
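To make the preceding concrete, a short sketch of how such a potential grid can be assembled is given below. The function names, grid extents, gate voltages, and the 50 nm gate-to-well distance are illustrative assumptions rather than values taken from a specific device, and the barrier gate is treated as square for brevity.

import numpy as np

def g(x, y, z):
    # Corner function entering the rectangular-gate potential above.
    return np.arctan(x * y / (z * np.sqrt(x**2 + y**2 + z**2))) / (2.0 * np.pi)

def square_gate_potential(x, y, vg, a, x0=0.0, y0=0.0, z=50e-9):
    # Potential from a square gate of half-width a centred at (x0, y0) and biased at vg.
    u, v = x - x0, y - y0
    return vg * (g(u - a, v - a, z) + g(u - a, a - v, z)
                 + g(a - u, v - a, z) + g(a - u, a - v, z))

d = 90e-9                                  # assumed dot pitch
x = np.linspace(-150e-9, 150e-9, 301)
y = np.linspace(-75e-9, 75e-9, 151)
X, Y = np.meshgrid(x, y, indexing="ij")

V_L = square_gate_potential(X, Y, vg=0.20, a=25e-9, x0=-d / 2)
V_R = square_gate_potential(X, Y, vg=0.20, a=25e-9, x0=+d / 2)
V_X = square_gate_potential(X, Y, vg=0.05, a=15e-9)   # barrier gate, squared off for brevity

V_total = V_L - V_X + V_R                  # confining-potential grid handed to the dot solvers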
In the case of multi-electron dots, calculating the exact many-body wavefunction for the double quantum dot becomes prohibitively expensive (even for our model confinement potential).
This is because each dot must hold a minimum of three electrons in order to have a valence electron, and the resulting N=6 electrons across the two dots make an FCI calculation impractical.
This leads us instead to treat the intradot and interdot interactions separately.
We assume that the correlation effects between the two dots are small enough to be safely ignored for the FCI calculations, and we calculate the FCI ground state for each dot separately. This is also consistent with the atomic orbital approach for calculating the Hubbard parameters since the generalized Hubbard approach to spin qubits implicitly assumes this individual dot independence we are using in our FCI theory.
However, the single dot potential in Eq. <ref> is incomplete for dot-specific FCI calculations because these single dot potentials are unaffected by changes to the barrier gate.
In reality, we expect a lowered barrier gate to significantly increase wavefunction overlap between the two dots and that a raised gate would decrease that same overlap.
We ensure that each single dot potential is both localized and accurately reflects the full potential—from both plunger gates and the barrier gate—at that site by taking the full potential and rapidly tuning it to zero as it approaches the second dot.
V_eff(x,y) = V(x,y) exp[ -max(0, x - d/4)^2 / (d/2)^2 ]
where d is the dot pitch and
V(x,y) = V_L(x,y) - V_X(x,y) + V_R(x,y).
This approach is illustrated in Fig. <ref> and we refer to these adjusted dots as effective single dot potentials.
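A minimal sketch of this masking step is shown below, assuming a coordinate system with the origin midway between the dots; the mirrored expression (x replaced by -x) gives the right-dot potential. The function name is ours.

import numpy as np

def effective_left_dot_potential(V_total, X, d):
    # Damp the full potential V(x, y) = V_L - V_X + V_R as it approaches the right dot,
    # following the exponential cutoff defined above (origin midway between the dots).
    mask = np.exp(-np.maximum(0.0, X - d / 4.0) ** 2 / (d / 2.0) ** 2)
    return V_total * mask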
Configuration interaction (CI) methods require expressing the many-body Hamiltonian in the basis of Slater determinants.
Each Slater determinant represents a different possible electron configuration, and the Coulomb term in Eq. <ref> couples different configurations, hence the term configuration interaction.
Given M spatial orbitals (2M spin orbitals) and N electrons, there are K = C(2M, N) = (2M)!/[N!(2M-N)!] possible determinants.
The process is described as full configuration interaction when all K determinants are employed.
Given the fact that the number of determinants K grows very quickly, we carefully choose our orbitals to keep the computational cost within reason and avoid memory constraints.
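A quick count (a sketch, using the binomial expression above) shows why treating the two dots separately matters:

from math import comb

def n_determinants(M, N):
    # N electrons distributed over 2M spin orbitals.
    return comb(2 * M, N)

for M in (10, 20, 30):
    print(M, n_determinants(M, 3), n_determinants(M, 6))
# With M = 30 orbitals a single three-electron dot needs ~3.4e4 determinants,
# while the six-electron double dot would need ~5.0e7.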
Rather than use a predetermined basis of orbitals for all calculations, we use the eigenstates of the single-particle Hamiltonian in Eq. <ref>.
This approach has already been used effectively for spin qubit modeling of single electron dots by Anderson et al. <cit.>.
This allows us to minimize the number of orbitals M which must be kept.
Therefore, for each choice of gate voltages and pitch, we solve for the single-particle eigenstates of each dot using exact diagonalization in a truncated Fourier basis consisting of N_F basis functions.
The eigenstates are approximated well after about N_F = 500 basis functions, and the eigenenergy convergence is shown in Fig. <ref>a.
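The single-particle step can be reproduced with any standard eigensolver. The sketch below uses a sparse finite-difference grid rather than the truncated Fourier basis described above, and it omits the magnetic-field and Zeeman terms; the function name and grid handling are our own illustrative choices.

import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

HBAR = 1.054571817e-34              # J s
M_EFF = 0.19 * 9.1093837015e-31     # in-plane effective mass for Si, kg
E_CHARGE = 1.602176634e-19          # C

def single_particle_states(V_eff, dx, dy, n_states=10):
    # Lowest orbitals of one effective single-dot potential (finite-difference sketch).
    # V_eff is a 2D array of the potential in volts; the electron potential energy is -e*V_eff.
    nx, ny = V_eff.shape
    def lap(n, h):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    L = sp.kron(lap(nx, dx), sp.identity(ny)) + sp.kron(sp.identity(nx), lap(ny, dy))
    H = -(HBAR**2 / (2.0 * M_EFF)) * L + sp.diags((-E_CHARGE * V_eff).ravel())
    energies, vectors = eigsh(H.tocsc(), k=n_states, which="SA")   # energies in joules
    return energies, vectors.T.reshape(n_states, nx, ny)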
Including enough orbitals M is a critical part of a successful calculation, and we demonstrate the FCI ground state energy convergence in Fig. <ref>b.
Increasing M until the output is consistent and predictable is mandatory (and is, in fact, the definition of self-consistent FCI), as it is very difficult to know a priori how many orbitals will be necessary for sufficiently converged Hubbard parameters.
Ground state convergence is not by itself sufficient for converged Hubbard terms, however.
Separating out parts of a system's wavefunction and projecting to the Hubbard space requires higher-order terms arising from the excited states to be included.
Early truncation can lead to erratic, numerically unstable values of the Hubbard parameters.
We have found keeping the first M=30 Slater determinants to be good practice for our choice of parameters, but this has to be ensured in each calculation since, depending on the details, more or fewer orbitals may be necessary for the computation.
Next, we need to extract the effective single-particle states from the FCI wavefunction.
For a single Slater determinant, this is simple.
However, such an approach would ignore correlational effects within the dot.
Separable approaches to the many-body system will never capture dynamical correlations, but important contributions from exchange and static correlation can still be incorporated.
Fortunately, when N=3, it becomes possible to isolate the valence electron well in such a way as to ensure that the two core electrons resemble a singlet state as much as possible.
This process consists of taking the one-particle reduced density matrix (1RDM) <cit.>, which is given as
γ_ij = ⟨Ψ | â_i^†â_j | Ψ⟩,
where | Ψ⟩ is the ground state FCI wavefunction.
The eigenvectors of this density matrix represent the natural orbitals of the system, and the eigenvalues are their occupation numbers.
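Numerically this step is a single Hermitian diagonalization; a minimal sketch (function name ours) is:

import numpy as np

def natural_orbitals(gamma):
    # Natural orbitals and occupations from a 1RDM gamma_ij = <Psi| a_i^dag a_j |Psi>.
    gamma = 0.5 * (gamma + gamma.conj().T)   # symmetrize against numerical noise
    occ, orbs = np.linalg.eigh(gamma)
    order = np.argsort(occ)[::-1]            # largest occupation first
    return occ[order], orbs[:, order]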
Because there are three electrons in the dot, the lone electron of one spin species, which without loss of generality we classify as spin up, can be isolated.
The spin-up natural orbitals and their occupation numbers therefore represent one of the core electrons, and we construct a density matrix to describe its state.
We can then use the remaining spin-down orbitals to construct another state that resembles this core electron as closely as possible.
The state that remains is that of the valence electron.
This is similar to the occupation number representation of a Slater determinant since it is not important which electron interacts with which.
Only a correct understanding of the states and their occupation is required.
Consequently, the many-body state is separated into three single-particle operators.
Each single-particle density operator's spin is also well-defined, which maps well to the Hubbard representation.
These single-particle density operators then need to be orthogonalized with respect to the same operators on a neighboring dot since they will have a nonzero overlap for any finite pitch.
This is usually done by performing a Löwdin orthogonalization process, which is symmetric with respect to its arguments.
Under the Löwdin procedure, the vectors are rotated so that no single vector is rotated more than the others.
This way, all new vectors will resemble their original starting points as much as possible.
This is the standard orthogonalization routine in quantum chemistry <cit.>.
Löwdin orthogonalization differs from the more familiar Gram-Schmidt process, which is strongly asymmetric in its treatment of the initial vectors.
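For reference, a minimal sketch of the symmetric orthogonalization applied to a set of column vectors is given below; it assumes the columns are linearly independent so that the overlap matrix is invertible.

import numpy as np

def lowdin_orthogonalize(C):
    # Symmetric (Loewdin) orthogonalization of the columns of C, expressed in an orthonormal basis.
    # Returns C' = C S^{-1/2} with S = C^dag C: the columns become orthonormal while staying
    # as close as possible (in the least-squares sense) to the originals.
    S = C.conj().T @ C
    vals, vecs = np.linalg.eigh(S)
    S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return C @ S_inv_sqrt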
However, a direct application of the Löwdin procedure is obviously inappropriate, since density matrices do not form a vector space.
Therefore, we again work with the natural orbitals of each density matrix.
To orthogonalize our set of single particle density matrices, we use the density operator Uhlmann fidelity F(ρ, σ) as the distance metric over which we define orthogonality.
F(ρ,σ) = tr(√(√(σ)ρ√(σ)))
Fidelity between two density operators F(ρ,σ) = 0 if and only if ρ and σ have orthogonal support <cit.>.
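A direct numerical transcription of this definition (a sketch, using scipy's matrix square root) is:

import numpy as np
from scipy.linalg import sqrtm

def uhlmann_fidelity(rho, sigma):
    # F(rho, sigma) = tr sqrt( sqrt(sigma) rho sqrt(sigma) ); zero iff the supports are orthogonal.
    s = sqrtm(sigma)
    inner = s @ rho @ s
    # Take the real part to discard small imaginary round-off from sqrtm.
    return float(np.real(np.trace(sqrtm(inner))))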
Löwdin orthogonalization on the eigenvectors of the single-particle density operators produces density matrices of single-particle states on neighboring dots with zero fidelity.
This orthogonalization is done as follows: We perform a change of basis from the 2M basis orbitals (M from each dot), which are not mutually orthogonal, to orthogonalized versions of these basis states.
This is a passive transformation to place density matrices from neighboring dots on equal footing and to work in an orthonormal basis.
We then find the natural orbitals that contribute most to each density matrix and orthogonalize them with respect to each other using the Löwdin procedure.
Orbitals are iteratively included until overlaps of states from neighboring dots reach a set threshold ε = 1×10^-10.
This is done because as more (irrelevant) orbitals are included in the orthogonalization procedure, the resulting states resemble their original versions less and less.
This lets us directly extract the relevant Hubbard parameters for obtaining the charge stability diagram.
We need only to modify the initial equations to handle density matrices rather than wavefunctions.
F_ij = tr√(√(ρ_i)Ĥρ_j Ĥ√(ρ_i))
G_ijkl = tr√(√(ρ_i ⊗ρ_j)Ĥρ_k ⊗ρ_l Ĥ√(ρ_i ⊗ρ_j))
where ρ_i corresponds to the state c^†_i |0⟩ in the Hubbard model.
With these quantities, we can efficiently simulate the charge stability diagram for a given gate architecture.
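As a rough illustration of that final step (not the full calculation used in the paper), the sketch below minimizes only the electrostatic part of the Hubbard energy over the dot occupations; tunneling is omitted, so the corners of the charge regions are not rounded. It assumes the common convention in which raising a dot's chemical potential favors adding electrons, so the sign of the μ terms differs from the Hamiltonian above, and the U and V values are arbitrary placeholders.

import numpy as np

def ground_state_charges(mu1, mu2, U, V, n_max=4):
    # Electron numbers (n1, n2) minimizing E = -mu1*n1 - mu2*n2 + (U/2) sum n_i(n_i-1) + V*n1*n2.
    best, e_best = (0, 0), np.inf
    for n1 in range(n_max + 1):
        for n2 in range(n_max + 1):
            e = (-mu1 * n1 - mu2 * n2
                 + 0.5 * U * (n1 * (n1 - 1) + n2 * (n2 - 1))
                 + V * n1 * n2)
            if e < e_best:
                best, e_best = (n1, n2), e
    return best

# Sweep the two chemical potentials to map out the honeycomb of charge regions.
U, V = 3.0e-3, 0.5e-3                     # illustrative values (eV), not fitted to any device
mus = np.linspace(0.0, 10.0e-3, 201)
diagram = np.array([[sum(ground_state_charges(m1, m2, U, V))
                     for m1 in mus] for m2 in mus])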
§ RESULTS
We take a one-dimensional slice in the charge stability diagram along the V_1 = V_2 diagonal and calculate the FCI ground state of each dot for N=1,2, and 3 to find the correct electron number.
These results are shown in Fig. <ref>.
The charge transitions at V = 0.30 V and V = 0.58 V mark the boundaries of the charge regimes that can be used for further calculations. These transition points depend on the gate layout and barrier gate strength, but they are relatively cheap to determine for N ≤ 3 electrons since the ground state energy converges much more quickly in M than the Hubbard parameters do.
We first calculate the tunnel coupling as a function of plunger gate voltage, barrier gate voltage, and dot pitch.
Fig. <ref> displays our results.
We find that tunnel coupling varies significantly over different orbital states and that t_3, which corresponds to accessing the first p-orbital of each dot, sees a 50x increase in tunnel coupling strength relative to the N=1 case for modest values of pitch and barrier gate.
This is sensible since the electrons are less localized to their original dot as they access higher orbital states, and we expect to see the overlap increase exponentially in such a regime.
It is worth noting that the value of t fluctuates significantly within each charge state, decreasing monotonically as the plunger gate voltages are increased.
This is physically logical, as we would expect the states to become more localized.
However, for the purposes of modeling charge stability diagrams, and specifically the transitions between charge states, this effect is minor compared to the order-of-magnitude shifts that occur at the charge transitions.
The effects of the barrier gate are apparent, with increased values of V_X immediately diminishing the result of t_3 ≫ t_1.
This behavior is only observable because of our effective single dot potentials, which take into account the barrier gate strength in calculating the center of the single dot well.
As we increase the barrier voltage, it appears that tunnel couplings for the n=3 electron drop below that of the core electrons.
This is likely only a numerical artifact, as the tunnel couplings in that regime are small enough that our separation and orthogonalization procedure may fail to capture the correct (but very small) value.
Numerical errors in such exponentially small tunnel coupling values are not a problem for obtaining accurate charge stability diagrams.
Analyzing the tunnel couplings calculated as the dot pitch is varied (Fig. <ref>) also illustrates that this regime of very large t_valence is quickly suppressed for larger pitches or barriers. Here it appears that d = 90 nm is a key value (for our parameters), after which the tunnel coupling starts to drop precipitously. Similar calculations could be useful for estimating the spatial distribution of the valence electrons in individual devices.
The effect of these occupancy-dependent tunnel couplings is seen when generating the charge stability diagram.
In Fig. <ref>a, we calculate a charge stability diagram using only Hubbard parameters calculated from a double dot system with plunger gates at 0.2V.
The Hubbard parameters are applied over the whole stability diagram.
Switching on level-dependent tunneling leads to the picture in Fig. <ref>b, as the increased tunnel coupling leads to greater corner smoothing for higher charge states.
Because the major changes in t occur at charge transitions, we model t in the Hubbard model as being piecewise constant over each Coulomb diamond. This already represents a significant improvement over the widespread practice of assuming that the tunnel coupling is constant over the entire stability diagram. If a transition involves tunneling from one dot with n electrons into another with m electrons, the tunnel coupling corresponding to that interaction is t_j, where j = max(n,m+1).
This occupancy-dependent tunneling is a new qualitative feature of our theory compared with the existing charge stability theories in the literature.
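In code, this bookkeeping amounts to a one-line lookup; the helper below is a sketch with illustrative coupling values, not numbers from our calculations.

def tunnel_coupling_for_transition(n_source, n_target, t_values):
    # One electron tunnels out of a dot holding n_source electrons into a dot currently
    # holding n_target electrons; use t_j with j = max(n_source, n_target + 1).
    j = max(n_source, n_target + 1)
    return t_values[j - 1]              # t_values = (t_1, t_2, t_3, ...)

t_values = (5e-6, 8e-6, 2.5e-4)         # illustrative couplings in eV
t = tunnel_coupling_for_transition(3, 2, t_values)   # selects t_3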
Because of the dramatically higher levels of tunnel coupling, points on the charge stability diagram that are one charge state in Fig. <ref>a could be another one entirely in Fig. <ref>b.
Another feature in the level-dependent coupling diagram is that for N > 3, the individual Coulomb diamonds become less pronounced, and the diagonal slices corresponding to the double dot total charge become more straight and featureless (see Fig. <ref>).
This feature is often seen in the experimental data <cit.>.
Experimental data usually only shows changes in the total charge of the system, though occasionally charge transitions at the dot level are also visible for a small dot occupation. Fig. <ref>b reflects how level-dependent tunneling may appear in total charge transition diagrams.
We performed a similar analysis for the onsite Coulomb term U in Fig. <ref> and the Coulomb coupling term V in Fig. <ref>.
The onsite Coulomb term U is unique in that it does not couple states of neighboring dots like t or V, and depends entirely on the properties of a single dot.
For that reason, it is calculated differently. Rather than using atomic orbitals to calculate the overlap of the charge distributions, we simply calculate the expectation value of the Coulomb Hamiltonian for our FCI single-dot ground state.
Fig. <ref> shows that the pitch and barrier gate have very small effects on U. However, as before with the tunnel coupling, we see significant fluctuations within a given charge state.
It is reasonable and even expected that the Coulomb energy increases as the dot becomes more localized.
However, in keeping with our purpose of modeling charge transitions, U is assumed to be constant over the charge stability diagram, since we do not see dramatic shifts in the average value of U between Coulomb diamonds, unlike the tunnel coupling.
Our calculations of the Coulomb coupling term V are shown in Fig. <ref>.
The Coulomb energy between valence electrons of neighboring dots appears to have large shifts at charge transitions and to remain relatively constant within each regime.
As the wells become deeper within a charge regime, we do not expect V to change significantly, whereas we would expect larger changes to V when the dot wavefunctions have larger overlaps, such as when p-orbitals are occupied, or the barrier gate or pitch is lowered.
This is exactly what we see in the results, with p-orbitals having significantly larger Coulomb couplings.
We also observe V decreasing with barrier gate voltage and pitch in a manner generally similar to that of the tunnel coupling.
It would appear that a piecewise constant approach to V would be appropriate, similar to our approach with the tunnel coupling. However, even if such an approach is pursued, the relevant quantity is the average value V̄ = (1/(n_1 n_2)) ∑_{i,j} V_ij. We employ this approach, and our charge stability diagrams use a constant value of V equal to the average of the Coulomb couplings, since this average stays approximately the same across a single charge stability diagram.
The tunnel coupling varies dramatically throughout the stability diagram, and still only has a small (but nonetheless important) effect on the final stability diagram. Therefore, it is to be expected that the changes to the Coulomb terms, both the intradot U and the interdot V repulsion, can be safely ignored within a single stability diagram since the corrections in charge distribution that arise from accessing higher orbital states have only a minor effect on the Coulomb energies. Nonetheless, it is worth considering the effect that the barrier gate and dot pitch have on the entire charge stability diagram. We show these effects in Fig. <ref>.
The fact that V significantly increases as the barrier gate is lowered means that charge stability diagrams can look drastically different for the same gate geometry but with a slightly lower barrier gate voltage. Lowering barrier gate voltage also tends to decrease U slightly, but the increase in V is much more dramatic. We again stress the importance of our effective single dot potentials for capturing these effects. Fig. <ref> clearly demonstrates that given a situation with enhanced t_3 coupling, this behavior can be suppressed by raising the barrier gate, and thereby returning the charge stability map to one where N=3 Coulomb diamonds more closely resemble those corresponding to N=1.
Our charge stability plots are generated in real space, starting from the (μ_1, μ_2) space and including neighboring dot occupation and electrostatic crosstalk through the interdot Coulomb effects with the standard formulas from the classical capacitance model.
μ_1 = |e|[α_1 v_1 + (1-α_1) v_2]
μ_2 = |e| [(1-α_2) v_1 + α_2 v_2]
α_1 = (U_2 - V)U_1/(U_1 U_2 - V^2)
α_2 = (U_1 - V)U_2/(U_1 U_2 - V^2)
The alternative is to ignore these effects and plot the stability diagram using these theoretical quantities as the axes.
These quantities are known as “virtualized" gate voltages.
Gate virtualization results in “straightened out” stability plots with horizontal or vertical charge boundaries.
It is more meaningful for theorists to express the stability diagram results in physical units to better compare their work to experimental results since the virtualization procedure distorts the stability diagram in such a way that the effects of modeling choices can be exaggerated.
It could also be beneficial for experimentalists to provide virtualized charge stability diagrams to better connect with theory. An example of a virtualized stability diagram is shown in Fig. <ref>.
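For completeness, the lever-arm relations above translate directly into code; the helper below is a sketch (function name ours) that maps plunger voltages to the chemical potentials used as inputs to the Hubbard model, with |e| set to 1 so that voltages and energies share units.

def gate_to_chemical_potentials(v1, v2, U1, U2, V, e=1.0):
    # Lever arms from the capacitance-model expressions above.
    a1 = (U2 - V) * U1 / (U1 * U2 - V**2)
    a2 = (U1 - V) * U2 / (U1 * U2 - V**2)
    mu1 = e * (a1 * v1 + (1.0 - a1) * v2)
    mu2 = e * ((1.0 - a2) * v1 + a2 * v2)
    return mu1, mu2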
§ CONCLUSIONS
In conclusion, we have presented a multi-band and multi-electron interacting model of gate-defined double quantum dots and simulated charge stability diagrams for these systems using realistic electrostatic potentials, full configuration interaction, and effective Hubbard model mapping.
Our approach is compatible with confining potentials at any level of sophistication, including potentials obtained from Poisson-Schrodinger solvers as the starting point (replacing our model analytical confining potentials).
We separated the double dot system using an effective single dot approach that accounts for the barrier gate voltage.
We then utilized full configuration interaction to account for many-body interactions within each quantum dot.
Single particle density matrices were computed using natural orbitals of the 1RDM, and Hubbard couplings were calculated as the Uhlmann fidelities of the corresponding operators.
These parameters of the generalized Hubbard model enabled us to predict large regions of the charge stability diagram with improved accuracy compared to classical capacitance models or single-band Hubbard models.
Our results highlighted the significant enhancement of tunnel couplings for valence electrons in dots with three electrons compared to single-electron dots.
This difference was evident in the charge stability diagram for higher occupancy Coulomb diamonds.
We extrapolate this trend to a large charge stability diagram in Fig. <ref>.
As dot occupancies increase, so do the tunnel couplings, and total charge regions become increasingly featureless and eventually become almost parallel lines.
Unlike the tunnel coupling, the average values of the onsite Coulomb term U and the interdot Coulomb coupling V were not found to vary significantly within the charge stability diagram.
We also found that increasing the barrier strength or dot pitch reduces the effect of the increased tunnel coupling; this trend allows researchers to make their Coulomb diamonds as periodic as possible, so that N=3 dots behave similarly to N=1 dots.
Our FCI approach is only applicable in regimes where it is appropriate to treat the dots as separate subsystems, where wavefunction overlap is small.
Double dot systems with large detunings or negative barrier gates are not compatible with this technique.
Likewise, calculations of interdot exchange interaction rely on extremely precise pictures of the correlational energy of the entire double dot system, and cannot be performed in this formalism, even when tunnel couplings are small.
Although calculating the FCI energy of the double dot system as a whole is an excellent approach for such a problem <cit.>, this would likely be computationally feasible only for N<3 electrons in each dot.
This work opens the door to many exciting future research directions, such as how background disorder in the form of charge noise or magnetic impurities impacts the Hubbard parameters.
Evaluations of the resulting charge stability diagrams could be done by calculating the ground state charge states of the double dot system explicitly using FCI or another quantum chemistry approach. Although FCI would yield an extremely accurate answer, it probably could only be used for the first few Coulomb diamonds because of computational cost.
One advantage of this approach is that, because it separates the dots, it could feasibly be applied to chains of more than two dots to investigate how next-nearest and next-next-nearest neighbor tunnel couplings scale compared to nearest neighbor couplings.
Our work demonstrates the importance of considering multi-electron effects in double quantum dot systems and the potential for the generalized Hubbard model to capture these effects.
By accurately predicting the charge boundaries of the stability diagram, our approach provides valuable insights for tuning multidot devices and achieving specific electron number states.
We believe that this technique will allow for cost-effective simulation of very high-quality charge stability diagrams, which should be relevant for constructing gating sequences, prototyping device designs, and generating data for machine learning algorithms.
Overall, our findings significantly contribute to the understanding of charge stability diagrams of quantum dot semiconductor spin qubits and enhance their potential for scalable quantum computing applications.
§ ACKNOWLEDGEMENT
This work was supported by the Laboratory for Physical Sciences.
|
http://arxiv.org/abs/2409.02354v1 | 20240904005723 | Entangled in Spacetime | [
"Mohammad Rasoolinejad"
] | quant-ph | [
"quant-ph"
] |
Entangled in Spacetime
Mohammad Rasoolinejad
====================================
§ ABSTRACT
This paper presents an observational analysis of the Delayed-Choice Quantum Eraser experiment through the framework of quantum mechanics. The Delayed-Choice Quantum Eraser, a variation of the classic double-slit experiment, demonstrates the intricate relationship between quantum measurement, wave-particle duality, and the temporal ordering of observations. By utilizing the principles of quantum superposition, entanglement, and the non-local collapse of the wave function, we seek to rationalize the counterintuitive outcomes observed in the experiment. Specifically, we explore how the act of measurement retroactively influences the observed behavior of particles, depending on whether or not the which-path information is available. Our analysis underscores the significance of the quantum mechanical concept of wave function collapse across spacetime, providing a deeper understanding of how quantum mechanics reconciles the delayed-choice paradox.
§ HISTORY OF ATOMIC THEORY
The development of atomic theory spans centuries, originating from ancient philosophical conjectures to rigorous modern scientific principles. This evolution has significantly shaped our understanding of the structure and behavior of matter, pivotal to advancements in various branches of science including physics, chemistry, and material sciences. The notion of atoms as indivisible units of matter was first postulated in ancient times, with the term `atom' derived from the Greek word atomos, meaning uncuttable. These early ideas, while lacking empirical support, set the groundwork for a more scientific approach that would emerge millennia later. In the 19th century, John Dalton reintroduced the concept of atoms, this time as part of a scientific theory rather than philosophical speculation. Dalton's atomic theory proposed that each element is composed of unique types of atoms and that these atoms combine in simple whole-number ratios to form compounds. His ideas were the first to provide a quantitative framework for chemistry through the law of multiple proportions and the law of conservation of mass, established by Antoine Lavoisier. Lavoisier's experiments refuted the phlogiston theory of combustion and demonstrated the decomposability of water into hydrogen and oxygen, suggesting that these substances were themselves composed of atoms. Joseph Proust expanded on this foundation with the law of definite proportions, which posited that chemical compounds are formed from elements in fixed mass ratios. The combined efforts of these scientists laid the essential groundwork for modern chemistry and atomic physics.
The discovery of subatomic particles began with J.J. Thomson's identification of the electron in 1897. His experiments with cathode rays revealed that atoms were not indivisible, but contained smaller, negatively charged electrons. This finding led to the plum pudding model, which envisioned electrons embedded within a positively charged substrate. However, this model was short-lived, overturned by Ernest Rutherford's gold foil experiment in 1911, which revealed a small, dense nucleus at the center of the atom. Rutherford's nuclear model proposed that the atom consisted mostly of empty space, with electrons orbiting a central nucleus, much like planets around the sun. Further refinements to the atomic model were made by Niels Bohr, who incorporated quantum theory to explain the stability of Rutherford’s atomic structure. Bohr introduced the concept of quantized orbital shells for electrons, which could explain the emission and absorption spectra of atoms. His model was crucial in advancing the field of quantum mechanics, providing a theoretical basis for the electronic structure of atoms <cit.>.
The quantum story began in earnest with Max Planck's groundbreaking proposal in 1900 that energy is quantized, introducing the concept of the quantum to solve the black-body radiation problem. This revolutionary idea was soon followed by Albert Einstein's 1905 explanation of the photoelectric effect, which further emphasized the particle-like properties of light, quantified as photons. These developments challenged the prevailing wave theories of light and prompted a rethinking of the fundamental principles of physical reality. The formal structure of quantum mechanics began to take shape in the 1920s with the contributions of luminaries like Niels Bohr, Werner Heisenberg, and Erwin Schrödinger. Bohr introduced the quantum model of the hydrogen atom in 1913, which described electrons orbiting the nucleus in quantized energy levels, offering a quantum explanation for the atomic stability and spectral lines. Heisenberg's matrix mechanics (1925) and Schrödinger's wave mechanics (1926) provided two mathematically distinct but theoretically equivalent formulations of quantum mechanics, each deepening the understanding of atomic and subatomic processes <cit.>.
The new quantum theory rapidly evolved, incorporating the statistical interpretation of the wave function introduced by Max Born, and the uncertainty principle articulated by Heisenberg, which posits fundamental limits to the precision with which certain pairs of physical properties, like position and momentum, can be simultaneously known. These principles underscored the inherent probabilistic nature of quantum measurement outcomes, marking a departure from deterministic classical physics. Throughout the mid-20th century, quantum mechanics was further refined and expanded through the development of quantum field theory, which reconciled quantum mechanics with special relativity and provided a framework for describing particle interactions via fields. This era saw the formulation of theories like quantum electrodynamics by Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga, which precisely described the interactions of electrons and photons, and led to predictions of extraordinary accuracy.
§ SCHRÖDINGER EQUATION
The Schrödinger equation is a fundamental equation in quantum mechanics that governs the wave function of a quantum-mechanical system. It was postulated by Erwin Schrödinger in 1925 and published in 1926, providing a significant foundation for modern quantum theory. The equation is named after Schrödinger, who received the Nobel Prize in Physics in 1933 for his work. Conceptually, the Schrödinger equation serves as the quantum counterpart to Newton's second law in classical mechanics. While Newton's second law predicts the trajectory of a classical system given initial conditions, the Schrödinger equation predicts the evolution of the wave function over time, encapsulating the quantum state of a system. The equation was inspired by Louis de Broglie's hypothesis that all matter exhibits wave-like properties, leading to the prediction of atomic bound states that matched experimental data.
The general form of the time-dependent Schrödinger equation is given by:
iħ∂/∂ tΨ(𝐫, t) = ĤΨ(𝐫, t)
Here, Ψ(𝐫, t) is the wave function, Ĥ is the Hamiltonian operator representing the total energy of the system, ħ is the reduced Planck constant, and i is the imaginary unit <cit.>.
For a single non-relativistic particle in one dimension, the Schrödinger equation can be written as:
iħ ∂/∂t Ψ(x, t) = [ -(ħ^2/2m) ∂^2/∂x^2 + V(x, t) ] Ψ(x, t)
In this equation, m is the mass of the particle, V(x, t) is the potential energy, and ∂^2/∂ x^2 represents the second derivative with respect to position <cit.>.
In cases where the Hamiltonian does not explicitly depend on time, the wave function can be separated into spatial and temporal components, leading to the time-independent Schrödinger equation:
Ĥψ(𝐫) = E ψ(𝐫)
Here, ψ(𝐫) is the spatial part of the wave function, and E represents the energy eigenvalue of the system.
The Schrödinger equation was a significant milestone in the development of quantum mechanics, offering a new way to understand the behavior of microscopic systems. Although Schrödinger initially attempted to interpret the wave function Ψ as representing charge density, Max Born later provided the correct interpretation: the modulus squared of the wave function |Ψ|^2 represents the probability density of finding a particle in a given state <cit.>. This interpretation remains central to quantum mechanics today. The Schrödinger equation is non-relativistic, as it contains a first derivative in time and a second derivative in space, treating space and time asymmetrically. This limitation is addressed in relativistic quantum mechanics by the Klein-Gordon and Dirac equations, which incorporate special relativity <cit.>. The Dirac equation, in particular, reduces to the Schrödinger equation in the non-relativistic limit.
§ QUANTUM FIELD THEORY
Quantum Field Theory (QFT) represents a fundamental framework in theoretical physics, combining elements of classical field theory, quantum mechanics, and special relativity. This powerful theory is crucial for our understanding of particle physics and has applications in condensed matter physics and other areas of physics. Quantum fields are seen as the fundamental building blocks of the universe, with particles acting as excitations of these fields <cit.>. In QFT, particles such as electrons and photons are not described as distinct points but rather as excited states of underlying fields that permeate all of space <cit.>. For example, the electron is an excitation of the electron field, while the photon is an excitation of the electromagnetic field. This conceptual shift from particles to fields helps address phenomena that are inexplicable by classical physics, such as the interactions between light and matter and the creation and annihilation of particles <cit.>.
The equations of motion in QFT are derived from a Lagrangian that includes terms for each field and interactions among fields <cit.>. These interactions are depicted graphically by Feynman diagrams, which provide a visual and mathematical way to calculate the probabilities of various physical processes. The interaction terms in the Lagrangian typically involve products of field operators, and the dynamics of these fields are governed by the principles of quantum mechanics and special relativity. Feynman diagrams are a powerful and intuitive tool used in quantum field theory to represent the complex interactions between subatomic particles <cit.>. A typical Feynman diagram includes external lines, internal lines, and vertices. External lines represent the incoming and outgoing particles in a process, such as an electron entering a reaction or a photon being emitted. Internal lines depict virtual particles—temporary, intermediate particles that exist only fleetingly during the interaction and connect different vertices within the diagram. Vertices, the points where lines meet, represent the interaction points where particles either exchange other particles or transform into different ones. For example, in quantum electrodynamics (QED), the simplest Feynman diagram might show an electron emitting or absorbing a photon, with the interaction represented by a vertex where an electron line meets a wavy line representing the photon.
Feynman diagrams are not merely illustrative; they are a fundamental tool for calculating the probabilities of various particle interactions <cit.>. Each line and vertex in a diagram corresponds to a specific mathematical term in the overall equation that describes the process. The translation of a Feynman diagram into a mathematical expression follows specific “Feynman rules," which vary depending on the particular quantum field theory being used <cit.>. The elegance of Feynman diagrams lies in their ability to encapsulate the entirety of a particle interaction in a single, often simple, visual representation. Even complex interactions involving multiple particles and forces can be broken down into simpler diagrams, which can then be calculated using these rules. The total probability amplitude for a given process is determined by summing all possible diagrams that represent that process <cit.>.
§ PATH INTEGRAL FORMULATION
The path integral formulation of quantum mechanics is a powerful framework that generalizes the classical principle of stationary action to include quantum phenomena. Unlike the classical description, where a system follows a single, unique trajectory, the path integral approach considers an infinite number of possible paths that a system can take between two points in spacetime. Each path contributes to the quantum amplitude, with a phase factor determined by the action along that path. This method was developed by Richard Feynman in 1948 and has since become a cornerstone of modern theoretical physics.
In this formulation, the quantum amplitude for a particle to move from point A at time t_A to point B at time t_B is given by summing over all possible paths connecting these points. The contribution of each path is weighted by e^iS/ħ, where S is the action calculated along that path. Mathematically, this can be expressed as:
⟨ B | A ⟩ = ∫𝒟[x(t)] e^iS[x(t)]/ħ
Here, 𝒟[x(t)] represents the measure over all possible paths x(t), and S[x(t)] is the classical action, which is a functional of the path <cit.>.
In the context of quantum mechanics, the path integral formulation offers deep insights into phenomena like quantum tunneling and the behavior of particles in potential fields. For instance, the quantum tunneling rate can be derived by evaluating the path integral for paths that traverse potential barriers, yielding results consistent with experimental observations <cit.>. However, the path integral formulation also comes with challenges. The functional integral over an infinite number of paths is often difficult to evaluate directly, and various approximations, such as the stationary phase approximation, are employed. Moreover, ensuring the unitarity of the S-matrix (which corresponds to the conservation of probability) is less transparent in this formulation and requires careful handling <cit.>.
§ UNCERTAINTY PRINCIPLE
The Uncertainty Principle, introduced by Werner Heisenberg in 1927, is a fundamental theory in quantum mechanics. It asserts a fundamental limit to the precision with which pairs of physical properties, such as position (x) and momentum (p), can be simultaneously known. This principle has profound implications across quantum physics, highlighting the inherent limitations of our measurement capabilities at the quantum scale.
The principle is best known in one of its mathematical forms, which relates the standard deviation of position σ_x and the standard deviation of momentum σ_p:
σ_x σ_p ≥ħ/2
where ħ = h/2π is the reduced Planck constant.
This fundamental limit arises because every particle's position and momentum are linked by their wave properties. In quantum mechanics, every particle is also a wave, and its position and momentum are described by a wave function ψ(x). The precision in determining the position and momentum of a particle is governed by the spread of its wave function and its Fourier transform, respectively. A more precisely defined position leads to a broader spread in momentum and vice versa. This is formalized by the Fourier transform properties of the wave functions <cit.>:
ψ(x) = (1/√(2πħ)) ∫ e^-ipx/ħ ϕ(p) dp
and
ϕ(p) = (1/√(2πħ)) ∫ e^ipx/ħ ψ(x) dx.
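As a simple numerical illustration of this Fourier-pair trade-off (a sketch added here, not part of the original argument; the grid size and packet width are arbitrary), one can check that a Gaussian wave packet saturates the bound:

import numpy as np

hbar = 1.0
x = np.linspace(-200.0, 200.0, 8192)
dx = x[1] - x[0]

sigma = 2.0
psi = np.exp(-x**2 / (4.0 * sigma**2))          # minimum-uncertainty Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

phi = np.fft.fftshift(np.fft.fft(psi))          # momentum-space amplitude (up to a phase)
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2.0 * np.pi * hbar
dp = p[1] - p[0]
prob_p = np.abs(phi)**2 / (np.sum(np.abs(phi)**2) * dp)

sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # <x> = 0 by symmetry
sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)           # <p> = 0 by symmetry
print(sigma_x * sigma_p, ">=", hbar / 2)                # ~0.5 for the Gaussian, the minimum allowed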
The Uncertainty Principle is not merely a statement about the observational limits imposed by our current technology; rather, it is a fundamental property of the universe. It applies not only to position and momentum but to other pairs of physical properties as well, such as energy and time, for which a similar uncertainty relation can be derived.
The energy-time uncertainty relation is similarly interpreted:
Δ E Δ t ≥ħ/2
This relation implies that the measurement of energy in a quantum system over a finite time interval Δ t cannot be more precise than ħ/2 divided by that interval.
§ QUANTUM ENTANGLEMENT
Quantum entanglement is a fundamental and counterintuitive phenomenon in quantum mechanics, wherein the quantum states of two or more particles become intertwined such that the state of each particle cannot be described independently of the state of the others, regardless of the distance separating them. This phenomenon illustrates a profound departure from classical physics, where objects are considered to have distinct, independent properties regardless of their interaction history. The concept of quantum entanglement can be traced back to the early 20th century when Albert Einstein, Boris Podolsky, and Nathan Rosen (EPR) presented a thought experiment challenging the completeness of quantum mechanics <cit.>. The EPR paradox posited that if quantum mechanics were correct, then measuring the state of one particle in an entangled pair instantaneously determines the state of the other, even if the particles are light-years apart <cit.>. Einstein famously referred to this as “spooky action at a distance," expressing his discomfort with the idea that information could seemingly travel faster than the speed of light, thus violating the principle of locality. Despite Einstein's reservations, subsequent theoretical work and experiments, most notably those conducted by John Bell in the 1960s, demonstrated that the predictions of quantum mechanics—specifically, the strong correlations between entangled particles—could not be explained by any local hidden variable theory <cit.>. Bell's theorem provided a means to test the differences between quantum mechanics and local realism, leading to a series of experiments that confirmed quantum mechanics' predictions and invalidated local hidden variable theories. The experimental violation of Bell's inequalities has since been demonstrated in numerous “loophole-free" experiments, reinforcing the non-local nature of quantum entanglement.
At the heart of entanglement lies the principle that the combined quantum state of a system of particles can be described as a superposition of all possible states, where each particle's state is intrinsically linked to the others. This entangled state is such that measuring one particle's property (such as spin, polarization, or position) instantaneously affects the state of the other particle, collapsing the superposition into a definite outcome. This collapse occurs simultaneously for all entangled particles, no matter the distance between them, a feature that defies classical intuitions about space and time. Entanglement has been observed in a variety of systems beyond photons, including electrons, atoms, and even macroscopic objects under carefully controlled conditions.
§ SPACETIME
Spacetime is a foundational concept in modern physics, unifying the dimensions of space and time into a single, four-dimensional continuum. This concept revolutionized the way we understand the universe, particularly through the theories of special and general relativity. In classical mechanics, space and time were treated as separate entities. Space was a three-dimensional stage where events occurred, while time flowed uniformly and independently of space. The theory of relativity, developed by Albert Einstein, signifies a pivotal transformation in modern physics, encompassing two theories: special relativity and general relativity. Special relativity, introduced in 1905, redefined the concepts of time and space, which were previously considered as independent and absolute. Contrary to these earlier notions, Einstein proposed a spacetime continuum where time and space are interlinked and relative to the observer. Special relativity rests on two fundamental postulates: the laws of physics are invariant across all inertial frames of reference, and the speed of light in a vacuum is a constant, unaffected by the motion of the source or the observer. These postulates introduce phenomena such as time dilation, where time appears to slow down for an object in motion relative to a stationary observer; length contraction, where objects in motion are observed to be shorter along the direction of motion; and mass-energy equivalence, expressed by the equation E=mc^2, asserting that mass and energy are interchangeable <cit.>.
In 1915, Einstein extended these principles through his formulation of general relativity, which includes the effects of gravity on spacetime. Central to general relativity is the principle of equivalence, which posits that the effects of gravity are indistinguishable from those of acceleration. Hence, gravity is not described as a conventional force but as a manifestation of the curvature of spacetime caused by mass and energy. This curvature dictates the trajectories of objects, which move along paths known as geodesics <cit.>. General relativity's predictions have been substantiated through various experimental and observational confirmations. These include gravitational lensing, where light bends around massive objects; the anomalous precession of Mercury's orbit, unaccounted for by Newtonian mechanics; and the detection of gravitational waves—ripples in spacetime generated by cataclysmic events such as mergers of black holes, a phenomenon confirmed by the LIGO observatory <cit.>. Spacetime is a dynamic entity in general relativity. Massive objects like stars and black holes warp the spacetime around them, creating gravitational fields. To visualize these effects, physicists use the concept of spacetime. In this four-dimensional framework, events are specified by three spatial coordinates (x, y, z) and one time coordinate (t). A spacetime diagram, also known as a Minkowski diagram, helps illustrate how different observers perceive events differently based on their relative velocities <cit.>. The unification of space and time into spacetime shows that the separation of these two entities is not absolute but depends on the observer’s state of motion.
§ DELAYED-CHOICE QUANTUM ERASER
The delayed-choice quantum eraser experiment is an advanced and intriguing variation of the quantum eraser experiment, which itself is a derivative of the famous double-slit experiment in quantum mechanics. First performed by Yoon-Ho Kim, R. Yu, S. P. Kulik, Y. H. Shih, and Marlan O. Scully in 1999, this experiment combines the principles of quantum entanglement with the concept of a delayed choice, originally proposed by physicist John Archibald Wheeler <cit.>. The delayed-choice quantum eraser experiment explores the perplexing consequences of quantum mechanics, specifically addressing the nature of wave-particle duality and the role of the observer in determining quantum states.
The delayed-choice quantum eraser experiment is an extension of the double-slit experiment, where a beam of light, usually from a laser, is directed towards a barrier with two parallel slits. When light passes through these slits, it can produce an interference pattern on a detection screen, indicative of wave-like behavior. However, if a detector is placed at the slits to observe which slit the photon passes through, the interference pattern disappears, and the light behaves like a particle. This phenomenon highlights the principle of complementarity in quantum mechanics: a quantum system can display either wave-like or particle-like properties depending on the experimental setup. In the delayed-choice quantum eraser, the setup is more complex. After photons pass through the slits, they are subjected to spontaneous parametric down-conversion (SPDC), a process that generates pairs of entangled photons. One of these photons, called the “signal photon," proceeds towards a primary detector (D0), while the other, known as the “idler photon," is sent towards a series of detectors (D1, D2, D3, and D4) positioned at varying distances and with different configurations of beam splitters and mirrors. The key aspect of the experiment is that the idler photons, which could provide “which-path" information (i.e., knowledge about which slit the photon passed through), are detected after the signal photon has already been recorded at D0. This introduces a “delayed choice" element: the decision to preserve or erase the which-path information is made after the signal photon has been detected.
The experiment yields results that challenge classical intuitions about time and causality. When the idler photons are detected at D3 or D4, which provide which-path information, the signal photons at D0 do not form an interference pattern; they behave as if they had traveled through one specific slit, exhibiting particle-like behavior. Conversely, when the idler photons are detected at D1 or D2, where which-path information is effectively “erased," the signal photons at D0 display an interference pattern, indicative of wave-like behavior. What makes this result remarkable is that the interference pattern (or lack thereof) at D0 seems to be determined by a choice made after the signal photon has already been detected. This appears to imply that the measurement made on the idler photon retroactively influences the outcome of the signal photon, even though the events are separated in time. The delayed-choice quantum eraser experiment suggests that quantum systems do not have definite properties (such as being a wave or a particle) until they are observed, and that the nature of these properties can be influenced by measurements made after the fact.
§ PARTICLE FALLACY
Before delving into the intricacies of the Delayed-Choice Quantum Eraser experiment, it is crucial to address and dispel some widespread misconceptions, particularly the “particle fallacy." This fallacy arises from the classical intuition that electrons and other subatomic entities are discrete, localized particles moving along defined trajectories. However, according to the Schrödinger model of quantum mechanics, this view is fundamentally flawed. In the Schrödinger model, the electron is not a particle in the traditional sense but rather a quantum entity best described by a wave function—a probability amplitude spread over its possible eigenstates, whose squared modulus gives the probability density for each measurement outcome. This wave function represents the electron's state, encapsulating all possible outcomes of a measurement, such as position or momentum, as probabilities rather than certainties. The notion of the electron as a particle with a definite position and path is an approximation that only emerges under specific conditions, such as when a measurement collapses the wave function to a particular eigenstate. Outside of these measurements, the electron exists in a superposition of states with its properties being fundamentally indeterminate.
The wave function in quantum mechanics is often described as a mathematical tool used to calculate probabilities of finding a particle in a particular state or position. However, this description overlooks the deeper physical significance of the wave function, particularly in the context of quantum field theory. In QFT, particles are understood as excitations of underlying quantum fields that permeate all of space. The wave function, which describes the quantum state of a particle, is not merely an abstract mathematical construct; rather, it is intimately tied to the physical reality of these fields. The wave function represents the state of the quantum field itself, and its evolution is governed by the Schrödinger equation or its relativistic equivalents, such as the Dirac or Klein-Gordon equations. The physical manifestation of the wave function becomes particularly evident in phenomena like interference and diffraction, where the wave-like nature of particles—such as electrons or photons—is observable.
It is essential to clarify that Feynman diagrams should not be interpreted as depicting individual paths that a particle might take, as this perspective can be misleading. The traditional view, where each diagram represents a specific trajectory or path taken by a particle, falls into the trap of classical thinking, which does not apply to the quantum realm. In quantum mechanics, there are no particles following definite paths through space and time. Instead, we must understand the concept of the wave function—a quantized probability amplitude that is distributed across all possible paths and configurations simultaneously. The wave function encapsulates all possible outcomes and states that a quantum system can inhabit, representing a superposition of these possibilities rather than a single, defined trajectory. When we use Feynman diagrams, they serve as a powerful computational tool to represent the sum of all possible interactions and processes that contribute to the overall probability amplitude of a quantum event. Each diagram is a graphical representation of a term in the perturbative expansion of the wave function, contributing to the total amplitude in a manner that reflects the quantum superposition of all possible states and interactions. The diagrams illustrate how these interactions contribute to the overall wave function, not as discrete alternatives but as components of a unified, non-classical quantum reality. In essence, Feynman diagrams do not depict individual paths but rather the collective sum of all potential quantum interactions, where the wave function is distributed across all possibilities. This understanding aligns with the core principles of quantum mechanics, where the concept of a particle following a single path is replaced by the more accurate description of a wave function that encompasses all possibilities at once.
§ THE CONFINEMENT OF THE WAVE FUNCTION
In quantum mechanics, the act of measurement fundamentally alters the system being observed, particularly the wave function that describes the system's state. Before measurement, the wave function is a superposition of all possible states, spread out across all potential paths and configurations. This wave function is not localized or confined; rather, it represents a probabilistic distribution over all possible outcomes that the system can achieve. However, when we attempt to measure the wave function, we are forced to interact with it using some form of measurement device or particle. This interaction is not passive; it imposes constraints on the wave function, effectively confining it to a specific region of space or a particular eigenstate. The act of measurement restricts the wave function to those paths that directly interact with the measurement apparatus, thereby collapsing the wave function from a superposition of states into a single, observable outcome. This confinement of the wave function is analogous to constraining a crowd of people. Imagine a crowd dispersing freely in an open space, representing the unmeasured wave function. As soon as you introduce a gate through which they must pass, you limit the directions they can take. Similarly, when a measurement device interacts with a quantum system, it acts like a gate, restricting the system's wave function to those paths and states that can pass through this “gate."
In terms of Feynman Paths, which represent the sum of all possible quantum interactions, the act of measurement eliminates many of these possible paths. Only the paths consistent with the measurement constraints remain part of the solution space. The other trajectories—the ones that would have been possible in the absence of measurement—are no longer relevant to the observed outcome. In this sense, measurement not only changes the state of the system but also reduces the complexity of the quantum interactions we must consider. By collapsing the wave function, measurement simplifies the system to a single trajectory, effectively discarding the alternative paths and possibilities that were initially part of the quantum superposition. This inherent limitation underscores a key principle of quantum mechanics: the very act of measuring a system changes it. The wave function, which initially encompasses all possible states, is forced to conform to the constraints imposed by the measurement. This process reveals one of the fundamental limitations in our ability to fully understand a quantum system: we can only observe the state that remains after measurement, not the full array of possibilities that existed beforehand.
Unlike classical systems, where transitions occur gradually and continuously from one state to another, quantum systems exhibit what can be described as instantaneous transitions between eigenstates. There are no intermediate or “in-between" states during these transitions; the wave function moves directly from one eigenstate to another without passing through any transitional phases. This instantaneous transition is deeply tied to the probabilistic nature of quantum mechanics. The wave function of a quantum system represents a superposition of all possible states, each associated with a certain probability amplitude. When a measurement is made, the wave function collapses instantaneously into one of these possible states. The likelihood of the wave being collapsed into any particular eigenstate is directly proportional to the probability density function derived from the wave function. This density function tells us where the system, such as an electron, is most likely to be found upon measurement, but before the measurement, the electron is not in any single place—its existence is spread out across all potential locations.
Consider the double-slit experiment as an example. When an electron passes through the slits, its wave function spreads out, encompassing all possible paths and positions it could occupy. However, when it is detected on the screen behind the slits, the wave function collapses (is absorbed) instantly, and the electron is found in a specific location on the screen. This location corresponds to one of the many possible outcomes dictated by the wave function's probability distribution. Keep in mind that the detection point is most probably an atom that can host the wave function in one of its orbitals. The electron was never a particle here; it simply transitioned from a wave function in free space to a wave function in an orbital. Moreover, this process of wave function collapse is governed entirely by the probability density function at the specific location of detection. If the wave function interacts with a particle or atom at that location, the interaction is instantaneous, and the system is immediately confined to a new eigenstate. This highlights the fact that quantum mechanics does not allow for smooth, continuous evolution between states during measurement; rather, it involves abrupt, discrete changes governed by the probabilistic nature of the wave function.
It's crucial to understand that when discussing quantum mechanics, especially in the context of experiments like the double-slit experiment, we must discard any anthropomorphic notions about particles “choosing" paths or making decisions. Particles do not possess consciousness or any form of agency that would allow them to “decide" which slit to pass through or how to behave when observed. The behavior of particles is purely a result of the mathematical structure of quantum mechanics, which describes the evolution and measurement of the wave function. In the double-slit experiment, the particle’s wave function initially spreads out to encompass all possible paths through both slits. When no measurement device is present to detect which slit the particle goes through, the wave function continues to evolve as a superposition of all these possibilities, leading to an interference pattern on the detection screen—a hallmark of the wave-like behavior of quantum particles. However, when a measurement device is introduced—such as a detector placed at one of the slits—the situation changes dramatically. The act of measurement forces the wave function to collapse into a state that corresponds to the particle having passed through one slit or the other. Remember, half of the wave cannot collapse. The wave is quantized and belongs to a single particle, so the collapse is an all-or-none phenomenon. This is why placing a measuring device by the slit causes the wave to pass through one slit or the other. This collapse is not a choice made by the particle but a direct consequence of the interaction between the wave function and the measurement device. The wave function is forced into an intermediate eigenstate, determined by the constraints of the measurement apparatus. Importantly, once the gate (or measurement device) is removed, the wave function is no longer constrained by the need to interact with the measurement apparatus. The wave function then resumes its original, uncollapsed form, spreading out once more to encompass all possible paths through both slits.
§ TIME UNCERTAINTY
In this section, we aim to reinterpret the Uncertainty Principle by exploring its implications within the spacetime domain. Starting from the conventional expression relating the uncertainties in position and momentum, by considering momentum as the product of mass and velocity, and assuming that the mass of the particle remains constant, we will show that the uncertainty in momentum translates directly into uncertainty in velocity. Since velocity can be understood as the uncertainty in position over time, we will then express the Uncertainty Principle in terms of the uncertainties in both space and time. This reformulation underscores the interconnected nature of space and time at the quantum level, offering a more comprehensive view of the constraints imposed by quantum mechanics.
To explore this concept further, let's consider the momentum of a particle. Momentum (p) is defined as the product of mass (m) and velocity (v):
p = mv
If we assume that the mass of the particle is constant and well-defined (i.e., there is no uncertainty associated with the mass), any uncertainty in momentum (σ_p) must therefore arise from the uncertainty in the velocity (σ_v) of the particle:
σ_p = mσ_v
Given that velocity is the rate of change of position with respect to time, the uncertainty in velocity can be understood as the uncertainty in position (σ_x) over a given time interval (σ_t):
v = Δ x/Δ t or σ_v = σ_x/σ_t
Substituting this into the expression for σ_p, we get:
σ_p = m σ_x/σ_t
Now, considering the Uncertainty Principle in terms of space and time, we can rewrite the principle by substituting σ_p from the above equation:
σ_x (m σ_x/σ_t) ≥ħ/2
Simplifying this expression, we obtain:
σ_x^2/σ_t≥ħ/2m
This equation illustrates that the uncertainty in position (σ_x) is directly related to the uncertainty in the time interval (σ_t) over which the position is measured, given a constant mass. In this sense, the Uncertainty Principle can be interpreted as an uncertainty relation in spacetime, where the precision with which we can measure the position of a particle is fundamentally linked to the time interval during which the measurement occurs. To get around the inaccuracies of this derivation, we can assume the Uncertainty Principle takes the general form:
σ(x,t) ≥ c
where σ(x,t) signifies the uncertainty in spacetime and c is a constant.
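As a rough numerical illustration of the bound σ_x^2/σ_t ≥ ħ/2m derived above, the following short Python sketch evaluates the right-hand side for an electron and checks a sample pair (σ_x, σ_t) against it. The chosen uncertainties are illustrative assumptions, not measured values.

```python
# Minimal numerical sketch: checks a sample (sigma_x, sigma_t) pair against
# the bound sigma_x^2 / sigma_t >= hbar / (2 m). Values are illustrative assumptions.
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
m_e = 9.1093837015e-31   # electron mass (kg)

bound = hbar / (2.0 * m_e)           # right-hand side of the spacetime uncertainty relation

sigma_x = 1.0e-10                    # assumed position uncertainty, ~1 angstrom
sigma_t = 1.0e-16                    # assumed time uncertainty, ~0.1 femtosecond

lhs = sigma_x**2 / sigma_t
print(f"sigma_x^2/sigma_t = {lhs:.3e}  vs  hbar/(2m) = {bound:.3e}")
print("bound satisfied" if lhs >= bound else "bound violated")
```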
Another crucial aspect to consider is the effect of time dilation when a particle is traveling close to the speed of light. The reference time for such a particle is not the same as that for a stationary observer. Due to relativistic effects, time for the particle slows down relative to the observer's frame of reference. In the quantum world, the concept of “present time" as we understand it in our everyday experience does not have the same meaning. The notion of a universal “now," where events are occurring simultaneously across the universe, is an artifact of our classical understanding of time, deeply rooted in our human perception. However, this concept falls apart when we delve into the quantum realm.
In the quantum world, each wave function exists within its own spacetime domain, governed by the principles of quantum fields. The wave function is not confined to a single moment or location; instead, it spans across the entirety of spacetime, encompassing all possible states and interactions. This means that what we refer to as “the present" is not a definitive, universally shared moment but rather a construct that emerges from our macroscopic perspective. For a wave function, time and space are intertwined in such a way that there is no single “present" but rather a continuous distribution of possibilities across the entire spacetime. Each wave function exists within this spacetime, with its evolution governed by the underlying quantum field. The outcomes we observe, which we might describe as occurring in “the present," are actually the result of interactions between wave functions across different points in spacetime.
§ WAVE FUNCTION IN SPACETIME
When a particle such as an electron or photon passes through the two slits, its wave function splits into two distinct wave components, each corresponding to one of the slits. These wave components then propagate and overlap on the other side of the slits. The overlap of these wave components results in the phenomenon of interference, which can be either constructive (where the wave components reinforce each other) or destructive (where they cancel each other out). This interference of the wave function creates a pattern of alternating high and low probabilities on the detection screen placed behind the slits.
When a particle is detected on the screen, the wave function does not manifest in a continuous distribution across the screen but rather collapses to a single point. The position where the particle is detected is determined stochastically, meaning that it is random but follows the probability distribution given by the square of the wave function's amplitude. This process leads to the formation of an interference pattern over time, as more and more particles are detected at the screen. The pattern consists of regions with high and low densities of detected particles, corresponding to the constructive and destructive interference of the wave function, respectively. This quantum mechanical interpretation highlights that the interference pattern observed in the double-slit experiment emerges from the wave-like nature of the particle's wave function and the probabilistic nature of quantum mechanics. The particle itself does not interfere with itself in a classical sense; rather, the wave function that describes the particle's quantum state exhibits interference, and the detection events occur at positions that reflect this interference pattern.
In the context of the Delayed-choice quantum eraser experiment, it's essential to understand that the wave function does not merely spread across space but across spacetime as a whole. The wave function is a mathematical representation of the quantum state of a system, and this state encompasses not just the spatial distribution of possibilities but also their temporal evolution. To fully capture the behavior of quantum systems, one must consider the wave function as a solution to the Schrödinger equation that is valid across both space and time. This means that the wave function evolves over time, and its admissible solutions are governed by the constraints of both spatial boundaries and temporal ones. The wave function must satisfy these conditions to provide a coherent description of the quantum system.
It is constructive to think of entangled particles as entangled wave functions within a quantum field, which ensures that the energy, momentum, and spin of the joint wave remain consistent, preserving underlying field and spacetime symmetries, provided they do not interact with other wave functions. This approach emphasizes the wave-like nature of quantum entities and how their properties are governed by the overall quantum field, maintaining entanglement as long as external interactions are minimized. The particles involved are not just entangled through space, but through both time and space. This entanglement arises because the particles are manifestations of the same underlying wave function, which is a solution to the quantum field governing the system. The quantum field, which dictates the behavior of particles and their interactions, ensures that the wave function remains consistent and admissible across the entirety of spacetime. Entanglement in this context means that the properties of one particle are intrinsically linked to those of another, regardless of the distance separating them. The wave function's evolution through spacetime allows for the intriguing possibility that actions taken at a later time can seemingly affect outcomes observed at an earlier time. This is not because the wave function is “deciding" anything but because it exists as a continuous, spacetime-spanning entity. The entire history of the wave function, including the interaction with measurement devices, must conform to the allowable solutions of quantum mechanics. Therefore, the wave function's spread through spacetime incorporates all possible paths and interactions, constrained by the conditions imposed by the experiment.
When we measure and confine a wave function, we are not only determining the state of the particle associated with that wave function, but we are also influencing its entangled counterpart across spacetime. This confinement, imposed by the act of measurement, forces the wave function to “collapse" into a specific eigenstate within the bounds of the measurement interaction. Consequently, this collapse is not an isolated event; it is intricately connected to the state of the entangled particle, regardless of the distance or time separating them. In the realm of quantum mechanics, entangled particles share a single, unified wave function that extends across spacetime. This means that the wave function of one particle cannot be described independently of its entangled partner. When we interact with one particle and confine its wave function to a particular state, this interaction immediately imposes constraints on the wave function of the other particle. As a result, other potential paths that the wave function might have taken—paths that do not satisfy the constraints imposed by the measurement—are no longer admissible within the spacetime framework. These non-admissible paths are effectively eliminated from the wave function, both in space and time.
This process might give the illusion that particles are somehow communicating with their counterparts in the future, as if they possess consciousness or agency to influence one another across time, or if our consciousness affects the physical reality. However, it is essential to understand that particles do not have any form of consciousness or ability to make choices. The phenomenon we observe is purely a consequence of the constraints enforced by the quantum field within spacetime. The misconceptions form when we want to probe 4D space with our limited 3D senses/devices. When a wave function is measured and confined, it is not because the particle is “aware" of its entangled partner or communicates with it. Rather, the constraints imposed by the measurement simply dictate the only allowable solution in spacetime for the entire quantum system. The entangled partner's wave function is then automatically adjusted to maintain consistency with the laws of quantum mechanics. This adjustment is not a result of communication between particles but a reflection of the fundamental nature of quantum mechanics, where the wave function must satisfy all admissible solutions in both space and time. The elimination of non-admissible paths is a consequence of the quantum field ensuring that only those trajectories that fit within the enforced constraints remain.
§ CONCLUSION
The limitations inherent in our measurement devices, combined with the deeply ingrained dogmas of classical physics, create significant barriers to perceiving the true nature of quantum reality. Our instruments, no matter how advanced, are ultimately grounded in the classical framework—they are designed to measure discrete, particle-like events in a well-defined spacetime. However, quantum mechanics operates on principles that defy this classical outlook. The wave function, which encapsulates the quantum state of a system, is not a simple, localized entity but rather a spread-out, probabilistic distribution that exists across spacetime. Our measurements, which are fundamentally classical interactions, inevitably collapse this wave function, giving us only a partial and often misleading view of the quantum world. Moreover, our classical physics background has trained us to think in terms of distinct particles moving along well-defined paths in space and time. This perspective, while effective for macroscopic phenomena, is inadequate for understanding quantum mechanics, where particles do not have a definite position or momentum until they are observed. The classical notion of determinism is replaced by probabilities, and the clear-cut distinction between past, present, and future is blurred in the quantum realm.
|
http://arxiv.org/abs/2409.03383v1 | 20240905093709 | Generating customized field concentration via virtue surface transmission resonance | [
"Yueguang Hu",
"Hongyu Liu",
"Xianchao Wang",
"Deyue Zhang"
] | math.NA | [
"math.NA",
"cs.NA",
"math-ph",
"math.MP",
"physics.optics",
"35P25, 35R30"
] |
§ ABSTRACT
In this paper, we develop a mathematical framework for generating strong customized field concentration locally around the inhomogeneous medium inclusion via surface transmission resonance.
The purpose of this paper is twofold.
Firstly, we show that for a given inclusion embedded in an otherwise uniformly homogeneous background space, we can design an incident field to generate strong localized field concentration at any specified places around the inclusion. The aforementioned customized field concentration is crucially reliant on the peculiar spectral and geometric patterns of certain transmission eigenfunctions.
Secondly, we prove the existence of a sequence of transmission eigenfunctions for a specific wavenumber which exhibit distinct surface-resonant behaviors, accompanied by strong surface-localization and surface-oscillation properties. These eigenfunctions, as surface transmission resonant modes, fulfill the requirements for generating the field concentration.
Keywords: field concentration, transmission resonance, surface localization, Herglotz approximation, oscillating behaviors
2010 Mathematics Subject Classification: 35P25, 35R30
§ INTRODUCTION
§.§ Statement of main results
In this paper, we are concerned with artificially generating field concentration via surface transmission resonance. To begin with, we present the mathematical formulation of the transmission problem and discuss the major findings mainly focusing on the mathematics.
We consider a time-harmonic wave scattering problem arising from an incident field u^i and a bounded inhomogeneous medium inclusion (Ω; σ,τ) in ^N, N=2, 3, which is the mathematical model of our study.
Here, Ω denotes the support of the inhomogeneity of the medium with a connected complement in ^N and σ,τ are the related material parameters. By choosing appropriate physical units, we may assume that the material parameters by normalization are set as
σ̃(x) = χ_{^N∖Ω} + σ(x)χ_Ω and τ̃(x) = χ_{^N∖Ω} + τ(x)χ_Ω, x ∈ ^N,
where σ(x) and τ(x) are two L^∞ functions in Ω, and χ_A(x) is the indicator function such that χ_A(x) = 1 if x ∈ A and χ_A(x) = 0 otherwise.
Let u^i be an entire solution to homogeneous Helmholtz equation
Δ u^i(𝐱) +k^2u^i(𝐱) =0 𝐱∈^N,
where k ∈_+ represents the (normalized) wavenumber of wave propagation. The incident field u^i impinging on the inclusion Ω gives rise to the scattered field u^s. Let u:= u^i +u^s denotes the total field.
The forward scattering problem is described by the following Helmholtz system,
∇·(σ̃∇ u ) + k^2 τ̃ u = 0 in ^N,
u = u^i +u^s in ^N,
lim_{r→∞} r^{(N-1)/2}(∂_r u^s - i k u^s) = 0, r = |x|,
where i = √(-1) is the imaginary unit and ∂_r u = x̂·∇ u with x̂ := x/|x| ∈ 𝕊^{N-1}.
The last limit in (<ref>) characterizes the outgoing nature of the scattered field, known as the Sommerfeld radiation condition.
We also allow the case of absorption, represented by the material parameters
with nonzero imaginary parts. In this paper, we assume
ℑσ(x) ⩽ 0, ℑτ(x) ⩾ 0 and ℜσ(x) ⩾ γ_0, x ∈ Ω,
for some positive constant γ_0.
Then the system (<ref>) is well-posed, and in particular, there exists a unique solution u ∈ H_loc^2(^N) <cit.>. Furthermore, the total field has the following asymptotic expansion:
u(x) = u^i(x) + e^{ikr}/r^{(N-1)/2} u_∞(x̂) + 𝒪(1/r^{(N+1)/2}) as r → +∞,
which holds uniformly for all directions . Here the complex-value function u_∞, defined on the unit sphere 𝕊^N-1, is known as the far field pattern and encodes the scattering information
caused by the perturbation of the incident field u^i due to the scatterer (Ω;σ,τ).
In this paper, we are concerned with artificially generating strong field concentration in a customized way. Specifically, we show that by properly choosing an incident field u^i, one can make the degree of field concentration, namely ∇ u_L^∞, to be arbitrarily
large at a customized exterior neighborhood around ∂Ω. The main result is presented as follows.
Consider the scattering problem (<ref>) with a C^1,1 inclusion Ω in ^N,N=2,3. We denote by Γ⊂∂Ω any subset of the boundary ∂Ω. Then the gradient of the total field ∇ u can blow up in the following sense: for any given large number M>0, there exists an incident field u^i impinging on the inclusion Ω such that
‖∇ u‖_{L^∞(Γ_e(Ω,ϵ))} ⩾ M(u^i),
where M depends only on the incident field u^i, and Γ_e(Ω,ϵ) = {x | dist(x,Γ) ⩽ ϵ, x ∉ Ω} is the exterior ϵ-neighborhood of the inclusion Ω along Γ for any given parameter ϵ = o(1).
The primary idea for generating field concentration involves introducing a virtual field concentration generator near the inhomogeneous medium inclusion (see Figure <ref>).
Subsequently, we can design a tailored incident field to activate such a generator and achieve the desired field concentration.
A field concentration generator is referred to as a transmission eigenmode ball where the wave field tends to concentrate around the boundary.
As the activated generator approaches to the inclusion, it produces a significant potential difference in the vicinity around the inclusion that lead to the blowup of gradient of the underlying wave field.
It should be noted that the construction of the incident field u^i is dependent on the distance parameter ϵ and the geometric location of the boundary part Γ. Therefore, the dependence of M on the incident field u^i also signifies its dependence on ϵ and Γ.
Furthermore, there are no restrictive requirements regarding the occurrence locations of field concentration. In fact, we can achieve field concentration in any specified location (even multiple locations) around the inclusion by introducing field concentration generators over there.
To enhance the clarity of the idea of generating field concentration, we outline the main proof of Theorem <ref> in this paper.
Firstly, we establish the existence of a sequence of specific transmission eigenfunctions within a radial domain for a given wavenumber (refer to Lemma <ref>). These transmission eigenfunctions are dependent on the construction of material parameters and as surface waves, they exhibit strong surface-localized properties (refer to Theorem <ref>). These properties satisfy the requirements for generating field concentration.
Secondly, the transmission eigenvalue problem (<ref>) possesses a distinctive spectral pattern as a result of the topological structure of the defined domain (refer to Proposition <ref>). Based on this spectral pattern, we can construct the corresponding transmission eigenfunctions.
Thirdly, there exists a Herglotz wave function that can serve as a desirable approximation to the transmission eigenfunction. This Herglotz wave function satisfies the entire Helmholtz equation (<ref>) and can be assigned as the incident field (refer to Lemma <ref>).
Finally, by utilizing the nearly vanishing scattering property in the presence of transmission eigenfunctions (refer to Lemma <ref>), we can rigorously prove that the gradient of the total field would blow up at specified locations around the inhomogeneous medium inclusion.
As mentioned in the abstract, our study critically relies on certain geometric properties of the so-called transmission eigenfunctions, which were recently revealed in <cit.>. The transmission eigenvalue problem arises in the study of invisibility/transparency in continuum mechanics and has a very colorful history. For further discussion on this topic, we refer the readers to <cit.>.
In this paper, we reexamine the transmission eigenvalue problem (<ref>) from the standpoint of material construction.
We have proved that for a given wavenumber k, there exists a sequence of transmission eigenfunctions which exhibit strong surface resonance behaviors. Specially, these transmission eigenfunctions together with their gradients tend to localize/concentrate on the boundary surface of the underlying inhomogeneous inclusion. Moreover, their oscillating frequencies along the boundary are significantly higher than the excitation frequency. The main result is presented as follows.
Consider the transmission eigenvalue problem (<ref>) in a radial domain B_{r_0} centered at the origin with radius r_0 in ^N, N=2,3, and a given constant wavenumber k. Then there exists a sequence of transmission eigenfunctions (v_m,w_m)_{m∈ℕ}, depending on the material parameters (σ_m,τ_m)_{m∈ℕ}, such that k is the transmission eigenvalue and the transmission eigenfunctions and their gradients are surface-localized, i.e.
lim_{m→∞} ‖ψ_m‖_{L^2(B_ξ)}/‖ψ_m‖_{L^2(B_{r_0})} = 0 and lim_{m→∞} ‖∇ψ_m‖_{L^2(B_ξ)}/‖∇ψ_m‖_{L^2(B_{r_0})} = 0, ψ_m = w_m, v_m,
where B_ξ is defined as
B_ξ= {x ∈ B_r_0, dist(x,∂ B_r_0)⩾ξ r_0,ξ∈ (0,1)}.
We further suppose v_m is L^2(B_{r_0})-normalised, i.e. ‖v_m‖_{L^2(B_{r_0})} = 1; then
lim_{m→∞} ‖∇ψ_m‖_{L^∞(Σ_ξ)}/k = ∞,
where Σ_ξ is defined as
Σ_ξ = {(r,θ) | ξ r_0 < r < r_0, θ_1 < θ < θ_2}, ξ ∈ (0,1), θ_1,θ_2 ∈ [0,2π) in ^2,
Σ_ξ = {(r,θ,φ) | ξ r_0 < r < r_0, 0 ⩽ θ ⩽ π, φ_1 < φ < φ_2}, ξ ∈ (0,1), φ_1,φ_2 ∈ [0,2π) in ^3.
B_ξ and Σ_ξ represent, respectively, an interior subdomain and a boundary-layer subdomain of the inclusion. In Theorem <ref>,
the first limit indicates that these transmission eigenfunctions are surface-localized eigenmodes.
The second limit indicates that, for both transmission eigenfunctions, the dominant oscillating behaviors tend to concentrate along the boundary.
The last limit indicates that, in the vicinity of the boundary, the oscillating frequencies of the transmission eigenfunctions are much larger than the natural frequency k, especially for sufficiently large order m.
§.§ Background discussion and scientific novelties
In the field of fluid mechanics, a field concentration refers to a location within an object where the gradient of velocity potential or pressure is substantially higher than the surrounding region in the fluid field.
Excessively large velocity gradient can lead to serious consequences, including flow instability and structural damage.
Nevertheless, the destructive nature of field concentration can be harnessed for practical applications.
Proper utilization of field concentration offers engineers numerous possibilities to analyze and optimize the performance of fluid systems.
Therefore, comprehending field concentrations is fundamental in fluid mechanics and plays a vital role in designing and optimizing various engineering applications.
Field concentration can generally be utilized to perform destructive tasks in engineering.
In medical applications, such as treating cancers (e.g., prostate cancer, liver tumors) and non-cancerous conditions (e.g., uterine fibroids), a transducer focuses acoustic waves on specific points within the body to create high-intensity regions where the acoustic energy concentrates and leads to thermal ablation of the target tissue. The concentrated acoustic field can generate a localized energy concentration capable of destroying pathological tissues, including tumors.
Similarly, in the treatment of kidney stones and gallstones, lithotripsy involves generating a series of acoustic waves outside the body, concentrating acoustic energy at the location of the stone and rendering it fragile. Subsequently, the field concentration can fragment kidney stones or gallstones into smaller pieces that can be expelled naturally by the body.
On the other hand, field concentration can also enhance the performance of fluid systems.
In the design of aircraft lifting surfaces, the low-pressure region on the upper surface of the airfoil and the corresponding pressure gradients are utilized to generate the required lift. Optimizing the distribution of field concentration can improve lift and reduce drag.
Nozzles and propulsion systems also exemplify this application. The design of rocket engines and jet engines relies on the extremely high velocity potential gradient and pressure gradient in the exhaust flow to generate thrust. Through meticulous nozzle system design, the velocity potential gradients can be optimized to enhance the propulsive efficiency.
As mentioned earlier, we are concerned with artificially creating strong field concentrations in a customized way. The field concentration can occur at a customized place around a generic material inclusion or within the material inclusion if it possesses a certain specific geometric structure.
There are several novel and salient features of our study. First, the field concentration has been widely investigated in the literature, see e.g. <cit.> in the high-contrast composite material theory. It turns out that the material and geometric irregularities are crucial for the occurrence of field concentration. In fact, it is usually required that the material parameters of the inclusion and those of the background medium possess a certain asymptotically high contrast. Moreover, it usually occurs between two close-to-touching convex material inclusions, corresponding to the building blocks of the underlying composite material. The study is usually concerned with the static or quasi-static case, namely ω = 0 or ω·diam(Ω) ≪ 1; see <cit.> for discussions on the related developments in the literature. However, for our study, there are no such restrictive requirements: there can be only one material inclusion with regular material parameters of generic geometric shape, and the field concentration can occur in the quasi-static regime or beyond the quasi-static regime. Second, a large field concentration may cause the failure of the material structure. Many preventive methods have been developed to counter the destructive effect of field concentration. To the best of our knowledge, there is little study on artificially generating customized strong field concentration within a material structure. As is well known, everything might have many different sides. The destructive nature of field concentration can be put to good use. For example, in the treatment of kidney stones and gallstones, a destructive failure of the stone structure inside can make the treatment easier and safer and reduce the need for surgical intervention. Finally, we would like to mention that gradient estimates of solutions are a central topic in the theory of partial differential equations (PDEs); see e.g. <cit.>. Our study provides a good example that for wave equations, the gradient blowup phenomena may even be induced for (possibly) smooth coefficients due to a frequency effect.
The rest of the paper is organized as follows.
Section 2 presents the spectral patterns of transmission resonance and provides the proof for the existence of a sequence of surface transmission resonant eigenfunctions associated with a given wavenumber. These eigenfunctions are utilized to construct transmission resonant balls, which facilitate the generation of strong field concentration.
Section 3 demonstrates the artificial generation of customized field concentration via the spectral and geometric patterns of transmission resonance. It is followed by the illustration of this phenomenon through numerical experiments in Section 4.
Section 5 focuses on the surface-oscillating behaviors of transmission resonance and concludes the complete proof of Theorem <ref>.
§ SURFACE TRANSMISSION RESONANCE
This section introduces the transmission eigenvalue problem along with its spectral patterns, and subsequently establishes the existence of a sequence of surface transmission resonant eigenfunctions.
§.§ Transmission eigenvalue problem and its spectral patterns
The transmission eigenvalue problem is associated with invisibility in the wave scattering system (<ref>). When invisibility occurs, there is no perturbation outside caused by the inhomogeneous medium inclusion, i.e., u^s≡ 0. This reduction leads to the following (interior) transmission eigenvalue problem:
∇·(σ∇ w ) +k^2τ w = 0 in B,
Δ v +k^2v = 0 in B,
w =v σ∂ w/∂ν = ∂ v/∂ν on ∂ B,
where ν∈𝒮^N-1 is the exterior unit normal to ∂ B and B is a bounded Lipschitz domain in ^N.
In order to make a distinction between the inhomogeneous medium inclusion introduced in (<ref>) and the definition domain introduced in (<ref>), we introduce the symbol B to denote the definition domain for transmission eigenvalue problem. By slight abuse of notation, B_r(_0) represents a radial domain with radius r and center _0 in ^N. When _0 is the origin, it is omitted that should be evident from the context. Additionally, we introduce an auxiliary parameter to facilitate our analysis:
n =√(τ/σ).
It is evident that w = v = 0 forms a pair of trivial solutions to (<ref>).
If the system (<ref>) admits a pair of nontrivial solutions (v,w), then k is referred to as a transmission eigenvalue, and (v,w) are the corresponding transmission eigenfunctions. The transmission eigenfunction in fact can be viewed as the restriction of the wave inside the inhomogeneous medium inclusion. The behaviors of transmission eigenfunction reflect the geometric properties of the wave probing on the invisible inclusion. This can help to understand the mathematical nature of invisibility.
In this paper, we find out there exists a sequence of surface-localized transmission eigenfunctions (v_m,w_m)_m ∈. Here, the localization means that there exists a sufficiently small number ε =o(1) such that
ψ_L^2(B_ε)/ψ_L^2(B) = o(1) , ψ = w and v.
where B_ε:= {x ∈ B, dist(x,∂ B)⩾ε}.
If localization occurs, the corresponding transmission eigenfunctions are called surface-localized transmission eigenmodes.
Next we shall consider the spectral pattern of transmission eigenvalue problem. Let Ω be composed of multiple simply-connected components as follows:
(Ω;σ,τ) = ∪_j=1^L_0(Ω_j;σ_j,τ_j),
where Ω_j's are mutually disjoint and (Ω_j;σ_j,τ_j) are the restriction of (Ω;σ,τ) on Ω_j.
The following theorem states that the set of all transmission eigenvalues in Ω is in fact the union of the eigenvalues in each component Ω_j. Moreover, the transmission eigenfunctions in Ω are linear combinations of the eigenfunctions in each component Ω_j with the trivial eigenfunctions in other components.
Let σ [Ω;σ,τ] and σ[Ω_j;σ_j,τ_j] be the set of transmission eigenvalues of the system (<ref>) in (Ω;σ,τ) and (Ω_j;σ_j,τ_j), 1⩽ j ⩽ L_0, where (Ω;σ,τ) = ∪_j=1^L_0(Ω_j;σ_j,τ_j). Then
σ [Ω;σ,τ] = ∪_j=1^L_0σ[Ω_j;σ_j,τ_j].
We first prove that
∪_j=1^L_0σ[Ω_j;σ_j,τ_j]⊂σ [Ω;σ,τ].
Without loss of generality, we consider k∈σ[Ω_1;σ_1,τ_1] with the associated transmission eigenfunctions w_1∈ H^1(Ω_1) and v_1∈ H^1(Ω_1), that is,
∇· (σ_1∇ w_1) +k^2 τ_1 w_1 = 0 in Ω_1,
Δ v_1 +k^2v_1 = 0 in Ω_1,
w_1 =v_1 σ_1∂ w_1/∂ν = ∂ v_1/∂ν on ∂Ω_1,
Next, it is trivially seen that w_j=v_j=0, j=2, 3,…, L_0, satisfy
∇· (σ_j∇ w_j) +k^2 τ_j w_j = 0 in Ω_j,
Δ v_j +k^2v_j = 0 in Ω_j,
w_j =v_j σ_j∂ w_j/∂ν = ∂ v_j/∂ν on ∂Ω_j,
Set
w= w_1χ_Ω_1+0·χ_Ω_2+⋯+0·χ_Ω_L_0∈ H^1(Ω),
v= v_1χ_Ω_1+0·χ_Ω_2+⋯+0·χ_Ω_L_0∈ H^1(Ω).
By virtue of (<ref>) and (<ref>) as well as using the fact that Ω_j are mutually disjoint, it is directly seen that w and v defined in (<ref>) are nontrivial solutions to (<ref>) with k=k_1. Hence, k_1∈σ [Ω;σ,τ], which readily proves (<ref>).
We proceed to prove that
σ [Ω;σ,τ]⊂∪_j=1^L_0σ [Ω_j; σ_j,τ_j].
Let k∈σ [Ω;σ,τ] and w, v be the associated transmission eigenfunctions. Since w∈ H^1(Ω) and v∈ H^1(Ω) are non-identically zero and Ω_j's are mutually disjoint, there must exist an Ω_j_0, 1≤ j_0≤ L_0, such that
w_j_0=w|_Ω_j_0∈ H^1(Ω_j_0), v_j_0=v|_Ω_j_0∈ H^1(Ω_j_0),
are non-identically zero. Again, by using the facts that Ω_j's are mutually disjoint and w, v satisfy (<ref>), it can be seen that
∇· (σ_j_0∇ w_j_0) +k^2 τ_j_0 w_j_0 = 0 in Ω_j_0,
Δ v_j_0 +k^2v_j_0 = 0 in Ω_j_0,
w_j_0 =v_j_0σ_j_0∂ w_j_0/∂ν = ∂ v_j_0/∂ν on ∂Ω_j_0,
That is,
k∈σ[Ω_j_0;σ_j_0,τ_j_0]⊂∪_j=1^L_0σ [Ω_j;σ_j,τ_j],
which proves (<ref>).
Finally, combining (<ref>) and (<ref>) readily yields (<ref>), thus completing the proof.
§.§ The geometric patterns of transmission resonance
The transmission eigenvalue problem (<ref>) possesses a sequence of transmission eigenvalues and surface-localized transmission eigenfunctions for the given material parameters (σ,τ) <cit.>. In this subsection, we reexamine the transmission eigenvalue problem for the given wavenumber k and radial domain B_r_0. We prove that there exists a sequence of {n_m}_m ∈ such that k is the transmission eigenvalue and the corresponding transmission eigenfunctions (v_m,w_m)_m ∈ are surface-localized. This allows us to construct the field concentration generator with arbitrary size by considering the material parameter.
These eigenfunctions are in fact surface transmission resonant modes, accompanying strong surface-localization and manifesting highly-oscillatory patterns along with the energy blowup.
Since the verification of surface-oscillating patterns involves technical and tedious calculations, we defer the related proof to the section <ref>.
To begin with, let J_m(x) and j_m(x) respectively denote the m-th order Bessel function and the m-th order spherical Bessel function. Moreover, let j_{m,s} and j'_{m,s} respectively denote the s-th positive root of J_m(x) and J'_m(x) (arranged according to magnitude).
Consider the transmission eigenvalue problem (<ref>) and assume B_{r_0} is a radial domain with radius r_0 in ^N centered at the origin. Given any wavenumber k>0, there exists a sequence {n_m}_{m∈ℕ} such that k is the transmission eigenvalue and the corresponding transmission eigenfunctions (v_m,w_m) are both surface-localized. Specifically, let s_0 ∈ ℕ be a fixed positive integer; the eigenvalues are
n_m ∈ (j_{m,s_0}/(kr_0), j_{m,s_0+1}/(kr_0)) in ^2 and n_m ∈ (j_{m+1/2,s_0}/(kr_0), j_{m+1/2,s_0+1}/(kr_0)) in ^3,
and the corresponding eigenfunctions in ^2 are
w_m(x) = α_m J_m(k n_m r) e^{imθ} and v_m(x) = β_m J_m(kr) e^{imθ},
where x = (r cosθ, r sinθ) ∈ ^2 denotes the polar coordinate. The corresponding eigenfunctions in ^3 are
w_m^l(x) = α_m^l j_m(k n_m r) Y_m^l(θ,φ) and v_m^l(x) = β_m^l j_m(kr) Y_m^l(θ,φ), -m ⩽ l ⩽ m,
where x = (r sinθ cosφ, r sinθ sinφ, r cosθ) ∈ ^3 denotes the spherical coordinate and Y_m^l are the spherical harmonics.
To ensure that (v_m,w_m) are the transmission eigenfunctions of (<ref>), the transmission boundary conditions on the last line in (<ref>) yield
α_m J_m(knr_0) = β_m J_m(kr_0) and σ n α_m J'_m(knr_0) = β_m J'_m(kr_0).
So n must be the root of the following equation:
f_m(n) = J'_m(kr_0)J_m(knr_0) -σ nJ_m(kr_0)J'_m(knr_0).
For a given integral s_0 ∈, it is obvious that J_m(knr_0) = 0 when n = j_m,s_0/(kr_0) and j_m,s_0+1/(kr_0) respectively.
From the fact that the zeros of J_m(x) and J'_m(x) are interlaced, it holds that
f_m(j_m,s_0/kr_0) · f_m(j_m,s_0+1/kr_0) = σ^2 j_m,s_0j_m,s_0+1/k^2r_0^2 J^2_m(kr_0)J'_m(j_m,s_0)J'_m(j_m,s_0+1) < 0.
It readily follows from the intermediate value theorem that there exists at least one zero, denoted by n_m, in (j_{m,s_0}/(kr_0), j_{m,s_0+1}/(kr_0)) satisfying f_m(n_m) =0. Then the transmission eigenvalue is given by
n_m ∈(j_m,s_0/kr_0 ,j_m,s_0+1/kr_0),
and (v_m,w_m) are the corresponding transmission functions.
The surface-localizations of (v_m,w_m) mainly depend on the properties of Bessel functions J_m and spherical Bessel functions j_m. From the orthogonality of {e^ mθ}_m ∈ in the unit circle, we can obtain
v_m^2_L^2(B_r_0) =β_m^2 ∫_0^r_0 J^2_m(kr)r r = β_m^2/k^2∫_0^kr_0 J^2_m(r)r r.
Without loss of generality, we assume kr_0 is always less than m and this assumption always holds when m is sufficiently large. We define a monotonously increasing and convex function
f(r) = J^2_m(r)r r ∈ (0,kr_0).
In fact, straight calculations yield
f'(r) = J_m(r)(2J'_m(r)r+J_m(r)) >0 ,
f”(r) = 2J'^2_m(r)r + 2J'_m(r)J_m(r) + 2r^-1(m^2-r^2)J_m^2(r) >0.
The convex property means that the integral (<ref>) is bigger than the area of the triangle under the tangent of f(kr_0) with the x-axis, namely,
∫_0^kr_0 J^2_m(r)r r⩾1/2J_m^3(kr_0)(kr_0)^2/J_m(kr_0)+2kr_0J'_m(kr_0).
Since f(r) is monotonously increasing, it is obvious that
∫_0^kξ r_0 J^2_m(r)r r⩽1/2 J^2_m(kξ r_0)(kξ r_0)^2 ξ∈ (0,1) .
Combining the above two estimates, we further obtain
v_m_L^2(B_ξ)^2/v_m_L^2(B_r_0)^2 ⩽J_m^2(kξ r_0)(kξ r_0)^2/J_m^3(kr_0)(kr_0)^2/(J_m(kr_0)+2kr_0J'_m(kr_0))
= (J_m(kξ r_0)/J_m(kr_0))^2 ξ^2(1 + 2kr_0J'_m(kr_0)/J_m(kr_0)).
When m is sufficiently large, it holds that
J'_m(kr_0)/J_m(kr_0) = m/kr_0 - J_m+1(kr_0)/J_m(kr_0) = m/kr_0(1 - k^2r_0^2/2m(m+1) + 𝒪(m^-3)).
where we have used the asymptotic expansions of J_m for larger order,
J_m(x) = x^m/2^m Γ(m+1)(1 - x^2/4(m+1) + 𝒪(m^-2) ),
and Γ(x) is the Gamma function. By using the asymptotic expansions (<ref>) again, we can get
J_m(kξ r_0)/J_m(kr_0) = (kξ r_0)^m(1 - (kξ r_0)^2/4(m+1) + 𝒪(m^-2) )/(kr_0)^m(1 - (kr_0)^2/4(m+1) + 𝒪(m^-2) )∼ξ^m.
Finally, we insert (<ref>) and (<ref>) in (<ref>) and get
v_m_L^2(B_ξ)^2/v_m_L^2(B_r_0)^2⩽ C(k,r_0)ξ^2m+2(1+2m) → 0 as m →∞.
We have proved the surface-localization of {v_m}_m∈. Similarly,
the L^2-norm of w_m is
w_m^2_L^2(B_r_0) =α_m^2 ∫_0^r_0 J^2_m(kn_mr)r r = β_m^2/k^2n_m^2∫_0^kn_mr_0 J^2_m(r)r r.
Since k n_m r_0 ∈ (j_m,s_0,j_m,s_0+1) and the zeros of Bessel functions have the following asymptotic expansions,
j_m,s_0 = m + b_s_0m^1/3 + 𝒪(m^-1/3),
where b_s_0 is some positive constant only depending on the fixed integer s_0 ∈, then it holds that
kn_m r_0 > j_{m,1} > j'_{m,1} > m > kn_m ξ r_0.
By following a similar argument as (<ref>), we can readily get
w_m_L^2(B_ξ)^2/w_m_L^2(B_r_0)^2 = ∫_0^kn_mξ r_0 J^2_m(r)r r/∫_0^kn_mr_0 J^2_m(r)r r⩽∫_0^kn_mξ r_0 J^2_m(r)r r/∫_0^m J^2_m(r)r r
⩽1/2J_m^2(kn_mξ r_0)(kn_mξ r_0)^2/1/2J_m^3(m)m^2/(J_m(m)+2mJ'_m(m))
= (J_m(kn_mξ r_0)/J_m(m))^2 (kn_mξ r_0/m)^2(1 + 2mJ'_m(m)/J_m(m)).
On the one hand, kn_mξ r_0 <m and
1 + 2mJ'_m(m)/J_m(m) = 1 +2m (1 - J_m+1(m)/J_m(m)) = 1 + m( m+2/m+1 - m/ (m+1)J_m+2(m)/J_m(m))
⩽ 1 + m(m+2)/m+1⩽ m+2.
where we have used the recurrence formula J_m(x) +J_m+2(x) = 2(m+1)J_m+1(x)/x. On the other hand, from <cit.>, one can get
J_m(kn_mξ r_0)/J_m(m)∼ C(k,ξ) Ai(m^2/3ζ_x) ∼ C(k,ξ) e^-2/3ζ_x^3/2mm^-1/6,
where
ζ_x=(3/2(ln(1+√(1-x^2)/x)-√(1-x^2)))^2/3 x =k n_mξ r_0/m ∼ξ ,
and we have used the asymptotic formula of the Airy function for sufficiently large x,
Ai(x) = e^-2/3x^3/2/2π^1/2x^1/4(1 - 5/48x^-3/2 +𝒪(x^-3) ).
Finally, we can further obtain
w_m_L^2(B_ξ)^2/w_m_L^2(B_r_0)^2 ⩽ C(k,ξ) e^-2/3ζ_x^3/2m(m+2)m^-1/6→ 0 as m →∞.
Therefore, we have proved the surface-localization of {w_m}_m∈.
The proof in ^3 is similar with that in ^2.
First, the transmission boundary conditions for (v_m^l,w_m^l) defined on (<ref>) yield
f_m(n) = j'_m(kr_0)j_m(knr_0) -σ nj_m(kr_0)j'_m(knr_0).
It is noted that the zeros of j_m(x) are the same as that of J_m+1/2(x). Taking the value of n=j_m+1/2,s_0/kr_0 and n =j_m+1/2,s_0+1/kr_0 respectively, we can get
f_m( j_m+1/2,s_0/kr_0)f_m( j_m+1/2,s_0+1/kr_0) = σ^2 j_m+1/2,s_0j_m+1/2,s_0+1/k^2r_0^2j'_m(j_m+1/2,s_0)j'_m(j_m+1/2,s_0+1).
It holds from j_m(x) =√(π /2x)J_m+1/2(x) that
. j'_m(x)|_x = j_m+1/2,s_0 = .( √(π/2x) J_m+1/2(x))'|_x = j_m+1/2,s_0 = √(π/2j_m+1/2,s_0)J'_m+1/2(j_m+1/2,s_0).
Since the zeros between J_m+1/2(x) and J'_m+1/2(x) are interlaced, it is clear that
f_m( j_m+1/2,s_0/kr_0)f_m( j_m+1/2,s_0+1/kr_0)< 0.
By the intermediate value theorem again, there exists at least one zero, also denoted by n_m, satisfying f_m(n_m) = 0, namely,
n_m ∈(j_m+1/2,s_0/kr_0 ,j_m+1/2,s_0+1/kr_0).
The surface-localization of (v_m,w_m) is obvious from the results in ^2. Indeed, the L^2-norm of (v_m,w_m) are given by
v^l_m^2_L^2(B_r_0) =(α_m^l)^2 ∫_0^r_0 j^2_m(kr)r^2 r = (α_m^l)^2/k^2∫_0^kr_0 J^2_m+1/2(r)r r,
w^l_m^2_L^2(B_r_0) =(β_m^l)^2 ∫_0^r_0 j^2_m(kn_mr)r^2 r = (β_m^l)^2/k^2n_m^2∫_0^kn_mr_0 J^2_m+1/2(r)r r.
By substituting the order of Bessel function from m to m+1/2 and following the same argument as (<ref>) and (<ref>), we can easily obtain the surface-localized result for (v_m,w_m).
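The construction in the proof above lends itself to a direct numerical check. The following Python sketch (using SciPy's Bessel routines) brackets n_m between consecutive zeros of J_m as in the proof, solves f_m(n)=0 by Brent's method for the two-dimensional case with σ=1, and then estimates the localization ratio ‖v_m‖_{L^2(B_ξ)}/‖v_m‖_{L^2(B_{r_0})}. The specific values of k, r_0, m, s_0 and ξ below are illustrative assumptions, not parameters taken from this paper.

```python
import numpy as np
from scipy.special import jv, jvp, jn_zeros
from scipy.optimize import brentq
from scipy.integrate import quad

# Illustrative parameters (assumptions, not from the paper)
k, r0, sigma = 4.0, 1.0, 1.0
m, s0, xi = 10, 1, 0.7

# f_m(n) = J'_m(k r0) J_m(k n r0) - sigma * n * J_m(k r0) J'_m(k n r0)
def f(n):
    return jvp(m, k * r0) * jv(m, k * n * r0) - sigma * n * jv(m, k * r0) * jvp(m, k * n * r0)

# Bracket n_m between consecutive zeros j_{m,s0}, j_{m,s0+1} of J_m, then apply Brent's method
zeros = jn_zeros(m, s0 + 1)
a, b = zeros[s0 - 1] / (k * r0), zeros[s0] / (k * r0)
n_m = brentq(f, a, b)
print(f"transmission parameter n_m ≈ {n_m:.6f} in ({a:.4f}, {b:.4f})")

# Surface localization of v_m(r,theta) = J_m(k r) e^{i m theta}:
# ratio of squared L^2 norms over B_xi and B_r0 (the angular factor cancels)
num, _ = quad(lambda r: jv(m, k * r) ** 2 * r, 0.0, xi * r0)
den, _ = quad(lambda r: jv(m, k * r) ** 2 * r, 0.0, r0)
print(f"||v_m||^2_(B_xi) / ||v_m||^2_(B_r0) ≈ {num / den:.3e}")
```

Increasing m (or decreasing ξ) drives the printed ratio towards zero, in line with the surface-localization statement of the lemma.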
§ ARTIFICIAL GENERATION OF CUSTOMIZED FIELD CONCENTRATIONS
In this section, we shall show how to artificially generate customized field concentration around the inclusion via surface transmission resonance.
To that end, we first introduce the Herglotz approximation and prove the nearly vanishing property of scattered field in the presence of transmission eigenfunctions.
§.§ Herglotz approximation
A Herglotz function is a function of the form
v_{g,k}(x) = H_k g(x) := ∫_{𝕊^{N-1}} e^{i k x·θ} g(θ) ds(θ), x ∈ ^N,
where g(θ) ∈ L^2(𝕊^N-1) is called the Herglotz kernel.
It is clear that the Herglotz function is an entire function to the Helmholtz equation (<ref>) and can be designated as the incident field.
For the purpose of generating field concentration, we shall use a customized Herglotz function to activate a field concentration generator around the inhomogeneous medium inclusion.
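For intuition, a Herglotz wave with a given kernel can be evaluated numerically by discretizing the integral over the unit circle in two dimensions (N=2). The sketch below is a minimal trapezoidal-rule implementation; the function name, the kernel g and the wavenumber are illustrative assumptions.

```python
import numpy as np

def herglotz_2d(x, g, k, n_quad=256):
    """Evaluate v_{g,k}(x) = ∫_{S^1} e^{i k x·θ} g(θ) ds(θ) by the trapezoidal rule.

    x : array of shape (..., 2); g : callable of the angle t on [0, 2π)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    theta = np.stack([np.cos(t), np.sin(t)], axis=-1)            # quadrature points on S^1
    phase = np.exp(1j * k * np.tensordot(x, theta, axes=([-1], [-1])))
    return (2.0 * np.pi / n_quad) * np.sum(phase * g(t), axis=-1)

# Illustrative kernel g(θ) = e^{i m θ}: by the Jacobi-Anger expansion this reproduces,
# up to the constant 2π i^m, the circular wave mode J_m(kr) e^{i m φ}.
k, m = 2.0, 5
g = lambda t: np.exp(1j * m * t)
pts = np.array([[0.5, 0.0], [0.0, 0.9]])
print(herglotz_2d(pts, g, k))
```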
Here we introduce the following denseness result of the Herglotz function.
<cit.>
Suppose that Ω is of class C^{α,1}, α ∈ ℕ ∪ {0}, with a connected complement in ^N, N=2,3. Denote by ℋ the space of all Herglotz functions and
ℌ(Ω) := {u ∈ C^∞(Ω) : Δ u + k^2 u = 0 in Ω}.
Then the restrictions to Ω of functions in ℋ are dense in ℌ(Ω) ∩ H^{α+1}(Ω) with respect to the H^{α+1}-norm.
§.§ The nearly vanishing property of scattered field
We begin by proving the smallness of the scattered field in the presence of a transmission eigenfunction. This result is in fact a direct consequence of the stability estimate for the scattering system (<ref>).
To that end, we first truncate the scattering system (<ref>) into a bounded ball D_R with a sufficiently large radius R.
The scattering system (<ref>) is equivalent to the following truncated system:
Δ u^s_1 + k^2 u^s_1 = 0 in D_R \Ω,
∇·(σ∇ u_1) + k^2 τ u_1 = 0 in Ω,
u_1 = u^s_1 + f σ∂_ν u_1 = ∂_ν u^s_1 + g on ∂Ω,
∂ u^s_1/∂ν = Λ u^s_1 on ∂ D_R ,
where f = u^i|_∂Ω , g = ∂_ν u^i|_∂Ω and
Λ: H^3/2(∂Ω) → H^1/2(∂Ω) is the Dirichlet-to-Neumann map defined by Λ W := ∂ V/∂ν with V ∈ H_loc^2(^N \D_R) is the unique solution of
Δ V + k^2 V = 0 in ^N \D_R,
V = W on ∂ D_R ,
lim_r →∞ r^N-1/2(∂_r V-ik V) = 0 r= ||.
Let (u,u^s) ∈ H^2(Ω) × H_loc^2(^N \Ω) be a solution of (<ref>). From the definition of Λ, the restriction of u^s in D_R \Ω is clearly a solution of (<ref>). Conversely, let (u_1,u^s_1) ∈ H^2(Ω) × H_loc^2(^N \Ω) be a solution of (<ref>). We shall prove that the extension of u^s_1 from D_R \Ω to outside of D_R satisfies the Sommerfeld radiating condition. From the Green's formula for Helmholtz equation, the representation of u^s_1 is
u^s_1 = ∫_∂Ω u^s_1(y) ∂Φ(x,y)/∂ν(y) - ∂ u^s_1/∂ν(y) Φ(x,y)s(y)
- ∫_∂ D_R u^s_1(y) ∂Φ(x,y)/∂ν(y) - Λ u^s_1(y) Φ(x,y)s(y), x ∈ D_R \Ω,
where Φ(x,y) is the fundamental solution to Helmholtz equation in ^N. From the definition of Λ, u^s_1 is in fact the boundary data of the radiating solution V to (<ref>). Together with the fact that Φ is also a radiating solution to Helmholtz equation, for x ∈ D_R \Ω, we can deduce that
∫_∂ D_R u^s_1(y) ∂Φ(x,y)/∂ν(y) - Λ u^s_1(y) Φ(x,y)s(y)
= lim_r →∞∫_∂ B_r V(y) (∂Φ(x,y)/∂ν(y) - k Φ(x,y) ) - ( ∂ V/∂ν(y) - k V(y)) Φ(x,y)s(y)
=0,
where we have used the boundedness of radiation solution and Sommerfeld radiation condition. Hence, u_1^s coincides with the radiating solution to the (<ref>) in the exterior of Ω. We get (u_1,u^s_1) is also a solution to (<ref>).
Consider the scattering system (<ref>) with a C^1,1 incluson (Ω;σ,τ) in ^N, N=2,3 and the wavenumber k∈_+.
We let (v,w) be a pair of transmission eigenfunction in (Ω;σ,τ) associated with k.
If the incident field u^i impinging on (Ω;σ,τ) satisfies u^i-v_H^2(Ω)⩽ε, then there exists a constant C(k,σ,τ) such that the scattered field u^s illuminated by u^i has u^s_H^2_loc(^N \Ω)⩽ C ε.
For Lemma <ref>, we can rewrite (<ref>) in the following form:
Δ u^s + k^2 u^s = 0 in D_R \Ω,
∇·(σ∇ u) + k^2 τ u = 0 in Ω,
u = u^s + f σ∂_ν u = ∂_ν u^s + g in ∂Ω,
∂ u^s/∂ν = Λ u^s on ∂ D_R ,
where f = u^i|_∂Ω and g = ∂_ν u^i|_∂Ω and Λ is defined in (<ref>). We first introduce the solution u_f being the unique solution of
Δ u_f + k^2 u_f = 0 in D_R \Ω,
u _f = f on ∂Ω,
u _f = 0 on ∂ D_R.
Without loss of generality, we assume that k^2 is not the Dirichlet eigenvalue of -Δ in D_R \Ω through a proper choice D_R.
By introducing the test function ψ∈ H^1(D_R), the equivalent variational formulation of (<ref>) is to find p ∈ H^1(D_R) such that
∫ _Ωσ∇ p ∇ψ -k^2 τ p ψx+ ∫ _D_R\Ω∇ p ∇ψ -k^2 p ψx - ∫_∂ D_RΛ p ψs
= ∫ _D_R\Ω -∇ u_f ∇ψ + k^2 u_f ψx + ∫_∂ D_R∂_νu_f ψs + ∫_∂Ω(g - ∂_νu_f )ψs∀ψ∈ H^1(D_R).
By using Green's first theorem, it is easy to check that u:= p|_Ω and u^s := p|_ D_R \Ω -u_f satisfies the scattering system (<ref>). On the other hand, we can also get the variational formulation (<ref>) by multiplying the test function to the first two equations in (<ref>).
In order to prove the regularity result of (<ref>), we first introduce a bounded operator Λ_0:H^1/2(∂ D_R) → H^-1/2(∂ D_R) defined in the Theorem 5.20 <cit.> and satisfies
- ∫_∂ D_RΛ_0 ψψs⩾ C ψ^2_H^1/2(∂ D_R) ,
for some C>0. Further, the operator Λ -Λ_0 is a compact operator from H^1/2(∂ D_R) to H^-1/2(∂ D_R). Then the variational formulation (<ref>) can be rewritten as
a_1(p, ψ) + a_2(p, ψ) + ⟨ -(Λ-Λ_0)p,ψ⟩ = ℱ(ψ) ∀ψ∈ H^1(D_R),
where ⟨· ,·⟩ denotes the inner product in L^2(∂ D_R), and the bilinear forms a_1 and a_2 together with the linear bounded functional f are defined by
a_1(p, ψ) := ∫ _Ωσ∇ p ∇ψ + k^2 pψx + ∫ _D_R\Ω∇ p ∇ψ + k^2 pψx - ∫_∂ D_RΛ_0 p ψs,
a_2(p,ψ) := -∫ _Ω k^2(1+τ)p ψx -∫ _Ω 2k^2 p ψx ,
and
ℱ(ψ):=∫ _D_R\Ω -∇ u_f ∇ψ + k^2 u_f ψx + ∫_∂ D_R∂_νu_f ψs + ∫_∂Ω(g - ∂_νu_f )ψs.
From the assumption of σ and boundedness of Λ_0, it is clear that
a_1(p,ψ) ⩽ C p_H^1(D_R)ψ_H^1(D_R) a_1(p,p) ⩾ C p^2_H^1(D_R) ,
where C is some positive constant. The Lax-Milgram lemma indicates there exists a unique linear bounded operator 𝒜 with a bounded inverse such that
a_1(p,ψ) = (𝒜p,ψ) ∀ p,ψ∈ H^1(D_R).
By the Riesz representation theorem, there also exists a bounded operator B:H^1(D_R) → H^1(D_R) such that
a_2(p,ψ) = (ℬp,ψ) ∀ p,ψ∈ H^1(D_R).
We claim that ℬ is compact in H^1(D_R). In fact, Let {ϕ_j} be a bounded sequence in H^1(D_R) and the boundedness implies there exists a subsequence denoted also by {ϕ_j} satisfying ϕ_j ⇀ϕ for some ϕ∈ H^1(D_R). On the other hand, ϕ_j →ϕ in L^2(D_R) due to the Rellich–Kondrachov theorem. From the definition of ℬ, we know that {Bϕ_j} is weakly convergent in H^1(D_R) and (ℬ(ϕ_j -ϕ),ψ) =a_2 ((ϕ_j -ϕ),ψ). Let ψ =ℬ(ϕ_j -ϕ),ψ), it is clear that
ℬ(ϕ_j -ϕ)_H^1(D_R)⩽ 4k^2 max{1+τ_L^∞(Ω),2}ϕ_j-ϕ_L^2(D_R)→ 0.
It means that ℬ is compact in H^1(D_R).
We claim that 𝒜 + B-(Λ-Λ_0) is bijective with a bounded inverse in H^1(D_R) if it is injective. In fact, since A^-1 is bijective and bounded, then
𝒜 + B-(Λ-Λ_0) = A(I - (-A^-1(B-(Λ-Λ_0))),
implies the bijective equivalence of 𝒜 + B-(Λ-Λ_0) and I - (-A^-1(B-(Λ-Λ_0)). From the fact that A^-1 is bounded and B-(Λ-Λ_0) is compact, we can get that -A^-1(B-(Λ-Λ_0)) is compact. The Fredholm theory gives the bijection of 𝒜 + B-(Λ-Λ_0) with a bounded inverse if it is injective. Therefore, To show the existence and uniqueness it suffices to show that 𝒜 + B-(Λ-Λ_0) is injective, i.e. the only solution of homogeneous integral equation (<ref>) or the equivalent homogeneous system (<ref>) is identically zero. If this is done, by the Lax-Milgram theory the integral equation (<ref>) can be inverted in H^1(D_R) and the inverse operator is bounded. From this, it follows that (u,u^s) depends on the boundary data (f,g).
The uniqueness of (<ref>) is guaranteed by the Rellich theorem. In specific, when f=g =0, we can get
∫_∂ D_R∂ u^s /∂νu^s =( ∫_D_R \ΩΔ u^s u^s + |∇ u^s|^2 + ∫_∂Ω∂ u^s /∂νu^s)
= ∫_∂Ωσ∂ u/∂νu = ∫_∂Ωσ|∇ u|^2 - k^2∫_∂Ωτ|u|^2 ⩽ 0.
Then u^s = 0 in ^N \Ω. The homogeneous boundary data now imply that u = ∂_ν u = 0 on ∂Ω. From the unique continuation principle <cit.> we get u = 0 in Ω. Summarizing all the above analysis, we have the continuous dependence of the solution of the direct scattering problem on the boundary data of the incident field in H^1(D_R). The standard elliptic regularity estimate can increase the regularity from H^1(D_R) to H^2(D_R) if the boundary data (g,f) ∈ H^1/2(∂Ω) × H^3/2(∂Ω).
We observe that the Sobolev norm of u^i-v also gives the boundary information due to trace theorem. Hence one has that
u^i-v_H^3/2(∂Ω)⩽ε and ∂_ν u^i- ∂_ν v _H^1/2(∂Ω)⩽ε.
We let (w_j, w^s_j) be the unique solution to the system (<ref>) with the boundary conditions f := u^i-v and g := ∂_ν u^i- ∂_ν v on ∂Ω and (g,f) ∈ H^1/2(∂Ω) × H^3/2(∂Ω).
Then it is clear that
w_j^s_H^2_loc (^N \Ω)⩽ C(k,σ,τ)ε.
If the transmission eigenfunction v can be exactly extended to v^i defined in the whole space, Then the scattered field w^s illuminated by v^i is identically zero due to the zero boundary conditions. By the linearity of (<ref>), we can further get u^s = w_j^s + w^s and
u^s_H^2_loc (^N \Ω)⩽w_j^s_H^2_loc (^N \Ω) + w^s_H^2_loc (^N \Ω)⩽ Cε.
§.§ Proof of Theorem <ref>
With all the preliminary work above, we are in a position to prove the main results. The main idea of this proof is based on the spectral and geometric patterns of transmission eigenfunctions introduced in section <ref>.
We mainly consider the three-dimensional case, and the two-dimensional case can be proven similarly except for the transmission eigenfunction v_m^l substituted by v_m defined in (<ref>).
We first define an exterior domain around the inhomogeneous medium inclusion:
Γ_e(Ω,ϵ) = {x ∈^3 | dist(x,Γ)⩽ϵ, x ∉Ω},
where Γ is any subset of ∂Ω and ϵ = o(1) is an any given distance parameter. We can artificially generate field concentration phenomenon in Γ_e(Ω,ϵ) by properly choosing an incident field u^i, namely ∇ u_L^∞(Γ_e(Ω,ϵ)) >M for any given large M. The construction of M shall be more definitely specified in what follows. In order to make the proof clearer, we divide it to several steps:
Step 1: We first construct the surface-localized transmission eigenfunctions. Let B_r_0∈^3 be a radial ball with radius r_0>0 satisfying
ϵ : =dist(B_r_0,Γ) = o(1).
By choosing a proper coordinate system, we assume B_r_0 centers at the origin.
We consider the transmission eigenvalue problem (<ref>) in B_r_0.
From Lemma <ref>, there exists a sequence of material parameters (σ_m,τ_m)_m ∈ such that k is the transmission eigenvalue. For example, we can assume
σ_m(x) = 1 and τ_m(x) = n_m^2, x ∈ B_{r_0},
where n_m is given in (<ref>).
The corresponding transmission eigenfunction v_m^l defined in (<ref>) is surface-localized and satisfies the homogeneous Helmholtz equation (<ref>) in B_r_0.
We further consider the transmission eigenvalue problem in (Ω; σ,τ) ∪ (B_r_0;σ_m,τ_m) where (σ,τ) and (σ_m,τ_m) are defined in (<ref>) and (<ref>) respectively. Lemma <ref> shows that the transmission eigenvalue k in B_r_0 is also a transmission eigenvalue in Ω∪ B_r_0. Moreover, the corresponding transmission eigenfunction 𝐯_m^l is given by
𝐯_m^l = v_m^l·χ__B_r_0 + 0·χ__Ω.
It is clear that 𝐯_m^l as the transmission eigenfunction is the solution of homogeneous Helmholtz equation (<ref>) in Ω∪ B_r_0.
Step 2: We next construct the incident field with the Herglotz approximation.
In order to identify the transmission eigenfunction uniquely, we assume the normalization condition ‖𝐯_m^l‖_{L^2(Ω∪ B_{r_0})} = 1. Then
β_m^l = 1/√(∫_0^{r_0} j^2_m(kr) r^2 dr) = √(2/π) k^{3/2}/√(∫_0^{kr_0} J^2_{m+1/2}(r) r dr).
Since Ω is of class C^{1,1} and Ω∪ B_{r_0} has a connected complement in ^N, Lemma <ref> shows that there exists a Herglotz wave v_{g,k} which is an H^2 approximation to 𝐯_m^l on Ω∪ B_{r_0}. In other words, given any ε > 0, it holds that
v_g,k - 𝐯_m^l_H^2(Ω∪ B_r_0)⩽ε.
The Herglotz function v_g,k is in fact the desired incident field since v_g,k is the entire solution of Helmholtz equation (<ref>) in the whole space. Moreover, the Herglotz function v_g,k exhibits strong resonant behaviors in B_r_0 as the surface transmission eigenmode and nearly vanishes inside the inclusion Ω.
It is noted that the construction of v_g,k is based on the spectral and geometric patterns of transmission resonance. However, the existence of v_g,k does not depend on the existence of (B_r_0;σ_m,τ_m). In fact, B_r_0 is an auxiliary radial domain with the same material parameter as the homogeneous background space. We denote the incident field by u^i := v_g,k.
Step 3: In the following, we analyze the behaviors of the total field u illuminated by u^i in (<ref>). The inequality (<ref>) also indicates
u^i - 𝐯_m^l_H^2(Ω)⩽ε.
From lemma <ref>, it is clear that the scattered field u^s illuminated by u^i would be sufficiently small outside Ω. With the Sobolev embedding H^2 ↪ C^0, we can further get
‖u^s‖_{∞, G∖Ω} ⩽ C(k,σ,τ) ε,
where G is any bounded domain containing Ω∪ B_{r_0}. Therefore, the inclusion Ω acts like an invisible obstacle, in the sense that the incident field is almost undisturbed in the presence of the scatterer Ω.
Since the behavior of the total field is almost described by the incident field outside the inclusion Ω, we can investigate the behavior of the incident field v_{g,k}. First,
v_m^l(x) = β_m^l j_m(kr) Y_m^l(θ,φ) = k J_{m+1/2}(kr) r^{-1/2} Y_m^l(θ,φ)/√(∫_0^{kr_0} J^2_{m+1/2}(r) r dr), x ∈ B_{r_0}.
Without loss of generality, we always assume m is sufficiently large such that kr_0 is less than j'_m+1/2,1. Then J_m+1/2(kr)r^-1/2 is monotonically increasing for r ∈ (0,r_0) and attains its maximum at r = r_0. By direct calculation, it holds that
max_r∈ (0,r_0)J_m+1/2(kr)kr^-1/2/√(∫_0^kr_0 J^2_m+1/2(r)rr) = J_m+1/2(kr_0)kr_0^-1/2/√(∫_0^kr_0 J^2_m+1/2(r)rr) = √(2m+3)/r_0^3/2(1 + 𝒪(m^-2) ).
and
sup_(θ ,φ)∈ T Y_m^l (θ,φ) = √(2m+1/4π)sup_θ∈ [0,π]√((m-|l|)!/(m+|l|)!)P_m^|l|(cosθ) ⩾√(1.11/4π·(m+1/2)/(|l|+1)) -m ⩽ l ⩽ m,
where T := [0,π] × [φ_1,φ_2], φ_1,φ_2 ∈ [0,2π), and we have used Lemma <ref>. Combining the above estimates, we can finally obtain
sup_∈B_r_0 v_m^l() = sup_∈∂B_r_0 v_m^l() ⩾1/√(2π)r_0^3/2m+1/2/√(|l|+1)(1 + 𝒪(m^-2) )⩾1/3r_0^3/2m+1/2/√(|l|+1),
for sufficiently large m.
Step 4: We can finally prove that the gradient of the total field blows up inside Γ_e(Ω,ϵ). From the formulas (<ref>), (<ref>), (<ref>) and the Sobolev embedding H^2 ↪ C^0 inside B_r_0∪Ω, it holds that
sup_∈∂B_r_0 u^i() ⩾1/3r_0^3/2m+1/2/√(|l|+1) - ε and sup_∈∂Ω u^i() ⩽ε.
The estimate (<ref>) implies that the scattered field would be sufficiently small if we designate the incident field u^i to be the Herglotz approximation v_g,k to (<ref>). It means that the total field outside the scatterer Ω equals the incident field plus a sufficiently small term. Let
𝐱_0 = arg sup_𝐱∈∂ B_r_0 v_g,k(𝐱).
We can assume ϵ :=dist(B_r_0,Γ) = dist(𝐱_0,Γ) by rigid motions if necessary. Then
u(_0) ⩾1/3r_0^3/2m+1/2/√(|l|+1) -(1+C(k,σ,τ)) ε and sup_∈∂Ω u() ⩽ (1+C(k,σ,τ))ε.
Since (1+C(k,σ,τ))ε is sufficiently small, we can assume
(1+C(k,σ,τ))ε⩽1/12r_0^3/2m+1/2/√(|l|+1).
Finally by using the mean value theorem, we can obtain the conclusion
∇ u _L^∞(Γ_e(Ω,ϵ))⩾1/ϵ1/6r_0^3/2m+1/2/√(|l|+1)⩾1/6r_0^-3/2 (m+1/2)^1/21/ϵ.
It is obvious from (<ref>) that ∇ u blows up when m goes to infinity or ϵ goes to zero. Therefore, for any given large number M, we can always construct the parameters ϵ, m and r_0 such that
1/6r_0^-3/2 (m+1/2)^1/21/ϵ⩾ M.
Here, the construction of parameters ϵ,m and r_0 depends on the incident field u^i.
It is noted that the field concentration can occur at multiple locations near the inclusion Ω. More specifically, we can add multiple disjoint virtual field concentration generators B_1 ,⋯,B_L_0 around the inclusion Ω. The incident field u^i is designated as the Herglotz approximation to 𝐯_m^l = v_m^l·χ__B_1 + v_m^l·χ__B_2 + ⋯ + v_m^l·χ__B_L_0 + 0·χ__Ω, where the v_m^l are surface-localized transmission eigenfunctions up to a translation transformation. Thus, the field concentration can occur in the intervals between Ω and B_1 ,⋯,B_L_0 by following a similar argument.
From Theorem <ref>, it is clear that we can achieve field concentration in two customized ways. One is high-order probing: we choose a high-order field concentration generator with a more strongly surface-localized eigenmode, i.e. m sufficiently large. The other is geometric: the resonator is placed sufficiently close to the inclusion (ϵ→ 0).
§ NUMERICAL EXPERIMENTS
In this section, several numerical experiments are presented to verify the theoretical results. On the one hand, we shall determine the Herglotz wave, which is an approximation to the transmission eigenfunction that localizes on the surface of B while almost vanishing within the interior of Ω. On the other hand, we utilize the Herglotz wave as the incident field and obtain the total field by solving the scattering problem of the Helmholtz equation.
Given a radial domain B near the scatterer Ω, we first obtain the kernel function g(x) by solving the integral equation (<ref>). Since H_k is compact, the inversion of the integral equation (<ref>) is generally ill-posed, so we use the Tikhonov regularization method introduced in <cit.>. We consider the integral equation
H_k g = f, f ∈ L^2 (∂Ω) × L^2(∂ B). The Tikhonov regularization leads to the following equation:
α g_α + H^*_k H_k g_α = H^*_k f.
Thus, the regularized kernel function g_α is given by (α + H^*_k H_k)^-1 H^*_k f. Next, we extend the incident field to the whole domain by using the Herglotz approximation, namely, u^i=H_kg_α.
Finally, we designate u^i as the incident field and solve the inhomogeneous scattering system (<ref>).
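As an illustration of the regularization step, the following minimal Python sketch solves the regularized normal equations for the kernel. The matrix H stands for some discretization of H_k on ∂Ω × ∂B; here it is generated randomly purely as a placeholder, so the snippet shows the linear-algebra step only, not the boundary-integral discretization itself.

```python
import numpy as np

def tikhonov_solve(H, f, alpha):
    """Solve (alpha*I + H^* H) g_alpha = H^* f for the regularized kernel."""
    HtH = H.conj().T @ H
    rhs = H.conj().T @ f
    return np.linalg.solve(alpha * np.eye(HtH.shape[0]) + HtH, rhs)

# placeholder discretization of H_k and of the target trace f
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 80)) + 1j * rng.standard_normal((200, 80))
f = rng.standard_normal(200) + 1j * rng.standard_normal(200)
g_alpha = tikhonov_solve(H, f, alpha=1e-5)  # alpha as in the first example
```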
In what follows, we present three examples to illustrate the phenomenon of field concentration.
In the first example, we consider an ellipse domain Ω, i.e.,
Ω:={z=(x,y) | (x+3.2)^2/3^2 + (y+3.2)^2/4^2 < 1}.
Here, we set the wavenumber k=3 and the parameter σ=1. By solving the transmission eigenvalue problem (<ref>), we obtain τ=17.5285, where the regularization parameter is chosen as α=10^-5.
The second example is a rectangle with width a=4 and height b=8, that is,
Ω:={z=(x,y) | -5.1 < x < 1.1 , -4 < y < 4 },
where the parameters are set to be k=1 and σ=1/4. Furthermore, we have τ=16.2183, where the regularization parameter is chosen as α=10^-6.
In the final example, we consider a kite domain Ω, i.e.,
Ω:={z=(x,y) | (x,y)=(cos t+0.65cos 2t+1.4, 1.5 sin t), t∈[0, 2π]}.
Here, the parameters are set to be k=√(3) and σ=1/3, thus we get τ=19.9121 and the regularization parameter is chosen as α=10^-3.
From figures <ref>(a), <ref>(a) and <ref>(a), one can see that the total field is bounded and is a sufficiently precise approximation to the surface-localized eigenfunction and the trivial eigenfunction in the regions B and Ω, respectively. Furthermore, it can be observed that the gradient field ∇ u blows up near the scatterer and decays sharply once the total field leaves the region B, as depicted in figures <ref>(b), <ref>(b) and <ref>(b). To exhibit the imaging results clearly, we enlarge a portion of the image (the red dashed square domain in figures <ref>(b), <ref>(b) and <ref>(b)). It is clear that the gradient of the total field demonstrates the field concentration phenomenon in figures <ref>(c), <ref>(c) and <ref>(c). It is worth mentioning that the dashed circles presented in the figures are virtual objects and do not exist in practical applications. Moreover, there are no restrictions on the scale of the circles, and they can be of arbitrary scale. Therefore, our method can demonstrate good performance for multi-scaled scatterers.
§ SURFACE-OSCILLATING OF TRANSMISSION EIGENFUNCTIONS
In general, the magnitude of a wave field gradient is proportional to its wavenumber, as observed in plane waves such as e^ik𝐱·𝐝 with 𝐝 ∈𝕊^N-1. However, this relationship does not hold for surface-localized transmission resonant eigenfunctions. The oscillating behavior of these transmission eigenfunctions tends to concentrate in a neighborhood of the boundary, and the oscillating frequencies are much higher than the excitation frequency.
§.§ Auxiliary results
Before proving Theorem <ref>, we first introduce several auxiliary lemmas to facilitate the main proof.
The following integral formula for any m>0 and y ⩾ 0 holds:
∫_0^y [ J'^2_m(x)x + (m^2/x)J_m^2(x) ] dx = ∫_0^y J^2_m-1(x)x dx - m J_m^2(y).
The recurrence J'_m(x) = J_m-1(x)-m/xJ_m(x) gives
∫_0^y [ J'^2_m(x)x + (m^2/x)J_m^2(x) ] dx = ∫_0^y [ ( J_m-1(x)- (m/x)J_m(x))^2 x + (m^2/x)J_m^2(x) ] dx
= ∫_0^y [ J^2_m-1(x)x -2mJ_m(x)J'_m(x) ] dx .
Then, using integration by parts again, we get the equality.
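The identity can also be checked numerically; the short SciPy sketch below compares both sides by quadrature for an arbitrarily chosen order and endpoint (these specific values are illustrative only).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jvp

m, y = 3, 5.0
# left-hand side: integral of J'_m(x)^2 * x + (m^2/x) * J_m(x)^2
lhs, _ = quad(lambda x: jvp(m, x) ** 2 * x + m ** 2 / x * jv(m, x) ** 2, 0, y)
# right-hand side: integral of J_{m-1}(x)^2 * x, minus m * J_m(y)^2
rhs, _ = quad(lambda x: jv(m - 1, x) ** 2 * x, 0, y)
rhs -= m * jv(m, y) ** 2
assert np.isclose(lhs, rhs)  # both sides agree to quadrature accuracy
```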
<cit.>
For real x ∈ [-1,1] and integer l,m with 1 ⩽ l ⩽ m, it holds that
1/√(2.22(l+1)) < max_x ∈ [-1,1]| P(m,l;x) |:=max_x ∈ [-1,1]|√((m-l)!/(m+l)!)P_m^l(x)| < 2^5/4/π^3/41/l^1/4.
There exists an absolute maximum x_0∈ [0,1) such that |P_m^l(x_0)| = max_x∈ [-1,1]|P_m^l(x)| satisfying
1 - (1.11(l+1))^2/(m+1/2)^2⩽ x_0 ⩽ 1- l^2/m(m+1).
§.§ proof of Theorem <ref>
The existence of a sequence of transmission eigenfunctions and their surface-localization have been proved in Lemma <ref>. We focus on the surface oscillating behaviors of transmission eigenfunctions here. The transmission eigenfunctions share the same mathematical representations for any given radius r_0 and the proof is similar. For simplicity, we assume r_0 = 1.
The transmission eigenfunctions (v_m,w_m) in ^2 are given in the formula (<ref>). The gradient formula in polar coordinates yields
∇ v_m = β_m (kJ'_m(kr)r̂ +im/rJ_m(kr)θ̂)e^imθ ,
∇ w_m = α_m (kn_mJ'_m(kn_mr)r̂ +im/rJ_m(kn_mr)θ̂)e^imθ ,
where (r̂,θ̂) are the orthonormal unit vectors in polar coordinates.
By using the orthogonality of {e^ m θ}_m ∈ in the unit circle, one has
∇ v_m_L^2(B_ξ)^2 =β_m^2∫_0^kξ J'^2_m(r)r +m^2/rJ^2_m(r) r.
By using the asymptotic formulas of Bessel functions (<ref>) when m is sufficiently large, it holds that
∇ v_m_L^2(B_ξ)^2/∇ v_m_L^2(B_1)^2 = ∫_0^kξ J'^2_m(r)r +m^2/rJ^2_m(r) r/∫_0^k J'^2_m(r)r +m^2/rJ^2_m(r) r∼∫_0^kξ r^2m-1r/∫_0^k r^2m-1r∼ξ^2m→ 0 m →∞.
So the {∇ v_m} are also surface-localized, especially for sufficiently large m. For w_m,
∇ w_m_L^2(B_ξ)^2 = α_m^2 ∫_0^ξ( k^2n_m^2J'^2_m(kn_mr) + m^2/r^2J^2_m(kn_mr))r r
= α_m^2 ∫_0^kn_mξ J^2_m-1(r)r r - mJ_m^2(kn_mξ).
where we have used the Lemma <ref>.
By following a similar argument as (<ref>), it holds that
∫_0^kn_mξ J^2_m-1(r)r r - mJ_m^2(kn_mξ) ⩽1/2J^2_m-1(kn_mξ) (kn_mξ)^2,
and
∫_0^kn_m J^2_m-1(r)r r - mJ_m^2(kn_mξ) ⩾∫_0^m-1 J^2_m-1(r)r r - mJ_m^2(m-1)
⩾1/2(m-1)^2J_m-1^3(m-1)/J_m-1(m-1)+2(m-1)J'_m-1(m-1)- mJ_m-1^2(m-1),
where we have used the fact J_m(m-1) <J_m-1(m-1). Hence,
∇ w_m_L^2(B_ξ)^2/∇ w_m_L^2(B_1)^2 = ∫_0^kn_mξ J^2_m-1(r)r r - mJ_m^2(kn_mξ)/∫_0^kn_m J^2_m-1(r)r r - mJ_m^2(kn_m)
⩽J^2_m-1(kn_mξ) (kn_mξ)^2/(m-1)^2J_m-1^3(m-1)/J_m-1(m-1)+2(m-1)J'_m-1(m-1)- 2mJ_m-1^2(m-1)
⩽J^2_m-1(kn_mξ)/J_m-1^2(m-1) (kn_mξ)^2/(m-1)^21/J_m-1(m-1)/J_m-1(m-1)+2(m-1)J'_m-1(m-1)-2m/(m-1)^2.
From the asymptotic formula for J_m-1(m-1) and J'_m-1(m-1) (see 9.3.31 in <cit.>), we can obtain
J_m-1(m-1)/J_m-1(m-1)+2(m-1)J'_m-1(m-1)-2m/(m-1)^2
= (m-1)^-2/3(C + (m-1)^-1/3) ,
where C is some positive constant.
Note that kn_mξ <(m-1) and following a similar argument as (<ref>), we can readily get
∇ w_m_L^2(B_ξ)/∇ w_m_L^2(B_1) ⩽ C (m-1)^2/3Ai((m-1)^2/3ζ_ξ) ⩽ C (m-1)^1/2 e^-2/3ζ_ξ^3/2(m-1)→ 0 m →∞.
where C is some constant only depending on k,ξ. Then {∇ w_m}_m ∈ are also surface-localized.
We now prove the oscillating frequencies of (v_m,w_m) are much higher than the excitation frequency k.
The normalized condition v_m_L^2(B_1) = 1 shows that
∇ v_m = β_m (kJ'_m(kr)r̂ +im/rJ_m(kr)θ̂)e^imθ
= 1/√(2π)kJ_m(k)/√(∫_0^kJ_m^2(r)rr)(kJ'_m(kr)/J_m(k)r̂ +im/rJ_m(kr)/J_m(k)θ̂)e^imθ
First, the formula of (<ref>) yields
kJ_m(k)/√(∫_0^kJ_m^2(r)rr)∼k^m+1/√(1/2(m+1)k^2m+2)∼ 2√(m+1).
Moreover, the Bessel function J_m(x) and its derivative J'_m(x) are strictly monotonically increasing. Then ∇ v_m attains its maximum at r=1. By using the asymptotic formula (<ref>) again, it readily holds that
∇ v_m_L^∞(Σ_ξ)/k⩾ C_k,s_0 m^3/2→∞ as m →∞.
It is remarked that the lower bound (<ref>) is in fact optimal and ∇ v_m attains the supremum on the boundary r=1.
For ∇ w_m, the relation between α_m and β_m together with the normalized condition v_m_L^2(B_1) = 1 gives
∇ w_m = α_m (kn_mJ'_m(kn_mr)r̂ +im/rJ_m(kn_mr)θ̂)e^imθ
= 1/√(2π)kJ_m(k)/√(∫_0^kJ_m^2(r)rr)(kn_mJ'_m(kn_mr)/J_m(kn_m)r̂ +im/rJ_m(kn_mr)/J_m(kn_m)θ̂)e^imθ.
The maximum formula of Bessel functions and their derivatives
max_x J_m(x) = J_m(j'_m,1) ∼ m^-1/3 and max_x J'_m(x)∼ m^-2/3,
together with (<ref>), (<ref>) yields
∇ w_m_L^∞(Σ_ξ)/k = k/√(2π)e^imθ/√(∫_0^kJ_m^2(r)r dr)J_m(k)/J_m(kn_m)n_mJ_m'(kn_mr)r̂+ im/rkJ_m(kn_mr)θ̂_L^∞(Σ_ξ)
⩾ C_k,s_0 m^3/2→∞ as m →∞.
It is remarked that ∇ w_m attains the maximum at r_m :=j'_m,1/kn_m where r_m → 1 as m→∞. This is a circle with radius r_m which is very close to, but does not touch, the boundary r=1.
The entire proof in three dimensions follows a similar structure to that in two dimensions, with some necessary modifications due to technical calculations. We will outline the required modifications in the subsequent sections.
The corresponding gradient ∇ v_m in spherical coordinate is
∇ v_m^l = ∂ v_m^l/∂ rr̂ + ∇_(θ,ψ) v_m^l = β_m^l( k j'_m(kr)Y_m^lr̂ + j_m+1/2(kr) ∇_(θ,ψ)Y_m^l).
Since r(m(m+1))^-1/2∇_(θ,ψ)Y_m^l are a sequence of orthonormal vector spherical harmonics, then
∇ v_m^l ^2_L^2(B_ξ) = (β_m^l)^2 (∫_0^ξ j'^2_m(kr)k^2r^2 + m(m+1)j^2_m(kr) r)
=(β_m^l)^2/k(∫_0^kξ j'^2_m(r)r^2 + m(m+1)j^2_m(r) r) .
Then
∇ v_m^l ^2_L^2(B_ξ)/∇ v_m^l ^2_L^2(B_1) = ∫_0^kξ j'^2_m(r)r^2 + m(m+1)j^2_m(r) r/∫_0^k j'^2_m(r)r^2 + m(m+1)j^2_m(r) r
∼∫_0^kξr^2mr/∫_0^kr^2mr∼ξ^2m+1→ 0 as m →∞.
Similarly, the representation of ∇ w_m^l is
∇ w_m^l = ∂ w_m^l/∂ rr̂ + ∇_(θ,ψ) w_m^l = α_m^l( kn_m j'_m(kn_mr)Y_m^lr̂ + j_m(kn_mr) ∇_(θ,ψ)Y_m^l).
With the relation j_m(x) = √(π/2x)J_m+1/2(x), the L^2-norm of ∇ w_m^l is given by
∇ w_m^l ^2_L^2(B_ξ) = (α_m^l)^2 (∫_0^ξ j'^2_m(kn_mr)k^2n_m^2r^2 + m(m+1)j^2_m(kr) r)
= π/2(α_m^l)^2/kn_m(∫_0^kn_mξ J'^2_m+1/2(r)r + (m+1/2)^2/rJ^2_m+1/2(r) r - 1/2J^2_m+1/2(kn_mξ))
=π/2(α_m^l)^2/kn_m( ∫_0^kn_mξ J^2_m-1/2(r)rr - (m+1)J^2_m+1/2(kn_mξ) ).
By using the result in (<ref>), it holds that
∇ w_m^l ^2_L^2(B_ξ)/∇ w_m^l ^2_L^2(B_1) =∫_0^kn_mξ J^2_m-1/2(r)rr - (m+1)J^2_m+1/2(kn_mξ)/∫_0^kn_m J^2_m-1/2(r)rr - (m+1)J^2_m+1/2(kn_mξ)
⩽∫_0^kn_mξ J^2_m-1/2(r)rr - (m+1)J^2_m+1/2(kn_mξ)/∫_0^m -1/2 J^2_m-1/2(r)rr - (m+1)J^2_m-1/2(m-1/2)
= J^2_m-1/2(kn_mξ)/J_m-1/2^2(m-1/2)(kn_mξ)^2/(m-1/2)^21/J_m-1/2(m-1/2)/J_m-1/2(m-1/2)+2(m-1/2) J'_m-1/2(m-1/2) - 2 m+1/(m-1/2)^2
⟶ 0 as m →∞.
In order to prove that the oscillating frequencies of (v_m^l,w_m^l) are much higher than the excitation frequency k, we need a clearer representation of the transmission eigenfunctions. First, the normalized spherical harmonic Y_m^l has the explicit expression
Y_m^l(θ,φ) = √(2m+1/4π(m-|l|)!/(m+|l|)!)P_m^|l|(cosθ) e^ilφθ∈ [0,π] , φ∈ [0,2π) , -m⩽ l ⩽ m,
where P_m^|l| are the associated Legendre polynomials. The normalized condition v_m^l_L^2(B_1)=1 gives
β_m^l = 1/√(∫_0^1 j^2_m(kr)r^2 dr) = √(2/π)k^3/2/√(∫_0^k J^2_m+1/2(r)r dr).
Moreover, the corresponding gradient ∇ v_m^l in spherical coordinates is
∇ v_m^l = ∂ v_m^l/∂ rr̂ +1/r∂ v_m^l/∂θθ̂ + 1/r sinθ∂ v_m^l/∂φφ̂
= β_m^l√(2m+1/4π(m-|l|)!/(m+|l|)!)r^-3/2e^ilφ[ (-1/2J_m+1/2(kr)+krJ'_m+1/2(kr) ) P_m^|l|(cosθ)r̂.
. + J_m+1/2(kr)P_m^|l|(cosθ)/θθ̂ + J_m+1/2(kr)P_m^|l|(cosθ) l/sinθφ̂]
:= β_m^lJ_m+1/2(k)√(2m+1/4π)(I_1r̂ +I_2θ̂ + I_3φ̂),
where r̂ , θ̂ , φ̂ are the orthonormal unit vectors in spherical coordinates.
Next, we need to estimate I_i,i=1,2,3 in the above formula. By using Lemma <ref>, we first have
sup_x ∈Σ_ξI_1 = sup_∈Σ_ξ√((m-|l|)!/(m+|l|)!)1/J_m+1/2(k)| r^-3/2e^ilφ(krJ'_m+1/2(kr)-1/2J_m+1/2(kr))P_m^|l|(cosθ) |
⩾ Cm+1/2/(|l|+1)^1/2 |l| ⩾ 1,
where we used the asymptotic formula of Bessel functions and its derivatives for large order.
It is clear that (<ref>) also holds for l=0 since the Legendre polynomials satisfy max_x ∈ [-1,1] P_m(x) =1. For I_3, we evaluate at the maximum point x_0 to get the lower bound. Letting x_0 =cosθ_0, it holds from Lemma <ref> that
sinθ_0 = √(1-x_0^2)⩽1.11(|l|+1)/m+1/2 1 ⩽ |l|⩽ m.
Then
sup_∈Σ_ξ I_3 ⩾sup_x ∈Σ_ξ√((m-|l|)!/(m+|l|)!)1/J_m+1/2(k)| r^-3/2e^ilφJ_m+1/2(kr)(sinθ_0)^-1lP_m^|l|(cosθ_0)|
⩾ Cm+1/2/(|l|+1)^1/2|l|/|l|+1 |l| ⩾ 1,
and sup_∈Σ_ξ I_3 =0 when l=0.
We shall estimate I_2 in several cases. When 2 ⩽ |l| ⩽ m-1, we estimate the upper bound of the derivative of the associated Legendre polynomial. It holds that
sup_θ∈ [0,π] |√((m-|l|)!/(m+|l|)!)P_m^|l|(cosθ)/θ| = sup_θ∈ [0,π]|√((m-|l|)!/(m+|l|)!)P_m^|l|(cosθ)/(cosθ)sinθ|
= sup_θ∈ [0,π]1/2√((m-|l|)!/(m+|l|)!)|(m+|l|)(m-|l|+1)P_m^|l|-1(cosθ) - P_m^|l|+1(cosθ) |
=sup_θ∈ [0,π]√((m-|l|)!/(m+|l|)!)||l|cosθ/sinθP_m^|l|(cosθ) - P_m^|l|+1(cosθ) |
⩽sup_θ∈ [0,π]√((m-|l|)!/(m+|l|)!)||l|/sinθP_m^|l|(cosθ) | +C (m+1/2) sup_θ∈ [0,π]| P(m,|l|;cosθ)|.
where we have used the fact that the maximum of P(m,|l|+1;cosθ) is uniformly less than that of P(m,|l|;cosθ) with respect to m and l in the last line <cit.>.
For the other cases |l|=m and |l|=1, direct calculations show that
sup_θ∈ [0,π]|√(1/(2m)!)P^m_m(cosθ)/θ| = √(2m)sup_θ∈ [0,π]| P(m,m-1;cosθ) |
sup_θ∈ [0,π]√((m-1)!/(m+1)!)|P^1_m(cosθ)/θ| = sup_θ∈ [0,π]|P_m(cosθ) - √((m+2)(m-1))P(m,2;cosθ)|.
Comparing the formulas (<ref>),(<ref>), (<ref>) with the formula (<ref>), one can find that the upper bound of I_2 can be uniformly controlled by that of I_3 when |l| ⩾ 1, i.e.
sup_x ∈Σ_ξ I_2 =sup_∈Σ_ξ√((m-|l|)!/(m+|l|)!)1/J_m+1/2(k)| r^-3/2e^ilφJ_m+1/2(kr)P_m^|l|(cosθ)/θ| ⩽ C sup_∈Σ_ξ I_3 .
While l=0, it holds that
sup_θ∈ [0,π]|P_m(cosθ)/θ| = √(m(m+1))sup_θ∈ [0,π]|P(m,1;cosθ) |.
Combining (<ref>) , (<ref>), (<ref>), (<ref>) (<ref>), one can deduce that
∇ v_m^l_L^∞(Σ_ξ)/k =J_m+1/2(k)/√(∫_0^1 J_m+1/2(kr)r r)√(2m+1/4π k^2)√((I_1)^2 +(I_2)^2 + (I_3)^2)_L^∞(Σ_2)
⩾ C(m+1/2)^2/(|l|+1)^1/2⩾ C(m+1/2)^3/2→∞.
when m →∞. In fact, ∇ v_m^l attains the supremum on the boundary sphere r =1.
From the relation between α_m^l and β_m^l, we can obtain
α_m^l = j_m(k)/j_m(kn_m)β_m^l = √(n_m)2/πJ_m+1/2(k)/J_m+1/2(kn_m)k/√(∫_0^k J^2_m+1/2(r)r r).
We can get the coefficient of w_m^l through the relation of α_m^l and β_m^l. Then the gradient of w_m^l in spherical coordinates is
∇ w_m^l = α_m^l √(2m+1/4π(m-|l|)!/(m+|l|)!)r^-3/2e^ilφ[ ( kn_mrJ'_m+1/2(kn_mr) -1/2J_m+1/2(kn_mr) )P_m^|l|(cosθ)r̂
+ J_m+1/2(kn_mr)P_m^|l|(cosθ)/θθ̂ + J_m+1/2(kn_mr)P_m^|l|(cosθ) l/sinθφ̂]
:=α_m^l√(2m+1/4π)(L_1r̂ +L_2θ̂ + L_3φ̂).
We note the fact
kn_m ∈ (j_m+1/2,s_0, j_m+1/2,s_0+1), with j_m+1/2,s_0 = m+1/2 + a_s_0(m+1/2)^1/3 + 𝒪((m+1/2)^-1/3) >j_m+1/2,1 > j'_m+1/2,1,
where a_s_0 is some positive constant. Then J_m+1/2(x) satisfies
max_x J_m+1/2(x) = J_m+1/2(j'_m+1/2,1) ∼ (m+1/2)^-1/3 and max_x J'_m+1/2(x)∼ (m+1/2)^-2/3.
Hence, we can conduct similar estimates as I_i for L_i,i=1,2,3 to get the lower gradient bound and
∇ w_m^l_L^∞(Σ_ξ)/k⩾ C(m+1/2)^13/6/(|l|+1)^1/2⩾ C (m +1/2)^5/3→∞ as m →∞.
where C is a constant independent of m and l.
§ ACKNOWLEDGMENT
We are grateful to Huaian Diao and Yan Jiang for their insightful comments during discussions about this work.
|
http://arxiv.org/abs/2409.02743v2 | 20240904142227 | Efficient Image Compression Using Advanced State Space Models | [
"Bouzid Arezki",
"Anissa Mokraoui",
"Fangchen Feng"
] | eess.IV | [
"eess.IV"
] |
Efficient Image Compression Using Advanced State Space Models
Bouzid Arezki
L2TI Laboratory
University Sorbonne Paris Nord
Villetaneuse, France
[email protected]
Anissa Mokraoui
L2TI Laboratory
University Sorbonne Paris Nord
Villetaneuse, France
[email protected]
Fangchen Feng
L2TI Laboratory
University Sorbonne Paris Nord
Villetaneuse, France
[email protected]
September 4, 2024
============================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Transformers have led to learning-based image compression methods that outperform traditional approaches. However, these methods often suffer from high complexity, limiting their practical application. To address this, various strategies such as knowledge distillation and lightweight architectures have been explored, aiming to enhance efficiency without significantly sacrificing performance. This paper proposes a State Space Model-based Image Compression (SSMIC) architecture. This novel architecture balances performance and computational efficiency, making it suitable for real-world applications. Experimental evaluations confirm the effectiveness of our model in achieving a superior BD-rate while significantly reducing computational complexity and latency compared to competitive learning-based image compression methods.
Image Compression, State Space Models, Computational Complexity, Rate-Distortion
§ INTRODUCTION
Visual compression represents a fundamental challenge in multimedia processing. Over the past few decades, classical standards have predominantly been employed including BPG, JPEG, JPEG2000, H.265/HEVC, and H.266/VVC. With the introduction of deep neural network architectures such as Convolutional Neural Networks (CNNs) <cit.> and Transformers <cit.>, learning-based compression methods have emerged, demonstrating continuously improving performance and garnering interest over traditional approaches. These architectures typically include a two-level hierarchical variational autoencoder with a hyper-prior as the entropy model. They consist of two sets of encoders and decoders: one for the generative model and another for the hyper-prior model. However, deep learning-based compression approaches often suffer from high complexity, which limits their practicality in real-world applications. Although it is common practice to propose scaled-down models by choosing a relatively smaller model size, these models, while less complex and faster, often significantly sacrifice the compression performance <cit.>. Other approaches have been deployed to accelerate these models while maintaining their performance. Knowledge distillation is one effective method for accelerating neural networks across various computer vision tasks <cit.>. In this context, EVC <cit.> used a mask decay method based on teacher-student training. Another promising direction involves the development of lightweight architectures <cit.>. Recent advancements in lightweight attention mechanisms have resulted in architectures with improved inference times for image compression <cit.>. Other noteworthy contributions include <cit.>, where the authors focus on reducing decoding complexity through the proposal of shallow decoding transforms. Additionally, the work in <cit.> focuses on improving the efficiency of the entropy model.
Recently, the State Space model and its variant, the Mamba model, have garnered significant attention in the field of computer vision. State Space Models (SSMs) were initially introduced in deep neural networks for sequence modeling <cit.>, unifying the strengths of previous sequence models, including Continuous-Time Models (CTMs), Recurrent Neural Networks (RNNs), and CNNs. Despite their potential, SSMs have not seen widespread use due to their high computational and memory requirements, which stem from the latent state representation connecting the input and output. Mamba <cit.> addresses these limitations by integrating a selection mechanism into the structured variants of SSMs, enhancing their context-based reasoning ability. In this paper, we propose a State Space Model-based Image Compression (SSMIC) architecture focusing on rate-distortion performance, computational complexity, and latency. The contributions of this paper are as follows:
* SSMIC balances compression efficiency, computational complexity, and latency for practical multimedia processing applications.
* SSMIC integrates SSMs from the Mamba model into the image compression pipeline, enhancing contextual reasoning while managing computational and memory requirements.
* SSMIC is a lightweight architecture designed for efficient real-time image compression on resource-constrained devices.
* Extensive experiments across benchmark datasets show that SSMIC achieves competitive compression performance with reduced computational complexity and latency compared to existing methods. Fig. <ref>, depicting the trade-off between BD-rate and computational complexity on the Kodak dataset, validates the standing of SSMIC among competitive state-of-the-art methods.
§ RELATED WORKS
§.§ Deep Learning-based Image Compression
Among the early works, the authors of <cit.> introduced a CNN-based two-level hierarchical architecture. The authors of <cit.> incorporated an attention mechanism into the image compression framework by introducing self-attention in the hyper-prior model. A more sophisticated approach, the Swin block <cit.>, has been proposed for use in both the generative and hyper-prior models <cit.>. Additionally, the authors of <cit.> proposed a Transformer-CNN Mixture (TCM) block, which leverages the strengths of both transformers and CNNs by splitting the feature dimension into separate branches for each. Although TCM improves compression performance compared to transformer-only architectures, it does not reduce encoding or decoding time.
§.§ Lightweight Attention
Ever since the attention mechanism for visual tasks was proposed and achieved impressive results, efforts to reduce its complexity have been underway. The Swin Transformer, for instance, introduced window-based self-attention to limit computational cost <cit.>. Mobile-Former <cit.> leverages the advantages of MobileNet <cit.> for local processing and the strengths of transformers for global interaction. Similarly, EdgeViT <cit.> and EdgeNeXt <cit.> combine local interaction (convolution) with global self-attention. MobileViT <cit.>, another lightweight, general-purpose vision transformer for mobile devices, uses a multi-scale sampler for efficient training. Additionally, LVT <cit.> develops enhanced self-attention mechanisms to improve computational efficiency. A ladder self-attention and progressive shift mechanism was proposed in <cit.> for lightweight transformers. In <cit.>, an efficient long-range attention network was designed for image super-resolution tasks, which inspired <cit.> for image compression. Other works focused on image compression as well, such as <cit.>, which proposed a Swin-like block without positional encoding, thereby reducing the number of trainable parameters.
§.§ Mamba for Visual Compression
The most closely related work to our approach is the recently proposed MambaVC <cit.>, which incorporates the Mamba block into the image compression architecture to achieve promising results. Despite this similarity, our design differs significantly from theirs. While MambaVC focuses on improving compression performance, our approach is distinct in that it emphasizes designing a computationally efficient compression model. This difference in motivation results in unique design choices throughout the architecture, which will be detailed in the following section.
§ NOVEL STATE SPACE MODEL-BASED IMAGE COMPRESSION (SSMIC)
§.§ Preliminaries
Before presenting our SSMIC architecture, we introduce some useful information hereafter.
The SSM transformation in S4 <cit.> originates from the classical state space model, which maps a one-dimensional input signal x(t) ∈ℝ to a one-dimensional output signal y(t) ∈ℝ through a latent state h(t) ∈ℝ^N of dimension N:
h'(t) = 𝐀h(t)+ 𝐁x(t),
y(t) = 𝐂h(t),
where 𝐀∈ℝ^N × N, 𝐁∈ℝ^N × 1,
𝐂∈ℝ^1 × N are parameters of neural networks in deep learning.
To deal with the discrete input sequence x= [x_0,x_1, …, x_L-1] ∈ℝ^L, the parameters in equation (<ref>) are discretized using a step size Δ, which represents the resolution of the continuous input x(t) <cit.>. In particular, the continuous parameters 𝐀 and 𝐁 are converted into the discretized parameters 𝐀̄ and 𝐁̄ by the zero-order hold (ZOH) technique, defined as:
𝐀̄ = exp(Δ𝐀),
𝐁̄ = (Δ𝐀)^-1 (exp(Δ𝐀)-𝐈)·Δ𝐁.
After discretizing 𝐀 and 𝐁 to 𝐀̄ and 𝐁̄, equation (<ref>) can be reformulated as:
h_t = 𝐀̄h_t-1 + 𝐁̄x_t,
y_t = 𝐂h_t.
SSMs can be efficiently computed using RNNs. This recursive process can be reformulated and computed as a convolution:
𝐊 = (𝐂𝐁̄, 𝐂𝐀̄𝐁̄,…, 𝐂𝐀̄^L-1𝐁̄),
y=x*𝐊,
where L denotes the length of the input sequence x; and 𝐊∈ℝ^L is the SSM convolution kernel.
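The ZOH discretization and the equivalence between the recurrent and convolutional views can be illustrated with a small NumPy sketch. The state matrix chosen here is an arbitrary stable diagonal matrix, not the S4/Mamba parameterization, so the example only demonstrates the mechanics of the equations above.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, delta):
    # Abar = exp(delta*A); Bbar = (delta*A)^{-1}(exp(delta*A) - I) . delta*B
    Abar = expm(delta * A)
    Bbar = np.linalg.solve(delta * A, Abar - np.eye(A.shape[0])) @ (delta * B)
    return Abar, Bbar

def ssm_scan(Abar, Bbar, C, x):
    # linear recurrence h_t = Abar h_{t-1} + Bbar x_t, y_t = C h_t
    h, ys = np.zeros((Abar.shape[0], 1)), []
    for xt in x:
        h = Abar @ h + Bbar * xt
        ys.append((C @ h).item())
    return np.array(ys)

def ssm_conv(Abar, Bbar, C, x):
    # equivalent global convolution with kernel K = (C Bbar, C Abar Bbar, ...)
    L = len(x)
    K = np.array([(C @ np.linalg.matrix_power(Abar, i) @ Bbar).item() for i in range(L)])
    return np.convolve(x, K)[:L]

N, L, delta = 4, 32, 0.1
rng = np.random.default_rng(1)
A = -np.diag(np.arange(1.0, N + 1))          # toy stable state matrix (illustrative)
B, C = rng.standard_normal((N, 1)), rng.standard_normal((1, N))
x = rng.standard_normal(L)
Abar, Bbar = discretize_zoh(A, B, delta)
assert np.allclose(ssm_scan(Abar, Bbar, C, x), ssm_conv(Abar, Bbar, C, x))
```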
Mamba <cit.> further incorporates data-dependence to capture contextual information in equation (<ref>) by proposing a novel parameterization method for SSMs that integrates an input-dependent selection mechanism, referred to as S6. Although the recurrent nature of SSMs restricts full parallelization, Mamba employs structural reparameterization tricks and a hardware-efficient parallel scanning algorithm to boost overall efficiency. As a result, many works have adapted Mamba from Natural Language Processing (NLP) to the vision domain <cit.>.
§.§ Proposed SSMIC architecture
The proposed SSMIC architecture follows a general two-level architecture. Specifically, the input image x is first encoded by the generative encoder y=g_a(x), and the hyper-latent z=h_a(y) is obtained through the encoder of the hyper-prior network. The quantized version of the hyper-latent ẑ is modeled and entropy-coded using a learned factorized model before being passed through h_s(ẑ). The output of h_s, along with the output of the context model, enters the entropy parameters network <cit.>, which generates the mean μ and scale σ parameters for a conditional Gaussian entropy model P(y|ẑ)= 𝒩(μ, σ^2) to model y. The quantized latent ŷ=Q(y) is finally entropy-coded (arithmetic encoding/decoding, AE/AD) and passed to the generative decoder x̂=g_s(ŷ) to reconstruct the image x̂. A detailed illustration is shown in Fig. <ref>.
We propose using the context model from <cit.>, which draws inspiration from traditional coding techniques that predict the probability of unknown codes based on previously decoded latents. Unlike the architecture in <cit.>, our choice is driven by the goal of developing a computationally efficient architecture. While more sophisticated channel-wise autoregressive models <cit.> can enhance performance, they are significantly more time-consuming <cit.>. We use the classical strategy of adding uniform noise to simulate the quantization operation (Q), which makes the operation differentiable <cit.>.
The generative and the hyper-prior encoder, g_a and h_a, are built with the patch merge block and the Visual State Space (VSS) block illustrated in Fig. <ref>. The patch merge block contains the Depth-to-Space operation <cit.> for down-sampling, a normalization layer, and a linear layer to project the input to a certain depth C_i. In g_a, the depth C_i of the latent representation increases as the network gets deeper, allowing for a more abstract representation of the image. The size of the latent representation decreases accordingly. In each stage, we down-sample the input feature by 2. Compared to the convolutional layer used in MambaVC <cit.> for a similar purpose, the chosen patch merge block is easier to implement and is also computationally friendly.
The VSS block, which was originally proposed in <cit.>, consists of a single network branch with two residual modules, mimicking the architecture of the vanilla Transformer block <cit.>. Specifically, each stage in our SSMIC consists of a sequence of VSS blocks, and the number of blocks in stage i is denoted as d_i (see Fig. <ref>). Given an input feature map 𝐟∈ℝ^H× W × C, we get 𝐟” from the first residual module:
𝐟” = 𝐟+𝐟',
where 𝐟' is obtained through multiple layers as follows:
𝐟' = MLP_2(LN_2(SS2D(σ(DWConv(MLP_1(LN_1(𝐟))))))).
As illustrated in Fig. <ref> (see (a) and (b)), the output of the VSS block is given by:
𝐟_out = MLP_3(LN_3(𝐟”)) + 𝐟”,
where LN represents the normalization layer; SS2D is the 2D selective scan module; σ is the SiLU activation <cit.>; DWConv is the depthwise convolution; and MLP is the learnable linear projection. Unlike the VSS block in <cit.>, we use RMSNorm <cit.> instead of LayerNorm (LN), since we found empirically that RMSNorm significantly improves the convergence speed during training.
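For concreteness, a schematic PyTorch sketch of this block is given below. It follows the composition in the equations above, but the SS2D selective scan is replaced by a placeholder identity module and the hidden width is an assumed value, so the snippet is illustrative only and not the actual SSMIC implementation.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Minimal RMSNorm over the channel (last) dimension."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps
    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class VSSBlock(nn.Module):
    """Schematic VSS block with two residual modules; SS2D is a stand-in here."""
    def __init__(self, dim, hidden=None, ss2d=None):
        super().__init__()
        hidden = hidden or 2 * dim                          # assumed expansion ratio
        self.norm1, self.mlp1 = RMSNorm(dim), nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.SiLU()
        self.ss2d = ss2d or nn.Identity()                   # placeholder for SS2D
        self.norm2, self.mlp2 = RMSNorm(hidden), nn.Linear(hidden, dim)
        self.norm3, self.mlp3 = RMSNorm(dim), nn.Linear(dim, dim)
    def forward(self, f):                                   # f: (B, H, W, C)
        x = self.mlp1(self.norm1(f))                        # LN_1 -> MLP_1
        x = x.permute(0, 3, 1, 2)                           # channels-first for DWConv
        x = self.act(self.dwconv(x)).permute(0, 2, 3, 1)    # DWConv -> SiLU
        x = self.mlp2(self.norm2(self.ss2d(x)))             # SS2D -> LN_2 -> MLP_2
        f2 = f + x                                          # first residual
        return f2 + self.mlp3(self.norm3(f2))               # second residual

out = VSSBlock(dim=64)(torch.randn(1, 16, 16, 64))
```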
We adhere to the selective scan approach proposed in <cit.>, which adapts the input-dependent selection mechanism <cit.> to vision data without reducing its advantages. SS2D involves three steps, as shown in Fig. <ref> (c).
Cross-scan: unfolds input features into sequences along four distinct traversal paths; Selective scan: processes each path with a distinct S6 in parallel; Cross-merge: subsequently reshapes and merges the resultant sequences to form the output feature, enabling effective integration of information from other pixels in different directions relative to each current pixel. This facilitates the establishment of global receptive fields in the 2D space.
§ EVALUATION AND ANALYSIS
To evaluate the compression efficiency of SSMIC, we propose a performance study across three aspects: rate-distortion (RD), BD-rate, computational complexity and latency.
§.§ Configuration Setup
The SSMIC model was trained on the CLIC20 training set <cit.>. The loss function used is L = D + λ R, where R represents the bitrate and D denotes the distortion. The Mean Squared Error
(MSE) in the RGB color space is used as the distortion metric. The Lagrangian multiplier λ governs the Rate-Distortion (RD) trade-off. We trained SSMIC with λ∈{100, 50, 30, 10}. Each training batch contains 8 random crops of size 256×256. We performed 1 million (1M) iterations using the ADAM optimizer, with the learning rate set to 10^-4. Our SSMIC architecture was implemented in Pytorch using CompressAI library <cit.>.
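For reference, the training objective can be written as the small PyTorch function below. The bit-cost input would come from the entropy model (e.g., the likelihoods produced by a CompressAI-style bottleneck), which is not reproduced here, so the function only sketches the L = D + λR combination described above.

```python
import torch

def rate_distortion_loss(x, x_hat, total_bits, lam):
    """L = D + lambda * R with MSE distortion in RGB and rate in bits per pixel.
    `total_bits` is the estimated bit cost of the quantized latents for the batch."""
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]   # B * H * W
    bpp = total_bits / num_pixels                       # rate term R
    mse = torch.mean((x - x_hat) ** 2)                  # distortion term D
    return mse + lam * bpp
```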
We evaluate the performance of the model on three datasets: Kodak <cit.>, JPEG-AI <cit.>, and the test set of CLIC20 <cit.> including both mobile and professional categories. During inference, all images were padded with zeros if their size is not a multiple of 256. We compare the proposed SSMIC against the following models: SwinT <cit.>, MambaVC <cit.>, LightweightLIC <cit.>, LIC_TCM <cit.>, SwinNPE <cit.>, MLIC+ <cit.> and ELIC <cit.>. These models are selected because they are either competitive in terms of compression performance or competitive in terms of the computational efficiency. We also show the compression performance of BPG444 <cit.> as a baseline. For LightweightLIC <cit.>, LIC_TCM <cit.>, SwinNPE <cit.>, and ELIC <cit.>, we evaluated their compression performance and complexity efficiency using their provided pre-trained models under the same configuration as SSMIC. For SwinT <cit.>, MambaVC <cit.>, and MLIC+ <cit.>, as pre-trained models were not available, we assessed their computational complexity using their untrained models and referenced the RD-curves from their respective papers to evaluate compression performance. The experiments were carried out on an A100 80 Go GPU and an Intel Xeon Gold 6330 3.10 GHz CPU.
§.§ Performance Comparison
We summarize in Table <ref> the BD-rates of SSMIC and the competitive state-of-the-art models across three datasets. The BD-rate was calculated covering approximately 0.4 to 1.2 bits per pixel (bpp), using VTM-15.0 <cit.> as a reference.
On average, SSMIC achieves a -21.75% BD-rate compared to VTM-15.0, a 4.17 percentage-point relative increase with respect to LIC_TCM <cit.>, which provides the best performance among the selected methods. However, the latter is much more demanding in terms of computational complexity and number of parameters. This is shown in Fig. <ref>, which provides the BD-rate (with VTM-15.0 as an anchor) versus the GMACs [MACs and FLOPs are calculated using the calflops library <https://github.com/MrYxJ/calculate-flops.pytorch>] of various approaches on the Kodak dataset. These observations confirm that our SSMIC offers a good trade-off between BD-rate performance and computational complexity.
We also evaluate the compression performance of different models across various bitrate ranges on the Kodak dataset in Fig. <ref>. The results show that the proposed model consistently achieves performance comparable to state-of-the-art methods while maintaining competitive computational efficiency. Indeed, we show in Tables <ref> and <ref> the computational complexity in terms of MACs[1] and FLOPs[1], respectively. The results are shown for three different resolutions, selected for their common usage and to avoid out-of-memory issues on the utilized GPU. It is clear that our SSMIC significantly reduces the computational complexity in terms of MACs and achieves competitive results in terms of FLOPs compared to SwinT <cit.>.
Table <ref> presents the average latency over 2,000 images at a resolution of 256 × 256 on the utilized GPU. Our model shows competitive decoding times compared to lightweightLIC <cit.>, while maintaining comparable encoding times.
§ CONCLUSION
In this paper, we proposed the State Space Model-based Image Compression (SSMIC) approach, which achieves competitive RD performance while significantly reducing computational complexity and latency, and which, with further optimizations, can potentially enable high-quality real-time visual data compression. SSMIC leverages the advantages of State Space Models (SSMs) inherited from the Mamba model, enhancing contextual reasoning while managing computational and memory requirements. On average across benchmark datasets, SSMIC achieves a substantial reduction in BD-rate compared to VTM-15.0, underscoring its effectiveness in practical applications.
|
http://arxiv.org/abs/2409.02906v1 | 20240904174742 | 21cm Epoch of Reionisation Power Spectrum with Closure Phase using the Murchison Widefield Array | [
"Himanshu Tiwari",
"Nithyanandan Thyagarajan",
"Cathryn M. Trott",
"Benjamin McKinley"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
§ ABSTRACT
The radio interferometric closure phases can be a valuable tool for studying cosmological HI from the early Universe. Closure phases have the advantage of being immune to element-based gains and associated calibration errors. Thus, calibration and errors therein, which are often sources of systematics limiting standard visibility-based approaches, can be avoided altogether in closure phase analysis. In this work, we present the first results of the closure phase power spectrum of HI 21-cm fluctuations using the Murchison Widefield Array (MWA), with ∼ 12 hours of MWA-phase II observations centered around redshift z≈ 6.79, during the Epoch of Reionisation. On analysing three redundant classes of baselines, namely 14 m, 24 m, and 28 m equilateral triads, our estimates of the 2σ (95% confidence interval) 21-cm power spectra are ≲ (184)^2 pseudo mK^2 at k_|| = 0.36 pseudo h Mpc^-1 in the EoR1 field for the 14 m baseline triads, and ≲ (188)^2 pseudo mK^2 at k_|| = 0.18 pseudo h Mpc^-1 in the EoR0 field for the 24 m baseline triads. The “pseudo” units denote that the length scale and brightness temperature should be interpreted as close approximations. Our best estimates are still 3-4 orders of magnitude higher than the fiducial 21-cm power spectrum; however, our approach provides promising estimates of the power spectra even with a small amount of data. These data-limited estimates can be further improved if more datasets are included in the analysis. The evidence for excess noise has a possible origin in baseline-dependent systematics in the MWA data that will require careful baseline-based strategies to mitigate, even in standard visibility-based approaches.
§ INTRODUCTION
The Epoch of Reionisation (EoR) is the period, spanning roughly 15 > z > 5.3, when the first stars and galaxies were formed in the early Universe and contributed to reionising the predominantly neutral intergalactic medium on cosmic scales <cit.>. It is also one of the least understood periods in the history of the Universe, mainly due to the lack of radiation influx from the first stars and galaxies, which is locally absorbed by the intervening medium. The hyperfine ground state of atomic Hydrogen (HI) produces a weak transition at ∼ 1420 MHz, popularly known as the 21-cm line. It is considered a very promising probe of the EoR due to the abundance of Hydrogen in the early Universe. The intervening medium is largely transparent to the redshifted 21-cm line; therefore, it provides one of the best avenues to infer the astrophysical properties of the IGM and the cosmology of the early Universe. As the neutral IGM gets ionised, the strength of the 21-cm signal weakens.
One can interpret the stages of cosmic reionisation by estimating the depletion in the redshifted 21-cm signal through cosmic time <cit.>. To detect this forbidden transition from the early Universe, several radio instruments such as the Murchison Widefield Array (MWA) <cit.>, Hydrogen Epoch of Reionization Array (HERA) <cit.>, LOw-Frequency ARray (LoFAR) <cit.>, Giant Metrewave Radio Telescope (GMRT) <cit.>, Precision Array for Probing the Epoch of Reionization (PAPER) <cit.>, Long Wavelength Array (LWA) <cit.>, Experiment to Detect the Global EoR Signature (EDGES) <cit.>, Shaped Antenna measurement of the background RAdio Spectrum (SARAS) <cit.>, Broad-band Instrument for Global HydrOgen ReioNization Signal (BIGHORNS) <cit.>, Large Aperture Experiment to Detect the Dark Age (LEDA) <cit.>, Dark Ages Radio Explorer (DARE) <cit.>, Sonda Cosmológica de las Islas para la Detección de Hidrógeno Neutro (SCI-HI) <cit.>, and Probing Radio Intensity at High-Z from Marion (PRIZM) <cit.> were built or are under construction. These instruments aim either to detect the sky-averaged 21-cm signal spectrum (Global signal) or to measure its spatial fluctuations. The former category of instruments can detect the overall IGM properties, whereas the latter can provide a detailed study of the three-dimensional topology of the EoR regime.
The spatial signatures are quantified through statistical measures such as the power spectrum, which can probe the 21-cm signal strength as a function of cosmological length scales (k-modes). In addition, the three-dimensional topology of the EoR can be studied via a two-dimensional 21-cm power spectrum <cit.>, which shows the variation of the 21-cm power spectrum along the line-of-sight and transverse axes. The significant challenges in detecting the 21-cm signal come from the foregrounds, ionospheric abnormalities, instrumental systematics, and Radio Frequency Interference (RFI), emitting in the same frequency range as the redshifted 21-cm signal from the EoR. Even in an ideal situation in which all the above factors are avoided or minimised, the instrument must still be calibrated against the bright foregrounds, requiring a calibration accuracy of ≳ 10^5 : 1 to reach the required HI levels.
Calibration accuracy is especially important for EoR observations using the MWA because of the presence of sharp periodic features in the bandpass produced by the polyphase filter bank used. The inability to accurately correct for these element-based bandpass structures significantly affects the power spectrum estimates <cit.>. The radio-interferometric closure phase has emerged as an alternative and independent approach to studying the EoR while addressing the calibration challenges. The main advantage of this approach is the immunity of closure phases to the errors associated with the direction-independent, antenna-based gains. Thus, calibration is not essential in this approach <cit.>. The closure phase in the context of the EoR was first investigated by <cit.> and further employed on the HERA data by <cit.>. The method has shown significant promise in avoiding serious calibration challenges, and with detailed forward modelling, one can ideally quantify the 21-cm power spectrum. This paper is the first attempt to utilise a closure phase approach on MWA-phase II observations. We followed the methods investigated by <cit.> and applied them to our datasets. This paper is organised as follows. In <ref>, we discuss the background of the closure phase. <ref> and <ref> of this paper explain the observations and forward modeling with simulations of the foregrounds, HI, and noise. In <ref>, we discuss the data processing and rectification, and finally, we present our results in <ref> and discuss them in <ref>.
§ BACKGROUND
In this section, we review the background of the interferometric closure phase in brief <cit.>. The measured visibility between two antenna factors at a given baseline (V_ij^ m) can be defined as the sum of true sky visibility and noise:
V_ij^ m(ν) = 𝐠_i(ν) V_ij^ T(ν)𝐠_j^*(ν) + V_ij^ N(ν);
where 𝐠_i(ν), 𝐠_j(ν) denote the element-based gain terms, {*} represents the complex conjugate, V_ij^ T(ν) is the true sky visibility, and V_ij^ N(ν) is the noise in the measurement. The indices {ij} correspond to the antennae {a,b} forming a baseline. The true sky visibility can be further decomposed into the foregrounds and faint cosmological HI visibilities.
V_ij^ T(ν) = V_ij^ FG(ν) + V_ij^ HI (ν)
In general, the foreground emission (≳ 10^4 K) is orders of magnitude brighter than the HI, and to reach the sensitivity limit of the HI, the gains (𝐠_i's) are required to be precisely calibrated down to the HI levels. This presents challenges to the direct visibility-based HI power spectrum analysis, as it requires accurate modelling of the foregrounds and mastery of the calibration techniques.
In radio interferometry, the term `closure phase' refers to the phase derived from the product of visibilities around a closed loop of N ≥ 3 antennas <cit.>. When N=3, it is also sometimes referred to as the bispectrum phase in the literature, which can be defined as,
ϕ_∇^ m(ν) = arg∏_ij=1^3 V_ij^ m(ν)
= arg∏_ij=1^3 [ 𝐠_i(ν) V_ij^ T(ν)𝐠_j^*(ν) + V_ij^ N(ν) ]
= arg∏_ij=1^3 V_ij^ T(ν) + noise-like terms
where {ij} runs through the antenna pairs {ab, bc, ca}. The gains of the individual antenna elements (𝐠_i's) are eliminated in the closure phase, leaving only the true sky phase (plus noise-like terms) as the contributing factor. The closure phase delay spectrum technique also exploits the fact that the foregrounds predominantly obey a smooth spectral behavior, whereas the cosmological HI creates spectral fluctuations. Thus, in the Fourier delay domain, the foreground signal strength (power) gets restricted to the lower delay modes. In contrast, the HI power can be observed at the higher delay modes, creating the distinction between these two components in the Fourier domain, from which the faint HI can be detected. Because of its advantages in avoiding element-based calibration errors and its processing simplicity, it promises to be an independent alternative for estimating the 21-cm power spectrum <cit.>.
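As a simple illustration of this gain cancellation, the following Python sketch computes the bispectrum phase of a triad and verifies that element-based (here phase-only) gains drop out; the visibility spectra are random toy arrays rather than MWA data.

```python
import numpy as np

def closure_phase(v_ab, v_bc, v_ca):
    """Bispectrum (closure) phase of a baseline triad from the three
    measured visibility spectra (arrays over frequency)."""
    return np.angle(v_ab * v_bc * v_ca)

rng = np.random.default_rng(42)
nchan = 768
v_true = [np.exp(1j * rng.uniform(-np.pi, np.pi, nchan)) for _ in range(3)]
g = [np.exp(1j * rng.uniform(-np.pi, np.pi, nchan)) for _ in range(3)]  # g_a, g_b, g_c
v_meas = [g[0] * v_true[0] * g[1].conj(),     # V_ab
          g[1] * v_true[1] * g[2].conj(),     # V_bc
          g[2] * v_true[2] * g[0].conj()]     # V_ca
assert np.allclose(closure_phase(*v_meas), closure_phase(*v_true))
```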
Ψ̃_∇(τ) = V_ eff∫ e^iϕ_∇(ν) W(ν) e^2π i ντ dν ,
where τ represents delay, the Fourier dual of the sampling frequencies, and V_ eff is the effective visibility, which can be obtained through the model foreground visibilities or a calibrated visibility. In our work, we used the former estimated through foreground simulations, which are discussed in the next section. Note that we only take the amplitude of the V_ eff, which acts as a scaling factor in the delay spectrum. W(ν) is the spectral window function; we used a Blackman-Harris window <cit.> modified to obtain a dynamic range required to sufficiently suppress foreground contamination in the EoR window <cit.>,
W(ν) = W_ BH(ν) ∗ W_ BH(ν) ,
where, W_ BH(ν) is the Blackman-Harris window function, and {∗} represents the convolution operation. For a given triad, V_ eff is estimated from the sum of inverse variance visibilities weighted over the window,
V_ eff^-2 = ∑_ij=1^3 ∫ V_ij(ν)^-2 W(ν) dν/∫ W(ν) dν .
The inverse squaring ensures the appropriate normalisation of visibilities, V_ij, taking noise into account. From the delay spectrum, we estimate the delay cross power spectrum by taking the product of the two delay spectra and converting it into [pseudo mK^2 h^-3Mpc^3] units by assimilating the cosmological and antenna-related factors as,
P_∇(k_||) = 2ℛ{Ψ̃_∇^*(τ)Ψ̃_∇^'(τ)}×(1/Ω B_ eff)(D^2Δ D/B_ eff)(λ^2/2k_B)^2 [pseudo mK^2 h^-3 Mpc^3] ,
where Ω is the antenna beam squared solid angle <cit.>,
B_ eff is the effective bandwidth of the observation, D and Δ D are the cosmological comoving distance and comoving depth corresponding to the central frequency and the bandwidth, respectively. k_|| is the wavenumber along the line-of-sight <cit.>,
k_|| = 2πτ B_eff/Δ D≈2πτν_r H_0 E(z)/c(1+z)^2 ,
where, ν_r is the redshifted 21-cm frequency, H_0 and E(z) are standard terms in cosmology.
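A minimal sketch of the delay-spectrum step defined above (the windowed Fourier transform of exp(iϕ_∇) with the modified Blackman-Harris window) is given below; the FFT sign convention and the window peak normalisation are illustrative choices rather than the exact pipeline conventions.

```python
import numpy as np
from scipy.signal.windows import blackmanharris

def closure_delay_spectrum(cphase, freqs, v_eff):
    """Windowed Fourier transform of exp(i*phi_closure), scaled by |V_eff|.
    The modified window is a Blackman-Harris self-convolution (BH * BH)."""
    nchan = freqs.size
    dnu = freqs[1] - freqs[0]
    bh = blackmanharris(nchan)
    w = np.convolve(bh, bh, mode="same")
    w /= w.max()                                   # peak-normalised (a choice)
    tau = np.fft.fftshift(np.fft.fftfreq(nchan, d=dnu))
    # np.fft uses exp(-2*pi*i*nu*tau); the sign only flips the delay axis
    psi = v_eff * np.fft.fftshift(np.fft.fft(np.exp(1j * cphase) * w)) * dnu
    return tau, psi

freqs = 167e6 + 40e3 * np.arange(768)              # MWA high band, 40 kHz channels
tau, psi = closure_delay_spectrum(np.zeros(freqs.size), freqs, v_eff=10.0)
```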
Ω B_ eff is related to the cosmological volume probed by the instrument and is defined as <cit.>
Ω B_ eff = ∬ |A(𝐬̂,ν)|^2 |W(ν)|^2 d^2 𝐬̂ dν ,
where A(𝐬̂,ν) is the frequency-dependent, directional power pattern of the antenna pair towards the direction, 𝐬̂, and W(ν) is the spectral window function. However, we approximated by separating the integrals to obtain Ω as <cit.>
Ω = ∫ |A(𝐬̂,ν_r)|^2 d^2 𝐬̂
and effective bandwidth, B_eff, as <cit.>
B_eff = ϵ B = ∫_-B/2^+B/2 |W(ν)|^2 dν ,
where, ϵ is the spectral window function's efficiency, and B=30.72 MHz is MWA's instantaneous bandwidth. For the MWA observations at the chosen band in this study, Ω≈ 0.076 Sr. For the modified Blackman-Harris window function adopted here, ϵ≈ 0.42 and hence, B_eff≈ 12.90 MHz.
The “pseudo” in Equation (<ref>) is used to note that the power spectrum estimated via the closure phase method is an approximate representation of the visibility-based power spectrum <cit.>. Further, we used the power spectrum as defined in <cit.> with the scaling factor 2 instead of 2/3[The definition of V_ eff^2 already accounts for the factor of 3] to correct for the effective visibility estimates.
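The conversion of a pair of delay spectra into the pseudo power spectrum above can be written compactly as follows. The comoving distance D, comoving depth ΔD, beam solid angle Ω and effective bandwidth are passed in as precomputed numbers (the beam and bandwidth values quoted in the text are used, while the cosmological distances shown are only rough illustrative values near z ≈ 6.8), and the Jy-to-mK conversion via λ²/(2k_B) is made explicit.

```python
import numpy as np

def pseudo_power_spectrum(psi1, psi2, omega, b_eff, d_c, delta_d, lam):
    """Pseudo power spectrum [pseudo mK^2 h^-3 Mpc^3] from two closure-phase
    delay spectra psi1, psi2 in Jy Hz. d_c and delta_d are the comoving
    distance and depth in h^-1 Mpc; lam is the wavelength in m."""
    k_b = 1.380649e-23                               # Boltzmann constant [J/K]
    jy_to_mk = 1e-26 * lam ** 2 / (2.0 * k_b) * 1e3  # Jy -> pseudo mK
    cross = 2.0 * np.real(psi1 * np.conj(psi2))
    return cross * (1.0 / (omega * b_eff)) * (d_c ** 2 * delta_d / b_eff) * jy_to_mk ** 2

psi = np.ones(768, dtype=complex)                    # placeholder delay spectrum [Jy Hz]
p_k = pseudo_power_spectrum(psi, psi, omega=0.076, b_eff=12.9e6,
                            d_c=6150.0, delta_d=140.0, lam=3e8 / 182e6)
```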
§ OBSERVATIONS
In this work, we used 493 zenith-pointed MWA-phase II compact observations from September 2016 under the MWA project . Each observation lasts 112 seconds in the MWA high-band frequency range of 167-197 MHz. Amongst these observations, 198 and 295 target the EoR0 (RA, Dec = 0h, -27deg) and EoR1 (RA, Dec = 4h, -30deg) fields, respectively. Figure <ref> shows the Local Sidereal Time (LST) and Julian Date (JD) of the observations. These amount to ≈ 6 and 9 hours of total observing time on the EoR0 and EoR1 fields, respectively. The MWA raw visibilities are stored in the format. To convert them into measurement sets (MS) or uvfits, we used [<https://github.com/MWATelescope/Birli>] (an MWA-specific software package that can perform data conversion, averaging in frequency and time, flagging, and other preprocessing steps). Using , we averaged the raw visibilities to 8 seconds at a frequency resolution of 40 kHz.
Finally, we output the raw (uncalibrated) visibilities as standard . Note that we are required to keep all frequency channels for the closure phase analysis; thus, we avoided flagging channel-based RFI (e.g., DTV) and coarse band edge channels (around every 1.28 MHz), which are usually affected by the bandpass[The MWA’s signal processing chain contains filterbanks that yield 24 coarse channels of 1.28 MHz over the full 30.72 MHz band. The fine polyphase filterbank shape results in poor bandpass characteristics at the coarse channel edges <cit.>.].
§ SIMULATIONS
The closure phases are not linear in the visibilities; thus, forward modelling is key to understanding the data. We incorporated simulations of the foregrounds (FG), HI, and antenna noise to provide cross-validation and comparison with the data. Forward modelling can help identify the excess noise and systematic biases in the data and provide an idealistic estimate for comparison.
§.§ foregrounds
The simulated FG are generated with the same parameters (i.e. matching LST, frequency and time resolution) as the observing data. In the first step, we generated the sky maps corresponding to each individual observation. We used the top 20,000 brightest radio sources from the PUMA catalogue <cit.> in the observing fields (EoR0, EoR1).
The catalogue includes point sources, Gaussians and shapelets. Note that our sky model does not account for the diffuse sky emission. Then, we generated the foreground sky visibilities and converted them to MWA-style . Initially, we experimented with various source counts (e.g., 15000, 25000, 45000) and their effect on the closure phase. We found the variation in closure phases saturates beyond 20000. Therefore, we settled for 20000 source counts in favour of faster computation. However, it should be noted that pinpointing the exact source count at which the closure phase saturates is challenging.
§.§ Neutral hydrogen
Next, we estimate the HI visibilities as observed by the MWA. In the limits of cosmic and sample variance, the characteristic fluctuations in the HI signal can be assumed to be the same across the sky; therefore, we can avoid simulating the HI box multiple times and instead use a single HI simulation box. The HI simulation was generated using 21cmFAST <cit.> with a simulation box size of 1.5 cGpc, corresponding to 50^∘× 50^∘ in the sky at a redshift of 6. Then, we passed the simulated voxel data cube to [<https://github.com/JLBLine/WODEN>] <cit.> to generate the MWA-style visibilities of the HI and output them as . The HI visibilities were first generated for each 1.28 MHz coarse band separately with the matching frequency and time resolution of 40 kHz and 8 seconds of the foreground simulations and then manually stitched together to get the total 30.72 MHz bandwidth. The final HI visibilities were passed to the processing pipeline for further analysis.
The foreground and HI visibilities are added together. We computed the closure phase spectra of the foregrounds as well as of the HI imprinted on the foregrounds. Figure <ref> shows the smooth foreground spectra in the closure phase and the fluctuations (∼ 0.01 milliradian) introduced by the presence of HI.
§.§ Noise
The total noise consists of sky noise and receiver noise components.
The receiver temperature for the MWA was assumed to be T_ rx=50 K <cit.>, while sky temperature follows a power law in our observing frequency range <cit.>,
T_ sky = T_0(ν/ν_0)^α; T_0=180 K, ν_0=180 MHz, α=-2.5 ,
T_ sys = T_ sky + T_ rx .
From the system temperature, we estimated the system-equivalent flux density (SEFD) using <cit.>,
SEFD = 2k_B T_ sys/A_ eff
and the RMS,
σ (ν) = SEFD/√(ΔνΔ t) ,
where, k_B is Boltzmann's constant, A_ eff is the effective collecting area of the telescope, and Δν, Δ t are the frequency resolution and integration time, respectively. The σ(ν) is used to generate the Gaussian random noise and converted into the complex noise visibilities with a normalisation factor of 1/√(2) in the real and imaginary parts. Finally, the noise visibilities were added to the corresponding foreground and HI visibilities to get the Model of the sky signal.
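A minimal sketch of this noise generation is given below; the effective collecting area per element is an assumed, illustrative number rather than the exact MWA figure, and the seed is arbitrary.

```python
import numpy as np

def noise_visibilities(freqs_hz, a_eff, dnu, dt, t_rx=50.0, seed=0):
    """Complex Gaussian noise visibilities [Jy] drawn from the
    frequency-dependent SEFD; a_eff is the effective area per element [m^2]."""
    kb = 1.380649e-23
    t_sky = 180.0 * (freqs_hz / 180e6) ** -2.5       # sky temperature power law [K]
    t_sys = t_sky + t_rx
    sefd = 2.0 * kb * t_sys / a_eff                  # [W m^-2 Hz^-1]
    sigma = sefd / np.sqrt(dnu * dt) / 1e-26         # RMS per visibility [Jy]
    rng = np.random.default_rng(seed)
    return (rng.normal(0.0, sigma) + 1j * rng.normal(0.0, sigma)) / np.sqrt(2.0)

freqs = 167e6 + 40e3 * np.arange(768)
v_noise = noise_visibilities(freqs, a_eff=21.0, dnu=40e3, dt=8.0)  # a_eff illustrative
```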
§.§ Baseline-dependent gains
In scenarios incorporating baseline-dependent gains, Eq. <ref> modifies to V_ij^ mod(ν) = V_ij^ m(ν)𝐠_ij(ν), where 𝐠_ij(ν) represents the baseline-dependent gain factor. We introduced the baseline-dependent gains using a simple uniform distribution in the phase of 𝐠_ij(ν) with unity amplitude. The scaling factor in the 𝐠_ij(ν) sampling is set to approximately match the RMS phase of the binned averaged closure phase of the DATA. We used a brute-force search to find the optimal scaling factor, with a single scaling factor for a given EoR field. Figure <ref> shows the binned averaged closure phase of the DATA and of the Model with 𝐠_ij. The contribution of the baseline-dependent gains to the binned averaged closure phase is about 0.05 rad in both the EoR0 and EoR1 fields, which is the RMS of the ratio between the Model with 𝐠_ij and the Model closure phases.
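A sketch of this gain model is given below, assuming a symmetric uniform phase distribution controlled by a single scale parameter; the exact sampling and scale used in the analysis may differ, and the visibility array here is a placeholder.

```python
import numpy as np

def baseline_gains(shape, scale, seed=0):
    """Unit-amplitude baseline-dependent gains with uniformly distributed
    phases; `scale` sets the phase spread and is tuned (brute force) so that
    the model's binned-average closure phase RMS matches the data."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-scale, scale, size=shape)
    return np.exp(1j * phase)

v_meas = np.ones((3, 768), dtype=complex)              # placeholder triad visibilities
v_mod = v_meas * baseline_gains(v_meas.shape, scale=0.05)
```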
From here onwards, we use two variants of models in the analysis: the first is a forward Model without baseline-dependent gains, and the second is a Model with 𝐠_ij.
§ DATA PROCESSING
The following section provides the basic data processing steps we incorporated into this analysis. The complete schematic flowchart of the data structure is shown in <ref>.
§.§ DATA binning
In the first step, the repeated night-to-night observations are sorted based on the Local Sidereal Time (LST) and Julian Date (JD); see figure <ref>. We determined the 14 m, 24 m, and 28 m redundant baseline triads from both hexagonal configurations of the MWA (see an example in figure <ref> (right panel)), for which the visibilities are estimated. A given triad {a, b, c} includes N_ vis = 3 visibilities, which correspond to {V_ ab, V_ bc, V_ ca}. The number of triads (N_ triads) varies depending on the baseline length. In our case, the N_ triads are 47, 32, 29 for the 14 m, 24 m, and 28 m baselines, respectively. Please note that, when accurately measured, the 24 m baseline is 14√(3)≈ 24.25 m; however, we use the former label for simplicity. On the other hand, the 14 m and 28 m baselines are nearly exact within the antenna positional tolerance of the MWA tiles. Each dual-polarisation observation was made for 112 seconds, which includes N_ timestamps=14, each with 8 seconds of averaged data at a frequency resolution of 40 kHz, providing a total of N_ channels=768 frequency channels over a bandwidth of 30.72 MHz. The entire set of observations can be structured as:
N_ obs≡{N_ LST , N_ JD, N_ timestamps,
N_ pol, N_ triads, N_ vis, N_ channels}
§.§ RFI flagging
The MWA high band (167-197 MHz) lies in the digital television (DTV) broadcasting band; thus, we expect RFI to be present in our dataset <cit.>, which in some cases can completely dominate the useful data from the observations. As mentioned in the previous sections, since our analysis required keeping all the frequency channels from our datasets, we did not perform any frequency channel-based RFI flagging in the data preprocessing step. Instead, we incorporated <cit.>, which is designed to detect faint RFI in the MWA data, to either discard the entire frequency band or keep it based on the RFI occupancy of the dataset. Instead of assuming a persistent RFI along a frequency channel, we check for the RFI along the observation time (i.e., along the N_ timestamps axis).
The flagging was performed based on the RFI z-score of an observation. Note that the z-score was estimated at successive adjacent timestamps to measure if any faint or persistent RFI was present in the data across all timestamps (see figure <ref>). We took a z-score threshold of 2.5, below which the data was considered good, and the channels where the z-score exceeded the threshold were considered RFI-affected. Then, we independently estimated the level of such RFI-affected channels along the frequencies at each timestamp and checked if the RFI occupancy at a given timestamp was more than 5%.
As a first step in selecting good timestamps, we chose an RFI occupancy level of 5% as a threshold. We discarded the entire timestamp if the RFI occupancy exceeded this threshold. Figure <ref> presents RFI occupancy for the entire dataset. Since z-scores are estimated relative to the successive adjacent timestamps, it might be difficult to quantify whether the RFI leakage between the adjacent timestamps (where one is good and another is bad) is there or not. Therefore, in the second step, we again flagged all the timestamps based on whether they had a bad neighboring timestamp. The flagging provides a masked array, further propagated through other data processing steps.
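The two-step timestamp selection can be summarised by the following sketch (illustrative only; it assumes the per-timestamp, per-channel z-scores have already been produced by the RFI detector).

import numpy as np

def flag_timestamps(z_scores, z_thresh=2.5, occ_thresh=0.05):
    """z_scores: array of shape (n_timestamps, n_channels).
    Returns a boolean mask, True where the timestamp is discarded."""
    rfi_channels = z_scores > z_thresh           # channels above the z-score threshold
    occupancy = rfi_channels.mean(axis=1)        # fraction of RFI-affected channels
    bad = occupancy > occ_thresh                 # step 1: occupancy cut at 5%
    # step 2: also flag timestamps adjacent to a bad one
    neighbours = np.zeros_like(bad)
    neighbours[:-1] |= bad[1:]
    neighbours[1:] |= bad[:-1]
    return bad | neighbours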
§.§ Triad Filtering
The presence of faulty tiles or dipoles/antennae corrupts the voltages recorded by the correlator; therefore, we are required to cross-check the visibilities at each antenna triad. The easiest way was to perform a geometric median-based rectification on the closure phase. We performed a two-step median rectification on the closure phase.
For a given observation, the data structure of the triad filtering can be shown as follows:
ϕ_∇≡{N_ pol, N_ triads, N_ timestamps, N_ channels}
First, we estimated the median absolute deviation (MAD = Median (|X_i - X̅|)) of the closure phases against the mean along the N_ triads,
ϕ_∇^ MAD≡{N_ pol, N_ triads, N_ timestamps, N_ channels}
and then we estimated the mean of the MAD (i.e. MAD_ mean) along the N_ channels axis. Finally, we estimated the MAD of the MAD_ mean.
μ{ϕ_∇^ MAD}≡{N_ pol, N_ triads, N_ timestamps, 1}
This step provides a single value of the MAD_ mean for a given triad at every timestamp.
MAD{ μ{ϕ_∇^ MAD}}≡{N_ pol, N_ triads, N_ timestamps, 1}
Finally, we masked a triad if its MAD_ mean exceeded 1σ≈ 1.4826 times the MAD, and considered that triad to be performing poorly at the given timestamp.
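One plausible reading of this two-step rectification, for a single polarisation, is sketched below (our own illustrative code; the axis ordering and variable names are assumptions rather than the pipeline implementation).

import numpy as np

def filter_triads(closure_phase, k=1.4826):
    """closure_phase: (n_triads, n_timestamps, n_channels) closure phases [rad].
    Returns a boolean mask (n_triads, n_timestamps); True marks a poorly
    performing triad at that timestamp."""
    # absolute deviation of every triad from the mean closure-phase spectrum over triads
    dev = np.abs(closure_phase - closure_phase.mean(axis=0, keepdims=True))
    mad_mean = dev.mean(axis=2)                        # mean over channels -> (triads, timestamps)
    # robust scale of mad_mean across triads at each timestamp
    scale = np.median(np.abs(mad_mean - np.median(mad_mean, axis=0)), axis=0)
    return mad_mean > k * scale                        # broadcast comparison over triads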
§.§ Coherent Time Averaging
The coherent averaging gives us an estimate of a timescale up to which the sky signal can be assumed identical and averaged coherently to improve sensitivity. It can be estimated by measuring the variation in the sky signal with time for a fixed pointing. Indeed, these vary with instrument and frequency of observation since the beam sizes are different. To check this with MWA, we used a continuous drifted sky simulation (FG and HI) under ideal observing settings (i.e., unity antenna gains, equal antenna element elevation from the ground) for ≈ 0.5 hours while keeping a fixed zenith pointing. The sky moves about ≈ 7.5^∘ in 0.5 hours, which is less than the MWA beam size of ≈ 9^∘-7.5^∘ at the shortest (14 m) triad, thus justifying the simulation time range of 0.5 hour.
We simply added the ideally simulated visibilities (FG and HI) and estimated the closure phase power spectrum as a function of time for higher delay (|τ|> 2 μ s). We used a fractional signal loss of 2% to measure the coherence threshold, a similar approach used by <cit.>.
The fractional loss in power is defined as,
1-η = ( <|ψ_∇(t, τ)|^2> - |<ψ_∇(t, τ)>|^2 ) / <|ψ_∇(t, τ)|^2>
The choice of |τ|> 2 μ s is to choose the timescale based on the loss of HI signal, which is where it would be significant, namely, the higher delay modes. Figure <ref> shows the fractional loss of coherent HI signal power as a function of averaging timescale for three triad configurations. We found the coherence averaging time for {∇: 14, 24, 28} triads to be approximately 408, 130, and 120 seconds, respectively.
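The coherence criterion can be evaluated with a short routine such as the following sketch (illustrative; psi denotes the complex closure spectrum restricted to |τ| > 2 μs, and the 2% threshold is applied outside the function).

import numpy as np

def fractional_loss(psi, n_avg):
    """psi: complex array (n_timestamps, n_delays) at high delays.
    n_avg: number of consecutive timestamps averaged coherently.
    Returns 1 - eta, the fractional loss of coherent power."""
    n = (psi.shape[0] // n_avg) * n_avg
    blocks = psi[:n].reshape(-1, n_avg, psi.shape[1])
    incoherent = np.mean(np.abs(blocks) ** 2)              # <|psi|^2>
    coherent = np.mean(np.abs(blocks.mean(axis=1)) ** 2)   # |<psi>|^2
    return (incoherent - coherent) / incoherent

The coherent averaging time is then the longest n_avg (converted to seconds) for which fractional_loss stays at or below 0.02.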
§ RESULTS
The bandpass structure of the MWA leaves significant systematics in the edge channels. Usually, these edge channels get flagged in the early data preprocessing step. However, we did not apply the flags since we require the full observing band for the delay spectrum analysis. The closure phase must be free of phase errors associated with the individual antenna elements. Thus, we expected the element-based bandpass structure to be removed in the closure phase. However, we still observed some residual bandpass structures in the closure phases; see figure <ref>. It can happen if there are some baseline-dependent gains present in the data in addition to antenna-dependent gains. We validated our hypothesis by developing a simple bandpass simulator, where we introduced an additional bandpass structure to MWA data that consisted of both element-based and baseline-dependent terms. The bandpass persists if baseline-dependent gains are present in the data, which otherwise gets completely removed if only antenna-based gains are present in the data (see appendix figure <ref>). In total, about 128 frequency channels are affected by the bandpass, which accounts for about 16% of the total bandwidth.
§.§ Mitigation of baseline-dependent bandpass effects
We investigated two approaches to overcome the presence of baseline-dependent edge channel effects: the first is the Non-uniform Fast Fourier Transform (NFFT), in which we avoid the bandpass-affected channels in the Fourier transform; the second is Gaussian Process Regression (GPR) based data-inpainting, used to estimate the missing channel information at the location of the spikes in the bandpass spectrum.
§.§.§ Gaussian Process Regression (GPR)
GPR <cit.> is a popular supervised machine learning method. GPR has previously been used in foreground subtraction <cit.>, and data inpainting <cit.> and characterisation <cit.> for 21-cm cosmology studies.
We follow a similar formalism to <cit.>. In the first step, we identified all the bad edge channels in the bandpass and flagged them in the closure phase. As mentioned before, edge channel contamination is present in the MWA data. The data at 40 kHz resolution has 768 frequency channels and 128 contaminated edge channels. We implemented GPR on the complex exponent of the closure phase exp(iϕ_∇ (ν)), with the real and imaginary components separately. The GPR implementation was rather simplistic since the complex phase varies between [0, 1]. We used the Matérn kernel as model covariance in our analysis. To optimise the kernel hyperparameters, we used the -based <cit.> to find the minima of the objective function. The optimisation was reiterated over ten times to ensure the kernel hyperparameters' convergence.
Note that for a given frequency range, we applied GPR to the entire N_ obs (see eq. <ref>) separately; therefore, the kernel Hyperparameters are also different for each closure phase frequency spectrum.
Figure <ref> shows the closure phase of the data and GPR reconstruction. In the top panel, the data can be seen with spikes at regularly spaced edge channels of ≈ 1.28 MHz intervals. The interpolated values of the closure phase are plotted over the data. The bottom panel shows the difference in the RMS in the closure phase along the frequency axis. Note that, while estimating the relative difference in the data closure phase, we avoided the noisy edge channels, whereas, for the GPR case, we only included the relative difference near the edge channels. This will enable us to query whether the GPR closure phase has a similar variation across the frequency compared to the data. It can be seen that the RMS of the data is higher than the GPR values, which means that the GPR has performed quite well. We used the Python-based module [<https://github.com/SheffieldML/GPy>] for the GPR implementation.
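A condensed sketch of the inpainting of a single closure-phase spectrum is given below. It follows the description above using the GPy API, but the kernel settings and variable names are illustrative rather than the exact pipeline configuration.

import numpy as np
import GPy

def inpaint_closure_phase(freqs_mhz, phase, flag):
    """freqs_mhz: (n_chan,) channel frequencies; phase: (n_chan,) closure phase [rad];
    flag: boolean (n_chan,), True for contaminated edge channels.
    Returns the closure phase with flagged channels replaced by the GPR mean."""
    x_good = freqs_mhz[~flag][:, None]
    z = np.exp(1j * phase)                       # work on the complex exponent
    filled = z.copy()
    for comp in (np.real, np.imag):              # real and imaginary parts fitted separately
        y_good = comp(z)[~flag][:, None]
        kern = GPy.kern.Matern32(input_dim=1)
        model = GPy.models.GPRegression(x_good, y_good, kern)
        model.optimize_restarts(num_restarts=10, verbose=False)
        mu, _ = model.predict(freqs_mhz[flag][:, None])
        if comp is np.real:
            filled[flag] = mu[:, 0] + 1j * filled[flag].imag
        else:
            filled[flag] = filled[flag].real + 1j * mu[:, 0]
    return np.angle(filled)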
§.§.§ Non-uniform Fast Fourier transform (NFFT)
NFFT is a well-known method to get the Fourier transform of the data with missing samples <cit.>. To estimate the FFT of bandpass-affected data, first, we removed the 128 edge channels from the data and estimated the Ψ_∇(ν) (a Fourier conjugate of eq.<ref>), which is then supplied to the NFFT function to get Ψ_∇(τ). We used a Python-based [<https://github.com/jakevdp/nfft>] package to develop the NFFT functions.
The absolute values of the closure phase delay cross-power spectrum are shown in figure <ref>. It can be seen that the data is highly affected by the excess systematics in the power, evident by the periodic spikes. NFFT significantly reduces the bandpass systematics but does not eliminate it entirely. Since the GPR performed best between the two methods, we adopted only the GPR for the later analysis. From now on, we will be using `GPR-reconstructed DATA' as `DATA' for the entirety of the paper.
§.§ Cross power spectrum estimation
We proceeded to the power spectrum estimation after eliminating a significant part of the baseline-dependent bandpass contamination using GPR inpainting. Based on the LST-JD distribution of the observations (see figure <ref>), we split the dataset equally along the JD axis (i.e., N_ JD=2).
We binned the data along LST according to the coherent averaging time of MWA, which we already estimated for the redundant 14 m, 24 m, and 28 m baselines, resulting in 5, 14, and 17 LST bins, respectively.
The first level of weighted averaging (i.e., coherently averaging) was done to the bispectrum (visibility triple product) and effective foreground visibilities, (V_ eff), that lie within the same LST bin. The weights are the number of good observations left after being rectified by the RFI flagging and triad filtering within the given LST bin for a given polarisation and triad.
In the next step, we estimated the delay spectra, Ψ_∇(τ), from the phase of the LST binned averaged closure spectra at each LST bin. It resulted in delayed spectra data structure,
{ N_ LST, N_ JD, N_ pol, N_ triads, N_ delays}; N_ delays=N_ channels
Finally, we estimated the cross-power between the unique triad pairs of the delay spectra across the two JD-bins according to Equation (<ref>)
where,
Ψ̃_∇(τ) . Ψ̃_∇^'(τ) = 1/N_ triadsC_2∑_i,j^N_ triadsΨ̃_∇(i, τ) . Ψ̃_∇(j, τ), i>j,
where i, j runs over N_ triads (upper triangle of the {i,j} pairs) from the first and second JD bins. The normalising factor of 2 arises since phases only capture half the power in the fluctuations. After this operation, the data structure of the binned averaged cross P_∇(k_||) becomes
{N_ LST, N_ JDC_2, N_ pol, N_ triadsC_2, N_ delays}; N_ JDC_2=1.
We took the weighted mean (i.e., incoherently averaged) along the triad pairs, where the weights were propagated from the previous step (refer to data flowchart <ref> for details).
As the sky varies with LST, we applied the inverse variance weights along the LST axis and averaged them to get the final estimates of the power spectrum. The same operation was done for the imaginary part of the data to get an estimate of the level of systematics in the power spectrum.
Ultimately, we incoherently averaged the two polarisations and downsampled the original delays to the effective bandwidth of the applied window function (i.e., ≈ 12.9 MHz). We used -based to interpolate at the new downsampled delays. The final estimates of the closure phase power spectra are shown in figure <ref> and <ref> for the EoR0 and EoR1 fields, respectively.
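Schematically, the triad-pair cross-power and the incoherent averaging for one LST bin can be written as follows. This is a simplified sketch that drops the polarisation axis; the pair weighting shown is one simple choice and not necessarily the exact weighting used in the pipeline, and the additional normalisation and cosmological factors of eq. <ref> are omitted.

import numpy as np
from itertools import combinations

def cross_power(psi_jd1, psi_jd2, weights):
    """psi_jd1, psi_jd2: complex delay spectra of shape (n_triads, n_delays),
    one per JD bin, for a single LST bin. weights: (n_triads,) good-observation counts.
    Returns the weighted, triad-pair-averaged cross power (n_delays,)."""
    pairs = list(combinations(range(psi_jd1.shape[0]), 2))      # unique triad pairs i > j
    p = np.array([np.real(psi_jd1[i] * np.conj(psi_jd2[j])) for i, j in pairs])
    w = np.array([weights[i] * weights[j] for i, j in pairs], dtype=float)
    # note: eq. <ref> also includes a normalising factor of 2 and volume factors,
    # which are applied outside this sketch
    return np.average(p, axis=0, weights=w)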
§.§ Error estimation
The uncertainties on the power spectrum can be estimated in multiple ways. We primarily focused on estimating the uncertainties in two ways, the first being `noise+systematics' and the second only noise, where we tried to mitigate the systematics.
§.§.§ Noise+systematics
To estimate uncertainties in power, we increased the number of samples that go into the uncertainty estimation by splitting the JD-axis into four parts (i.e., N_ JD = 4). This led to the data (Ψ_∇(τ)) structure being {N_ LST, N_ JD=4, N_ pol, N_ triads, N_ delays}.
We similarly estimated the cross-power of the N_ triads along the unique pairs of N_ JD axis. This operation provided us with {N_ pol, N_ LST, N_ JDC_2=6, N_ triadsC_2, N_ delays} unique power spectra. The weighted mean power was estimated by along N_ triadsC_2 where the weights are coming from the number of good observations that went into the Ψ_∇ for a given triad.
Next, We estimated the weighted variance on the power using the standard error of the weighted mean provided in <cit.>,
( SEM_ wtd)^2 = n/[(n-1)(∑ w_i)^2] [ ∑ (w_i X_i - w̅X̅_ wtd)^2 - 2X̅_ wtd∑ (w_i -w̅) (w_i X_i - w̅X̅_ wtd) + X̅_ wtd^2 ∑ (w_i -w̅)^2 ] ,
where w_i are the weights, and n represents the weight count. Note that since the N_ JD = 4, we are required to normalise the variance. Figure <ref> illustrates the detailed data structure flow for the noise estimation.
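For reference, a direct transcription of the weighted standard error above reads as follows (illustrative variable names).

import numpy as np

def weighted_sem(x, w):
    """Standard error of the weighted mean of samples x with weights w."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    n = len(x)
    xbar_w = np.sum(w * x) / np.sum(w)          # weighted mean
    wbar = np.mean(w)
    term1 = np.sum((w * x - wbar * xbar_w) ** 2)
    term2 = -2.0 * xbar_w * np.sum((w - wbar) * (w * x - wbar * xbar_w))
    term3 = xbar_w ** 2 * np.sum((w - wbar) ** 2)
    sem2 = n / ((n - 1) * np.sum(w) ** 2) * (term1 + term2 + term3)
    return np.sqrt(sem2)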
§.§.§ Noise
The uncertainties for the only noise case follow a similar procedure as the previous one, with the same JD split (i.e., N_ JD = 4), the cross-power is estimated, which led to the data structure {N_ pol, N_ LST, N_ JDC_2=6, N_ triadsC_2, N_ delays}.
After this operation, we took the difference in the power spectra between the unique pairs of JD (i.e., the same sky signal).
The only noise scenario can be understood assuming the cross-power between the two independent JD-bins correlates with the common signal and systematics across the triads. Assuming the power is coherent within the LST bin, the successive unique difference in the power within the LST bin eliminates the correlated power and systematics, leaving only the noise-like residuals. Also, since the differences eliminate the sky signal, the LST variation in the difference power is minimal; thus, we collapsed our datasets into a single axis and estimated the weighted standard deviation of the differenced power to get the final noise-like uncertainties. Finally, again, we took the weighted variance using eq. <ref>.
§.§ Validation
We performed a two-sided KS test on the closure phase power spectra of the data and two model variants for the statistical comparison. The null hypothesis was rejected in all scenarios with the Model (without baseline dependent gains); however, it failed to reject the null-hypothesis in all scenarios when using the Model with g_ij.
The former implies that the data and the model uncertainties were unlikely to be drawn from the same distribution, whereas the latter concludes the contrary. The test results are provided in table <ref>. The test results for the Model are not unexpected, since the RMS floor levels of the data and Model are sufficiently different, whereas those of the data and the Model with g_ij match.
We modelled the excess RMS in the data as arising from baseline-dependent gain factors, although it is believed to have originated from systematics and residual RFI. Note that we do not claim that the excess noise in the data is solely due to baseline-dependent systematics; however, if that is the case, the baseline-dependent gains introduced in our analysis are sufficient to account for the excess power in the data.
§.§ 21-cm power spectrum
We estimated the final dimensionless 21-cm power spectrum from the closure phase power spectrum. The closure phase delay power spectrum can be written into a 21-cm power spectrum (“pseudo”) as follows:
Δ_∇^2(k) = k^3 P_∇(k_||) / (2π^2) [pseudo mK^2]
where k^2 = k_⊥^2 + k_||^2, with k_⊥ = 2π |b_∇|/(λ D), where b_∇ is the baseline length of the triad, and D is the cosmological comoving distance. Note that the 21-cm power spectrum estimates from the closure phase power spectrum should not be interpreted as the true power spectrum, but rather as an approximate representation of the actual 21-cm power spectrum <cit.>. The power spectra converted to cosmological units for the EoR0 and EoR1 fields are shown in Figures <ref> and <ref>, respectively.
Assuming convergence to a normal distribution by the central limit theorem (our sample size was sufficient, > 30), we estimated 2σ [95% confidence interval (CI)] uncertainties. The upper limits on the 21-cm power spectrum [pseudo mK^2] were then estimated as
Δ_∇ UL^2 = (μ_Δ^2_∇± CI) [pseudo mK^2]
for both the EoR0 and EoR1 fields; they are provided in table <ref>, with the full table in (<ref>).
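The unit conversion used above can be summarised by the following sketch (the comoving distance and the delay-to-k_|| mapping are assumed to be computed elsewhere; all names are illustrative).

import numpy as np

def delta_sq(p_delay, k_par, baseline_m, freq_hz, d_comoving_mpc_h):
    """p_delay: closure-phase power P(k_par) [pseudo mK^2 h^-3 Mpc^3];
    k_par: parallel wavenumbers [pseudo h/Mpc]; baseline_m: triad side length [m];
    freq_hz: observing frequency; d_comoving_mpc_h: comoving distance [Mpc/h]."""
    lam = 299792458.0 / freq_hz                       # observing wavelength [m]
    k_perp = 2 * np.pi * baseline_m / (lam * d_comoving_mpc_h)
    k = np.sqrt(k_perp ** 2 + k_par ** 2)
    return k ** 3 * p_delay / (2 * np.pi ** 2)        # dimensionless Delta^2 [pseudo mK^2]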
§ DISCUSSION
We used the closure phase delay spectrum technique to obtain an independent estimate of the 21-cm power spectrum for the MWA phase II observations. These observations were centred on the EoR0 and EoR1 fields and were zenith-pointed, similar to the observing strategy of HERA. Our analysis revealed that MWA observations possibly suffer from a baseline-dependent bandpass structure, which is especially noticeable in the edge channels. This bandpass structure results in structured bumps in the delay power spectrum (see figure <ref>), significantly contaminating the power spectrum. To address this issue, we explored two methods: Gaussian Process Regression (GPR) and Non-uniform Fast Fourier Transform (NFFT), to inpaint and mitigate the impact of the bandpass-affected edge channels on our power spectrum estimation. However, we decided against adopting the NFFT method because, although it reduced the magnitude of the bandpass, the bandpass remained evident in the NFFT delay spectra (see figure <ref>). Finally, we estimated the 21-cm power spectra using closure phase delay spectra. Additionally, we performed forward modelling in parallel with the observations to gain insights into the nature of the power spectrum under ideal observing conditions. The main findings of our analysis are summarised below.
When we averaged closure phases across multiple timestamps within the same Local Sidereal Time (LST) bin, we noticed a significant residual bandpass structure, particularly noticeable in the edge channels (see Fig. <ref>). Since closure phases are unaffected by element-based bandpass variations, we concluded that these bandpass issues cannot be simplified into element-based terms and could instead be dependent on the baseline. To test this hypothesis, we simulated the same effect on foreground visibilities (see figure <ref>). We explored data-inpainting techniques to address these systematic errors to estimate closure phases in channels contaminated by baseline-dependent bandpass systematics (see figure <ref>). It is important to note that while these baseline-dependent issues are most noticeable in the edge channels, they could potentially affect all frequency channels since closure phases do not eliminate them. These issues also impact standard visibility-based power spectrum analysis methods. Understanding how the antenna layout contributes to such systematic errors is crucial for executing baseline-based mitigation strategies. Further investigation is needed to fully understand the extent to which these systematic errors affect MWA EoR power spectrum estimates. With a simple baseline dependent gains in the forward modelling, we aim to address the anomalies present in the DATA.
On comparing the final closure phase power spectra of the DATA and Model for the EoR0 field (see figure <ref>), we found that the peak power (at τ=0 μs, i.e. ≈ 10^14 pseudo mK^2h^-3 Mpc^3) of the DATA and Model only roughly matches for the 14 m triads, whereas for the 24 m and 28 m triads it differs significantly. During the initial closure phase estimation stage, we found the EoR0 data to have multiple phase wraps, which could be due to the presence of some systematics or residual RFI. This caused an overall shift of the peak power away from zero delay in the coherent averaging. The effect can be seen in the 28 m triad, which shows greater power next to the zeroth delay (see figure <ref>). The RMS floor level is 1-2 orders of magnitude higher in the DATA (≈ 10^8 pseudo mK^2h^-3 Mpc^3) compared to the Model (≈ 10^7 pseudo mK^2h^-3 Mpc^3) and the Model with g_ij (≈ 10^8 pseudo mK^2h^-3 Mpc^3). This excess power in the data compared to the Model may arise from the smaller DATA sample size in the EoR0 field or from systematics and residual RFI. By using a baseline-dependent gain factor in the simulation, we aimed to incorporate such systematics. We performed a 2-sided KS test on the DATA and Model at k_|| > 0.15 pseudo h Mpc^-1, which rejects the null hypothesis that the DATA and Model are drawn from the same distribution in all baseline cases, as expected since the two differ significantly. In contrast, the KS test fails to reject the null hypothesis when comparing the DATA and the Model with g_ij. In the closure phase power spectrum of the EoR1 field, the peak power of the DATA and Model (≈ 10^15 pseudo mK^2h^-3 Mpc^3) matches for all triads. The agreement between the RMS floor levels of the Model and DATA is significantly better than in the EoR0 field. They nearly match in all cases, except for the 14 m triads, where the DATA is approximately an order of magnitude higher than the Model (see figure <ref>). This shows that we can improve our estimates of the power spectrum with a larger number of datasets; thus, the analysis is data-limited for the amount of data used. However, similar to the EoR0 case, the 2-sided KS test rejected the null hypothesis in all cases for the Model, whereas it failed to reject the null hypothesis for the Model with g_ij, consistent with the two distributions being drawn from the same distribution.
Since our observations lie in the middle of the DTV broadcasting band, we further investigated for residual RFI in our data. We shifted our entire analysis to the lower frequency band (167-177 MHz), avoiding the central DTV-affected band (although, as shown in figure no. 2 in <cit.>, the DTV RFI nearly covers the entire EoR high band observations).
This analysis can help understand whether residual RFI or other systematics are present in the data. Note that since we reduced our bandwidth by nearly a factor of three, we reduced our sensitivity in the delay power spectrum by the same factor. Therefore, a direct comparison of the mean power at k_|| > 0.15 pseudo h Mpc^-1 with the previous result might not be valid.
The closure phase power spectrum for the shifted spectrum is shown in figure <ref>, and <ref>. We can see significant improvements in the peak power of the DATA and Model compared to previous results. However, the overall RMS level was increased by an order of magnitude in all cases, possibly due to the lesser sensitivity (lower sample size). The DATA peak power at τ=0 μ s, especially in the EoR0 field, now matches the Model. Thus, we can justifiably argue that DTV RFI, which is expected to be prominent near 180 MHz, significantly contributed to the systematics present in the EoR0 DATA. On the other hand, EoR1 DATA seem relatively similar in both analyses, thus indicating that the systematics (such as persistent RFI) other than DTV RFI might be present in the data. Our findings are also confirmed when performing the KS-test on the DATA and Model, which shows non-rejection of the null hypothesis in all EoR1 field scenarios.
We also compared the results with the Model with g_ij; table <ref> shows the KS-test outcomes for the Data versus Model and the Data versus Model with g_ij on the shifted spectrum.
We estimated the 2σ (95% confidence interval, assuming Gaussianity from convergence under the Central Limit Theorem) upper limits on the 21-cm power spectrum for both the EoR0 and EoR1 fields. Our best upper limit of ≲ (184 pseudo mK)^2 came from the EoR1 field on 14 m triads at k=0.36 pseudo h Mpc^-1 with the noise-only uncertainties. In the EoR0 field, our best estimate, ≲ (188 pseudo mK)^2, came from the 24 m triads at k=0.18 pseudo h Mpc^-1, again using the noise-only uncertainty. However, as discussed earlier, systematics or residual RFI might still have affected these estimates, which we aim to address by introducing baseline-dependent gains in the modelling. It should be noted that the exact nature of such baseline-dependent gains is not well understood. We have seen that, unlike the foregrounds, which are usually confined to the lower delay modes and thereby allow the faint HI fluctuations to become visible at higher delay modes, these systematics affect all delay modes equally. Thus, achieving the milliradian-level sensitivity required for the science goal could be a significant challenge if such baseline-dependent gains are present in the DATA. However, with extensive coherent averaging, the effect of such gains can be reduced. It should also be noted that the anomalies in the DATA cannot be attributed solely to baseline-dependent systematics. Therefore, we state that if only baseline-dependent systematics were present in the data, the level of noise introduced by the baseline-dependent gains would account for our DATA. Nevertheless, the fiducial HI and FG+HI powers are still lower than the data by 4-5 and 3-4 orders of magnitude, respectively; however, since our analysis is still data-limited, there is significant scope for improving upon the current estimates. Table <ref> provides the best 2σ estimates, while table <ref> provides all estimates for each triad studied here.
§ SUMMARY
We present independent EoR 21-cm power spectrum results from the closure phase analysis of ≈ 12 hours of MWA phase-II data in the frequency range 167-197 MHz on three redundant baseline groups, namely, 14 m, 24 m, and 28 m baselines. Using the closure phase diagnostic, we found evidence for significant baseline-dependent systematics in the MWA data. Our best estimates of the 21-cm power spectrum at z=6.79 are ≲ (184 pseudo mK)^2 at k=0.36 pseudo h Mpc^-1 in the EoR1 field using 14 m baseline triads and ≲ (188 pseudo mK)^2 at k=0.18 pseudo h Mpc^-1 in the EoR0 field using 24 m baseline triads. Even with the limited amount of data analyzed, our closure phase method shows significant promise in independently constraining the 21-cm power spectrum during the EoR. Our results are still data-limited; hence, there is scope for further improvement by including more data in the analysis. Extensive sky modelling, such as accounting for the Galactic diffuse emission, is required before directly comparing the closure phase analysis with the standard visibility-based power spectrum.
§ ACKNOWLEDGEMENTS
This project is supported by an ARC Future Fellowship under grant FT180100321. This research was partially supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) through project number CE170100013. The International Centre for Radio Astronomy Research (ICRAR) is a Joint Venture of Curtin University and The University of Western Australia, funded by the Western Australian State government. The MWA Phase II upgrade project was supported by the Australian Research Council LIEF grant LE160100031 and the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. This scientific work uses Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamatji people as the traditional owners and native title holders of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS) under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre, which is supported by the Western Australian and Australian Governments. Data were processed at the Pawsey Supercomputing Centre.
§ DATA AVAILABILITY
This project was developed using Python and dependent libraries mentioned in the main text. Our data processing pipeline is publicly available at [<https://github.com/himmng/Closure_phase_analysis>], along with the final processed datasets. The core data used in this work will be made available upon reasonable request.
§ APPENDIX
§.§ Inner vs Outer triads
The core tiles of the MWA Hexagons may be more affected by mutual coupling than the tiles near the edges. Therefore, we performed an independent check on the power differences between the inner and outer tile triads to quantify the level of cross-talk in the data. We chose 14 m baseline triads, which provide the highest count, and estimated the closure phase power spectrum from them only. We used the first and second outer layers of the Hexagon configuration as the outer triads, and the third and fourth tile layers as the inner triads (see right panel of figure <ref>). In scenarios where two tiles belonged to the outer layers and the third to the inner layers, or vice versa, we classified the triad as outer or inner, respectively. Figure <ref> shows the closure phase power spectrum of the inner and outer triads at 14 m baseline length. We observed that the RMS power estimated for k_||>0.15 pseudo h Mpc^-1 is higher for the inner triads than for the outer triads by about 41% and 34% for the EoR0 and EoR1 fields, respectively. The relative differences with respect to the Model RMS (figures <ref>, <ref>) are ≈ 6% and -25% for the inner and outer triads in the EoR0 field, and ≈ 186% and 114% for the inner and outer triads in the EoR1 field. When compared with the mean RMS value of the Model (≈ 10^8, 10^7 pseudo mK^2h^-3 Mpc^3 for the EoR0 and EoR1 fields), these relative differences are consistent with each other.
§.§ Avoiding DTV
The observations are centred around 180 MHz, which overlap with the Australian Digital Television (DTV) band; therefore, despite the extensive RFI flagging, some RFI could remain in the final processed data. Therefore, a useful test would be to shift the spectral window to lower frequencies, avoiding the DTV band in the analysis.
To do this, we worked with the first ≈ 10 MHz band (167–177 MHz) of the total 30.72 MHz bandwidth and estimated the cross-power spectrum using the same procedure. However, the effective bandwidth for this analysis was reduced to ≈ 3.3 MHz, thus reducing the sensitivity by the same factor.
The idea was to check whether the floor level of the power spectra (refer to figure <ref>, <ref>), which shows the excess power in the DATA, reduces and matches with the Model in the shifted window. It would suggest that the RFI could be localised in the central portion of the band, which is one of the major contributors to the excess power.
However, the power spectra of shifted window (see figure <ref> and <ref>) show similar behavior as that in figure <ref> and <ref>, indicating that the level of the RFI might be ubiquitous across the entire observing band along with the systematics in the data which are not being modelled in the simulations.
§.§ Diagnosing bandpass systematics
We checked the closure phases for the baseline-dependent systematics, which persisted in the MWA data by modifying the antenna element-based gains (𝐠_i' s eq. <ref>).
First, we created one set of MWA bandpass gains using random Gaussian draws between [0, 1] for each edge-channel frequency. We multiplied these into each visibility in the correct parity pair order. This provides modified visibilities with randomised gains at the edge channels, which mimics the bandpass-affected visibilities. We used simulated flux densities for this procedure since they were produced under ideal conditions with unity antenna gains (although starting from unity gains does not affect the outcome).
Second, we used two scenarios to modify the bandpass further. In the first, the antenna-element-based gains were modified with new gains multiplied by the existing ones. This step verifies that the individual antenna-based gains vanish in the closure phase, illustrated in the top panel of figure <ref>. In the second scenario, instead of multiplicative gains from individual antenna elements (e.g. 𝐠_i, 𝐠_j), we multiplied in an additional baseline-dependent term (𝐠_ij) that is not factorisable into element-based terms. This demonstrates that baseline-dependent gains do not cancel in the closure phase; see the bottom panel of the same figure <ref>, where the residual difference between the closure phase modified with 𝐠_ij and the original does not vanish.
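The two scenarios can be reproduced numerically in a few lines, confirming that element-based gain phases cancel in the closure phase while a non-factorisable baseline-dependent term does not (purely illustrative random values).

import numpy as np

rng = np.random.default_rng(1)
v12, v23, v31 = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))   # true visibilities (unit amplitude)
g1, g2, g3 = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))      # element-based gains
true_closure = np.angle(v12 * v23 * v31)

# scenario 1: antenna-based gains cancel in the triple product
c1 = np.angle((g1 * np.conj(g2) * v12) * (g2 * np.conj(g3) * v23) * (g3 * np.conj(g1) * v31))
assert np.isclose(c1, true_closure)

# scenario 2: a non-factorisable baseline-dependent term does not cancel
g12, g23, g31 = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))
c2 = np.angle((g12 * v12) * (g23 * v23) * (g31 * v31))
print(c2 - true_closure)   # generally non-zero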
§.§ Incomplete modelling
Realistic vs. ideal beams or using fewer foreground sources can affect the final power estimates. Thus, we produced two test foreground simulations, one with the same 20,000 foreground sources but with ideal beam conditions where all dipole gains are active and set to unity (FGI_20k) and a second with only 5,000 foreground sources and real beam conditions, dead/missing dipoles incorporated in the beam evaluation (FGR_5k), which we compared against 20,000 foreground sources but with real beam conditions (FGR_20k).
Note that in the main results we used 20,000 foreground sources with real beams, which we compare against the two test scenarios. The final closure phase power spectra of the three cases (real dipole gains with 20,000 sources, unity dipole gains with 20,000 sources, and real dipole gains with 5,000 sources) are shown in figure <ref>. We computed the RMS power at ≥ 1.0 pseudo h Mpc^-1 to differentiate the two test scenarios from the reference case with real dipole gains and 20,000 sources. The relative percentage errors for unity dipole gains (20,000 sources) at 14 m, 24 m, and 28 m baselines are 4%, 39%, and 0.3%, and for real dipole gains (5,000 sources) at 14 m, 24 m, and 28 m baselines they are 1.7%, 1.9%, and 40%, respectively. Compared with the mean RMS value of the Model from figures <ref>, <ref> (≈ 10^8, 10^7 pseudo mK^2h^-3 Mpc^3 for the EoR0 and EoR1 fields), these relative differences are sufficiently small in both scenarios, and thus consistent with the Model.
§.§ Data Structure Flowchart
As we proceed through the different averaging steps, the data structure is shown as a flowchart in figure <ref>.
|
http://arxiv.org/abs/2409.02975v1 | 20240904122732 | Slow-roll inflation from a geometric scalar-tensor model with self-interacting potentials | [
"Abraão J. S. Capistrano",
"Gilberto M. Kremer"
] | gr-qc | [
"gr-qc"
] |
#1
|
http://arxiv.org/abs/2409.02747v1 | 20240904142658 | Tractable Offline Learning of Regular Decision Processes | [
"Ahana Deb",
"Roberto Cipollone",
"Anders Jonsson",
"Alessandro Ronca",
"Mohammad Sadegh Talebi"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.FL"
] |
Tractable Offline Learning of Regular Decision Processes
Ahana Deb, Roberto Cipollone, Anders Jonsson, Alessandro Ronca, Mohammad Sadegh Talebi
September 4, 2024
================================================================
§ ABSTRACT
This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments
called Regular Decision Processes (RDPs).
In RDPs, the unknown dependency of future observations and rewards from the past interactions can be captured by some hidden finite-state automaton.
For this reason, many RDP algorithms first reconstruct this unknown dependency using automata learning techniques.
In this paper, we show that it is possible to overcome two strong limitations of previous offline RL algorithms for RDPs, notably RegORL <cit.>.
This can be accomplished via the introduction of two original techniques:
the development of a new pseudometric based on formal languages, which removes a problematic dependency on -distinguishability parameters, and the adoption of Count-Min-Sketch (CMS), instead of naive counting.
The former reduces the number of samples required in environments that are characterized by a low complexity in language-theoretic terms.
The latter alleviates the memory requirements for long planning horizons.
We derive the PAC sample complexity bounds associated to each of these techniques, and we validate the approach experimentally.
§ INTRODUCTION
The Markov assumption is fundamental for most Reinforcement Learning (RL) algorithms, requiring that the immediate reward and transition only depend on the last observation and action.
Thanks to this property, the computation of (near-)optimal policies involves only functions over observations and actions.
However, in complex environments,
observations may not be complete representations of the internal environment state.
In this work, we consider RL in Non-Markovian Decision Processes (NMDPs).
In these very expressive models, the probability of future observations and rewards may depend on the entire history, which is the past interaction sequence composed of observations and actions.
The unrestricted dynamics of the NMDP formulation is not tractable for optimization.
Therefore, previous work in non-Markovian RL focuses on tractable subclasses of decision processes.
In this work, we focus on Regular Decision Processes (RDPs) <cit.>.
In RDPs, the distribution of the next observation and reward is allowed to vary according to regular properties evaluated on the history.
Thus, these dependencies can be captured by a Deterministic Finite-state Automaton (DFA).
RDPs are expressive models that can represent complex dependencies, which may be based on events that occurred arbitrarily back in time.
For example, we could model that an agent may only enter a restricted area if it has previously asked for permission and the access was granted.
Due to the properties above, RL algorithms for RDPs are very general and applicable.
However, provably correct and sample efficient algorithms for RDPs are still missing.
On one hand, local optimization approaches are generally more efficient, but lack correctness guarantees.
In this group, we recall <cit.> and all RL algorithms with policy networks that can operate on sequences.
On the other hand, algorithms with formal guarantees do not provide a practical implementation <cit.> or can only be applied effectively in small environments <cit.>.
In this work, we propose a new offline RL algorithm for RDPs with a Probably Approximately Correct (PAC) sample complexity guarantee.
Our algorithm improves over previous work in three ways.
First, we overcome a limitation of previous sample complexity bounds,
which is the dependence on a parameter called -distinguishability.
This is desirable because there exist simple non-Markovian domains in which this parameter decays exponentially with respect to the number of RDP states; an example is the T-maze by <cit.>, discussed below.
Second, a careful treatment of the individual features that compose the trace and each observation allows us to further improve the efficiency.
Third, inspired by the automaton-learning algorithm FlexFringe <cit.>, we use a data structure called Count-Min-Sketch (CMS) <cit.> to compactly represent probability distributions on large sets.
[T-maze]
As a running example, we consider the T-Maze domain <cit.>, a deterministic non-Markovian grid-world environment.
An agent has to reach a goal G from an initial position S in a corridor of length N that terminates with a T-junction as shown in Figure <ref>.
The agent can move one cell at a time, taking actions 𝑁𝑜𝑟𝑡ℎ, 𝑆𝑜𝑢𝑡ℎ, 𝐸𝑎𝑠𝑡, or 𝑊𝑒𝑠𝑡.
In each episode, the rewarding goal G can be in the cell above or below the T-junction.
Depending on the position of the goal, the observation in state S is 011 or 110.
In the corridor the observation is 101, and at the T-junction the observation is 010.
This means that, when crossing the corridor, the agent cannot observe the current location or the goal position.
This yields a history-dependent dynamics that cannot be modeled with an MDP or any k-order MDP.
As we show later, this domain can be expressed as an RDP.
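For illustration, a minimal simulator of this environment can be written as follows (our own sketch: the default corridor length, the episode horizon, and the mapping from goal side to the start observation are arbitrary choices, and the grid-world details are simplified).

import random

class TMaze:
    """Corridor of length n ending in a T-junction; the goal side is revealed
    only by the very first observation."""
    def __init__(self, n=5, horizon=12):
        self.n, self.horizon = n, horizon

    def reset(self):
        self.x, self.y, self.t = 0, 0, 0
        self.goal_up = random.random() < 0.5
        return "110" if self.goal_up else "011"   # start observation reveals the goal side

    def step(self, action):                       # action in {"N", "S", "E", "W"}
        dx = {"E": 1, "W": -1}.get(action, 0)
        dy = {"N": 1, "S": -1}.get(action, 0)
        if self.x < self.n:                       # inside the corridor: only E/W moves
            self.x = min(max(self.x + dx, 0), self.n)
        elif dy != 0:                              # at the T-junction: move N/S
            self.y += dy
        self.t += 1
        reward = 1.0 if (self.x == self.n and
                         self.y == (1 if self.goal_up else -1)) else 0.0
        obs = "010" if self.x == self.n else ("101" if self.x > 0 else
              ("110" if self.goal_up else "011"))
        done = reward > 0 or self.t >= self.horizon
        return obs, reward, done

Running any policy in this simulator makes the key difficulty explicit: the observation sequence inside the corridor is identical for both goal positions, so the correct decision at the junction depends on the very first observation of the episode.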
Contributions
We study offline RL for Regular Decision Processes (RDPs) and address limitations of previous algorithms.
To do so, we develop a novel language metric L_, based on the theory of formal languages, and define a hierarchy of language families that allows for a fine-grained characterization of the RDP complexity.
Unlike previous algorithms, the RL algorithm we develop based on this language hierarchy does not depend on -distinguishability and is exponentially more sample efficient in domains having low complexity in language-theoretic terms.
In addition, we reduce the space complexity of the algorithm and modify RegORL with the use of Count-Min-Sketch.
To validate our claims, we provide a theoretical analysis for both variants, when the L_-distinguishability or CMS is used.
Finally, we provide an experimental analysis of both approaches.
§.§ Related work
RL in RDPs
The first online RL algorithm for RDPs is provided in <cit.>.
Later, <cit.> and <cit.> developed the first online RL algorithms with sample complexity guarantees.
The algorithm and the sample complexity bound provided in <cit.> adapts analogous results from automata learning literature <cit.>.
RegORL from <cit.> is an RL algorithm for RDPs with sample complexity guarantees for the offline setting.
In this work, we study the same setting and improve on two significant weaknesses of RegORL, regarding the sample compexity and the space requirements.
The details of these improvements are discussed in the following sections.
Lastly, the online RL algorithm in <cit.> can also be applied to RDPs,
but it is not proven to be sample efficient.
Non-Markovian RL
Apart from RDP algorithms, one can apply RL methods to more general decision processes.
Indeed, the automaton state of an RDP can be seen as an information state, as defined in <cit.>.
As shown in <cit.>,
any RDP can also be expressed as a POMDP whose hidden dynamics evolves according to its finite-state automaton.
Therefore, any RL algorithm for generic POMDPs can also be applied to RDPs.
Unfortunately, planning and learning in POMDPs is intractable <cit.>.
More efficient learning algorithms, with more favorable complexity bounds, have been obtained for subclasses of POMDPs, such as undercomplete POMDPs <cit.>,
few-step reachability <cit.>,
ergodicity <cit.>,
few-step decodability <cit.>,
or weakly-revealing <cit.>.
However, none of these assumptions exhaustively capture the entire RDP class, and they cannot be applied in general RDPs.
Regarding more general non-Markovian dynamics,
Predictive State Representations (PSRs) <cit.> are general descriptions of dynamical systems that capture POMDPs and therefore RDPs.
There exist polynomial PAC bounds for online RL in PSRs <cit.>.
However, these bounds involve parameters that are specific to PSRs and do not immediately apply to RDPs.
Feature MDPs and state representations both share the idea of having a map from histories to a state space.
This is analogous to the map determined by the transition function of the automaton underlying an RDP.
Algorithmic solutions for feature MDPs are based on suffix trees, and they cannot yield optimal performance in our setting <cit.>.
The automaton of an RDP can be seen as providing one kind of state representation <cit.>.
The existing bounds for state representations show a linear dependency on the number of candidate representations, which is exponential in the number of states in our case.
A similar dependency is also observed in <cit.>.
With respect to temporal dependencies for rewards, Reward Machines consider RL with non-Markovian rewards
<cit.>.
However, this dependency is usually assumed to be known, which makes the Markovian state computable.
Lastly, non-Markovianity is also introduced by the logical specifications that the agent is
required to satisfy
<cit.>;
however, it is resolved a priori from the known specification.
Offline RL in MDPs.
There is a rich and growing literature on offline RL, and provably sample efficient algorithms have been proposed for various settings of MDPs <cit.>. For example, in the case of episodic MDPs, it is established that the optimal sample size in offline RL depends on the size of state-space, episode length, as well as some notion of concentrability, reflecting the distribution mismatch between the behavior and optimal policies.
A closely related problem is off-policy learning <cit.>.
§ PRELIMINARIES
Notation
Given a set Y, Y denotes the set of probability distributions over Y.
For a function f: X→Y,
f(y x) is the probability of y∈Y given x∈X.
Further, we write y ∼ f(x) to abbreviate y ∼ f(· x).
Given an event E, (E) denotes the indicator function of E, which equals 1 if E is true, and 0 otherwise.
For any pair of integers m and n such that 0≤ m≤ n, we let [m,n]:={m,…,n} and [n]:={1, …, n}.
The notation 𝒪(·) hides poly-logarithmic terms.
Count-Min-Sketch
Count-Min-Sketch, or CMS <cit.>, is a data structure that compactly represents a large non-negative vector v=[v_1,…,v_m].
CMS takes two parameters δ_c and ε as input, and constructs a matrix C with d=⌈log1/δ_c⌉ rows and w=⌈e/ε⌉ columns.
For each row j∈[d], CMS picks a hash function h_j:[m]→[w] uniformly at random from a pairwise independent family <cit.>.
Initially, all elements of v and C equal 0.
An update (i,c) consists in incrementing the element v_i by c>0.
CMS approximates an update (i,c) by incrementing C(j,h_j(i)) by c for each j∈[d].
At any moment, a point query for index i returns an estimate v̂_i of v_i by taking the minimum of the row estimates, i.e. v̂_i = min_j C(j,h_j(i)).
It is easy to see that v̂_i ≥ v_i, i.e. CMS never underestimates v_i.
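A minimal implementation of CMS for integer-encoded items is sketched below (illustrative; the particular hash family shown is one standard choice of a pairwise independent family).

import numpy as np

class CountMinSketch:
    """CMS with d = ceil(log(1/delta_c)) rows and w = ceil(e/eps) columns."""
    def __init__(self, eps, delta_c, seed=0):
        self.d = int(np.ceil(np.log(1.0 / delta_c)))
        self.w = int(np.ceil(np.e / eps))
        self.table = np.zeros((self.d, self.w), dtype=np.int64)
        rng = np.random.default_rng(seed)
        # h_j(i) = ((a_j * i + b_j) mod p) mod w, with p a large prime
        self.p = 2 ** 61 - 1
        self.a = [int(x) for x in rng.integers(1, self.p, self.d)]
        self.b = [int(x) for x in rng.integers(0, self.p, self.d)]

    def _hash(self, i):
        return [((a * int(i) + b) % self.p) % self.w for a, b in zip(self.a, self.b)]

    def update(self, i, c=1):
        for j, col in enumerate(self._hash(i)):
            self.table[j, col] += c

    def query(self, i):
        return int(min(self.table[j, col] for j, col in enumerate(self._hash(i))))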
§.§ Languages and operators
An alphabet is a finite non-empty set of elements called letters.
A string over is a concatenation a_1 ⋯ a_ℓ of letters from , and we call ℓ its length.
In particular, the string containing no letters, having length zero, is a valid string called the empty string, and is denoted by .
Given two strings x_1 = a_1 ⋯ a_ℓ and x_2 = b_1 ⋯ b_m, their concatenation x_1x_2 is the string a_1 ⋯ a_ℓ b_1 ⋯ b_m. In particular,
for any given string x, we have x = x = x.
The set of all strings over alphabet is written as ^*, and the set of all strings of length ℓ is written as ^ℓ.
Thus, ^* = ∪_ℓ∈^ℓ.
A language is a subset of ^*.
Given two languages _1 and _2, their concatenation is the language defined by
_1 _2 = { x_1 x_2 | x_1 ∈_1 , x_2 ∈_2 }.
When concatenating with a language { a } containing a single string consisting of a letter a ∈, we simply write a instead of { a }.
Concatenation is associative and hence we can write the concatenation _1 _2 ⋯_k of an arbitrary number of languages.
Given the fundamental definitions above, we introduce two operators to construct sets of languages.
The first operator C_k^ℓ is defined for any two non-negative integers ℓ and k ∈{1,…,ℓ}, it takes a set of languages 𝒢, and constructs a new set of languages as follows:
C_k^ℓ(𝒢) =
{^ℓ∩ S_1 G_1 S_2 G_2 ⋯ S_k G_k S_k+1| G_1, …, G_k ∈𝒢, S_1, …, S_k+1∈𝒮},
where the set 𝒮 = {^*, {}} consists of the language ^* of all strings over the considered alphabet and the singleton language {} consisting of the empty string only.
In the definition, each S_i can be Γ^* or .
Choosing S_i = Γ^* allows arbitrary letters between a string from G_i and the next string from G_i+1, whereas choosing S_i = {} enforces that a string from G_i is immediately followed by a string from G_i+1.
Also, the intersection with Γ^ℓ amounts to restricting to strings of length ℓ in the languages given by the concatenation.
The second operator we define, , takes a set of languages , and it constructs a new set of languages () = ^⊔∪^⊓ by taking combinations as follows:
^⊔ = {_1 ∪_2 |_1,_2 ∈},
^⊓ = {_1 ∩_2 |_1, _2 ∈}.
The set ^⊔ consists of the pair-wise unions of the languages in .
For example, when = {{ ac, ad }, {ac,bc}}, we have ^⊔ = ∪{{ ac, ad, bc }}.
Similarly, the set ^⊓ consists of the pair-wise intersections of the languages in .
For example, when = {{ ac, ad }, {ac,bc}}, we have ^⊓ = ∪{{ ac }}.
From the previous example we can observe that ⊆^⊔, ^⊓, which holds in general since
= {∩| X ∈}⊆^⊓ and similarly = {∪| X ∈}⊆^⊔
Therefore ⊆().
The operators will later be used to define relevant patterns on episode traces.
They are inspired by classes of languages in the first level of the dot-depth hierarchy, a well-known hierarchy of star-free regular languages <cit.>.
§.§ Episodic regular decision processes
We first introduce generic episodic decision processes.
An episodic decision process is a tuple =,,,T,R,H, where is a finite set of observations, is a finite set of actions, ⊂[0,1] is a finite set of rewards, and H≥ 1 is a finite horizon.
We frequently consider the concatenation of the sets and .
Let _t=()^t+1 be the set of histories of length t+1, and let e_m:n∈_n-m denote a history from time m to time n, both included.
Each action-observation pair ao∈ in a history has an associated reward label r∈, which we write ao/r∈/.
A trajectory e_0:T is the full history generated until (and including) time T.
We assume that a trajectory e_0:T can be partitioned into episodes e_ℓ:ℓ+H∈_H of length H+1,
In each episode e_0:H, a_0=a_ is a dummy action used to initialize the distribution on _0.
The transition function T:×→ and the reward function R:×→ depend on the current history in =∪_t=0^H_t.
Given , a generic policy is a function :()^*→ that maps trajectories to distributions over actions.
The value function V^:[0,H]×→ of a policy is a mapping that assigns real values to histories.
For h∈, it is defined as V^(H, h) 0 and
V^(t, h) [ ∑_i=t+1^H r_i | h, ], ∀ t<H, ∀ h∈_t.
For brevity, we write V^π_t(h):=V^π(t,h).
The optimal value function V^* is defined as V^*_t(h) sup_π V^π_t(h), ∀ t∈ [H], ∀ h∈_t,
where sup is taken over all policies :()^*→. Any policy achieving V^* is called optimal, which we denote by π^*; namely V^π^*=V^*.
Solving amounts to finding ^*.
In what follows, we consider simpler policies of the form :→ mapping finite histories to distributions over actions.
Let Π_ denote the set of such policies.
It can be shown that Π_ always contains an optimal policy, i.e. V^*_t(h) max_π∈Π_ V^π_t(h), ∀ t∈ [H], ∀ h∈_t. An episodic MDP is an episodic decision process whose dynamics at each timestep t only depends on the last observation and action <cit.>.
Episodic RDPs
An episodic Regular Decision Process (RDP) <cit.> is an episodic decision process =,,,T,R,H described by a finite transducer (Moore machine) , , , , , q_0, where is a finite set of states, = is a finite input alphabet composed of actions and observations, is a finite output alphabet, : ×→ is a transition function, : → is an output function, and q_0∈ is a fixed initial state <cit.>.
The output space = _𝗈×_𝗋 consists of a finite set of functions that compute the conditional probabilities of observations and rewards, on the form _𝗈⊂→ and _𝗋⊂→.
For simplicity, we use two output functions, : ×→ and : ×→, to denote the individual conditional probabilities.
Let ^-1 denote the inverse of , i.e. ^-1(q)⊆× is the subset of state-symbol pairs that map to q∈.
An RDP implicitly represents a function : → from histories in to states in , recursively defined as
(h_0) (q_0, a_0 o_0), where a_0 is some fixed starting action, and (h_t) ((h_t-1), a_t o_t).
The dynamics and of are defined as T(o h,a)=(o (h),a) and R(r h,a)=(r (h),a), ∀ h∈, ∀ ao/r∈/.
As in previous work <cit.>, we assume that any episodic RDP generates a designated termination observation ∈ after exactly H transitions.
This ensures that any episodic RDP is acyclic, i.e. the states can be partitioned as =_0∪⋯∪_H+1, where each _t+1 is the set of states generated by the histories in _t for each t∈[0,H].
An RDP is minimal if its Moore machine is minimal.
We use A, R, O, Q to denote the cardinality of , , ,, respectively, and assume A≥ 2 and O≥ 2.
Since the conditional probabilities of observations and rewards are fully determined by the current state-action pair (q,a), an RDP adheres to the Markov property over its states, but not over the observations.
Given a state q_t ∈ and an action a_t ∈,
the probability of the next transition is
(r_t, o_t, q_t+1 q_t, a_t, ) =
(r_t q_t, a_t) (o_t q_t, a_t) (q_t+1=(q_t,a_t o_t)).
Since RDPs are Markovian in the unobservable states , there is an important class of policies that is called regular.
Given an RDP , a policy :→ is called regular if (h_1)=(h_2) whenever (h_1)=(h_2), for all h_1,h_2∈.
Hence, we can compactly define a regular policy as a function of the RDP state, i.e. π:→.
Let Π_ denote the set of regular policies for .
Regular policies exhibit powerful properties. First, under a regular policy, suffixes have the same probability of being generated for histories that map to the same RDP state. Second, there exists at least one optimal policy that is regular.
Finally, in the special case where an RDP is Markovian in both observations and rewards, it reduces to a nonstationary episodic MDP.
[RDP for T-maze]
Consider the T-maze described in Example <ref>.
This can be modeled as an episodic RDP
, , , , , q_0
with states q_0 ∪ ({ q_⊤ 1, …, q_⊤ 13}∪{ q_ 1, …, q_ 13}) ×{1, …, H}, which include the initial state q_0 and two parallel components, ⊤ and , for the 13 cells of the grid world.
In addition, each state also includes a counter for the time step.
Within each component {(q_⊤ i, t)}_i and {(q_ i, t)}, the transition function mimics the grid world dynamics of the maze and increments the counter t.
From the initial state and the start action a_0, (q_0, a_0 o_0) equals q_1,⊤ if o_0 = 110, and q_1, if o_0 = 011.
Observations are deterministic as described in Example <ref>,
except for (q_0, a_0) = {110,011}.
The rewards are null, except for a 1 in the top right or bottom right cell, depending if the current state is in the component ⊤ or , respectively.
Occupancy and distinguishability
Given a regular policy :→ and a time step t∈[0,H], let _t^∈_t× be the induced occupancy, i.e. a probability distribution over the states in _t and the input symbols in , recursively defined as _0^(q_0,a_0o_0) = (o_0 q_0, a_0) and
_t^(q_t,a_to_t) = ∑_(q,ao)∈^-1(q_t)_t-1^(q,ao) ·(a_t q_t) ·(o_t q_t, a_t), t>0.
Of particular interest is the occupancy distribution _t^* _t^^* associated with an optimal policy ^*. Let us assume that ^* is unique, which further implies that d^*_t is uniquely defined[This assumption is imposed to ease the definition of the concentrability coefficient that follows, and is also considered in the offline reinforcement learning literature (e.g., <cit.>). In the general case, one may consider the set D^* of occupancy distributions, collecting occupancy distributions of all optimal policies.].
Consider a minimal RDP with states = ∪_t ∈ [0,H+1]_t.
Given a regular policy ∈Π_ and a time step t∈[0,H], each RDP state q ∈_t defines a unique probability distribution (· q_t = q, ) on episode suffixes in _H-t=( / )^H-t+1.
The states in _t can be compared in terms of the probability distributions they induce over _H-t.
Consider any L = {L_ℓ}_ℓ = 0^H, where each L_ℓ is a metric over _ℓ.
We define the L-distinguishability of and as the maximum ≥ 0 such that,
for any t ∈ [0,H] and any two distinct q, q' ∈_t, the probability distributions over suffix traces e_t:H∈_ℓ from the two states satisfy
L_H-t((e_t:H q_t = q, ), (e_t:H q_t = q', )) ≥ .
We will often omit the remaining episode length ℓ = H-t from L_ℓ and simply write L.
We consider the -distinguishability, instantiating the definition above with
the metric (p_1, p_2) = max_u ∈ [0,ℓ], e ∈_up_1(e *) - p_2(e *),
where p_i(e *) represents the probability of the trace prefix e∈_u, followed by any trace e' ∈_ℓ-u-1. The -distinguishability is defined analogously using (p_1, p_2) = ∑_u ∈ [0,ℓ], e ∈_up_1(e *) - p_2(e *).
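Empirically, both quantities can be computed from multisets of sampled episode suffixes, as in the following sketch (our own illustration; each suffix is represented as a tuple of action-observation-reward symbols).

from collections import Counter
from itertools import chain

def prefix_probs(suffixes):
    """Empirical probability of every non-empty prefix among a multiset of suffixes."""
    n = len(suffixes)
    counts = Counter(chain.from_iterable(
        (s[:u] for u in range(1, len(s) + 1)) for s in suffixes))
    return {prefix: c / n for prefix, c in counts.items()}

def prefix_distances(suffixes1, suffixes2):
    """Return the (max-based, sum-based) prefix distances between two empirical
    suffix distributions."""
    p1, p2 = prefix_probs(suffixes1), prefix_probs(suffixes2)
    diffs = [abs(p1.get(e, 0.0) - p2.get(e, 0.0)) for e in set(p1) | set(p2)]
    return max(diffs), sum(diffs)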
§ LEARNING RDPS WITH STATE-MERGING ALGORITHMS FROM OFFLINE DATA
Here we describe an algorithm called AdaCT-H <cit.> for learning episodic RDPs from a dataset of episodes.
The algorithm starts with a set of episodes generated using a regular behavior policy , where the k-th episode is of the form
e_0:H^k=a_0^ko_0^k/r_0^k⋯ a_H^ko_H^k/r_H^k and where, for each t∈[0,H],
q_0^k = q_0, a_t^k∼(q_t^k), o_t^k∼(q_t^k,a_t^k), r_t^k∼(q_t^k,a_t^k), q_t+1^k = (q_t^k,a_t^ko_t^k).
Note that the behavior policy and underlying RDP states q_t^k are unknown to the learner.
The algorithm is an instance of the PAC learning framework and takes as input an accuracy ε∈ (0,H] and a failure probability δ∈(0,1).
The aim is to find an ε-optimal policy satisfying V_0^*(h) - V_0^(h)≤ε for each h∈_0 with probability at least 1-δ, using the smallest dataset possible.
Since AdaCT-H performs offline learning, it is necessary to control the mismatch in occupancy between the behavior policy and the optimal policy ^*.
Concretely, the single-policy RDP concentrability coefficient associated with RDP and behavior policy is defined as <cit.>:
C^* = max_t ∈ [H], q ∈_t, ao  d_t^*(q, ao)/d_t(q, ao) .
As in <cit.>, we also assume that the concentrability is bounded away from infinity, i.e. that C^*<∞.
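As a small illustration of the definition (our own sketch; the representation of the two occupancies as dictionaries keyed by (t, q, ao) is an assumption), the coefficient can be computed as follows.

def concentrability(d_star, d_behavior):
    # Single-policy concentrability: maximum over (t, q, ao) of the ratio between
    # optimal and behavior occupancy; infinite if the behavior policy gives zero
    # mass to a pair that the optimal policy visits.
    worst = 0.0
    for key, p_star in d_star.items():
        if p_star == 0.0:
            continue
        p_b = d_behavior.get(key, 0.0)
        if p_b == 0.0:
            return float('inf')
        worst = max(worst, p_star / p_b)
    return worst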
AdaCT-H is a state-merging algorithm that iteratively constructs the set of RDP states _1,…,_H and the associated transition function .
For each t∈[0,H], AdaCT-H maintains a set of candidate states qao∈_t-1×.
Each candidate state qao has an associated multiset of suffixes (qao)={e_t:H^k : e^k∈, (e_0:t-1^k)=q, a_t-1^ko_t-1^k=ao}, i.e. episode suffixes whose history is consistent with qao.
To determine whether or not the candidate qao should be promoted to _t or merged with an existing RDP state q_t,
AdaCT-H compares the empirical probability distributions on suffixes using the prefix distance defined earlier.
For reference, we include the pseudocode of AdaCT-H(,δ) in Appendix <ref>.
<cit.> prove that AdaCT-H(,δ) constructs a minimal RDP with probability at least 1-4AOQδ.
In practice, the main bottleneck of AdaCT-H is the statistical test on the last line,
L_∞^p(_1,_2)≥√(2log(8(ARO)^H-t/δ)/min(|_1|,|_2|)),
since the number of episode suffixes in _ℓ is exponential in the current horizon ℓ.
The purpose of the present paper is to develop tractable methods for implementing the statistical test.
These tractable methods can be directly applied to any algorithm that performs such statistical tests, e.g. the approximation algorithm AdaCT-H-A <cit.>.
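The following sketch shows how such a test can be wired up in practice; it reuses prefix_distances from the sketch above, and the function and argument names are ours. A, O and R stand for the numbers of actions, observations and reward values, and the threshold mirrors the Hoeffding-style bound quoted above.

import math

def test_distinct(suffixes1, suffixes2, A, O, R, horizon_left, delta):
    # Declare two candidate states distinct iff the empirical L_inf^p distance
    # between their suffix multisets exceeds the confidence threshold.
    l_inf_p, _ = prefix_distances(suffixes1, suffixes2)
    n_min = min(len(suffixes1), len(suffixes2))
    threshold = math.sqrt(2.0 * (math.log(8.0 / delta) + horizon_left * math.log(A * R * O)) / n_min)
    return l_inf_p >= threshold

Evaluating the distance exactly requires handling a suffix space that is exponential in H-t, which is what motivates the two remedies developed in the next section.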
§ TRACTABLE OFFLINE LEARNING OF RDPS
The lower bound derived in <cit.> shows that the sample complexity of learning RDPs is inversely proportional to the L_1^p-distinguishability.
When testing candidate states of the unknown RDP, L_1^p is the metric that allows maximum separation between distinct distributions over traces.
Unfortunately, accurate estimates of L_1^p are impractical to obtain for distributions over large supports—in our case the number of episode suffixes, which is exponential in the horizon.
Accurate estimates of L_∞^p are much more practical to obtain.
However, there are instances for which states can be well separated in the L_1 norm, but have an L_∞-distance that is exponentially small.
To address these issues, in this section we develop two improvements over the previous learning algorithms for RDPs.
§.§ Testing in structured languages
The language of traces
Under a regular policy, any RDP state _t uniquely determines a distribution over the remaining trace a_t o_t / r_t … a_H o_H / r_H ∈ ( / )^H-t+1.
Existing work <cit.> treats each element a_i o_i / r_i as an independent atomic symbol.
This approach neglects the internal structure of each tuple and of the observations, which are often composed of features.
As a result, common conditions such as the presence of a reward become unpractical to express.
In this work, we allow observations to be composed of internal features.
Let = 1×…×m be an observation space.
We choose to see it as the language
= 1⋯m given by the concatenation of the features.
Then, instead of representing an observation (o1, …, om) as an atomic symbol, we consider it as the word
o1 o2⋯ om.
This results into traces _i = (1⋯m/)^i+1,
and each regular policy and RDP state uniquely determine a distribution over strings from _H-t+1.
This fine-grained representation greatly simplifies the representation of most common conditions,
such as the presence of specific rewards or features in the observation vector.
Testing in the language metric
The metrics induced by the L_1 and L_∞ norms are completely generic and can be applied to any distribution.
Although this is generally an advantage, this means that they do not exploit the internal structure of the sample space.
In our application, a sample is a trace that, as discussed above, can be regarded as a string of a specific language.
An important improvement, proposed by <cit.>, is to consider the prefix distances L_1^p and L_∞^p, which take into account the variable length and conditional probabilities of longer suffixes.
This was the approach followed by the previous RDP learning algorithms.
However, these two norms are strongly different and lead to dramatically different sample and space complexities.
In this section, we define a new formalism that unifies both metrics and will allow the development of new techniques for distinguishing distributions over traces.
Specifically, instead of expressing the probability of single strings, we generalize the concepts above by considering the probability of sets of strings.
A careful selection of the sets to consider, which are languages, will allow an accurate trade-off between generality and complexity.
Let ℓ∈ℕ, let be an alphabet, and let be a set of languages consisting of strings in ^ℓ.
The language metric in
is the function L_: ^ℓ×^ℓ→, on
pairs of probability distributions p,p' over Γ^ℓ, defined as
L_(p, p') max_∈p() - p'(),
where the probability of a language is p() ∑_x ∈ p(x).
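A direct way to estimate this quantity from data (our own sketch; passing languages as membership predicates over traces is an assumption about the representation) is:

def language_metric(sample1, sample2, languages):
    # Empirical language metric L_Lambda: maximum, over the given languages, of the
    # difference between the fractions of traces that fall in the language.
    def prob(sample, member):
        return sum(1.0 for trace in sample if member(trace)) / len(sample)
    return max(abs(prob(sample1, member) - prob(sample2, member)) for member in languages)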
This original notion unifies all the most common metrics. Considering distributions over Γ^ℓ,
when = {{x}| x ∈^ℓ},
the language metric L_ reduces to L_∞.
When = 2^^ℓ, which is the set of all languages in ^ℓ,
the language metric becomes the total variation distance, which is half the value of L_1.
A similar reduction can be made for the prefix distances.
The language metric captures L_∞^p when = {x ^ℓ-t| t ∈ [0,ℓ], x ∈^t }, and it
captures L_1^p when = ∪_t ∈ [0,ℓ] 2^^t.
Testing in language classes
The language metric can be applied directly to the language of traces and used for testing in RDP learning algorithms.
In particular, it suffices to consider any set of languages that satisfy ⊆_H-h = (1⋯m/)^H-t+1⊆^H-t+1, for each ∈.
However, as we have seen above, the selection of a specific set of languages has a dramatic impact on the metric that is being captured.
In this section, we study an appropriate trade-off between generality and sample efficiency, obtained through a suitable selection of .
Intuitively, we seek to evaluate the distance between candidate states based on increasingly complex sets of languages. We present a way to construct a hierarchy of sets of languages of increasing complexity. As a first step, we define sets of basic patterns 𝒢_i of increasing complexity.
𝒢_1 = { a / | a ∈}∪{ / r | r ∈}∪{1⋯ oi⋯m / | i ∈ [m] , oi∈i},
𝒢_i = 𝒢_i-1∪(𝒢_i-1), ∀ i ∈ [2,m+2].
In particular, 𝒢_1 focuses on single components, by matching an action a, a reward r, or a single observation feature oi. At the opposite side of the spectrum, the set 𝒢_m+2 considers every possible combination of actions, observation, reward; namely, it includes the set of singleton languages
{{ ao1⋯ om/r }| ao1⋯ om/r ∈1⋯m/},
which is the most fine-grained choice of patterns, but it grows exponentially with m. On the contrary, the cardinality of 𝒢_1 is linear in m.
Starting from the above hierarchy, we identify two more dimensions along which complexity can grow. One dimension results from concatenating the basic patterns from 𝒢_i, and it is obtained by applying the operator C_k^ℓ. The other dimension results from Boolean combinations of languages, and it is obtained by applying the operator .
Thus, letting ℓ = H-t+1,
we define the following three-dimensional hierarchy of sets _i,j,k of languages:
_i,j,1 = _i,j-1,1∪C_j^ℓ(𝒢_i), ∀ i ∈ [1,m+2], ∀ j ∈ [ℓ],
_i,j,k = _i,j,k-1∪(_i,j,k-1), ∀ i ∈ [1,m+2], ∀ j ∈ [ℓ], ∀ k ∈ [2,ℓ].
The family _i,j,k induces a family of language metrics L__i,j,k, which are non-decreasing along the dimensions of the hierarchy:
L__i,j,k≤ L__i+1,j,k, L__i,j+1,k, L__i,j,k+1, ∀ i ∈ [1,m+2], ∀ j ∈ [ℓ], ∀ k ∈ [ℓ],
and it may actually be increasing since we include more and more languages as we move towards higher levels of the hierarchy.
Moreover, the last levels _m+2,ℓ,k satisfy
L__m+2,ℓ,k ≥ L_∞^p for every k ∈ [ℓ]
since {x ^ℓ-t| t ∈ [0,ℓ], x ∈^t }⊆_m+2,ℓ,k. Therefore, the metric L__m+2,ℓ,k is at least as effective as L_∞^p in distinguishing candidate states. It can be much more effective as shown next.
[Language metric in T-maze]
We now discuss the importance of our language metric for distinguishability,
with respect to the T-maze described in Example <ref> and the associated RDP in Example <ref>.
Consider the two states (q_⊤ 6, 6) and (q_ 6, 6) in the middle of the corridor and assume that the behavior policy is uniform.
In these two states, the probability of each future sequence of actions, observations, and rewards is identical, except for the sequences that reach S or the top/bottom cells.
These differ in the starting observation or the reward, respectively.
Thus, the maximizer in the definition of L_∞^p-distinguishability is the length u=6 and any string x that contains the initial observation.
Since the behavior policy is a random walk, the probability of x is 0.5^6 from one state, and 0 from the other, depending on the observation.
Therefore, the L_∞^p-distinguishability decreases exponentially with the length of the corridor.
Consider instead the language _1,1,1 and its associated L__1,1,1.
This set of languages includes ^* “110" ^* ∩^ℓ, which represents the language of any episode suffix containing the observation 110.
Since it does not depend on specific paths, its probability is 0 for (q_ 6, 6) and it equals the probability of visiting the leftmost cell for (q_⊤ 6, 6).
For sufficiently long episodes, this probability approaches 1.
Thus, in some domains, the language metric improves exponentially over L_∞^p.
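To illustrate the kind of languages contained in Λ_{1,1,1}, the sketch below builds membership predicates over traces whose steps are (action, observation, reward) triples with the observation given as a tuple of features. This is our own reading of 𝒢_1 and of the operator C_1^ℓ as "the basic pattern occurs somewhere in the suffix"; all function names are ours.

def contains_action(a):
    return lambda trace: any(step[0] == a for step in trace)

def contains_reward(r):
    return lambda trace: any(step[2] == r for step in trace)

def contains_observation(obs):
    # e.g. contains_observation((1, 1, 0)) matches the T-maze language
    # "some step shows the initial observation 110" used in the example above.
    return lambda trace: any(step[1] == obs for step in trace)

def contains_feature(i, value):
    return lambda trace: any(step[1][i] == value for step in trace)

def lambda_111(actions, rewards, feature_values):
    # One basic pattern, allowed to occur anywhere in the suffix; the number of
    # languages is linear in the number of actions, rewards and feature values.
    langs = [contains_action(a) for a in actions]
    langs += [contains_reward(r) for r in rewards]
    langs += [contains_feature(i, v)
              for i, values in enumerate(feature_values) for v in values]
    return langs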
The behavior policy has an L__i,j,k-distinguishability of at least μ_0 > 0,
where _i,j,k is constructed as above and is an input to the algorithm.
Given i∈[0,H], let p, p' ∈_i be two distributions over traces.
To have accurate estimates for the language metric over some _i,j,k,
we instantiate two estimators, p̂ and p̂', respectively built using datasets of episodes C and C', defined as the fraction of samples that belong to the language;
that is, for a language ∈_i,j,k, p̂() ≜∑_e ∈C 1{e ∈} / |C|
and p̂'() ≜∑_e ∈C' 1{e ∈} / |C'|.
§.§ Analysis
In this section, we show how to derive high-probability sample complexity bounds for AdaCT-H when Count-Min-Sketch (CMS) or the language metric L_ is used.
Intuitively, it is sufficient to show that the statistical test is correct with high probability; the remaining proof is identical to that of <cit.>.
AdaCT-H(,δ) returns a minimal RDP with probability at least 1-4AOQδ when CMS is used to store the empirical probability distributions of episode suffixes, the statistical test is
L_∞^p(_1,_2)≥√(8log(16(ARO)^H-t/δ)/min(|_1|,|_2|)),
and the size of the dataset is at least ||≥𝒪(√(H)/ d_minμ_0), where d_min = min_t,q,ao d_t(q,ao).
The proof of Theorem <ref> appears in Appendix <ref>.
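For completeness, here is a minimal Count-Min-Sketch in the spirit used above. This is our own sketch, not the paper's implementation: Python's built-in hash stands in for the pairwise-independent hash functions of the original data structure, and the constructor parameters mirror ε and δ_c.

import math, random

class CountMinSketch:
    # Point queries overestimate a count by at most eps * (total inserted mass)
    # with probability at least 1 - delta_c, and never underestimate it.
    def __init__(self, eps, delta_c, seed=0):
        self.width = max(1, math.ceil(math.e / eps))
        self.depth = max(1, math.ceil(math.log(1.0 / delta_c)))
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(self.depth)]
        self.table = [[0] * self.width for _ in range(self.depth)]
        self.total = 0

    def update(self, item, count=1):
        self.total += count
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, item)) % self.width] += count

    def query(self, item):
        return min(self.table[row][hash((salt, item)) % self.width]
                   for row, salt in enumerate(self.salts))

Storing the suffix counts of each candidate state in such a structure keeps the memory footprint at O(depth × width) instead of one counter per suffix, at the price of the inflated threshold in the theorem above.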
AdaCT-H(,δ) returns a minimal RDP with probability at least 1-4AOQδ when the statistical test is implemented using the language metric L_ and equals
L_(_1,_2)≥√(2log(4||/δ)/min(|_1|,|_2|)),
and the size of the dataset is at least ||≥𝒪(1/ d_minμ_0).
The proof of Theorem <ref> also appears in Appendix <ref>.
Note that by definition, μ_0 is the L-distinguishability of for the chosen language set , which has to satisfy μ_0>0 for AdaCT-H to successfully learn a minimal RDP.
Even though our sample complexity results are similar to those of previous work up to a factor √(H), as discussed earlier, μ_0 may be exponentially smaller for L_ than for .
We remark that the analysis does not hold if CMS is used to store the empirical probability distribution of the language metric L_, since the languages in a set may overlap, causing the probabilities to sum to a value significantly greater than 1.
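Combining the language metric with the threshold of the preceding theorem gives the test used by the language-restricted variant. The sketch below is ours: language_metric is the function sketched in the previous subsection and the argument names are assumptions.

import math

def test_distinct_language(sample1, sample2, languages, delta):
    dist = language_metric(sample1, sample2, languages)
    n_min = min(len(sample1), len(sample2))
    threshold = math.sqrt(2.0 * math.log(4.0 * len(languages) / delta) / n_min)
    return dist >= threshold

Since the cardinality of Λ_{1,1,1} is linear in the number of actions, observation features and reward values, both the metric and the threshold can be evaluated without ever enumerating the suffix space.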
§ EXPERIMENTAL RESULTS
In this section we present the results of experiments with two versions of AdaCT-H: one that uses CMS to compactly store probability distributions on suffixes, and one that uses the restricted language family _1,1,1.
We compare against FlexFringe <cit.>, a state-of-the-art algorithm for learning probabilistic deterministic finite automata, which includes RDPs as a special case.
To approximate episodic traces, we add a termination symbol to the end of each trace, but FlexFringe sometimes learns RDPs with cycles.
Moreover, FlexFringe uses a number of different heuristics that optimize performance, but these heuristics no longer preserve the high-probability guarantees.
Hence the automata output by FlexFringe are not always directly comparable to the RDPs output by AdaCT-H.
We perform experiments in five domains from the literature on POMDPs and RDPs: Corridor <cit.>, T-maze <cit.>, Cookie <cit.>, Cheese <cit.> and Mini-hall <cit.>.
Appendix <ref> contains a detailed description of each domain, as well as example automata learned by the different algorithms.
Table <ref> summarizes the results of the three algorithms in the five domains.
We see that AdaCT-H with CMS is significantly slower than FlexFringe, which makes sense since FlexFringe has been optimized for performance.
CMS compactly represents the probability distributions over suffixes, but AdaCT-H still has to iterate over all suffixes, which is exponential in H for the distance.
As a result, CMS suffers from slow running times, and exceeds the allotted time budget of 1800 seconds in the Mini-hall domain.
On the other hand, AdaCT-H with the restricted language family _1,1,1 is faster than FlexFringe in all domains except T-maze, and outputs smaller automata than both FlexFringe and CMS in all domains except Mini-hall.
The number of languages of _1,1,1 is linear in the number of actions, observation symbols, and reward values, so the algorithm does not have to iterate over all suffixes, and the resulting RDPs better exploit the underlying structure of the domains.
In Corridor and Cookie, all algorithms learn automata that admit an optimal policy.
However, in T-maze, Cheese and Mini-hall, the RDPs learned by AdaCT-H with the restricted language family _1,1,1 admit a policy that outperforms those of FlexFringe and CMS.
As mentioned, the heuristics used by FlexFringe are not optimized to preserve reward, which is likely the reason why the derived policies perform worse.
§ CONCLUSION
In this paper, we propose two new approaches to offline RL for Regular Decision Processes and provide their respective theoretical analysis. We also improve upon existing algorithms for RDP learning, and propose a modified algorithm using Count-Min-Sketch with reduced memory complexity. We define a hierarchy of language families and introduce a language-restricted approach, removing the dependency on -distinguishability parameters and compare the performance of our algorithms to FlexFringe, a state-of-the-art algorithm for learning probabilistic deterministic finite automata. Although CMS suffers from a large running time, the language-restricted approach offers smaller automata and optimal (or near optimal) policies, even on domains requiring long-term dependencies. Finally, as a future work, we plan to expand our approach to the online RDP learning setting.
§ PSEUDOCODE OF ADACT-H
Algorithm AdaCT-H(𝒟, δ)
  Input: dataset 𝒟 containing N traces in _H, failure probability 0<δ<1
  Output: set of RDP states , transition function τ: ×𝒜𝒪→
  _0 ← {q_0}; D(q_0) ← 𝒟                                        ▹ initial state
  for t = 0, …, H do
    _𝖼,t+1 ← {qao : q∈_t, ao∈𝒜𝒪}                                 ▹ get candidate states
    for all qao∈_𝖼,t+1 do D(qao) ← {e_t+1:H : aro e_t+1:H∈ D(q)}  ▹ compute suffixes
    q_𝗆a_𝗆o_𝗆 ← argmax_qao∈_𝖼,t+1 |D(qao)|                        ▹ most common candidate
    _t+1 ← {q_𝗆a_𝗆o_𝗆}; τ(q_𝗆, a_𝗆o_𝗆) ← q_𝗆a_𝗆o_𝗆               ▹ promote candidate
    _𝖼,t+1 ← _𝖼,t+1 ∖ {q_𝗆a_𝗆o_𝗆}                                 ▹ remove from candidate states
    for all qao∈_𝖼,t+1 do
      S ← {q'∈_t+1 : not TestDistinct(t, D(qao), D(q'), δ)}       ▹ confidence test
      if S = ∅ then _t+1 ← _t+1 ∪ {qao}; τ(q,ao) ← qao            ▹ promote candidate
      else pick q' ∈ S; τ(q,ao) ← q'; D(q') ← D(q') ∪ D(qao)      ▹ merge states
  return ← _0∪⋯∪_H+1, τ

Function TestDistinct(t, _1, _2, δ)
  Input: time t, two multisets _1 and _2 of traces in (×)^H-t+1, failure probability 0<δ<1
  Output: True if _1 and _2 are regarded as distinct, False otherwise
  return [ L_∞^p(_1,_2) ≥ √(2log(8(ARO)^H-t/δ)/min(|_1|,|_2|)) ]
§ PSEUDOMETRICS
We provide the definition of pseudometric spaces.
Given a set 𝒦 and a non-negative function d: 𝒦×𝒦→ℝ, we call (𝒦, d) a pseudometric space with pseudometric d, if for all x, y, z ∈𝒦:
(i) d(x, x) = 0,
(ii) d(x, y) = d(y, x),
(iii) d(x, y) + d(y, z) ≥ d(x, z).
If it also holds that d(x, y) = 0 iff x = y, then d is a metric.
§ PROOF OF THEOREMS
In this appendix we prove Theorems <ref> and <ref> using a series of lemmas. We first describe how to implement AdaCT-H using CMS.
Given a finite set 𝒵 and a probability distribution q∈Δ(𝒵), let q̂∈Δ(𝒵) be an empirical estimate of q computed using n samples.
CMS can store an estimate q̃ of the empirical distribution q̂.
In this setting, the vector v contains the empirical counts of each element of 𝒵, which implies ‖v‖_1=n and q̂(z_i)=v_i/n for each z_i∈𝒵.
The following lemma shows how to bound the error between q̃ and q̂.
Given a finite set 𝒵, a probability distribution q∈Δ(𝒵), and an empirical estimate q̂∈Δ(𝒵) of q obtained using n samples, let q̃ be the estimate of q̂ output by CMS with parameters δ_c and ε. With probability at least 1-|𝒵|δ_c it holds that ‖q̃-q̂‖_∞≤ε.
<cit.> show that for a point query that returns an approximation ṽ_i of v_i, with probability at least 1-δ_c it holds that
ṽ_i ≤ v_i + ε‖v‖_1.
In our case, the estimated probability of an element z_i∈𝒵 equals q̃(z_i)=ṽ_i/n, where ṽ_i is the point query for z_i.
Using the result above, with probability at least 1-δ_c we have
q̃(z_i) = ṽ_i/n ≤ v_i/n + ε‖v‖_1/n = q̂(z_i) + ε.
Since CMS never underestimates a value, q̂(z_i)≤q̃(z_i) trivially holds. Taking a union bound shows that the inequality above holds simultaneously for all z_i∈𝒵 with probability 1-|𝒵|δ_c.
The following two lemmas are analogous to Lemmas 13 and 14 of <cit.>.
For t∈[0,H], let _1 and _2 be multisets sampled from distributions p_1 and p_2 on Δ(_H-t).
Assume that we use CMS with parameters δ_c=δ/8(AOR)^H-t and ε=√(log(2/δ_c) / 2|_i| ) to store an approximation p_i of the empirical estimate p_i of p_i due to _i, i∈[2].
If p_1=p_2, with probability at least 1-δ the statistical test satisfies
(p_1, p_2) ≤√(8log(16(AOR)^H-t/δ)/min(|_1|,|_2|)).
Hoeffding's inequality states that for each i∈[2], with probability at least 1-δ_c it holds for each e∈_u, u≤ H-t, that
p_i(e) - p_i(e) ≤√(log(2/δ_c)/2|_i|).
We can now use Lemma <ref> to upper bound the metric as
L_(p_1, p_2) = max_u∈[0,H-t],e∈_up_1(e) - p_2(e)
≤max_u∈[0,H-t],e∈_u ( p_1(e) - p_1(e) + p_1(e) - p_1(e) + p_1(e) - p_2(e)
+ p_2(e) - p_2(e) + p_2(e) - p_2(e) )
≤√(log(2/δ_c)/2|_1|) + √(log(2/δ_c)/2|_1|) + √(log(2/δ_c)/2|_2|) + √(log(2/δ_c)/2|_2|)≤√(8log(2/δ_c)/min(|_1|,|_2|)).
Taking a union bound implies that this holds with probability at least 1-8(AOR)^H-tδ_c=1-δ.
For t∈[0,H], let _1 and _2 be multisets sampled from distributions p_1 and p_2 on Δ(_H-t).
Assume that we use CMS with parameters δ_c=δ/8(AOR)^H-t and ε=√(log(2/δ_c) / 2|_i| ) to store an approximation p_i of the empirical estimate p_i of p_i due to _i, i∈[2].
If p_1≠ p_2 and min(|_1|,|_2|)≥ 32 log (2/δ_c)/μ_0^2, with probability at least 1-δ the statistical test satisfies
(p_1, p_2) ≥√(8log(16(AOR)^H-t/δ)/min(|_1|,|_2|)).
We can use Lemma <ref> to lower bound the metric as
L_(p_1, p_2) = max_u∈[0,H-t],e∈_up_1(e) - p_2(e)
≥max_u∈[0,H-t],e∈_u ( p_1(e) - p_2(e) - p_1(e) - p_1(e) - p_1(e) - p_1(e)
- p_2(e) - p_2(e) - p_2(e) - p_2(e) )
≥μ_0 - √(log(2/δ_c)/2|_1|) - √(log(2/δ_c)/2|_1|) - √(log(2/δ_c)/2|_2|) - √(log(2/δ_c)/2|_2|)
≥μ_0 - √(8log(2/δ_c)/min(|_1|,|_2|))≥μ_0 - μ_0/2≥μ_0/2≥√(8log(2/δ_c)/min(|_1|,|_2|)).
Taking a union bound implies that this holds with probability at least 1-8(AOR)^H-tδ_c=1-δ.
The remainder of the proof of Theorem <ref> follows exactly the same steps as in the proof of <cit.>.
For completeness, we repeat the steps here.
The proof consists in choosing N=|| and δ such that the condition on min(|_1|,|_2|) in Lemma <ref> is true with high probability for each application of TestDistinct. Consider an iteration t∈[0,H] of AdaCT-H.
For a candidate state qao∈_𝖼,t+1, its associated probability is _t(q,ao) with empirical estimate p_t(qao)=|(qao)| / N, i.e. the proportion of episodes in that are consistent with qao. We can apply an empirical Bernstein inequality to show that
|p_t(qao) - _t(q,ao) |≥√(2p_t(qao)ℓ/N) + 14ℓ/3N = √( 2Mℓ) + 14ℓ/3 /N≤δ,
where M=|(qao)|, ℓ=log(4/δ), and δ is the failure probability of AdaCT-H. To obtain a bound on M and N, assume that we can estimate _t(q,ao) with accuracy _t(q,ao) / 2, which yields
_t(q,ao)/2 ≥√( 2Mℓ) + 14ℓ/3 /N
p_t(qao) ≥_t(q,ao) - √( 2Mℓ) + 14ℓ/3 /N≥_t(q,ao) - _t(q,ao)/2 = _t(q,ao)/2.
Combining these two results, we obtain
M = Np_t(qao) ≥ N_t(q,ao)/2 ≥N/2N√( 2Mℓ) + 14ℓ/3 = 1/2√( 2Mℓ) + 14ℓ/3 .
Solving for M yields M≥ 4ℓ, which is subsumed by the bound on M in Lemma <ref> since <1.
Hence the bound on M in Lemma <ref> is sufficient to ensure that we estimate _t(q,ao) with accuracy _t(q,ao) / 2.
We can now insert the bound on M from Lemma <ref> into (<ref>) to obtain a bound on N:
N ≥2 ( √( 2Mℓ) + 14ℓ/3) /_t(q,ao)≥2ℓ/_t(q,ao)8/√((H-t)log(4ARO)/ℓ+ 1 ) + 14/3≡ N_1.
To simplify the bound, we can choose any value larger than N_1:
N_1 ≤2ℓ/_t(q,ao)8/√(Hlog(4ARO) + Hlog(4ARO) ) + 14/3√(Hlog(4ARO))
< 32ℓ/_min √(Hlog(4ARO))≡ N_0,
where we have used _t(q,ao)≥_min, <1, ℓ=log 4 + log(1/δ)≥ 1, Hlog(4ARO)≥log 4≥ 1 and 8√(2) + 14/3 < 32/2. Choosing δ=δ_0/2QAO, a union bound implies that accurately estimating _t(q,ao) for each candidate state qao and accurately estimating p(e_0:u*) for each prefix in the multiset (qao) associated with qao occurs with probability 1-2QAOδ=1-δ_0, since there are at most QAO candidate states. Ignoring logarithmic terms, this equals the bound in the theorem.
It remains to show that the resulting RDP is minimal. We show the result by induction. The base case is given by the set _0, which is clearly minimal since it only contains the initial state q_0. For t∈[0,H], assume that the algorithm has learned a minimal RDP for sets _0,…,_t. Let _t+1 be the set of states at layer t+1 of a minimal RDP. Each pair of histories that map to a state q_t+1∈_t+1 generate the same probability distribution over suffixes. Hence by Lemma <ref>, with high probability TestDistinct(t,(qao),(q'a'o'),δ) returns false for each pair of candidate states qao and q'a'o' that map to q_t+1. Consequently, the algorithm merges qao and q'a'o'. On the other hand, by assumption, each pair of histories that map to different states of _t+1 have -distinguishability . Hence by Lemma <ref>, with high probability TestDistinct(t,(qao),(q'a'o'),δ) returns true for each pair of candidate states qao and q'a'o' that map to different states in _t+1. Consequently, the algorithm does not merge qao and q'a'o'. It follows that with high probability, will generate exactly the set _t+1, which is that of a minimal RDP.
The proof of Theorem <ref> is achieved by proving two very similar lemmas.
For t∈[0,H], let _1 and _2 be multisets sampled from distributions p_1 and p_2 on Δ(_H-t), and let p_1 and p_2 be empirical estimates of p_1 and p_2 due to _1 and _2, respectively.
If p_1=p_2, with probability at least 1-δ the statistical test satisfies
L_(p_1, p_2) ≤√(2log(4||/δ)/min(|_1|,|_2|)).
Let δ_ℓ=δ/2||.
Hoeffding's inequality states that for each ∈ and each i∈{1,2}, with probability at least 1-δ_ℓ it holds that
p_i() - p_i() ≤√(log(2/δ_ℓ)/2|_i|).
We can now upper bound the language metric as
L_(p_1, p_2) = max_∈p_1() - p_2()
≤max_∈ ( p_1() - p_1() + p_1() - p_2() + p_2() - p_2() )
≤√(log(2/δ_ℓ)/2|_1|) + 0 + √(log(2/δ_ℓ)/2|_2|)≤√(2log(2/δ_ℓ)/min(|_1|,|_2|)).
Taking a union bound implies that this bound holds with probability at least 1-2||δ_ℓ=1-δ.
For t∈[0,H], let _1 and _2 be multisets sampled from distributions p_1 and p_2 on Δ(_H-t), and let p_1 and p_2 be empirical estimates of p_1 and p_2 due to _1 and _2, respectively.
If p_1≠ p_2 and min(|_1|,|_2|)≥ 8log(4||/δ)/μ_0^2, with probability at least 1-δ the statistical test satisfies
L_(p_1, p_2) ≥√(2log(4||/δ)/min(|_1|,|_2|)).
We can lower bound the language metric as
L_(p_1, p_2) = max_∈p_1() - p_2()
≥max_∈ ( p_1() - p_2() - p_1() - p_1() - p_2() - p_2() )
≥μ_0 - √(log(2/δ_ℓ)/2|_1|) - √(log(2/δ_ℓ)/2|_2|)≥μ_0 - √(2log(4||/δ)/min(|_1|,|_2|))
≥μ_0 - μ_2/2≥μ_0/2≥√(2log(4||/δ)/min(|_1|,|_2|)).
Taking a union bound implies that this bound holds with probability at least 1-2||δ_ℓ=1-δ.
The remainder of the proof of Theorem <ref> is analogous to that of Theorem <ref>.
However, we no longer get a term √(H) from √(log((AOR)^H)) unless the number of languages in is exponential in H.
§ DETAILS OF THE EXPERIMENTS
In this appendix we describe each domain in detail and include examples of RDPs learned by AdaCT-H.
§.§ Corridor
This RDP example was introduced in <cit.>. The environment consists of a 2× m grid, with only two actions a_0 and a_1 which move the agent to states (0,i+1) and (1,i+1) respectively from state (·, i). The goal of the agent is to avoid an enemy which is present in position (0,i) with probability p_i^0, and at (1,i) with probability p_i^1. The agent receives a reward of +1 for avoiding the enemy at a particular column, and the probabilities p_i^0 and p_i^1 are switched every time it encounters the enemy. When the agent reaches the last column, its position is reset to the first column. The observation space is given by (i,j,e), where (i,j) is the cell position of the agent and e ∈{enemy, clear} denotes the presence of the enemy in the current cell. Fig. <ref> shows the minimal automaton obtained by all three algorithms for H=5.
§.§ T-maze
The T-maze environment was introduced by Bakker <cit.> to capture long-term dependencies with RL-LSTMs. As shown in Figure <ref>, at the initial position S, the agent receives an observation X, depending on the position of the goal state G in the last column. The agent can take four actions, North, South, East and West. The agent receives a reward of +4 on taking the correct action at the T-junction, and -1 otherwise, terminating the episode. The agent also receives a -1 reward for standing still. The agent receives observation 011 or 110 at the initial state, 101 throughout the corridor and 010 at the T-junction. Fig. <ref> shows the optimal automaton obtained when the available actions in the corridor are restricted to only East (the automaton obtained without this restriction is shown in Fig. <ref>). Table <ref> shows our results with the unrestricted action space. Both our approaches find the optimal policy in this case, unlike FlexFringe which fails to capture this long-term dependency.
§.§ Cookie domain
We modify the original cookie domain as described in Icarte et al. <cit.>, to a simpler domain consisting of 4 rooms, blue, white, green and red as shown in Fig. <ref>. If the agent presses the button in room red, a cookie appears in room blue or green with equal probability. The agent can move left, right, up or down, can press the button in room red, and eat the cookie to receive a reward 1, and then it may press the button again. There are 6 possible observations (4 for each room, and 2 for observing the cookie in the two rooms). We use the set _1,1,1 for distinguishability in the restricted language case. Our restricted language approach here finds the optimal policy and the smallest state space.
§.§ Cheese maze
Cheese maze <cit.> consists of 10 states, 6 observations, and 4 actions. After reaching the goal state, the agent receives a reward of +1, and the position of the agent is reinitialised to one of the non-goal states with equal probability. Our restricted language approach uses the set _1,1,1. For a horizon of 6, the results for the restricted language and FlexFringe are comparable; however, upon further increasing the horizon, FlexFringe outperforms AdaCT-H by learning cyclic RDPs, which is not possible in our approach.
§.§ Mini-hall
The mini-hall environment <cit.> shown in Fig. <ref> has 12 states, 4 orientations in 3 rooms, a goal state given by a star associated with a reward of +1, 6 observations and 3 actions, and the position of the agent is reset after the goal is reached. This setting is much more complex than the others because 12 states are mapped into 6 observations; for example, starting from observation 3, 3 actions are required under the optimal policy to solve the problem if the starting underlying state was in Room B or C. We use the set _1,1,1 in our restricted language approach for distinguishability. Although we get a much larger state space, our algorithm gets closer to the optimal policy. However, our CMS approach is not efficient in this case and exceeds the allotted time budget of 1800 seconds, as it requires iterating over the entire length of the trajectories.
|
http://arxiv.org/abs/2409.02610v1 | 20240904105231 | A spatial model for dormancy in random environment | [
"Helia Shafigh"
] | math.PR | [
"math.PR"
] |
A Spatial Model for Dormancy in Random Environment
Helia Shafigh (WIAS Berlin, Mohrenstraße 39, 10117 Berlin, Germany, [email protected])
September 9, 2024
==================================================
§ ABSTRACT
In this paper, we introduce a spatial model for dormancy in random environment via a two-type branching random walk in continuous-time, where individuals can switch between dormant and active states through spontaneous switching independent of the random environment. However, the branching mechanism is governed by a random environment which dictates the branching rates. We consider three specific choices for random environments composed of particles: (1) a Bernoulli field of immobile particles, (2) one moving particle, and (3) a Poisson field of moving particles. In each case, the particles of the random environment can either be interpreted as catalysts, accelerating the branching mechanism, or as traps, aiming to kill the individuals. The difference between active and dormant individuals is defined in such a way that dormant individuals are protected from being trapped, but do not participate in migration or branching.
We quantify the influence of dormancy on the growth resp. survival of the population by identifying the large-time asymptotics of the expected population size. The starting point for our mathematical considerations and proofs is the parabolic Anderson model via the Feynman-Kac formula. Especially, the quantitative investigation of the role of dormancy is done by extending the Parabolic Anderson model to a two-type random walk.
Keywords and phrases. Parabolic Anderson model, dormancy, populations with seed-bank, branching random walk, Lyapunov exponents, Rayleigh-Ritz formula, switching diffusions, Feynman-Kac formula, large deviations for two-state Markov chains
§ INTRODUCTION AND MAIN RESULTS
§.§ Biological Motivation
Dormancy is an evolutionary trait that has developed independently across various life forms and is particularly common in microbial communities. To give a definition, we follow <cit.> and refer to dormancy as the ability of individuals to enter a reversible state of minimal metabolic activity. The collection of all dormant individuals within a population is also often called a seed-bank. Maintaining a seed-bank leads to a decline in the reproduction rate, but it also reduces the need for resources, making dormancy a viable strategy during unfavourable periods. Initially studied in plants as a survival strategy (cf. <cit.>), dormancy is now recognized as a prevalent trait in microbial communities with significant evolutionary, ecological, and pathogenic implications, serving as an efficient strategy to survive challenging environmental conditions, competitive pressure, or antibiotic treatment. However, it is at the same time a costly trait whose maintenance requires energy and
a sophisticated mechanism for switching between active and dormant states. Moreover, the increased survival rate of dormant individuals must be weighed against their low reproductive activity. Despite its costs, dormancy still seems to provide advantages in variable environments. For a recent overview on biological dormancy and seed-banks we refer to <cit.>.
The existing stochastic models for dormancy can be roughly categorized into two approaches: population genetic models and population dynamic models. While the first approach assumes a constant population size and focusses on the genealogical implications of seed-banks, the latter is typically concerned with individual-based modelling through the theory of branching processes. Following a brief example in the book <cit.>, a two-type branching process (without migration) in a fluctuating random environment has been introduced in <cit.>, which served as a motivation for this paper. In <cit.>, the authors consider three different switching strategies between the two types (dormant and active), namely the stochastic (or: spontaneous; simultaneous) switching, responsive switching and anticipatory switching. In the latter two strategies, individuals adapt to the fluctuating environment by selecting their state (dormant or active) based on environmental conditions via e.g. an increased reproduction activity during beneficial phases and a more extensive seed-bank during unfavourable ones (resp. vice versa). In contrast, the stochastic switching strategy, which remains unaffected by environmental changes, proves especially advantageous during catastrophic events, as it, with high probability, ensures the existence of dormant individuals, which may contribute to the survival of the whole population, when a severely adverse environment might eradicate all active ones. As an example, it is estimated that more than 80% of soil bacteria are
metabolically inactive at any given time, forming extensive seed-banks of dormant individuals independent of the current conditions (cf. <cit.> and <cit.>). This makes the understanding of the stochastic switching strategy an interesting and important task.
§.§ Modelling Approach and Goals
The aim of this paper is to investigate the stochastic switching strategy in order to quantitatively compare the long-term behaviour of populations with and without this dormancy mechanism.
Inspired by the Galton-Watson process with dormancy introduced in <cit.>, our first goal was to extend this model to a continuous-time spatial model on ℤ^d. It is worth noting that spatial models for dormancy have already been considered in the setting of population genetics (cf. <cit.>), where the population consists of different genetic types being inherited from parents to children. In such models, the total population size is fixed, so that the questions that arise are not about the extinction and survival of the whole population but rather about the evolution of the fraction of the different types. Notably, one of the goals in <cit.> is to determine criteria for co-existence resp. clustering of types in the limit of large population sizes. Another similar population genetics model, but this time with a (static) random environment, has been introduced in <cit.>, in which the authors investigate the influence of dormancy again on co-existence and clustering. To the best of our knowledge, corresponding spatial models for dormancy in the setting of population size models are still missing, such that the extension of the branching process in <cit.> seems to be a natural step. At the same time, there is a large repertoire of branching random walk models in random environment in the literature (cf. <cit.> for a survey), even though none of them incorporates dormancy. Hence, by introducing a continuous-time spatial model with migration, branching resp. extinction driven by a random environment, as well as a Markovian switching between the two states active and dormant, we bridge the gap between (non-spatial) two-type branching processes with dormancy on one side, and spatial branching random walks in random environments (without dormancy) on the other side of the existing literature. In particular, our main interest lies in a quantitative comparison of the population size of our model to those of existing branching random walk models without dormancy.
§.§ Description of the Model
In our model, the population lives on ℤ^d and consists of two different types i∈{0,1} of particles, where we refer to 0 as dormant and to 1 as active. Let η(x,i,t) be the number of particles in spatial point x∈ℤ^d and state i at time t≥ 0, which shall evolve in time according to the following rules:
* at time t=0, there is only one active particle in 0∈ℤ^d and all other sites are vacant;
* all particles act independently of each other;
* active particles become dormant at rate s_1≥ 0 and dormant particles become active at rate s_0≥ 0;
* active particles split into two at rate ξ^+(x,t)≥ 0 and die at rate ξ^-(x,t)≥ 0, depending on their spatial location x and on time t, where both ξ^+ and ξ^- are random fields;
* active particles jump to one of the neighbour sites with equal rate κ≥ 0;
* dormant particles do not participate in branching, dying or migration.
Write η_0(x,i):=η(x,i,0)=δ_(0,1)(x,i) and ℙ_η_0 for the corresponding probability measure with start in η_0. Then (η,ℙ_η_0) describes a Markov process on ℕ^ℤ^d. In the following, we abbreviate ξ(x,t):=ξ^+(x,t)-ξ^-(x,t) for the balance between branching and dying and refer to ξ as the underlying random environment. Let
u(x,i,t):=𝔼_η_0[η(x,i,t)]
denote the expected number of particles in x∈ℤ^d and state i∈{0,1} at time t with initial condition
u(x,i,0)=δ_(0,1)(x,i).
Note, that the expectation is only taken over switching, branching and dying and not over the random environment ξ. If we average over ξ as well, what we will denote in the following by <·>, then we refer to
<u(x,i,t)>=<𝔼_η_0[η(x,i,t)]>
as the annealed number of particles in x∈ℤ^d and in state i∈{0,1} at time t.
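For intuition, the evolution rules above can be simulated directly. The following Gillespie-style sketch is our own illustration: the function and argument names are ours, and the environment is passed as a callable xi(x, t) which is treated as frozen between consecutive events, so for the time-dependent environments introduced below it is only an approximation.

import random

def simulate_population(xi, kappa, gamma, s0, s1, t_max, d=1, rng=random.Random(1)):
    # Particles are (position, state) with state 1 = active, 0 = dormant;
    # start from one active particle at the origin.
    particles = [((0,) * d, 1)]
    t = 0.0
    while particles:
        rates = [(2 * d * kappa + s1 + abs(gamma) * xi(x, t)) if i == 1 else s0
                 for (x, i) in particles]
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)
        if t >= t_max:
            break
        k = rng.choices(range(len(particles)), weights=rates)[0]
        (x, i), r = particles[k], rng.uniform(0.0, rates[k])
        if i == 1 and r < 2 * d * kappa:                       # migration of an active particle
            axis, sign = rng.randrange(d), rng.choice((-1, 1))
            particles[k] = (tuple(c + sign if j == axis else c
                                  for j, c in enumerate(x)), 1)
        elif r < 2 * d * kappa * i + (s1 if i == 1 else s0):   # switch dormant/active
            particles[k] = (x, 1 - i)
        elif gamma > 0:                                        # branching at a catalyst
            particles.append((x, 1))
        else:                                                  # killed by a trap
            particles.pop(k)
    return particles  # particles alive at time t_max

Averaging the number of returned particles over many runs (and over draws of ξ) gives a Monte-Carlo estimate of <U(t)>; for γ<0 the fraction of runs ending with a non-empty population estimates the annealed survival probability.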
§.§ Choices of the Random Environment
In this paper we are going to consider three specific choices of the random random environment ξ which derives the evolution of the particles:
* Bernoulli field of immobile particles. Place in each point x∈ℤ^d independently and with probability p∈(0,1) one single particle which does not exhibit any movement. This results in a stationary field of immobile particles with a Bernoulli distribution over ℤ^d.
* One moving particle. Here the random environment is dynamic and consists of one single particle starting in the origin and moving around according to a simple symmetric random walk with jump rate 2dρ.
* Poisson field of moving particles. At point x∈ℤ^d, independently and according to a Poisson distribution with intensity ν, place a random number of particles. The particles move independently of each other, each performing a simple symmetric random walk with jump rate 2dρ. This setup generates a field of moving particles starting from a Poisson cloud.
Clearly, for each of the choices above, ξ is a non-negative number, which results always in an positive balance between branching and killing. To allow for negative rates as well, we multiply ξ with some factor γ∈[-∞,∞) and will consider γξ as the underlying random environment. Thus, each of our three choices can be either interpreted as a field of traps, which corresponds to γ<0, or catalysts, if γ>0. In the first case, active individuals will die with rate |γ| if they encounter one of the traps, whereas they branch into two with rate γ in the presence of catalysts in the latter case.
§.§ Results
Recall the number of particles u(x,i,t) in point x∈ℤ^d and state i∈{0,1} at time t, as defined in (<ref>). The quantity we are interested in at most in the current paper is the annealed total number of particles
<U(t)>:=∑_x∈ℤ^d∑_i∈{0,1}<u(x,i,t)>,
which turns into the annealed survival probability up to time t for γ<0. Our results concern the large-time asymptotics of <U(t)> in case of both positive and negative γ and for the three specific choices of the random environment mentioned above.
Our first theorem quantifies the asymptotic behaviour of <U(t)> if the environment consists of a Bernoulli field of particles:
Let ξ be chosen according to (1) and d≥ 1.
(a) If γ=-∞, then the annealed survival probability converges to 0, as t→∞, and obeys the asymptotics
log<U(t)>=-c_d(log(1-p))^2/d+2(κ s_0/(s_0+s_1))^d/d+2t^d/d+2(1+o(1)), t→∞,
for some constant c_d depending only on the dimension d.
(b) If γ∈(0,∞), then the annealed number of particles satisfies
lim_t→∞1/tlog<U(t)>=γ-s_1-(γ+s_0-s_1)^2-s_0s_1/√(γ^2+2γ(s_0-s_1)+(s_0+s_1)^2).
Our second theorem deals with the case of one moving particle:
Let ξ be chosen according to (2).
(a) If γ∈(-∞,0), then the annealed survival probability converges to 0 in the dimensions d∈{1,2}, as t→∞, and satisfies the asymptotics
<U(t)>={[ 2√((s_0+s_1)(s_0(ρ+κ)+s_1ρ))/√(π)s_0|γ|1/√(t)(1+o(1)), d=1,; 4π(s_1ρ+s_0(ρ+κ))/s_0|γ|1/log(t)(1+o(1)), d=2 ].
as t→∞.
In dimensions d≥ 3 the annealed survival probability admits the limit
lim_t→∞<U(t)>=1/(1+|γ| G_d)∈(0,1)
for a constant G_d∈(0,∞).
(b) If γ∈(0,∞), then for all d≥ 1, the annealed number of particles satisfies
lim_t→∞1/tlog<U(t)>=sup_f∈ℓ^2(ℤ^d×{0,1}),f_2=1(A_1(f)-A_2(f)-A_3(f))+√(s_0s_1)
where
A_1(f):= γ f(0,1)^2,
A_2(f):= 1/2∑_i∈{0,1}∑_x,y∈ℤ^d, x∼ y(iκ+ρ)(f(x,i)-f(y,i))^2,
A_3(f):= ∑_x∈ℤ^d√(s_0s_1)(f(x,1)-f(x,0))^2+∑_i∈{0,1}∑_x∈ℤ^ds_i f(x,i)^2.
Finally, we establish the asymptotics of <U(t)> for the third choice of the environment, namely a Poisson field of moving particles:
Let ξ be chosen according to (3).
(a) If γ∈[-∞,0), then the annealed survival probability converges exponentially fast to 0 as t→∞ in all dimensions d≥ 1 and obeys the asymptotics
log<U(t)>={[ -4ν√(ρ s_0/(s_0+s_1)π)√(t)(1+o(1)), d=1,; -4νρπ s_0/s_0+s_1t/log(t)(1+o(1)), d=2,; -λ_d,γ, ρ,ν,s_0,s_1 t(1+o(1)), d≥ 3, ].
as t→∞, for some constant λ_d,γ, ρ,ν,s_0,s_1>0 depending on all the parameters.
(b) If γ∈(0,∞), then for all dimensions d≥ 1 the annealed number of particles satisfies the double-exponential asymptotics
lim_t→∞1/tloglog<U(t)>=sup_f∈ℓ^2(ℤ^d),f_2=1(γ f(0)^2-1/2∑_x,y∈ℤ^d, x∼ yρ(f(x)-f(y))^2).
§.§ Relation to the Parabolic Anderson Model
Recall the number of particles u(x,i,t) in point x∈ℤ^d and state i∈{0,1} at time t as defined in (<ref>). It is already known (cf. <cit.>) that u:ℤ^d×{0,1}×[0,∞)→ℝ solves the partial differential equation
{[ /ṭu(x,i,t) = iκΔ u(x,i,t) + Q u(x,i,t)+iγξ(x,t)u(x,i,t), t>0,; u(x,i,0) = δ_(0,1)(x,i), ].
where
Q u(x,i,t):= s_i(u(x,1-i,t)-u(x,i,t))
and Δ is the discrete Laplacian
Δ f(x):=∑_y∈ℤ^d, x∼ y[f(y)-f(x)]
acting on functions f:ℤ^d→ℝ, such that
Δ u(x,i,t):=∑_y∈ℤ^d, x∼ y[u(y,i,t)-u(x,i,t)].
We call (<ref>) the parabolic Anderson model with switching. If we consider a one-type branching random walk with only active particles evolving under the same evolution rules except for the switching mechanism, starting from one single particle in the origin, then it is well-known and was first shown in <cit.> that the corresponding expected number of particles solves the parabolic Anderson model (without switching)
{[ /ṭu(x,t) = κΔ u(x,t) + γξ(x,t)u(x,t), t>0, x∈ℤ^d; u(x,0) = δ_0(x), x∈ℤ^d. ].
The parabolic Anderson model has been studied intensely during the past years and a comprehensive overview of results can be found in <cit.>. One of the most powerful tools and often the starting point of the analysis of the PAM is the Feynman-Kac formula
u(x,t)=𝔼_x^X[exp(∫_0^tγξ(X(s),t-s) ṣ)δ_0(X(t))],
where 𝔼_x^X denotes the expectation over a simple symmetric random walk X with start in x and generator κΔ. In other words, the Feynman-Kac formula asserts that the time evolution of all particles can be expressed as an expectation over one single particle moving around according to the same migration kernel and with a varying mass, representing the population size. As we can see on the right hand-side of (<ref>), the mass of X changes exponentially depending on the random environment ξ. Note, that if γ<0, then the right hand-side of (<ref>) lies in [0,1] and represents the survival probability of a single particle up to time t. Now, since the Feynman-Kac formula is a mighty tool for the study of the parabolic Anderson model, it is only natural to pursue an analogous formulation in case of our two-type process with switching. To this end, let α=(α(t))_t≥ 0 be a continuous-time Markov process with state space {0,1} and generator
Qf(i):=s_i(f(1-i)-f(i))
for f:{0,1}→ℝ. Conditioned on the evolution of α, we define a continuous-time random walk X=(X(t))_t≥ 0 on ℤ^d which is supposed to stay still at a time t, if α(t)=0, or perform a simple symmetric walk with jump rate 2dκ, if α(t)=1. In other words, the joint process (X,α) is the Markov process with the generator
ℒf(x,i):=iκ∑_y∼ x(f(y,i)-f(x,i))+s_i(f(x,1-i)-f(x,i))
for x∈ℤ^d, i∈{0,1} and a test function f:ℤ^d×{0,1}→ℝ. Note, that the random walk X itself is not Markovian due to the dependence on α. Then, we call (X,α) a regime-switching random walk (cf. <cit.> for the continuous-space version) and interpret X as a particle which is active at time t, if α(t)=1, and dormant otherwise. Then, given a fixed realization of ξ, the formal solution of (<ref>) is given by the Feynman-Kac formula
u(x,i,t)=𝔼_(x,i)^(X,α)[exp(∫_0^tγα(s)ξ(X(s),t-s) ṣ)δ_(0,1)(X(t),α(t))],
where 𝔼_(x,i)^(X,α) denotes the expectation over the joint process (X,α) starting in (x,i) (cf. <cit.>). Thus, the study of our two-type branching process can be reduced to the analysis of only one particle with the same migration, branching and switching rates.
§.§ Related Results
The parabolic Anderson model without switching has been a topic of great interest during the past years. For a recent overview of results related to the classical model as well as many extensions we refer to <cit.>. In this section we recall a few results on the parabolic Anderson model on ℤ^d which are most relevant for and most related to our models. Let us start with the case γ<0, i.e. with the case of a trapping random environment. The first model is best known as random walk among Bernoulli obstacles and is the analogous version of our model (1) without switching, i.e. the time-independent potential ξ, which represents the traps or obstacles, is Bernoulli distributed with some parameter p>0. After getting trapped, the random walk with generator κΔ dies immediately, which corresponds to the hard trapping case γ=-∞. It has been shown in <cit.> using a coarse graining technique known as the method of
enlargement of obstacles that the annealed survival probability up to time t decays asymptotically as
exp(-c_d(log(1-p))^2/d+2(κ t)^d/d+2(1+o(1))), t→∞,
with the same constant c_d as in (<ref>). In the setting of time-dependent potentials, the case of one moving trap with generator ρΔ with soft killing (γ∈(-∞,0)) has been studied in <cit.> for which it has been proven that the survival probability of a random walk with generator κΔ among this mobile traps has the asymptotics
{[ 2√(ρ+κ)/√(π)γ1/√(t)(1+o(1)), d=1,; 4π(ρ+κ)/|γ|log(t)(1+o(1)), d=2,; 1-G_d(0)/ρ+κ+|γ| G_d(0), d≥ 3, ].
for t→∞ where G_d denotes the Green's function of a random walk with generator Δ. Hence, in dimension d∈{1,2}, the survival probability converges polynomially to zero, where the rate of convergence depends on all the parameters ρ,κ and γ, whereas in dimensions d≥ 3 the survival probability converges to some number in (0,1) and depends on the averaged total time spent in the origin, represented by the Green's function, as well. Finally, the case of a Poisson field of moving traps with the random potential according to (3) has been investigated in <cit.>. Here, the survival probability of the random walk with jump rate 2dκ is known to decay asymptotically as
{[ exp(-4ν√(ρ/π)√(t)(1+o(1))), d=1,; exp(-4νρπt/log(t)(1+o(1))), d=2,; exp(-λ_d,γ,ρ,ν t(1+o(1))), d≥ 3, ].
for some λ_d,γ,ρ,ν>0, in both case of hard and soft trapping rates γ∈[-∞,0).[In <cit.> the authors have considered slightly different migration rates resulting in slightly different prefactors in the asymptotics, namely the normalized Laplacian 1/2dΔ instead of Δ. For better comparison, we stated (<ref>) in case of the non-normalized Laplacian Δ.] That the survival probability, at least in dimensions d∈{1,2}, seems to be independent of the jump rate κ of the individuals, is due to the fact that the asymptotics (<ref>) come from the behaviour that the individuals stay in the origin throughout the time, which corresponds to κ=0.
In the case γ>0 of catalysts, the analogous version without switching of our first model regarding Bernoulli distributed immobile catalysts is covered in <cit.> as an example of a static bounded potential ξ. Although the results in <cit.> are stated in great generality, applying them to the Bernoulli potential yields that
lim_t→∞1/tlog<U(t)>=γ,
which come from the behaviour that the individuals find a region full of catalysts and stay there the whole time, such that they can continue branching throughout the time.
For the case of one moving catalyst with jump rate 2dρ, it has been shown in <cit.> that the annealed number of particles increases exponentially as well and the exponential growth rate is given by the variational formula
lim_t→∞1/tlog<U(t)>=sup_f∈ℓ^2(ℤ^d),f_2=1(γ f(0)^2-1/2∑_x,y∈ℤ^d, x∼ y(κ+ρ)(f(x)-f(y))^2).
A similar result but with double-exponential growth has been proven in <cit.> for the case of a Poisson field of moving catalysts, where the limit
lim_t→∞1/tloglog<U(t)>=sup_f∈ℓ^2(ℤ^d),f_2=1(γ f(0)^2-1/2∑_x,y∈ℤ^d, x∼ yρ(f(x)-f(y))^2).
has been shown to be finite in all dimensions d≥ 1. The absence of κ is due to the fact that it is most favourable for the population growth if the individuals stay immobile, which corresponds to κ=0. Moreover, is has been shown in <cit.> that there are cases, where already the quantity 1/tlog<U(t)> converges to a finite limit, namely in the transient dimensions d≥ 3 under the assumption that 0≤γ/ρ≤ G_d(0)^-1, where G_d(0) denotes the Green's function again.
§.§ Discussion
In this section, we discuss the extent to which the stochastic dormancy strategy, as defined in our model, affects the long-time dynamics of the population. Let us start with the case of catalytic random environments.
Recall from (<ref>) that if the environment is chosen according to a static Bernoulli field, then in the analogous model without dormancy the population grows exponentially fast in time t with rate γ. Theorem 1.1.(b) asserts that the population growth still occurs exponentially in t when dormancy is incorporated; however, the growth rate is no longer γ any more but rather a smaller constant, as seen from (<ref>). Indeed, an easy calculation shows that
s_1+(γ+s_0-s_1)^2-s_0s_1/√(γ^2+2γ(s_0-s_1)+(s_0+s_1)^2)>0
for all γ,s_0,s_1>0, such that the growth rate is strictly decreased after incorporating the stochastic dormancy strategy. It is worth mentioning that this effect comes from the probability for large deviations of the time spent in the active state. Indeed, as we will see later in the proofs, (<ref>) can also be expressed as
lim_t→∞1/tlog<U(t)>=γ-inf_a∈[0,1]{I(a)+γ(1-a)},
where I is a non-negative and strictly convex function defined as
I(a)=-2√(s_0s_1a(1-a))+(s_1-s_0)a+s_0.
Next, note that the population only grows in those time intervals in which the individuals are active. Now, on one hand, we have a law of large numbers for the fraction of time spent in the active state, which asserts that the average time proportion each individual spends in the active state equals s_0/(s_0+s_1) with probability one. On the other hand, as we will prove in Section 2, there is a large deviation principle asserting that for any other fraction of time a∈[0,1], the probability of spending a· t time units in the active state up to time t decays exponentially in t for t→∞, where the decay rate depends on the proportion a. This decay rate is nothing but the function I defined in (<ref>). Hence, (<ref>) tells us that the original growth rate γ is decreased by γ multiplied by the proportion 1-a of time spent in the dormant state on one hand, as there is no reproduction in this case, and by I(a) on the other hand, as it represents the probabilistic cost to spend exactly the proportion a of time in the active state. At the end, this probabilistic cost has to be weighed against the positive contribution to reproduction, such that I(a)+γ(1-a) has to be optimized over a.
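As a quick numerical illustration of this trade-off (our own sketch; the grid minimisation is crude but sufficient here):

import math

def dormancy_growth_rate(gamma, s0, s1, grid=10000):
    # gamma - inf_{a in [0,1]} { I(a) + gamma*(1-a) }  with
    # I(a) = -2*sqrt(s0*s1*a*(1-a)) + (s1-s0)*a + s0.
    def cost(a):
        return -2.0 * math.sqrt(s0 * s1 * a * (1.0 - a)) + (s1 - s0) * a + s0 + gamma * (1.0 - a)
    return gamma - min(cost(k / grid) for k in range(grid + 1))

For instance, dormancy_growth_rate(1.0, 1.0, 1.0) returns roughly 0.618, strictly below the rate γ=1 of the corresponding model without dormancy, and the minimising proportion a lies strictly between the almost-sure proportion s_0/(s_0+s_1)=1/2 and 1.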
Continuing with one moving catalyst and comparing (<ref>) to (<ref>), we see that the population again grows exponentially in t and the rate is affected by all the involved mechanisms. While the first term in the variational formula (<ref>) shows that branching with rate γ occurs whenever the distance between individuals and catalytic particles is equal to 0, the term γ f(0,1)^2 appearing in (<ref>) has the interpretation that in case of dormancy, individuals can only branch with rate γ if, first, the distance between them and the catalyst is equal to 0, and second, if they are in the active state 1. To highlight another difference, we see that the only probabilistic cost that appears in the variational formula (<ref>) is the one coming from the movement of the individuals (with rate 2dκ) as well as of the catalyst (with rate 2dρ), whereas in (<ref>) besides the movements appearing in the term A_2, also the exchange between states, represented by A_3, has to be taken into account. The additional term √(s_0s_1) comes from a change of measure , which will be clarified in Section 2.
In case of a Poisson field of moving catalysts, we see that our asymptotics (<ref>) is on a double-exponential scale and equals the variational formula (<ref>) for the corresponding model without dormancy. In other words, although the stochastic dormancy strategy slows down the population growth due to the lack of reproduction in dormant phases in the first two choices of the environment, this inactivity does not seem to influence the population growth at all, if the moving catalysts start from a Poisson cloud. As will be revealed in Section 5, this is due to the fact that if the individuals manage to find favourable regions with a high density of catalysts, then the reproduction rate in the active state is on such a high scale, namely double-exponentially in time t, that the exponential probabilistic cost to stay active is negligible in comparison to the high positive outcome. Thus, the variational formula (<ref>) does not take dormancy into account and depends only on the branching rate and movement of the catalysts.
As we will see later in the proofs, the independence of κ arises from the fact that it is most favourable for the population growth if the individuals stay immobile, which corresponds to κ=0, and matches the behaviour in the model without dormancy, as mentioned in the last section.
Next, we discuss the case γ<0 of trapping environments. Our results concerning this case can be summarized more briefly, since they share a similarity regarding the dependence on dormancy: At least in dimensions, in which we have an explicit expression for the asymptotic survival probability, we see that the latter is increased after incorporating the stochastic dormancy strategy in comparison to the corresponding models without dormancy, and is monotone in the average time s_1/(s_0+s_1) spent in the dormant state. Moreover, setting s_1=0 yields exactly the same asymptotics as for the models without dormancy, making our results a generalization for arbitrary s_1≥ 0. This is immediate clear for the Bernoulli field of immobile traps in all dimensions, as comparing (<ref>) and (<ref>) shows. As will be addressed in the proof, formula (<ref>) also indicates that the higher survival probability results from a time-change, since individuals can only move towards the immobile traps during active phases. Therefore, if we only take into account those time intervals in which the individuals are mobile, which in average accounts for a proportion of s_0/(s_0+s_1) of the whole time due to the law of large numbers, then (<ref>) translates into (<ref>). Note that here the law of large numbers dictates the behaviour of the time spent in the active state and not the large deviation principle, since the scale t^d/(d+2) is much smaller than the large deviation scale t.
The law of large number seems to play a role also in the case of one moving catalyst. However, comparing (<ref>) and (<ref>) demonstrates that in this case the positive effect of dormancy on the survival probability does not only come from a time-change. Rewriting the pre-factor
2√((s_0+s_1)(s_0(ρ+κ)+s_1ρ))/(√(π)s_0|γ|)=2√(s_0/(s_0+s_1)·κ+ρ)/(√(π)· s_0/(s_0+s_1)·|γ|)
of the polynomial asymptotics (<ref>) in dimension d=1 and comparing it to (<ref>) suggests that, although the time-change is still present as a pre-factor of the jump rate κ, the killing rate γ is reduced as well, since the individuals are again only a proportion of s_0/(s_0+s_1) of the time active and therefore vulnerable to the traps. For d=2, we can see both effects as well. Also the monotonicity of the survival probability in the average dormancy proportion s_1/(s_0+s_1) becomes evident through a simple calculation.
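The algebra behind this rewriting is elementary but easy to get wrong; a quick symbolic check of our reading of the two prefactors (assuming SymPy is available; the parenthesisation is ours, and gamma below stands for |γ|) is:

import sympy as sp

s0, s1, rho, kappa, gamma = sp.symbols('s0 s1 rho kappa gamma', positive=True)
original  = 2 * sp.sqrt((s0 + s1) * (s0 * (rho + kappa) + s1 * rho)) / (sp.sqrt(sp.pi) * s0 * gamma)
rewritten = 2 * sp.sqrt(s0 / (s0 + s1) * kappa + rho) / (sp.sqrt(sp.pi) * (s0 / (s0 + s1)) * gamma)
print(sp.simplify(original**2 - rewritten**2))   # prints 0: the two (positive) prefactors coincide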
We see similar effects also in case of a Poisson field of moving traps by comparing (<ref>) to (<ref>), where the reduction of the exponential decay rate by factor √(s_0/(s_0+s_1)) in dimension 1 and by factor s_0/(s_0+s_1) in dimension 2 comes from the law of large numbers as well, whereas in the dimensions d≥ 3 the large deviation principle dictates the asymptotics due to the joint time scale t. The (surprising) independence of the survival probability of the killing rate γ as well as the jump rate κ has been discussed in <cit.> for the corresponding model without dormancy and underlies the same reasons in our case.
To summarize, for γ>0 the number of particles up to time t is either unchanged or reduced due to dormancy in our models, while for γ<0, at least in those cases in which an explicit expression is given, the survival probability is increased with dormancy and is monotone in the average time spent in the dormant state.
§.§ Outline
The rest of this paper is organized as follows. In Section 2 we convert our branching model into a switching random walk via the Feynman-Kac formula in order to obtain a more convenient representation of <U(t)>. Further, we collect some results related to the distribution and large deviations of the local times of α. Sections 3, 4 and 5 are dedicated to the proofs of our main Theorems 1.1, 1.2 and 1.3, respectively.
§ PREPARATORY FACTS
§.§ Feynman-Kac formula
As discussed in the introduction, our dormancy model is motivated by population dynamics and initially defined as a two-type branching random walk with Markovian switching between the types. However, all our proofs and considerations are founded on the Feynman-Kac formula (<ref>), which serves as the cornerstone for the subsequent steps throughout the remainder of this paper. Note that our choices of our dynamic random environments (2) and (3) are reversible in time, in the sense that (ξ(·,t))_0≤ t≤ T is equally distributed to (ξ(·,T-t))_0≤ t≤ T, for all T>0. Hence, taking the expectation with respect to ξ and changing the order of integrals, we can write
<U(t)> =∑_x∈ℤ^d∑_i∈{0,1}𝔼_(x,i)^(X,α)[<exp(∫_0^tγα(s)ξ(X(s),t-s) ṣ)δ_(0,1)(X(t),α(t))>]
=∑_x∈ℤ^d∑_i∈{0,1}𝔼_(x,i)^(X,α)[<exp(∫_0^tγα(s)ξ(X(s),s) ṣ)δ_(0,1)(X(t),α(t))>].
Moreover, we can change the starting and end point of the path (X,α) to obtain
<U(t)> =∑_x∈ℤ^d∑_i∈{0,1}𝔼_(0,1)^(X,α)[<exp(∫_0^tγα(s)ξ(X(s),s) ṣ)δ_(x,i)(X(t),α(t))>]
=𝔼_(0,1)^(X,α)[<exp(∫_0^tγα(s)ξ(X(s),s) ṣ)>]
=<𝔼_(0,1)^(X,α)[exp(∫_0^tγα(s)ξ(X(s),s) ṣ)]>.
Especially, U(t) can also be interpreted as the solution ũ of the partial differential equation
{[ /ṭũ(x,i,t) = iκΔũ(x,i,t) + s_i (ũ(x,1-i,t)-ũ(x,i,t))+iγξ(x,t)ũ(x,i,t), t≥ 0,; ũ(x,i,0) = 1 ].
in point (0,1), which differs from (<ref>) only in the initial distribution. From now on, we will work with the representation (<ref>). Note, that (<ref>) also holds for the static choice (1), as ξ does not depend on time.
§.§ Large deviation principle for α
In this section we establish a large deviation principle for the normalized local times
1/tL_t(i)=1/t∫_0^tδ_i(α(s))ṣ
of α in state i∈{0,1}. While such a principle is already well known in the literature for discrete-space Markov processes with symmetric transition rates (cf. <cit.>), the corresponding large deviation principle in the case of asymmetric rates is still missing. In our case, we can obtain the probability of large deviations directly by computing the exact distribution of the local times in the following lemma:
For all s_0,s_1>0 the probability density function of the local times (L_t(1))_t>0 of α in state 1 is given by
ℙ(L_t(1)∈ dy)= s_1 e^{-s_0t-(s_1-s_0)y}(∑_{k=0}^∞ (s_0s_1y(t-y))^k/(k!k!)·(s_0y/(k+1)+1)) dy.
Let N(t) be the number of jumps of the Markov chain α up to time t>0 when α starts in state 1. We denote by a_i, i∈ℕ, the waiting times of transitions from 0 to 1 and by b_i those from 1 to 0, such that the a_i resp. b_i are independent and exponentially distributed with parameter s_0 resp. s_1. If N(t) is even, α will be in state 1 after the last jump before t. Then
ℙ(L_t(0)∈ dy, N(t)=2k)= ℙ(∑_{i=1}^k a_i∈ dy, ∑_{i=1}^k a_i+∑_{i=1}^k b_i<t, b_{k+1}>t-∑_{i=1}^k(a_i+b_i))
= ∫_{x=0}^{t-y}ℙ(∑_{i=1}^k b_i∈ dx, ∑_{i=1}^k a_i∈ dy, b_{k+1}>t-x-y)
= ∫_0^{t-y} s_1^k x^{k-1}e^{-s_1x}/(k-1)!· s_0^k y^{k-1}e^{-s_0y}/(k-1)!· e^{-s_1(t-x-y)} dx dy
= (s_0s_1)^k/((k-1)!k!)· e^{-s_1t-(s_0-s_1)y}(t-y)^k y^{k-1} dy
for y∈[0,t] and therefore
ℙ(L_t(1)∈ dy, N(t)=2k)=(s_0s_1)^k/((k-1)!k!)· e^{-s_0t-(s_1-s_0)y}(t-y)^{k-1}y^k dy.
In case of an odd number of jumps, where α is in state 0 after the last jump before t, the joint distribution of L_t(1) and N(t) reads
ℙ(L_t(1)∈ dy, N(t)=2k+1)= ∫_{x=0}^{t-y}ℙ(∑_{i=1}^{k+1}b_i∈ dy, ∑_{i=1}^k a_i∈ dx, a_{k+1}>t-x-y)
= s_0^k s_1^{k+1}/(k!k!)· e^{-s_0t-(s_1-s_0)y}(t-y)^k y^k dy.
The claim follows after summing over all k∈ℕ.
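Although not needed for any of the proofs, the density is easy to check numerically. The following short Python sketch is our own illustration (the rates and the horizon are arbitrary); it simulates the two-state chain and evaluates the series above. Note that L_t(1) additionally carries an atom of mass e^{-s_1 t} at y=t, coming from paths without any jump, which is not part of the absolutely continuous density.

import numpy as np

def sample_active_time(t, s0, s1, rng):
    # Simulate alpha started in the active state 1 (it leaves state 1 at rate s1
    # and state 0 at rate s0) and return the local time L_t(1).
    time, state, active = 0.0, 1, 0.0
    while time < t:
        rate = s1 if state == 1 else s0
        stay = min(rng.exponential(1.0 / rate), t - time)
        if state == 1:
            active += stay
        time += stay
        state = 1 - state
    return active

def density(y, t, s0, s1, kmax=200):
    # Series from the lemma, evaluated with the recursion term_{k+1} = term_k * x/(k+1)^2
    # to avoid large factorials; the atom at y = t is not included.
    x = s0 * s1 * y * (t - y)
    total, term = 0.0, 1.0
    for k in range(kmax):
        total += term * (s0 * y / (k + 1) + 1.0)
        term *= x / ((k + 1) ** 2)
    return s1 * np.exp(-s0 * t - (s1 - s0) * y) * total

rng = np.random.default_rng(0)
t, s0, s1 = 3.0, 1.0, 2.0
samples = np.array([sample_active_time(t, s0, s1, rng) for _ in range(100_000)])
hist, edges = np.histogram(samples[samples < t], bins=50, range=(0, t))
mids = 0.5 * (edges[1:] + edges[:-1])
empirical = hist / (len(samples) * (edges[1] - edges[0]))
print(np.max(np.abs(empirical - density(mids, t, s0, s1))))  # small Monte Carlo error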
Making use of the exact distribution of the local times of α, we are able to establish the following large deviation principle:
For all choices of the transition rates s_0,s_1≥ 0, the normalized local times (1/tL_t(1))_t≥ 0 of α in state 1 satisfy a large deviation principle on [0,1] with rate function I:[0,1]→ℝ given by
I(a)=-2√(s_0s_1a(1-a))+(s_1-s_0)a+s_0.
This follows immediately from Lemma <ref>. Indeed, for all t>0 and a∈[0,1] we can write
ℙ^α_1(L_t(1)∈ d(at))=C_t(a)\, e^{-s_0t-(s_1-s_0)at}\, I_0(2t√(s_0s_1a(1-a)))\, d(at) with s_1≤ C_t(a)≤ s_1(1+s_0t),
where
I_0(x)=∑_{k=0}^∞(x^2/4)^k/(k!k!) is the modified Bessel function with parameter 0, which can also be written as
I_0(x)=1/π∫_0^π e^{x cos(θ)} dθ.
Hence,
I(a)= -lim_t→∞1/tlogℙ^α_1(L_t(1)∈ d(at))
= s_0+(s_1-s_0)a-lim_t→∞1/tlog(∫_0^π e^{2t√(s_0s_1a(1-a))cos(θ)} dθ),
since the prefactor C_t(a) grows at most polynomially in t. Now, applying the method of Laplace for integrals (cf. <cit.>) to the representation (<ref>) yields
lim_t→∞1/tlog(∫_0^π e^{2t√(s_0s_1a(1-a))cos(θ)} dθ)=max_{θ∈[0,π]}2√(s_0s_1a(1-a))cos(θ)=2√(s_0s_1a(1-a))
and thus
I(a)=s_0+(s_1-s_0)a-2√(s_0s_1a(1-a)).
Note that the rate function I is strictly convex and, as an easy calculation shows, attains its unique minimum at a_min=s_0/(s_0+s_1). This implies a law of large numbers for the proportion of time spent in the active state, i.e.,
lim_t→∞1/tL_t(1)=s_0/(s_0+s_1) ℙ^α-almost surely.
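For the reader's convenience, the elementary calculation behind these two claims reads
I'(a)=(s_1-s_0)-√(s_0s_1)·(1-2a)/√(a(1-a))=0 if and only if a=a_min=s_0/(s_0+s_1),
and, since a_min(1-a_min)=s_0s_1/(s_0+s_1)^2,
I(a_min)=s_0+(s_1-s_0)s_0/(s_0+s_1)-2s_0s_1/(s_0+s_1)=0,
as it must be, because the rate function vanishes exactly at the law-of-large-numbers limit.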
Further, if we choose s:=s_0=s_1 equally, the rate function becomes
I(a)=s-2s√(a(1-a))=s(1-2√(a(1-a)))=s(√(a)-√(1-a))^2
which is the well-known large deviation rate function in the case of symmetric transition rates (cf. <cit.>).
The next lemma shows that if we look at the exponential moments of the normalized local times (1/tL_t(1))_t≥ 0 of α with a smaller exponential rate than t, then the best (1/tL_t(1))_t≥ 0 can do in order to maximize the exponent, is to take its long-term average:
Let (f(t))_t≥ 0 be a sequence of positive real numbers with lim_t→∞f(t)=∞ and lim_t→∞f(t)/t=0. Then, for any continuous and bounded function F:[0,1]→ℝ,
lim_t→∞1/f(t)log𝔼_1^α[e^{f(t)F(1/tL_t(1))}]=F(s_0/(s_0+s_1)).
For the lower bound, fix δ>0 and let G be some open ball around x_0:=s_0/(s_0+s_1) such that |F(a)-F(x_0)|≤δ for all a∈G̅, which is possible by continuity of F. Then we use the law of large numbers for the sequence (1/tL_t(1))_t≥ 0 to obtain
lim inf_t→∞1/f(t)log𝔼_1^α[e^{f(t)F(1/tL_t(1))}]≥ lim inf_t→∞1/f(t)log(e^{f(t)inf_{a∈ G}F(a)}ℙ_1^α(1/tL_t(1)∈ G))
≥ inf_{a∈ G}F(a)+lim inf_t→∞1/f(t)logℙ_1^α(1/tL_t(1)∈ G)
≥ F(x_0)-δ.
For the upper bound, the boundedness of F by some M>0 gives
𝔼_1^α[e^{f(t)F(1/tL_t(1))}]≤ e^{f(t)(F(x_0)+δ)}ℙ_1^α(1/tL_t(1)∈G̅)+e^{f(t)M}ℙ_1^α(1/tL_t(1)∈ G^c).
Now, as
lim sup_t→∞1/f(t)logℙ_1^α(1/tL_t(1)∈ G^c)=-lim sup_t→∞t/f(t)inf_a∈ G^cI(a)=-∞,
where I is the rate function defined in (<ref>), and using the law of large numbers again as well as the method of Laplace for sums, we deduce that
lim sup_t→∞1/f(t)log𝔼_1^α[e^{f(t)F(1/tL_t(1))}]≤max{F(x_0)+δ, -∞}=F(x_0)+δ.
The claim follows after letting δ→ 0.
§.§ Change of measure for α
One of the proof methods we will use in section 4 to obtain the representation (<ref>) is the Perron-Frobenius spectral theory for bounded self-adjoint operators, which we would like to apply to the generator
ℒ̅f(z,i)=∑_y∼ z(iκ+ρ)(f(y,i)-f(z,i))+s_i(f(z,1-i)-f(z,i))
of a Markov process (Z,α), where fℤ^d×{0,1}→ℝ is a suitable test function. However, this may be a problem, as the matrix Q defined in (<ref>) is not symmetric and therefore the generator (<ref>) of (Z,α) is not self-adjoint. In order to fix this, we will use a result from <cit.> which we formulate here for the convenience of the reader:
Let α=(α(t))_t≥ 0 be any Markov process on a finite state space ℳ with transition rates q_ij from state i∈ℳ to j∈ℳ. For a positive function hℳ→(0,∞), let α̃=(α̃(t))_t≥ 0 be another Markov process on ℳ defined on the same filtered space (Ω, (ℱ_t)_t≥ 0)) with transition rates q̃_ij given by
q̃_ij=q_ijh(j)/h(i)
for i≠ j and q̃_ii=-∑_k≠ iq_ikh(k)/h(i). Denote by ℙ^α resp. ℙ^α̃ the distribution of α resp. α̃. Then ℙ^α is absolutely continuous with respect to ℙ^α̃ with the Radon-Nikodym derivative
dℙ^α/dℙ^α̃|_ℱ_t=h(α̃(t))/h(α̃(0))exp(-∫_0^t Q̃h(α̃(s))/h(α̃(s)) ds).
Cf. proof of <cit.>.
We can now apply <ref> to build our favourite Markov process with symmetric rates out of α:
Let α̃ be a Markov process on {0,1} with generator
Q̃f(i):=√(s_0s_1)(f(1-i)-f(i))
for f{0,1}→ℝ, and write ℙ^α̃ for its distribution with start in state 1. Further, denote by ℙ^α the distribution of the Markov chain α with generator defined in (<ref>) and start in 1. Then,
dℙ^α/dℙ^α̃|_ℱ_t=exp(√(s_0s_1)t-s_0L̃_t(0)-s_1L̃_t(1))
where we wrote L̃_t(i)=∫_0^tδ_i(α̃(s)) ṣ for the local times of α̃ in state i∈{0,1} up to time t.
Define h:{0,1}→ℝ as h(0)=√(s_1) and h(1)=√(s_0). Conditioned on α̃(t)=1, this yields
dℙ^α/dℙ^α̃|_ℱ_t= h(α̃(t))/h(α̃(0))exp(-∫_0^t Q̃h(α̃(s))/h(α̃(s)) ds)
= exp(-L̃_t(0)√(s_0s_1)(√(s_0)-√(s_1))/√(s_1)-L̃_t(1)√(s_0s_1)(√(s_1)-√(s_0))/√(s_0))
= exp(-L̃_t(0)(s_0-√(s_0s_1))-L̃_t(1)(s_1-√(s_0s_1)))
= exp(√(s_0s_1)t-s_0L̃_t(0)-s_1L̃_t(1)).
Especially, if (Z̃,α̃) is the Markov process with symmetric generator
L̃f(x,i):=(iκ+ρ)∑_{y∼ x}(f(y,i)-f(x,i))+√(s_0s_1)(f(x,1-i)-f(x,i))
for test functions fℤ^d×{0,1}→ℝ and if we write ℙ^(Z,α)_(0,1) resp. ℙ̃^(Z,α̃)_(0,1) for the distribution of (Z,α) resp. (Z̃,α̃) with start in (0,1), then
dℙ^(Z,α)_(0,1)/dℙ^(Z,α̃)_(0,1)|_ℱ_t=exp(√(s_0s_1)t-s_0L̃_t(0)-s_1L̃_t(1))
as well, since the generator of Z, conditioned on α, equals that of Z̃, conditioned on α̃, and since α resp. α̃ is independent of Z resp. Z̃.
§.§ Hitting probabilities
The following lemma asserts that the hitting probabilities of the two-type switching random walk (Z,α) defined in (<ref>) can be expressed as time-changed hitting probabilities of a simple symmetric random walk without switching:
Let (Z,α) be the switching random walk with generator (<ref>) and Z̃ a simple symmetric random walk with generator Δ. Then, for any fixed realization of (α(t))_t≥ 0,
ℙ_y^Z(∃ s≤ t:(Z(s),α(s))=(0,1))=ℙ_y^Z̃(∃ s≤ L_t(1): Z̃(ρ s+κ L_s(1))=0),
where ℙ_y^Z resp. ℙ_y^Z̃ denotes the probability with respect to Z resp. Z̃ with start in y.
Write
ψ_α(y,t):=ℙ_y^Z(∃ s∈[0,t]: Z(s)=0, α(s)=1)=ℙ_y^Z(∃ s∈ A_t: Z(s)=0)
for a fixed realization of α with A_t:={s∈[0,t]:α(s)=1}. Note, that
ψ_α(y,t)=∫_0^tℙ_y^Z(Z(s)=0)1_{α(s)=1} ṣ
and let N_t denote the number of jumps of α till t and write s_1,s_2,⋯,s_{N_t} for the jump times of α and τ_k:=s_k-s_{k-1} for the corresponding waiting times, such that A_t=[0,s_1]∪[s_2,s_3]∪⋯∪[s_{N_t},t] and
ψ_α(y,t)= ∫_0^s_1ℙ_y^Z(Z(s)=0) ṣ+∫_s_2^s_3ℙ_y^Z(Z(s)=0) ṣ+⋯+∫_s_N_t^tℙ_y^Z(Z(s)=0) ṣ
where we w.l.o.g. assume that N_t is even. For each interval, starting with the interval [s_2,s_3], we use a linear substitution of the integration variable to obtain that each integration interval starts with the endpoint of the previous one, i.e.
ψ_α(y,t)= ∫_0^s_1ℙ_y^Z(Z(s)=0) ṣ+∫_s_1^s_3-s_2+s_1ℙ_y^Z(Z(s)=0) ṣ+∫_s_3-s_2+s_1^s_5-s_4+s_3-s_2+s_1ℙ_y^Z(Z(s)=0) ṣ
+⋯+∫_∑_j=0^N_t-1(-1)^j+1s_j^∑_j=0^N_t+1(-1)^j+1s_jℙ_y^Z(Z(s)=0) ṣ
= ∫_0^t-∑_j=0^1/2N_tτ_2jℙ_y^Z(Z(s)=0) ṣ=∫_0^t-L_t(0)ℙ_y^Z(Z(s)=0) ṣ=∫_0^L_t(1)ℙ_y^Z(Z(s)=0) ṣ
with s_{N_t+1}:=t, where we used that the sum ∑_j=0^1/2N_tτ_2j corresponds to the waiting times of α in state 0.
Now, we use the fact that the end point Z(s) has the same distribution as Z̃(ρ s+κ L_s(1)), given α, which yields
ψ_α(y,t)=∫_0^L_t(1)ℙ_y^Z̃(Z̃(ρ s+κ L_s(1))=0)ṣ=ℙ_y^Z̃(∃ s≤ L_t(1): (Z̃(ρ s+κ L_s(1))=0).
§ PROOF OF THEOREM <REF>
This section is dedicated to the proof of Theorem <ref>. For this, let ξ=(ξ(x))_x∈ℤ^d be a static field built out of Bernoulli distributed particles, i. e. , for each x∈ℤ^d the random variable ξ(x) is independent and Bernoulli distributed with ℙ(ξ(x)=1)=p=1-ℙ(ξ(x)=0). As the random environment is static, we only have to average over the movement of the switching random walk X and the initial distribution of the Bernoulli field in order to determine the long-term behaviour of <U(t)>. Thus, the proof of theorem <ref> is based on a time-change combined with existing results regarding the behaviour of random walks (without switching) in a Bernoulli field of particles. More precisely, let X̃ be a simple symmetric random walk without switching and with generator κΔ. Then it is well-known that, conditioned on (α(s))_s≤ t, the endpoint X(t) is equal to X̃(L_t(1)) in distribution. This will be the starting point of the following proof:
§.§ Proof of Theorem <ref>
For (x,i)∈ℤ^d×{0,1} let
ℓ_t(x,i):=∫_0^tδ_(x,i)(X(s),α(s))ṣ
denote the local time of the process (X,α) in (x,i). Then, for an arbitrary γ∈[-∞, ∞), we can rewrite <U(t)>, using the independence of the Bernoulli distribution in each spatial point x∈ℤ^d, as
<U(t)>= <𝔼_(0,1)^(X,α)[exp(γ∑_{x∈ℤ^d}ξ(x)ℓ_t(x,1))]>
= 𝔼_(0,1)^(X,α)[∏_{x∈ℤ^d}<exp(γξ(x)ℓ_t(x,1))>]
= 𝔼_(0,1)^(X,α)[∏_{x∈ℤ^d}(p e^{γℓ_t(x,1)}+1-p)]
= 𝔼_(0,1)^(X,α)[exp(∑_{x∈ℤ^d}log(p e^{γℓ_t(x,1)}+1-p)1_{ℓ_t(x,1)>0})].
Now, let γ=-∞. Then the annealed survival probability up to time t reads
<U(t)>= 𝔼_(0,1)^(X,α)[exp(∑_x∈ℤ^dlog(1-p)1_{ℓ_t(x,1)>0})]=𝔼_(0,1)^(X,α)[(1-p)^∑_x∈ℤ^d1_{ℓ_t(x,1)>0}].
Recall, that the walk X can only move in the active state 1 such that each newly visited point x∈ℤ^d is crossed by X for the first time in state 1. Therefore,
∑_x∈ℤ^d1_{ℓ_t(x,1)>0}=∑_x∈ℤ^d1_{ℓ̅_t(x)>0}= R(t),
where we write ℓ̅_t(x) for the projection of ℓ_t(x,i), i∈{0,1}, on the first component and denote by R(t) the range of the random walk X up to time t, i.e. , the number of all distinct visited points up to time t by X. Changing the order of the expectations yields
<U(t)>=𝔼_1^α𝔼_0^X[(1-p)^R(t)]=𝔼_1^α𝔼_0^X̃[(1-p)^R̃(L_t(1))],
where 𝔼_0^X̃ denotes the expectation with respect to X̃ and R̃(t) the range of X̃ up to time t. For a fixed realization of α, the inner expectation is nothing but the survival probability of the simple random walk X̃ among Bernoulli traps up to the (deterministic) time L_t(1), to which we can apply the well-known result from <cit.> asserting that
𝔼_0^X̃[(1-p)^R̃(L_t(1))]=exp(-c_dκ^{d/(d+2)}|log(1-p)|^{2/(d+2)}L_t(1)^{d/(d+2)}(1+o(1)))
as t→∞, where the constant c_d is given by
c_d:=(d+2)d^2/d+2-1λ_d^d/d+2
and λ_d denotes the principal Dirichlet eigenvalue of -Δ on [-1,1]^d⊆ℝ^d. Finally, application of lemma <ref> finishes the proof of part (a).
Now, let γ>0. Then
<U(t)> =𝔼_(0,1)^(X,α)[exp(∑_{x∈ℤ^d}log(p e^{γℓ_t(x,1)}+1-p)1_{ℓ_t(x,1)>0})]
=𝔼_(0,1)^(X,α)[exp(∑_{x∈ℤ^d}(log(p)+log(e^{γℓ_t(x,1)}+(1-p)/p))1_{ℓ_t(x,1)>0})]
=𝔼_(0,1)^(X,α)[p^R(t)·exp(∑_{x∈ℤ^d}log(e^{γℓ_t(x,1)}+(1-p)/p)1_{ℓ_t(x,1)>0})].
Using the asymptotics (<ref>) of the first term in the expectation (with 1-p replaced by p) and time-change again, we can lower-bound
<U(t)>≥ 𝔼_(0,1)^(X,α)[p^R(t)·exp(∑_{x∈ℤ^d}log(e^{γℓ_t(x,1)})1_{ℓ_t(x,1)>0})]
= 𝔼_1^α𝔼_0^X̃[p^R̃(L_t(1))exp(γ L_t(1))]
= 𝔼_1^α[exp(γ L_t(1)-c_dκ^{d/(d+2)}|log(p)|^{2/(d+2)}L_t(1)^{d/(d+2)}(1+o(1)))], t→∞.
Now, recall the large deviation principle <ref> for the normalized local times 1/tL_t(1) of α in 1 with rate function I defined in (<ref>). Using Varadhan's lemma (cf. <cit.>), we can deduce
lim_t→∞1/tlog<U(t)>≥sup_a∈[0,1]{γ a-I(a)}=γ-inf_a∈[0,1]{(1-a)γ+I(a)},
as t→∞, since the term
1/t·L_t(1)^{d/(d+2)}=t^{-2/(d+2)}(1/tL_t(1))^{d/(d+2)}
vanishes on scale t for t→∞. On the other hand, for large t, we can upper-bound
<U(t)>≤ 𝔼_(0,1)^(X,α)[p^R(t)·exp(∑_{x∈ℤ^d}log(2max{e^{γℓ_t(x,1)},(1-p)/p})1_{ℓ_t(x,1)>0})]
= 𝔼_(0,1)^(X,α)[(2p)^R(t)·exp(γ L_t(1))]
= 𝔼_1^α[exp(t(γ·1/tL_t(1)-t^{-2/(d+2)}c_dκ^{d/(d+2)}|log(2p)|^{2/(d+2)}(1/tL_t(1))^{d/(d+2)}(1+o(1))))], t→∞,
using the same considerations as before. The second term in the expectation is again negligible on the time scale t for t→∞ and thus
lim_t→∞1/tlog<U(t)>≤γ-inf_a∈[0,1]{(1-a)γ+I(a)}.
For an explicit expression, we calculate the minimizer of the function f(a)=(1-a)γ+I(a) and find that
γ-inf_{a∈[0,1]}{(1-a)γ+I(a)}=(γ-s_0-s_1+√(γ^2+2γ(s_0-s_1)+(s_0+s_1)^2))/2.
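For completeness, the minimization can be carried out as follows: with c:=s_1-s_0-γ and the substitution a=(1-u)/2, u∈[-1,1], we have
(1-a)γ+I(a)=γ+s_0+ca-2√(s_0s_1)√(a(1-a))=γ+s_0+c/2-((c/2)u+√(s_0s_1)√(1-u^2)),
and max_{u∈[-1,1]}{(c/2)u+√(s_0s_1)√(1-u^2)}=√(c^2/4+s_0s_1), so that
inf_{a∈[0,1]}{(1-a)γ+I(a)}=(γ+s_0+s_1)/2-1/2·√((s_1-s_0-γ)^2+4s_0s_1);
expanding (s_1-s_0-γ)^2+4s_0s_1=γ^2+2γ(s_0-s_1)+(s_0+s_1)^2 gives the formula above.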
§ PROOF OF THEOREM <REF>
In this section, we prove Theorem <ref>, in which the underlying environment ξ is dynamic and consists of one single particle moving independently of X. More precisely, ξ is the Markov process on {0,1}^ℤ^d given by
ξ(x,t)=δ_x(Y(t)),
where Y=(Y(t))_t≥ 0 is a continuous-time simple symmetric random walk on ℤ^d with jump rate 2dρ for a constant ρ>0 and starting in the origin. Hence,
<U(t)>=𝔼_0^Y𝔼_(0,1)^(X,α)[exp(γ∫_0^tδ_(0,1)(X(s)-Y(s),α(s)) d s)],
where 𝔼_0^Y denotes the expectation with respect to Y. Set Z:=X-Y. Then (Z,α) has the generator
ℒ̅f(z,i)=∑_y∼ z(iκ+ρ)(f(y,i)-f(z,i))+s_i(f(z,1-i)-f(z,i))
for z∈ℤ^d, i∈{0,1} and a test function f:ℤ^d×{0,1}→ℝ, and thus
<U(t)>=𝔼_(0,1)^(Z,α)[exp(γ∫_0^tδ_(0,1)(Z(s),α(s)) d s)]=:v(0,1,t).
In particular, the new potential ξ̃(z,i):=δ_(0,1)(z,i) does not depend on time any more. Using the Feynman-Kac formula, we further see that (<ref>) is the solution to
{[ d/ dt v(y,i,t) = (iκ+ρ)Δ v(y,i, t)+Q v(y,i,t)+γ·δ_(0,1)(y,i)v(y,i,t), t>0; v(y,i,0) = i, ].
with (y,i)=(0,1). In the following, we shall use either of the representations (<ref>), (<ref>) or (<ref>), depending on what is to be proven. We start with the proof of theorem <ref>(a) and show theorem <ref>(b) separately, as different methods are used in the case γ<0 and γ>0 respectively.
§.§.§ Proof of Theorem <ref>(a)
Let γ∈(-∞,0) and denote by p(y,i,t) the probability density function of the switching diffusion (Z,α) with start in (0,1). Then we get the representation
v(0,1,t)=1+γ∫_0^t p(0,1,s)v(0,1,t-s) ṣ,
to which we want to apply the Laplace transform. Denoting by v̂_̂1̂(λ) resp. p̂_̂1̂(λ) the Laplace transform of v(0,1,t) resp. p(0,1,t), and solving (<ref>) for v̂_̂1̂(λ), we arrive at
v̂_̂1̂(λ)=1/λ·1/1-γp̂_̂1̂(λ).
Our next aim is therefore to compute p̂_̂1̂(λ). For this, note that the probability density function p satisfies the system of equations
{[ /ṭp(y,1,t) = (κ+ρ)Δ p(y,1,t)+s_0p(y,0,t)-s_1p(y,1,t)), t>0,; /ṭp(y,0,t) = ρΔ p(y,0,t)+s_1p(y,1,t)-s_0p(y,0,t), t>0,; p(y,i,0) = δ_(0,1)(y,i). ].
Fourth-order systems of the form (<ref>) with two different diffusion constants have been studied in <cit.>. For the convenience of the reader, we will include the first steps and ideas to calculate the solution of (<ref>). We denote by p̂_i(y,λ) the Laplace transform of p(y,i,·) and apply this to (<ref>), using the initial condition, to obtain the new system
0= (κ+ρ)Δp̂_1(y,λ)-(λ+s_1)p̂_1(y,λ)+s_0p̂_0(y,λ)+δ_0(y),
0= ρΔp̂_0(y,λ)-(λ+s_0)p̂_0(y,λ)+s_1p̂_1(y,λ),
which, after solving (<ref>) for p̂_1(y,λ) and applying this to (<ref>), translates into the fourth-order equation
(Δ^2-(s_1+λ/κ+ρ+s_0+λ/ρ)Δ+(s_1+λ)(s_0+λ)-s_0s_1/(κ+ρ)ρ)p̂_0(y,λ)=s_1/(κ+ρ)ρδ_0(y).
Fourth-order systems of the form (<ref>) are known to have the solution
p̂_0(y,λ)=s_1/(2(κ+ρ)ρ (a^2-b^2))·(1/a· e^{a|y|}-1/b· e^{b|y|}),
in dimension d=1, where
a,b=-√(λ(κ+ρ)+s_1ρ+s_0(κ+ρ)±√(κ^2+2λκ(s_1ρ-s_0(κ+ρ))+(s_1ρ+s_0(κ+ρ))^2))/√(2ρ(κ+ρ)).
Using the relation between p̂_1(y,λ) and p̂_0(y,λ) and inserting y=0 we obtain
p̂_1(0,λ)= -(λ+s_0+ρ ab)/2ρ(κ+ρ)ab(a+b)∼1/√(λ)·s_0/2√((s_0+s_1)(s_0(ρ+κ)+s_1ρ)), λ→ 0,
as long and tedious calculations show. In dimension d=2 we proceed in a similar way to find
p̂_1(0,λ)∼s_0/4π(s_1ρ+s_0(κ+ρ))log(1/λ), λ→ 0.
Thus, we deduce from (<ref>) that
v̂_1(λ)∼{[ 1/√(λ)2√((s_0+s_1)(s_0(ρ+κ)+s_1ρ))/s_0|γ|, d=1,; 4π(s_1ρ+s_0(κ+ρ))/s_0|γ|λlog(1/λ), d=2, ].
as λ→ 0.
Using the Hardy–Littlewood Tauberian theorem we finally arrive at
v(0,1,t)∼{[ 2√((s_0+s_1)(s_0(ρ+κ)+s_1ρ))/(√(π)s_0|γ|)·1/√(t), d=1,; 4π(s_1ρ+s_0(κ+ρ))/(s_0|γ|)·1/log(t), d=2, ].
as t→∞. Next, let d≥ 3 and denote by
G_d(x,i):=∫_0^∞p_d(x,i,t) ṭ
the Green's function of (Z,α) in (x,i), which has the probabilistic representation
G_d(x,i)=𝔼_(x,i)^(Z,α)[∫_0^∞δ_(0,1)(Z(s),α(s)) ṣ].
Hence, for all (x,i)∈ℤ^d×{0,1} the quantity
v(x,i):=lim_t→∞v(x,i,t)=𝔼_(x,i)^(Z,α)[exp(γ∫_0^∞δ_(0,1)(Z(s),α(s)) ṣ)]
lies in (0,1). Moreover, v solves the boundary value problem
{[ 0 = (iκ+ρ)Δ v(x,i)+γδ_(0,1)(x,i)v(x,i), (x,i)∈ℤ^d×{0,1},; 1 = lim_x→∞v(x,i), i∈{0,1}, ].
and can therefore be written as
v(0,1)=1+γ∫_0^∞p_d(0,1,t)v(0,1) ṭ=1+γ v(0,1)G_d(0,1)
in point (0,1). Solving for v(0,1) gives
v(0,1)=1/1-γ G_d(0,1).
The survival probability converges therefore to a non-trivial limit in (0,1) in all dimensions d≥ 3.
We now continue with the case γ>0 of catalysts. Recall the two-state Markov chain α̃ with symmetric generator (<ref>). Before proving theorem <ref>(b), we need two statements that are highly inspired by <cit.>:
Let r(t)=tlog^2(t), Q_r(t)=[-r(t),r(t)]^d∩ℤ^d and V:ℤ^d×{0,1}→ℝ a bounded function. Further, abbreviate
A_t:=∫_0^tV(Z(s),α̃(s)) ṣ.
Then, the following holds true:
(a) As t→∞,
𝔼^(Z,α̃)_(0,1) [e^{A_t}]=(1+o(1))∑_{z∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_z^Y[e^{A_t}·δ_0(Y(t))1_{X(t)∈ Q_r(t)}].
(b) As t→∞,
∑_{y∈ Q_r(t)}𝔼_(0,1)^(X,α̃) 𝔼_0^Y[e^{A_t}·δ_y(X(t))δ_y(Y(t))]=(1+o(1))∑_{y∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_0^Y[e^{A_t}·δ_y(X(t))δ_y(Y(t))].
Note that A_t∈[0, Mt] with M:=sup_(x,i)∈ℤ^d×{0,1}V(x,i). Then,
𝔼^(Z,α̃)_(0,1)[e^{A_t}]=∑_{z∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_0^Y[e^{A_t}δ_z(Y(t))]=∑_{z∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_z^Y[e^{A_t}δ_0(Y(t))]
using Fubini and a time reversal for Y.
In order to prove part (a) we have to check that
a(t):=(∑_{z∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_z^Y[e^{A_t}δ_0(Y(t))]-∑_{z∈ Q_r(t)}𝔼_(0,1)^(X,α̃)𝔼_z^Y[e^{A_t}δ_0(Y(t))1_{X(t)∈ Q_r(t)}])/∑_{z∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_z^Y[e^{A_t}δ_0(Y(t))],
converges to zero as t→∞, which is done in a similar way as in the proofs of <cit.>, so that we only highlight the differences. Splitting ℤ^d into Q_r(t) and its complement, upper-bounding e^{A_t} by e^{M t} and using a time reversal for Y again, we obtain the bound
a(t)≤ e^{M t}(ℙ_0^Y(Y(t)∉ Q_r(t))+ℙ_(0,1)^(X,α̃)(X(t)∉ Q_r(t)))/ℙ_0^Y(Y(t)=0)
= e^{M t}(ℙ_0^Y(Y(t)∉ Q_r(t))+ℙ_1^α̃ℙ_0^X̃(X̃(L̃_t(1))∉ Q_r(t)))/ℙ_0^Y(Y(t)=0),
where X̃ is a simple symmetric random walk without switching and with generator κΔ and L̃_t(1) denotes the local time of α̃ in state 1 up to time t. Now, <cit.> asserts that
ℙ_0^Y(Y(t)∉ Q_r(t))≤ 2^{d+1} e^{-r(t)log(r(t)/(dρ t))+r(t)}
such that for our choice of r(t) and for sufficiently large t,
ℙ_0^Y(Y(t)∉ Q_r(t))≤ e^{-r(t)},
as a quick estimation shows. Analogously,
ℙ_1^α̃ℙ_0^X̃(X̃(L̃_t(1))∉ Q_r(t))≤ 𝔼_1^α̃[2^{d+1}exp(-r(t)log(r(t)/(dκL̃_t(1)))+r(t))]
= 2^{d+1}exp(-r(t)(log(r(t)/(dκ))-1))𝔼_1^α̃[ exp(r(t)log(L̃_t(1)))]
≤ 2^{d+1}exp(-r(t)log(r(t)/(dκ t))+r(t)).
Thus, we have again
ℙ_1^α̃ℙ_0^X̃(X̃(L̃_t(1))∉ Q_r(t))≤ e^{-r(t)}
for sufficiently large t. This shows that the numerator of (<ref>) converges exponentially fast in t to zero, whereas its denominator decays only polynomially. Hence, a(t)→ 0 for t→∞.
In order to prove part (b), we define
b(t):=∑_{y∉ Q_r(t)}𝔼_(0,1)^(X,α̃)𝔼_0^Y[e^{A_t}·δ_y(X(t))δ_y(Y(t))]/∑_{y∈ℤ^d}𝔼_(0,1)^(X,α̃)𝔼_0^Y[e^{A_t}·δ_y(X(t))δ_y(Y(t))]
and proceed in a similar way to obtain the upper bound
b(t)≤ e^{M t}(ℙ_(0,1)^(X,α̃)(X(t)∉ Q_r(t))ℙ_0^Y(Y(t)∉ Q_r(t)))/(ℙ_(0,1)^(X,α̃)(X(t)=0)ℙ_0^Y(Y(t)=0)).
From the proof of part (a) we already know that the numerator decays exponentially in t as t→∞. Moreover,
ℙ_(0,1)^(X,α̃)(X(t)=0)=ℙ_1^α̃ℙ_0^X̃(X̃(L̃_t(1))=0),
where ℙ_0^X̃(X̃(L̃_t(1))=0) decays only polynomially in L̃_t(1). Hence, Lemma <ref> asserts that
ℙ_1^α̃ℙ_0^X̃(X̃(L̃_t(1))=0) behaves asymptotically like ℙ_0^X̃(X̃(s_0/(s_0+s_1)·t)=0)
as t→∞. Thus, the denominator of the right-hand side of (<ref>) decays only polynomially in t, and therefore b(t)→ 0 as t→∞.
We are now ready to prove theorem <ref>(b).
§.§ Proof of Theorem <ref>(b)
Let α̃ be the two-state Markov chain with symmetric generator Q̃ defined in (<ref>) and denote by ℓ_t(x,i) resp. ℓ̃_t(x,i) the local times of (Z,α) resp. (Z,α̃) in (x,i) up to time t. Then, combining the representation (<ref>) with corollary <ref> and remark <ref>, the annealed number of particles up to time t reads
<U(t)>= 𝔼^(Z,α)_(0,1)[exp(γ∫_0^tδ_(0,1)(Z(s),α(s)) ṣ)]
= 𝔼^(Z,α)_(0,1)[exp(γℓ_t(0,1))]
= 𝔼^(Z,α̃)_(0,1)[exp(√(s_0s_1)t-s_0L̃_t(0)-s_1L̃_t(1)+γℓ̃_t(0,1))]
= 𝔼^(Z,α̃)_(0,1)[exp(√(s_0s_1)t-s_0∑_y∈ℤ^dℓ̃_t(y,0)-s_1∑_y∈ℤ^dℓ̃_t(y,1)+γℓ̃_t(0,1))]
= e^{√(s_0s_1)t}𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ds)]
where we define
V(z,i):=-s_0δ_0(i)-s_1δ_1(i)+γδ_(0,1)(z,i)
for (z,i)∈ℤ^d×{0,1}.
Let us start with the upper bound, which is done in a similar way as in the proof of <cit.>. Applying lemma <ref>(a) yields
𝔼^(Z,α̃)_(0,1) [exp(∫_0^tV(Z(s),α̃(s)) ṣ)]
= (1+o(1))∑_z∈ℤ^d𝔼_(0,1)^(X,α̃)𝔼_z^Y[exp(∫_0^tV(X(s)-Y(s),α̃(s)) ṣ)δ_0(Y(t))1_{X(t)∈ Q_r(t)}]
≤ (1+o(1))∑_z∈ℤ^d𝔼_(0,1)^(X,α̃)𝔼_z^Y[exp(∫_0^tV(X(s)-Y(s),α̃(s)) ṣ)1_{X(t)-Y(t)∈ Q_r(t)}]
= (1+o(1))∑_z∈ℤ^d𝔼_(z,1)^(Z,α̃)[exp(∫_0^tV(Z(s),α̃(s)) ṣ)1_{Z(t)∈ Q_r(t)}].
Denote with (·,·) the inner product in ℓ^2(ℤ^d×{0,1}) with corresponding norm · and let
λ:=supSp(L̃+V)
be the largest eigenvalue of the bounded and self-adjoint operator L̃+V. Then, applying the spectral representation to the right hand-side of (<ref>) and proceeding in the standard way we obtain the upper bound
𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ds)]≤ (1+o(1))(e^{(L̃+V)t}1_{Q_r(t)},1_{Q_r(t)})
≤ (1+o(1)) e^{tλ}‖1_{Q_r(t)}‖^2
≤ (1+o(1)) e^{tλ}|Q_r(t)|
= (1+o(1)) e^{tλ}(2tlog^2(t))^d.
As (2tlog^2(t))^d grows only polynomially, we have
lim_t→∞1/tlog𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ṣ)]≤λ.
For the lower bound, we proceed as in the proof of <cit.> to obtain
𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ds)]≥1/|Q_r(t)|(∑_{y∈ Q_r(t)}𝔼^(X,α̃)_(0,1)𝔼_0^Y[e^{A_{t/2}}δ_y(X(t/2))δ_y(Y(t/2))])^2
where now A_t:=∫_0^t V(X(s)-Y(s),α̃(s)) ds. Then, applying Lemma <ref>(b) yields
𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ds)]≥ (1+o(1))/|Q_r(t)|(∑_{y∈ℤ^d}𝔼^(X,α̃)_(0,1)𝔼_0^Y[e^{A_{t/2}}δ_y(X(t/2))δ_y(Y(t/2))])^2
= (1+o(1))/|Q_r(t)|(𝔼^(X,α̃)_(0,1)𝔼_0^Y[e^{A_{t/2}}δ_0(X(t/2)-Y(t/2))])^2
≥ (1+o(1))/|Q_r(t)|(e^{(L̃+V)t/2}δ_0,δ_0)^2.
Now, we restrict the operator L̃+V to the finite boxes Q_n:=([-n,n]^d∩ℤ^d)×{0,1} and apply the Perron-Frobenius theorem for non-negative irreducible matrices to derive the existence of a largest eigenvalue λ_n of L̃+V on Q_n, for which
lim_t→∞1/tlog𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ṣ)]≥λ_n
holds for every n∈ℕ, and show that lim_n→∞λ_n=λ. We omit the details and refer to the proof of <cit.>. Altogether, we have shown that
lim_t→∞1/tlog𝔼^(Z,α̃)_(0,1)[exp(∫_0^tV(Z(s),α̃(s)) ṣ)]=λ,
where, according to the Rayleigh-Ritz formula, λ is given by
λ= sup_f∈ℓ^2(ℤ^d×{0,1}),f_2=1<(L̃+V)f,f>.
Let us calculate the inner product. We have
<V f, f>=-s_0∑_x∈ℤ^df(x,0)^2-s_1∑_x∈ℤ^df(x,1)^2+γ f(0,1)^2
and
<L̃f,f>= ∑_i∈{0,1}∑_x∈ℤ^d((iκ+ρ)∑_y∼ x(f(y,i)-f(x,i))f(x,i)+√(s_0s_1)(f(x,1-i)-f(x,i))f(x,i))
= ∑_i∈{0,1}∑_j=1^d(iκ+ρ)∑_x∈ℤ^d(f(x+e_j,i)-f(x,i))f(x,i)+(f(x,i)-f(x+e_j,i))f(x+e_j,i)
- ∑_x∈ℤ^d√(s_0s_1)(f(x,1)-f(x,0))^2
= -1/2∑_i∈{0,1}(iκ+ρ)∑_x,y∈ℤ^d, x∼ y(f(x,i)-f(y,i))^2-∑_x∈ℤ^d√(s_0s_1)(f(x,1)-f(x,0))^2,
where the factor 1/2 comes from summing over ordered pairs (x,y). Now, recall from (<ref>) that
lim_t→∞1/tlog<U(t)>=√(s_0s_1)+λ
to conclude.
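For the reader's convenience, the two inner products can be combined into a single expression: using ‖f‖_2=1 to write √(s_0s_1)=√(s_0s_1)∑_{x∈ℤ^d}(f(x,0)^2+f(x,1)^2), we obtain
√(s_0s_1)+λ= sup_{f∈ℓ^2(ℤ^d×{0,1}),‖f‖_2=1}{γ f(0,1)^2-1/2∑_{i∈{0,1}}(iκ+ρ)∑_{x,y∈ℤ^d, x∼ y}(f(x,i)-f(y,i))^2-∑_{x∈ℤ^d}(√(s_0)f(x,0)-√(s_1)f(x,1))^2},
since -s_0f(x,0)^2-s_1f(x,1)^2+√(s_0s_1)(f(x,0)^2+f(x,1)^2)-√(s_0s_1)(f(x,1)-f(x,0))^2=-(√(s_0)f(x,0)-√(s_1)f(x,1))^2 for every x∈ℤ^d. This is only a reorganisation of the terms computed above.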
§ PROOF OF THEOREM <REF>
In this section, we give a proof of Theorem <ref> and consider a dynamic random environment given by a field of independent random walks with equal jump rate 2dρ starting from a Poisson cloud on ℤ^d with intensity ν. More precisely, we define the potential ξ to be
ξ(x,t)=∑_y∈ℤ^d∑_j=1^N_yδ_x(Y_j^y(t)),
where N_y is a Poisson random variable with intensity ν>0 for each y∈ℤ^d and {Y_j^y: y∈ℤ^d, j=1,⋯,N_y, Y_j^y(0)=y} is the collection of random walks with jump rate 2dρ>0.
Our first lemma, which provides a more convenient representation of <U(t)>, is an adaptation of <cit.> to our setting for switching random walks:
For all t≥ 0 and all γ∈[-∞,∞),
<U(t)>=𝔼_(0,1)^(X,α)[exp(νγ∫_0^tα(s)v_(X,α)(X(s),s) ds)],
where v_(X,α)(y,t):ℤ^d×[0,∞)→ℝ is the solution of
{[ d/d tv_(X,α)(y,t) = ρΔ v_(X,α)(y,t)+γδ_(X(t),α(t))(y,1)v_(X,α)(y,t), t>0,; v_(X,α)(y,0) = 1 ].
conditioned on a fixed realization of (X,α).
The proof is similar to the proof of <cit.>, but with the additional component α. Write 𝔼^ν for the expectation with respect to a Poisson random variable with parameter ν. As in <cit.>, we integrate out the Poisson system ξ to obtain
<U(t)> =<𝔼_(0,1)^(X,α)[exp(γ∫_0^t∑_kδ_(Y^k(s),1)(X(s),α(s)) ṣ) ]>
=<𝔼_(0,1)^(X,α)∏_k[exp(γ∫_0^tδ_(Y^k(s),1)(X(s),α(s)) ṣ) ]>
=𝔼_(0,1)^(X,α)∏_{y∈ℤ^d}𝔼^ν𝔼_y^Y[exp(γ∫_0^tδ_(Y(s),1)(X(s),α(s)) ds)]
=𝔼_(0,1)^(X,α)∏_y∈ℤ^d𝔼^ν[v_(X,α)(y,t)].
Then, taking the expectation with respect to a Poisson random variable yields
<U(t)> =𝔼_(0,1)^(X,α)[∏_y∈ℤ^d∑_n(ν v_(X,α)(y,t))^n/n!e^-ν]
=𝔼_(0,1)^(X,α)[∏_y∈ℤ^dexp(-ν(1-v_(X,α)(y,t)))]
=𝔼_(0,1)^(X,α)[exp(-ν∑_y∈ℤ^dw_(X,α)(y,t))]
for w_(X,α):=1-v_(X,α). Note that for γ<0, the quantity v_(X,α)(y,t) represents the survival probability of Y with start in y up to time t, where the (fixed) trajectory of X is seen as a trap which tries to capture Y at rate |γ| whenever Y crosses it and α takes the value 1 in that moment. For γ>0, v_(X,α)(y,t) represents the expected number of particles built out of one single particle starting in y, which moves around with jump rate 2dρ and branches into two whenever it meets the random walk X and α equals 1 at this time. Next, we see that
∑_y∈ℤ^d d/ d tw_(X,α)(y,t) =-∑_y∈ℤ^d d/ d tv_X(y,t)
=-∑_y∈ℤ^d(ρΔ v_(X,α)(y,t)+γδ_(X(t),α(t))(y,1)v_(X,α)(y,t))
=-γα(t)v_(X,α)(X(t),t),
which together with the initial condition ∑_y∈ℤ^dw_(X,α)(0)=0 yields
∑_y∈ℤ^dw_(X,α)(y,t)=-γ∫_0^tα(s)v_(X,α)(X(s),s) ds.
Combined with (<ref>), this proves the claim.
Before continuing our investigations regarding to the asymptotics of (<ref>), we will first consider the case κ=0, i. e. , the case of an immobile particle X staying in 0 the whole time. This idea is highly inspired by <cit.> and <cit.> and will be extended here to the case of switching random walks. For κ=0 the equation (<ref>) with X≡ 0 reduces to
{[ /ṭv_(0,α)(y,t) = ρΔ v_(0,α)(y,t)+γα(t)δ_0(y)v_(0,α)(y,t), y∈ℤ^d, t>0,; v_(0,α)(y,0) = 1, y∈ℤ^d, ].
such that the annealed survival probability becomes
<U(t)>=𝔼_1^α[exp(νγ∫_0^tα(s)v_(0,α)(0,s) ṣ)].
As we will see later, the following two propositions will help us with the general case κ≥ 0:
Let γ∈[-∞,0) and κ=0. Then, as t→∞,
log<U(t)>={[ -4ν√(ρ s_0/(s_0+s_1)π)√(t)(1+o(1)), d=1,; -4νρπ s_0/s_0+s_1t/log(t)(1+o(1)), d=2,; -λ̃_d,γ t(1+o(1)), d≥ 3, ].
with
λ̃_{d,-∞}=inf_{a∈[0,1]}{s_0+(s_1-s_0+2dνρ G_d(0)^{-1})a-2√(s_0s_1a(1-a))},
where G_d(0) is the Green's function of a simple symmetric random walk in 0, and
λ̃_{d,γ}=inf_{a∈[0,1]}{s_0+(s_1-s_0+2dνρ/(2dρ/|γ|+G_d(0)))a-2√(s_0s_1a(1-a))}
for γ∈(-∞,0).
We start with the case of hard traps, i.e., γ=-∞, where the random walk X is immediately killed after meeting one of the traps, provided α takes the value 1 at that moment. Then, in an analogous manner as in (<ref>), the annealed survival probability up to time t can be written as
<U(t)>=𝔼_1^α[exp(-ν∑_y∈ℤ^dψ_α(y,t))],
where
ψ_α(y,t)=ℙ_y^Y(∃ s∈[0,t]: Y(s)=0, α(s)=1)
which for γ=-∞ corresponds to the probability that Y has been killed up to time t, if we consider 0 as a trap which is only open if α is equal to 1. Recall from Lemma <ref> that ψ_α(y,t)=ψ(y,L_t(1)) with
ψ(y,t):=ℙ_y^Y(∃ s∈[0,t]:Y(s)=0)
the probability that Y with start in y hits 0 before time t, regardless of the values of α, which solves the differential equation
{[ /ṭψ(y,t) = ρΔψ(y,t), t>0, y≠ 0,; ψ(0,t) = 1, t>0,; ψ(y,0) = 0, y≠ 0. ].
Observe that due to the chain rule,
d/dt ψ(y,L_t(1))=(d/dt L_t(1))·(∂_sψ(y,s))|_{s=L_t(1)}=α(t)·ρΔψ(y,L_t(1)).
Hence,
d/dt∑_{y∈ℤ^d}ψ_α(y,t) =∑_{y∈ℤ^d}α(t)ρΔψ(y,L_t(1))-α(t)ρΔψ(0,L_t(1))
=-α(t)ρ∑_{y∼ 0}(ψ(y,L_t(1))-ψ(0,L_t(1)))
=-2dρα(t)(ψ(e_1,L_t(1))-1)
with e_1=(1,0,⋯,0)^T the first unit vector, where we used the symmetry of the random walk as well as the fact that ∑_yΔψ_α(y,t)=0. Thus, setting ϕ(e_1,t):=1-ψ(e_1,t),
∑_y∈ℤ^dψ_α(y,t)=∫_0^t2dρϕ(e_1,L_s(1))α(s)ṣ=∫_0^L_t(1)2dρϕ(e_1,s)ṣ
where we substituted L_s(1) in the last step. Now, the quantity ϕ(e_1,t), which represents the probability that a random walk with jump rate 2dρ and start in e_1 does not hit the origin up to time t, is known (see e.g. <cit.>) to have the asymptotics
ϕ(e_1,t)={[ √(1/πρ t)(1+o(1)), d=1,; π/log(t)(1+o(1)), d=2,; G_d(0)^-1(1+o(1)), d≥ 3, ].
as t→∞, where G_d is the Green's function of a d-dimensional symmetric random walk with generator Δ. Thus,
∑_y∈ℤ^dψ^α(y,t)={[ 4√(ρ/π)√(L_t(1))(1+o(1)), d=1,; ; 4πρL_t(1)/log(L_t(1))(1+o(1)), d=2,; ; 2dρ G_d(0)^-1L_t(1)(1+o(1)), d≥ 3, ].
as t→∞. In the following, we distinguish between the recurrent dimensions d∈{1,2} on one side and the transient dimensions d≥ 3 on the other side. Let us start with the case d≥ 3 and recall the large deviation principle for the normalized local times (1/tL_t(1))_t≥ 0 of α in state 1 from Theorem <ref> with rate function I given by (<ref>), which has a unique zero at s_0/(s_0+s_1). As the function g:[0,1]→ℝ, x↦ -2dνρ G_d(0)^{-1}x is continuous and bounded, we can apply Varadhan's lemma to deduce the limit
-lim_t→∞1/tlog𝔼_1^α[exp(-ν2dρ/G_d(0)L_t(1))]=inf_{a∈[0,1]}{I(a)+2dνρ/G_d(0)a}=:λ̃_{d,-∞}.
This establishes the asymptotics (<ref>) for d≥ 3. For d=1, we apply Lemma <ref> with f(t)=√(t) to obtain
lim_t→∞1/√(t)log𝔼_1^α[exp(-4ν√(ρ/π)√(L_t(1)))]=-4ν√(ρ/π)√(s_0/(s_0+s_1)).
The case d=2 is similar with f(t)=t/log(t) and F(a)=-4νπρ a. Next, we continue with γ∈(-∞,0). Recall the representation (<ref>) of the annealed survival probability as well as the solution v_(0,α)(y,t) of (<ref>), which is the survival probability of Y up to time t, if we interpret 0 as a trap which tries to kill Y with rate |γ| at time s if (Y(s),α(s))=(0,1). Here, v_(0,α) plays the analogous role of ϕ_α, and w_(0,α)=1-v_(0,α) the role of ψ_α. Denote by w(y,t) the killing probability of a random walk Y with start in y up to time t, if Y is killed in 0 at rate |γ| independently of α, and v:=1-w, which solves the differential equation
{[ /ṭv(y,t) = ρΔ v(y,t)+γδ_0 v(y,t), y∈ℤ^d, t>0,; v(y,0) = 1, y∈ℤ^d ]..
Then, applying lemma <ref> again yields w_(0,α)(y,t)=w(y,L_t(1)) and therefore
v_(0,α)(y,t)=v(y,L_t(1)).
In <cit.> it has been shown that v(0,t) has the asymptotics
v(0,t)={[ 1/|γ|√(ρ/π)1/√(t)(1+o(1)), d=1,; 4πρ/|γ|1/log(t)(1+o(1)), d=2,; 2dρ/2dρ-γ G_d(0)(1+o(1)), d≥ 3, ].
as t→∞. Moreover,
∫_0^tα(s)v_(0,α)(0,s) ds=∫_0^tα(s)v(0,L_s(1)) ds=∫_0^{L_t(1)}v(0,s) ds
as in (<ref>). This yields
νγ∫_0^tα(s)v_(0,α)(0,s)ṣ={[ -4ν√(ρ/π)√(L_t(1))(1+o(1)), d=1,; -4νπρL_t(1)/log(L_t(1))(1+o(1)), d=2,; νγ2dρ/2dρ-γ G_d(0)L_t(1)(1+o(1)), d≥ 3, ].
for t→∞. In the dimensions d∈{1,2}, we can apply Lemma <ref> as in the case γ=∞, whereas in dimension d≥ 3 Varadhan's Lemma again tells us that
-lim_t→∞1/tlog𝔼_1^α[exp(2dνρ/2dρ/γ-G_d(0)L_t(1))]=inf_a∈[0,1]{I(a)-2dνρ/2dρ/γ-G_d(0)a}=:λ̃_d,γ.
This proves the proposition for the case γ∈(-∞,0).
Note that in the first two dimensions the survival probability decays sub-exponentially and does not depend on γ, and that in higher dimensions d≥ 3 the asymptotics for γ=-∞ are consistent with those of the case γ∈(-∞,0), as lim_{γ→-∞}λ̃_{d,γ}=λ̃_{d,-∞}.
The next proposition deals with the case γ>0 of catalysts, still under the assumption κ=0.
Let γ∈(0,∞) and κ=0. Then for all dimensions d≥ 1 the annealed number of particles satisfies the double-exponential asymptotics
lim_t→∞1/tloglog<U(t)>=sup_f∈ℓ^2(ℤ^d),f_2=1(γ f(0)^2-1/2∑_x,y∈ℤ^d, x∼ yρ(f(x)-f(y))^2).
Recall the representation (<ref>) as well as the solution v_(0,α) to (<ref>) for a fixed realization of α, which can also be written as
v_(0,α)(0,t)=𝔼_0^Y[exp(γ∫_0^tα(s)δ_0(Y(s))ṣ)]
in point 0. Let
v(0,t):=𝔼_0^Y[exp(γ∫_0^tδ_0(Y(s))ṣ)]
which plays the analogous role of the function v in the proof of Proposition <ref>, but now for positive γ. In <cit.> it has been shown that
lim_t→∞1/tlog v(0,t)=μ
where
μ:=sup_f∈ℓ^2(ℤ^d),f_2=1(γ f(0)^2-1/2∑_x,y∈ℤ^d, x∼ yρ(f(x)-f(y))^2)
is the largest eigenvalue of the self-adjoint operator ℋ:=ρΔ+γδ_0 on ℓ^2(ℤ^d). Furthermore, it is known from <cit.> that μ is always positive in dimensions d=1,2 and
μ{[ = 0, 0<γ/ρ≤1/G_d(0),; >0, γ/ρ> 1/G_d(0) ].
in dimensions d≥ 3, where G_d is the Green's function of a simple symmetric random walk with jump rate 2d. Next, we aim to compare v(0,t) to v_(0,α)(0,t) by applying lemma <ref> again in the same manner as in the proof of proposition <ref>, which yields
v_(0,α)(0,t)=v(0,L_t(1)).
Hence, for κ=0,
<U(t)>= 𝔼_1^α[exp(νγ∫_0^tα(s)v(0,L_s(1)) ds)]=𝔼_1^α[exp(νγ∫_0^{L_t(1)}v(0,s) ds)]
= 𝔼_1^α[exp(νγ∫_0^{L_t(1)} e^{μ s}(1+o(1)) ds)]=𝔼_1^α[exp(νγ/μ· e^{μ L_t(1)}(1+o(1)))]
as t→∞. Now, on one hand, we have the upper bound
<U(t)>≤exp(νγ/μ· e^{μ t}(1+o(1))), t→∞,
and on the other hand,
𝔼_1^α[exp(νγ/μ· e^{μ L_t(1)}(1+o(1)))]≥ exp(νγ/μ· e^{μ t}(1+o(1)))ℙ_1^α(L_t(1)=t),
as t→∞. Now, recall the large deviation principle for the normalized local times with rate function I on scale t, which asserts that
lim_t→∞1/tlogℙ_1^α(L_t(1)=t)=-I(1).
Hence, combining (<ref>) with (<ref>),
lim_t→∞1/tloglog𝔼_1^α[exp(νγ/μ· e^{μ L_t(1)}(1+o(1)))]≥ lim_t→∞1/tloglog(exp(νγ/μ· e^{μ t})ℙ_1^α(L_t(1)=t))
= lim_t→∞1/tlog(νγ/μ· e^{μ t}+logℙ_1^α(L_t(1)=t))
= lim_t→∞1/tlog(νγ/μ· e^{μ t})=μ.
The next ingredient for the proof of our main result is the so-called Pascal principle, which asserts that if we average over the environment, then the best the random walk X can do in order to maximize its mass is to stay still in its starting point, which brings us back to the case κ=0. In the setting of a simple random walk without any switching component, this has been proven in <cit.> and <cit.> for γ<0 and γ>0, respectively. Therefore, the question arises whether the Pascal principle still provides an upper bound also in our case of a switching random walk, or whether there is a better joint strategy of the random walk together with the dormancy component α. The next lemma ensures that the Pascal principle remains optimal also in our case.
Recall v_(X,α) as the solution to (<ref>) and the solution v_(0,α) of (<ref>) for any fixed realization of (X,α). Then, for all γ∈[-∞,∞), y∈ℤ^d and t≥ 0,
v_(X,α)(y,t)≤ v_(0,α)(0,t).
First, let γ∈[-∞,0) and recall from (<ref>) that v_(0,α)(y,t)=v(y,L_t(1)). Now, <cit.> asserts that for any piecewise constant function X̂:[0,t]→ℤ^d with a finite number of discontinuities,
v_X̂(y,t)≤ v(0,t),
where v_X̂ is the solution to
{[ /ṭv_X̂(y,t) = ρΔ v_X̂(y,t)+γδ_X̂(y)v_X̂(y,t), y∈ℤ^d, t>0,; v_X̂(y,0) = 1, y∈ℤ^d. ].
In a similar way as in (<ref>), we see that if v_(X̂,α) is the solution to
{[ /ṭv_(X̂,α)(y,t) = ρΔ v_(X̂,α)(y,t)+γα(t)δ_X̂(t)v_(X̂,α)(y,t), y∈ℤ^d, t>0,; v_(X̂,α)(y,0) = 1, y∈ℤ^d, ].
then v_(X̂,α)(y,t)=v_X̂(y,L_t(1)), as X̂ is independent of α. Hence,
v_(X̂,α)(y,t)=v_X̂(y,L_t(1))≤ v(0,L_t(1))=v_(0,α)(0,t).
Finally, conditioned on α, the random walk X is a piecewise constant function with a finite number of discontinuities, such that we can replace X̂ with X.
Next, let γ>0. The proof works along the same lines as in <cit.>. From this proof we already know that if p_ρ(x,t) denotes the transition probability at time t of a random walk with generator ρΔ and start in 0, then
max_{x∈ℤ^d}p_ρ(x,t)=p_ρ(0,t)
for all t≥ 0. Further, it has been shown in <cit.> that h^{*n}→ 0 as n→∞, uniformly on compact intervals, where h^{*n} denotes the n-fold convolution of the function
h(t):=γ p_ρ(0,t).
Since
v_(X,α)(X(t),t)= 1+γ∫_0^tp_ρ(X(t)-X(s),t-s)α(s)v_(X,α)(X(s),s) ṣ
≤ 1+γ∫_0^tp_ρ(0,t-s)α(s)v_(X,α)(X(s),s) ṣ
and
v_(0,α)(0,t)= 1+γ∫_0^tp_ρ(0,t-s)α(s)v_(0,α)(0,s) ṣ,
we have
v_(X,α)(X(·),·)≤ 1+h^*(α v_(X,α)(X(·),·))
as well as
v_(0,α)(0,·)= 1+h^*(α v_(0,α)(0,·)).
Hence,
v_(0,α)(0,·)-v_(X,α)(X(·),·)≥ h^{*n}(α(v_(0,α)(0,·)-v_(X,α)(X(·),·)))
by iteration and substraction of (<ref>) and (<ref>). As h^*n→ 0 for n→∞, we can deduce
v_(X,α)(X(·),·)≤ v_(0,α)(0,·)
for all realizations of (X,α). Altogether, we obtain
v_(X,α)(y,t)= 1+γ∫_0^tp_ρ(y-X(s),t-s)α(s)v_(X,α)(X(s),s) ṣ
≤ 1+γ∫_0^tp_ρ(0,t-s)α(s)v_(X,α)(X(s),s) ṣ
≤ 1+γ∫_0^tp_ρ(0,t-s)α(s)v_(0,α)(0,s)ṣ=v_(0,α)(0,t),
as desired.
We are now ready to prove Theorem 1.3.
§.§.§ Proof of Theorem <ref>
The proof can be summarized as follows: We will first show that the case κ=0 considered in Propositions <ref> and <ref> provides a lower bound for the general case κ≥ 0 in dimensions d=1,2. Together with the upper bound asserted in Lemma <ref>, this gives the desired asymptotics.
Let us start with the case γ<0 of traps and show that the asymptotics of <U(t)> are lower-bounded by (<ref>) in dimensions d∈{1,2}. We adapt the notations from the proof of <cit.>: Let E_t be the event that none of the traps starts in a ball B_R_t of radius R_t around 0, where we choose R_t to be √(t)/ln(t) for d=1 resp. ln(t) for d=2. This event has probability e^{-ν(R_t+1)^d}. Further, let G_t be the event that X with start in 0 stays in B_R_t up to time t. Analogously, we define G̃_t to be the event that a simple symmetric random walk X̃ without switching and with jump rate 2dκ stays in B_R_t up to time t. Then, using time-change again,
ℙ(G_t) =ℙ_(0,1)^(X,α)(X(s)∈ B_R_t∀ s≤ t)=ℙ_1^αℙ_0^X̃(X̃(s)∈ B_R_t∀ s≤ L_t(1))
≥ℙ_1^αℙ_0^X̃(X̃(s)∈ B_R_t∀ s≤ t)=ℙ(G̃(t))≥exp(ln(β)t/R_t^2),
where the last inequality is known from <cit.> for some β>0. Moreover, let F_t be the event that each trap which starts outside B_R_t only intersects B_R_t during time periods where α takes the value 0, i.e. , where X is dormant. Then, we can lower-bound
<U(t)>≥ℙ(E_t)ℙ(F_t)ℙ(G_t).
Note that the event F_t differs from the analogous event appearing in the proof of <cit.>, which was the event that each trap which starts outside B_R_t never enters B_R_t up to time t. Here, making use of the protection provided by the dormancy mechanism, we can relax this condition and only require that the traps stay outside of B_R_t whenever X is active. In order to compute ℙ(F_t) we first look at F̃_t which shall denote the event that no trap Y which starts in y≠ 0 ever hits 0 at time points s with α(s)=1. The probability of this event is nothing but the survival probability of X in case κ=0, which has been studied in Proposition <ref>. More precisely,
ℙ(F̃_t)=𝔼_1^α[exp(-ν∑_{y∈ℤ^d}w_(0,α)(y,t))],
for X≡ 0, as seen in (<ref>). Thus, ℙ(F̃_t) is asymptotically equal to (<ref>). Comparing ℙ(F_t) to ℙ(F̃_t) exactly in the same way as in <cit.>, we find that these are asymptotically equal. Finally, we compare the decay rates of all probabilities ℙ(E_t),ℙ(F_t),ℙ(G_t) for t→∞ to conclude that the annealed survival probability is asymptotically lower bounded by ℙ(F_t) and hence (<ref>).
Note that we did not prove a lower bound in dimensions d≥ 3, so that we can only deduce the existence of a constant λ_{d,γ,ρ,κ,s_0,s_1}≥λ̃_{d,γ} which may depend on all the parameters.
We continue with the case γ>0. The upper bound again follows from the Pascal principle stated in Lemma <ref>. For the lower bound, we force the random walk X to stay in the starting point 0 up to time t and use the fact that, for a simple symmetric random walk X̃ without switching and with jump rate 2dκ,
ℙ_0^X̃(X̃(s)=0 ∀ s∈[0,t])= e^{-2dκ t}
to deduce that
lim_t→∞1/tlogℙ_(0,1)^(X,α)(X(s)=0 ∀ s∈[0,t])= lim_t→∞1/tlog𝔼_1^α[ℙ_0^X̃(X̃(s)=0 ∀ s∈[0,L_t(1)])]
= lim_t→∞1/tlog𝔼_1^α(e^{-2dκ L_t(1)})
= -inf_a∈[0,1](2dκ a+I(a)),
where we used Varadhan's Lemma together with the large deviation principle for 1/tL_t(1) with rate function I. Moreover, recall from the proof of <ref> that
<U(t)_0>≤exp(νγ/μ· e^{μ t}(1+o(1))), t→∞,
where U(t)_0 shall denote the number of particles up to time t in the case κ=0. Hence,
lim_t→∞1/tloglog<U(t)> ≥
lim_t→∞1/tloglog(exp(νγ/μ· e^{μ t}(1+o(1)))·ℙ_(0,1)^(X,α)(X(s)=0 ∀ s∈[0,t]))
=lim_t→∞1/tlog(νγ/μ· e^{μ t}(1+o(1))+log𝔼_1^α(e^{-2dκ L_t(1)}))
=μ.
§.§ Acknowledgement
The author would like to thank Professor Wolfgang König for his invaluable support and many helpful suggestions.
[AH79]doubleI
E. Aifantis, J. Hill,
On the Theory of Diffusion in Media with Double Diffusivity I. Basic Mathematical Results,
The Quarterly Journal of Mechanics and Applied Mathematics 33, No. 1, 1–21 (February 1980).
[A95]antal
P. Antal,
Enlargement of obstacles for the simple random walk,
Annals of probability 23, No. 3, 1061-1101 (July 1995).
[BYZ13]baran
N. A. Baran, G. Yin, Chao Zhu,
Feynman-Kac formula for switching
diffusions: connections of systems of partial
differential equations and stochastic
differential equations,
Advances in Difference Equations, No. 315 (2013).
[BK01]bounded
M. Biskup, W. König,
Long-time Tails in the Parabolic Anderson Model with Bounded Potential,
The Annals of Probability 29, No. 2, 636–682 (2001).
[BHS21]blath
J. Blath, F. Hermann, M. Slowik,
A branching process model for dormancy and seed banks in
randomly fluctuating environments,
Journal of Mathematical Biology 83, No. 17 (2021).
[BHLWB21]dormancy
J. Blath, J. T. Lennon, F. den Hollander, M. Wilke Berenguer,
Principles of seed banks and the emergence of
complexity from dormancy,
Nature Communications 12, No. 4807 (2021).
[C66]cohen
D. Cohen,
Optimizing reproduction in a randomly varying environment,
Journal of Theoretical Biology 16, 267–282 (1966).
[DGRS11]drewitz
A. Drewitz, J. Gärtner, A. Ramirez, R. Sun,
Survival probability of a random walk among a poisson system of moving traps,
Conference Paper. In Probability in complex physical systems, 119–158, Springer (2012)
[EdHM16]dyn
D. Erhard, F. den Hollander, G. Maillard,
Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances,
Mathematical Physics, Analysis and Geometry 19, No. 5 (2016).
[GdHM07]exclusion
J. Gärtner, F. den Hollander, G. Maillard,
Intermittency on catalysts: symmetric exclusion,
Electronic Journal of Probability 12, No. 12, 516-573 (2007).
[GdH06]intermittency
J. Gärtner, F. den Hollander,
Intermittency in a catalytic random medium,
Annals of probability 6, 2219-2287 (2006).
[GdHO22]spatialseedbanks
A. Greven, F. den Hollander, M. Oomen,
Spatial populations with seed-bank: well-posedness, duality and equilibrium,
Electronic Journal of Probability 27, 1-88 (2022).
[GH06]movingcat
J. Gärtner, M. Heydenreich,
Annealed asymptotics for the parabolic Anderson model with a moving catalyst,
Stochastic Processes and their Applications 116, 1511–1529 (2006).
[GM90]garmol
J. Gärtner, S. Molchanov,
Parabolic problems for the Anderson
model I. Intermittency and related topics,
Communications in Mathematical Physics 132, 613–
655 (1990).
[dHN]nandan
F. den Hollander, S. Nandan,
Spatially Inhomogeneous Populations with Seed-Banks: I. Duality, Existence and Clustering,
Journal of Theoretical Probability 35, 1795–1841 (2022).
[HJV07]bookhaccou
P. Jagers, P. Haccou, V. A. Vatutin,
Branching processes: variation, growth, and extinction of populations,
Biometrics 62, No. 4, 1269–1270 (December 2006).
[H04]heyd
M. Heydenreich,
Intermittency in the Parabolic Anderson Model with Catalyst,
Master's thesis, LMU München (2004).
[JL11]jl11
S. E. Jones, J. T. Lennon,
Microbial seed banks: the ecological and evolutionary implications of dormancy,
Nature Reviews Microbiology 9, 119–130 (2011).
[K16]PAM
W. König,
The Parabolic Anderson Model: Random Walk in Random Potential,
Birkhäuser (2016).
[K20]koenigldp
W. König,
Große Abweichungen: Techniken und Anwendungen,
Birkhäuser (2020).
[K20a]koenigsurvey
W. König,
Branching random walks in random environment,
in Probabilistic Structures in Evolution, 23-41,
European Mathematical Society (2020).
[L92]landim
C. Landim,
Occupation time large deviations for the symmetric simple exclusion process,
Annals of Probability 20, 206–231, (1992).
[L96]lawler
G. F. Lawler,
Intersections of random walks,
Birkhäuser Boston (1996).
[LS18]ls18
J. T. Lennon, W. TR. Shoemaker,
Evolution with a seed bank: the population genetic consequences of
microbial dormancy,
Evolutionary Applications 11, No. 1, 60–75 (2018).
[PR02]girsanovmarkov
Z. Palmowski, T. Rolski,
A technique for exponential change of
measure for Markov processes,
Bernoulli 8, No. 6, 767–785 (2002).
[SW11]sw
A. Schnitzler, T. Wolff,
Precise asymptotics for the parabolic Anderson
model with a moving catalyst or trap,
Conference Paper. In Probability in complex physical systems, 69-89, Springer (2012)
[YZ10]switching
G. George Yin, Chao Zhu,
Hybrid Switching Diffusions: Properties and Applications
Springer New York, NY (2010)
Neural timescales from a computational perspective
Roxana Zeraati, Anna Levina, Jakob H. Macke, Richard Gao
September 9, 2024
==================================================
§ ABSTRACT
Timescales of neural activity are diverse across and within brain areas, and experimental observations suggest that neural timescales reflect information in dynamic environments. However, these observations do not specify how neural timescales are shaped, nor whether particular timescales are necessary for neural computations and brain function.
Here, we take a complementary perspective and synthesize three directions where computational methods can distill the broad set of empirical observations into quantitative and testable theories:
We review (i) how data analysis methods allow us to capture different timescales of neural dynamics across different recording modalities, (ii) how computational models provide a mechanistic explanation for the emergence of diverse timescales, and (iii) how task-optimized models in machine learning uncover the functional relevance of neural timescales.
This integrative computational approach, combined with empirical findings, would provide a more holistic understanding of how neural timescales capture the relationship between brain structure, dynamics, and behavior.
§ INTRODUCTION
Ongoing neural activity unfolds across a wide range of timescales that are suggested to be related to the large diversity of behavioral timescales and those of the behavior-relevant inputs to the nervous system.
These timescales are often characterized by the decay rate of a neural signal's autocorrelation function. While precise definitions differ, recent experimental studies of neural timescales present a consistent picture across different recording modalities, species, and cognitive tasks: timescales increase along the cortical hierarchy, are diverse within each brain area, and are often correlated with behavioral variables during cognitive tasks (for recent reviews of empirical evidence, see<cit.>).
As such, this fundamental quantity—the characteristic time constant of neuronal dynamics—may be an important signature of neural circuits across the brain. Importantly, neural timescales are suggested to take on a “representational” role, i.e., the diversity of timescales across the brain mirrors the multiple fluctuation timescales of relevant information in a dynamic environment, which is leveraged by the brain for perception, memory, planning, and action. However, while the large body of experimental observations demonstrates the relevance of neural timescales, they often blur crucial differences between signal modalities, and do not specify the mechanisms underlying how neural timescales arise, nor whether they are necessary (or sufficient) for representation and transformation of information in the brain.
Here, we take a computational perspective and review how models of the brain—when constrained by multi-modal observations of neural timescales—can help provide a mechanistic understanding of neural timescales complementary to experimental investigations. Specifically, we discuss three research directions for how computational methods can distill the broad set of empirical observations into quantitative and testable theories, formulated as the following questions (<ref>):
* “Neural timescales” are measured from different physiological signals (e.g., spikes, field potentials, calcium imaging, fMRI BOLD) and quantified using different methods: how do these measurements relate and differ from one another?
(Section <ref>)
* Which cellular and circuit processes contribute to the diversity of observed neural timescales, and how can mechanistic models of brain dynamics help propose and exclude candidate mechanisms? (Section <ref>)
* When are “representational neural timescales” that mirror environmental timescales necessary (or sufficient) for supporting task-relevant computations, and how to use functional models from machine learning (in particular, recurrent neural networks) to explore such hypotheses? (Section <ref>)
We advocate using an integrative perspective, combining computational and empirical approaches, to understand how neural timescales provide a link between brain structure, dynamics, and behavior and discuss studies that laid stepping stones to build such a perspective.
§ ESTIMATING TIMESCALES FROM NEURAL RECORDINGS
Neural timescales are estimated from different recording modalities, such as spiking activity<cit.>, LFP<cit.>, ECoG<cit.>, calcium imaging<cit.>, fMRI BOLD<cit.>, using a variety of estimation methods<cit.>.
Different modalities capture neural activity with distinct spatiotemporal resolutions (e.g., fast and local electrophysiological recordings versus slow and global fMRI BOLD), reflect different underlying physiological mechanisms (e.g., neuronal membrane potential versus metabolic processes), and are described by different mathematical models (e.g., point processes versus continuous signals).
At the same time, estimation methods for corresponding timescales rely on distinct mathematical assumptions that may affect the subsequent results.
While studies of neural timescales often concentrate on the similarity of findings across different modalities and methods, both the recording modality and the analysis method can introduce constraints and biases on estimated timescales that need to be carefully considered when assessing the generalisability of results.
§.§ The impact of recording modality on estimated timescales
One of the most robust observations across various recording modalities is the hierarchical organization of neural timescales, i.e., timescales being faster in primary sensory and motor areas, and slower in transmodal association areas.
This hierarchy has been observed in spiking activity<cit.>, LFP<cit.>, ECoG<cit.>, calcium imaging<cit.> and fMRI<cit.>.
However, despite the similar hierarchical organization, these observations have clear distinctions (<ref>a).
Timescales measured from different modalities are affected by the biophysical properties of recorded signals and physical constraints in the recording setup.
Spiking activity timescales are often in the range of tens to hundreds of milliseconds<cit.>, whereas fMRI BOLD timescales are in the range of a few seconds<cit.>.
For the same stimulus, the hemodynamic evoked response is generally slower than spiking activity<cit.>.
Moreover, the currently used slow fMRI scan rates may further limit the detection of faster timescales in neural activity<cit.>.
Similar issues can affect timescales measured from calcium imaging:
The time constant of calcium release, as well as the time constant of the calcium indicator and the imaging scan rate, can cause the same underlying neural dynamics to appear as having a different timescale<cit.>.
Thus, when interpreting their potential functional relevance, it is important to distinguish whether estimated timescale differences arise from underlying population dynamics, biophysics of the measurement process, or both.
On the other hand, neural timescales estimated from different recording modalities may reflect distinct underlying neurophysiological processes and computations that are of further interest.
fMRI BOLD is often associated with population activity <cit.>, hence, its timescales may reflect population spiking timescales.
Estimated timescales from LFP<cit.> and ECoG<cit.>, however, are approximately ten times faster than spiking activity in similar brain regions, which seems to be at odds with the notion that LFP and ECoG signals are slower aggregate signals than spiking activity<cit.>.
This observation suggests that timescales estimated from different modalities may arise from distinct but potentially correlated physiological processes, e.g., fast transmembrane and synaptic currents versus slow recurrent interactions<cit.>.
In fact, when explicitly accounting for multiple timescales in the estimation method<cit.>, both fast and slow timescales can be uncovered from spiking activity<cit.>.
Therefore, estimation methods that consider multiple timescales can potentially incorporate modality- or even behavior-specific time constants explicitly (as we discuss in the next section<cit.>).
The observed hierarchical organization of intrinsic timescales presents differences across studies, potentially stemming from differences across species and methodologies.
Unlike primate studies<cit.>, studies on the hierarchy of timescales in mouse cortex have reported inconsistent results.
Intrinsic timescales measured from calcium imaging<cit.> and fMRI<cit.> display a hierarchical structure from visual to prefrontal areas.
However, in spiking activity, such a hierarchy was initially observed only in timescales of stimulus-evoked activity <cit.> and not in spontaneous activity <cit.>, though recently it was shown that by analyzing longer periods of recorded neural activity, the hierarchical structure can also be recovered in spontaneous activity<cit.>.
These results suggest that multiple timescales might be present in spontaneous activity, but only some follow the anatomical hierarchy.
Moreover, even in primate studies, the hierarchical ordering of brain areas based on timescales does not exactly match across different modalities<cit.> (<ref>a).
Studying such subtle but consistent differences may help to better understand the inherent biophysical differences between various recording modalities, and cross-species differences in cortical architecture.
§.§ Different methods for estimating timescales
Methods for estimating timescales from neural activity usually quantify the decay rate of the signal's autocorrelation function.
However, each method relies on different theoretical assumptions and incorporates different features of neural dynamics (<ref>b).
For instance, some methods explicitly include oscillatory components, which are often present in neural activity, can be modulated by task variables, and can alter the autocorrelation shape. Here, we catalog existing methods along two axes: (i) the explicitness of their fitted model and their computational complexity, and (ii) the number of features they capture in data (<ref>b).
The first group of methods does not assume any strict mathematical form for the autocorrelation shape. These methods characterize the decay rate as, for example, the time lag where autocorrelation drops to 1/e<cit.>, the (half of) full width at half maximum<cit.>, or the integral of positive autocorrelation values<cit.>. We thus refer to these methods as “model-free” methods.
Computing such measures is straightforward, and they are typically estimated directly from the data.
However, such model-free methods do not capture the full shape of data autocorrelation. As a result, in the presence of complex temporal features in neural activity beyond a single exponential decay, different methods can give different results on the same data (for example, see <cit.>).
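As a minimal illustration of these model-free measures, the sketch below computes the 1/e-crossing lag and the integral of the positive autocorrelation for a toy Ornstein-Uhlenbeck signal; the signal parameters, lag range, and thresholds are illustrative choices and are not taken from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau_true, n = 0.001, 0.1, 50000          # 1 kHz sampling, 100 ms ground-truth timescale
x = np.zeros(n)
for t in range(1, n):                         # toy Ornstein-Uhlenbeck signal
    x[t] = x[t - 1] * (1 - dt / tau_true) + np.sqrt(2 * dt / tau_true) * rng.standard_normal()

max_lag = 1000
xc = x - x.mean()
ac = np.array([np.dot(xc[:n - k], xc[k:]) for k in range(max_lag)])
ac /= ac[0]                                   # normalized autocorrelation, AC(0) = 1
lags = np.arange(max_lag) * dt

# (i) time lag at which the autocorrelation first drops below 1/e
tau_1e = lags[np.argmax(ac < 1.0 / np.e)]
# (ii) integral of the autocorrelation up to its first zero crossing
first_neg = np.argmax(ac < 0) if np.any(ac < 0) else max_lag
tau_integral = np.trapz(ac[:first_neg], dx=dt)
print(tau_1e, tau_integral)
```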
The second group of methods assumes a mathematical function for the shape of neural data autocorrelation to capture the relevant data features and estimate the timescale from the parameters of the fitted function (i.e., model-based).
The simplest assumption is that the autocorrelation of neural activity follows an exponential decay function, AC(t) = e^-t/τ, where decay time constant τ reflects the timescale of underlying neural dynamics.
This is equivalent to the assumption of obtaining a Lorentzian function as a power spectral density in the frequency domain, i.e., PSD(f) ∼ (f^2 + f_k^2) ^-1, where the knee frequency f_k is translated to timescale as τ = (2π f_k)^-1 <cit.>.
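A minimal sketch of this simplest model-based estimate is given below: it fits AC(t) = e^-t/τ to the empirical autocorrelation of a toy Ornstein-Uhlenbeck signal and reports the equivalent Lorentzian knee frequency. The toy signal, lag range, and initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
dt, tau_true, n = 0.001, 0.1, 50000
x = np.zeros(n)
for t in range(1, n):                         # toy Ornstein-Uhlenbeck signal
    x[t] = x[t - 1] * (1 - dt / tau_true) + np.sqrt(2 * dt / tau_true) * rng.standard_normal()

max_lag = 500
xc = x - x.mean()
ac = np.array([np.dot(xc[:n - k], xc[k:]) for k in range(max_lag)])
ac /= ac[0]
lags = np.arange(max_lag) * dt

# Fit AC(t) = exp(-t / tau) and convert to the equivalent Lorentzian knee frequency.
(tau_fit,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac, p0=[0.05])
f_knee = 1.0 / (2 * np.pi * tau_fit)
print(f"tau_fit = {tau_fit:.3f} s, knee frequency = {f_knee:.2f} Hz")
```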
The exponential decay (Lorentzian function) can be further augmented with additional features to better capture the complexity of temporal dynamics.
For example, the autocorrelation of neural dynamics can exhibit more than one timescale<cit.>.
Very slow timescales can also arise from firing rate fluctuations across different trials.
Such a slow timescale can be accounted for by fitting the autocorrelation with an exponential decay function and an additional offset term corresponding to longer-term processes (e.g.,<cit.>, see<cit.> for a toolbox).
Alternatively, multiple timescales can be estimated by fitting the PSD with a mixture of Lorentzian functions<cit.>.
Neural activity can additionally contain oscillatory dynamics that appear as periodic fluctuations in the autocorrelation or peaks in the PSD, which can impact the estimated timescale or knee frequency, respectively.
To overcome this problem, methods such as Spectral Parameterization (or, “FOOOF”) remove the oscillatory components from the PSD before estimating the knee frequency<cit.>.
For example, oscillatory peaks can be modeled as a mixture of Gaussians and removed.
Then, the timescale is estimated by fitting an extended form of the Lorentzian function as PSD(f) = C (f^χ + f_k^χ)^-1, with a variable 1/f power-law exponent, allowing a more flexible fitting of the neural power spectra.
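The following stripped-down sketch illustrates only the aperiodic part of this idea: it fits PSD(f) = C (f^χ + f_k^χ)^-1 in log-power space to the Welch spectrum of a toy signal, assuming oscillatory peaks have already been removed; the actual Spectral Parameterization toolbox additionally models and subtracts the peaks. The fitting range and initial values are illustrative.

```python
import numpy as np
from scipy.signal import welch
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
fs, tau_true, n = 1000.0, 0.1, 100000
x = np.zeros(n)
for t in range(1, n):                         # toy OU signal with a 100 ms timescale
    x[t] = x[t - 1] * (1 - 1 / (fs * tau_true)) + np.sqrt(2 / (fs * tau_true)) * rng.standard_normal()

freqs, psd = welch(x, fs=fs, nperseg=4096)
keep = (freqs >= 0.5) & (freqs <= 100.0)      # illustrative fitting range
f, p = freqs[keep], psd[keep]

def log_aperiodic(f, log_c, f_k, chi):
    # log10 of PSD(f) = C / (f**chi + f_k**chi)
    return log_c - np.log10(f**chi + f_k**chi)

(log_c, f_k, chi), _ = curve_fit(log_aperiodic, f, np.log10(p), p0=[0.0, 1.0, 2.0],
                                 bounds=([-np.inf, 1e-3, 0.1], [np.inf, 100.0, 6.0]))
print(f"f_k = {f_k:.2f} Hz, timescale = {1 / (2 * np.pi * f_k):.3f} s, exponent = {chi:.2f}")
```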
Finally, a more accurate estimation of timescales can be obtained using a generative process capturing the statistics of neural time series<cit.>.
Such methods are particularly powerful since they can explicitly incorporate modality-specific properties and task variables.
For instance, including task variables in a multi-timescale autoregressive process allows for disentangling intrinsic and task-induced timescales such as fluctuations induced by reward<cit.>.
Moreover, generative models can capture experimental limitations, such as short trial durations that can lead to systematic underestimation of timescales<cit.>.
The parameters of the generative process can be directly fitted to neural time series<cit.> (e.g., via least squares or maximum likelihood) or can be used to instantiate a generative model in Bayesian inference methods<cit.>.
Bayesian methods aim to uncover the whole set of parameters that is compatible with the data (i.e., the posterior distribution) instead of obtaining the single best-fitting value of timescales (or other parameters), allowing them to better deal with the stochasticity of neural activity.
For simple models, the posterior distribution can be computed analytically<cit.>, but for more complex models, simulation-based methods such as Approximate Bayesian Computation (e.g., abcTau<cit.>) and simulation-based inference (SBI<cit.>) can be applied to obtain approximate posterior distributions.
In such methods, synthetic time series are produced from an explicit generative (statistical or mechanistic) model mimicking similar statistics as neural recordings.
Then, by matching the summary statistics (e.g., autocorrelation, PSD, or time series) of synthetic data to the experimental data, we can estimate timescales using the parameters of the matched generative model.
The generative model and summary statistics can be chosen flexibly, allowing them to capture different data features.
It is possible to fit different generative models (e.g., models with different numbers of timescales or oscillatory components) and use Bayesian model comparison techniques to find the best models describing the neural data<cit.>.
Since the generative model can also incorporate modality-specific properties, such as timescale ranges, stochastic elements (e.g., spiking activity as a point process versus LFP as a continuous signal <cit.>), or even particular biophysical properties, it can be used to uncover possible models describing each modality.
Therefore, these methods can incorporate various mechanistic assumptions and data features when estimating timescales, but this comes with additional computational costs, both in running the simulations and during posterior inference.
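As a toy example of this simulation-based logic (far simpler than abcTau or SBI), the sketch below performs ABC rejection sampling: candidate timescales are drawn from a uniform prior, an Ornstein-Uhlenbeck generative model produces synthetic data, and candidates are accepted when a summary statistic of the synthetic data (here, the 1/e-crossing lag of the autocorrelation) matches that of the observed data within a tolerance. The prior range, tolerance, and generative model are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
dt, n_steps = 0.001, 20000

def simulate_ou(tau):
    """Ornstein-Uhlenbeck process with timescale `tau` (AR(1) discretization)."""
    noise = np.sqrt(2 * dt / tau) * rng.standard_normal(n_steps)
    return lfilter([1.0], [1.0, -(1.0 - dt / tau)], noise)

def summary_timescale(x, max_lag=300):
    """Summary statistic: lag at which the autocorrelation first drops below 1/e."""
    xc = x - x.mean()
    ac = np.array([np.dot(xc[:n_steps - k], xc[k:]) for k in range(max_lag)])
    ac /= ac[0]
    below = np.where(ac < 1.0 / np.e)[0]
    return (below[0] if below.size else max_lag) * dt

observed = simulate_ou(0.1)                    # "data" with ground-truth tau = 100 ms
s_obs = summary_timescale(observed)

accepted = []
for _ in range(300):
    tau_candidate = rng.uniform(0.01, 0.5)     # uniform prior over the timescale
    if abs(summary_timescale(simulate_ou(tau_candidate)) - s_obs) < 0.01:
        accepted.append(tau_candidate)         # accept within a 10 ms tolerance

posterior = np.array(accepted)                 # samples from the approximate posterior
print(posterior.mean(), posterior.std())
```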
§ MECHANISTIC MODELS OF TIMESCALES IN THE BRAIN
Mechanistic models, when constrained by experimental observations of neural timescales, can provide insights into how the diversity of timescales arises within and across different brain areas.
Such models have so far been used to explain the following experimentally observed properties of neural timescales: (i) the hierarchical structure of neural timescales across different brain areas<cit.>, (ii) the diversity of timescales within one brain area<cit.>, and (iii) the flexibility of timescales to change according to different task variables<cit.>.
Mechanisms for explaining the diversity of neural timescales are generally categorized into cellular and synaptic mechanisms on the level of individual neurons and network-mediated mechanisms through interactions between neurons (<ref>).
While these mechanisms define the “intrinsic” timescales of neurons, other processes, such as external input and neuromodulatory mechanisms, can dynamically modulate them in addition.
Moreover, different models capture neural timescales on different spatial scales of observations, e.g., timescales of single neurons versus population activity.
Thus, comparing these models can reveal how neural timescales across different scales and modalities can be related to each other.
Box 1. Timescale as an estimated quantity vs. time constant as a parameter
Neural timescales in the literature, for the most part, refer to the estimated decay time constant of measured neural activity, i.e., obtained by describing the autocorrelation function of spike counts or LFP time series as ACF(t) = e^-t/τ_fit. Methods discussed in Section <ref> generally aim to find the value of τ_fit (or multiple τ_fit values) that best fits the data.
On the other hand, mechanistic (e.g., Hodgkin-Huxley) and functional (e.g., rate-based artificial recurrent neural network) models in Sections <ref> and <ref> typically include a time-constant parameter that describes how the impact of input into a neuron decays over time, e.g., τ_param dV/dt = -(V(t)-V_rest) + I(t), where I(t) includes time-varying stimulus-driven, feed-forward, and recurrent inputs.
It is important to note that the former, estimated τ_fit does not necessarily correspond to the latter, underlying model parameter τ_param, and in general it does not. Section <ref> outlines multiple mechanisms, including ones unrelated to time constants such as network connectivity, that contribute to shaping the τ_fit we observe in data.
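The distinction in Box 1 can be made concrete with a small simulation, sketched below under simplifying assumptions (linear rate units, Euler integration, a randomly drawn recurrent matrix): every unit shares the same τ_param, yet the fitted activity timescale τ_fit of a recurrently coupled unit typically comes out slower than τ_param.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
n, dt, tau_param, n_steps = 100, 0.001, 0.01, 50000     # 10 ms membrane time constant
W = 0.95 * rng.standard_normal((n, n)) / np.sqrt(n)     # recurrent gain close to 1

x = np.zeros(n)
trace = np.empty(n_steps)
for t in range(n_steps):
    x = x + dt / tau_param * (-x + W @ x + rng.standard_normal(n))
    trace[t] = x[0]                                      # record one unit

max_lag = 1000
xc = trace - trace.mean()
ac = np.array([np.dot(xc[:n_steps - k], xc[k:]) for k in range(max_lag)])
ac /= ac[0]
lags = np.arange(max_lag) * dt
(tau_fit,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac, p0=[tau_param])
print(f"tau_param = {tau_param:.3f} s, fitted activity timescale tau_fit = {tau_fit:.3f} s")
```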
§.§ Cellular and synaptic mechanisms
In computational models, timescales are often controlled by incorporating various biophysical time constants of cellular and synaptic processes in the dynamics of individual neurons that reflect heterogeneous properties of ionic channels and synapses<cit.>.
Most dynamical system models include a time constant in the dynamical equation of each neuron, interpreted as the membrane time constant (5-50 ms<cit.>) (Box 1).
Slower timescales can be implemented by introducing other biophysical features such as slow synaptic time constants<cit.>, timescales of short-term synaptic depression and facilitation <cit.>, or adaptation mechanisms<cit.>.
Spike-frequency adaptation, in particular, spans a wide range of fast and slow timescales following a power law, which is suggested to lead to efficient coding in individual cortical neurons driven by behaviorally relevant signals<cit.> and facilitate computations on temporally dispersed inputs <cit.>.
However, biophysical time constants at the single-neuron level cannot explain all the temporal structures of neural dynamics.
For instance, a Hodgkin-Huxley model of an isolated neuron with detailed dynamics of membrane potential and ionic and synaptic currents cannot capture the full autocorrelation shape of neural activity recorded in prefrontal areas<cit.>.
Biophysical time constants alone define the autocorrelation timescale of neural activity—henceforth referred to as activity timescale (Box 1)—when a neuron is isolated, but when neurons are embedded in a network, their activity timescale may differ significantly from their biophysical time constants.
At the same time, distinct biophysical mechanisms may interact differently in the presence of network interactions compared to isolated neurons. In particular, while activity timescales of network-embedded neurons increase with the synaptic time constant<cit.>, their relationship to the adaptation time constant is less straightforward:
Analyzing similar models of randomly connected excitatory and inhibitory units, Beiran and Ostojic<cit.> reported that activity autocorrelation timescales do not depend on the adaptation time constant, whereas Muscinelli et al.<cit.> report a positive relationship.
It is important to note, however, that the autocorrelation of neural activity in networks with adaptation does not follow a simple exponential decay. One possible explanation of the apparent disagreement in results could lie in the different methods used to measure timescales in each study, thus highlighting the need to understand how timescale estimation methods impact the conclusions we draw from empirical data and computational simulations.
In the next section, we review how incorporating network mechanisms, in addition to cellular and synaptic mechanisms, could help to better explain observations of diverse and longer neural timescales.
§.§ Network and connectivity mechanisms
Neurons in the brain are embedded within rich network structures<cit.> that affect the correlation patterns of neural dynamics<cit.>.
Therefore, a mechanistic understanding of neural timescales may require the context of interactions within a network.
Indeed, computational models have shown that the strength and structure of network connectivity can influence the autocorrelation timescale of neural activity.
Slower timescales arise from stronger<cit.> or more clustered (e.g., <cit.>) connectivity (for a review see <cit.>).
Strong recurrent connectivity in rate networks shifts the network's dynamical state closer to a critical transition, increasing neural timescales<cit.>.
In spiking networks, clustered connectivity creates metastable dynamics, and noise-induced transitions between these states give rise to slow timescales<cit.>.
Metastable dynamics can also emerge in homeostatically regulated spiking networks<cit.>.
Such structured connectivity leads to multiple dominant timescales in local dynamics, creating autocorrelation shapes beyond a single exponential decay<cit.>.
Each timescale is created by neural interactions on different spatial scales, such that slower timescales arise from interactions on a larger spatial scale, consistent with similar observations and theories on the spatiotemporal scale of oscillatory brain activity<cit.>.
Introducing heterogeneity in connectivity strengths gives rise to distinct timescales in each network unit<cit.>.
In addition to recurrent interactions within one brain area, long-range connections between areas are essential for explaining the observed hierarchy of timescales<cit.>.
Recurrent inhibition also plays an important part in forming network-mediated neural timescales.
Strong inhibitory-to-inhibitory connections allow for slow neural timescales required for working memory maintenance in a recurrent network model<cit.>.
Moreover, in a detailed biophysical model of excitatory and inhibitory neurons, inhibition controls the stability and transitions of metastable states, giving rise to slow timescales<cit.>.
In this model, the full shape of neural autocorrelations in frontal areas is explained by a combination of cellular (i.e., after-hyperpolarization potassium and inhibitory GABA-B conductance) and network mechanisms.
In addition, recurrent inhibition together with membrane and synaptic time constants can account for distinct temporal dynamics in spiking activity and LFP<cit.>.
Overall, computational modeling results have implicated a combination of synaptic, cellular, and network mechanisms to be partially responsible for the emergence of diverse neural timescales in the brain and across different modalities.
While some of these mechanisms have found empirical support, whether by testing model predictions with experiments<cit.>, through direct measurements and perturbations of the relevant physiological properties<cit.>, or via indirect, post-hoc correlations of neural timescales with, e.g., the expression of ion channel and synaptic receptor genes (e.g.,<cit.>), continued interaction between computational predictions and experimental measurements will further shed light on the mechanistic factors underlying variations in intrinsic neural timescales.
§.§ Modulatory mechanisms for dynamically shaping timescales
The mechanisms discussed so far explain how a specific range of timescales can emerge in neural dynamics due to “static" properties of single neurons and networks.
However, neural timescales are functionally dynamic, and have been shown to adapt to task requirements<cit.>.
Here, we discuss how neuromodulatory processes, changes to feedforward and feedback inputs, and other processes can modulate cellular and network properties, and therefore the resulting activity timescales we measure.
Neuromodulators can impact the biophysical properties of individual neurons (e.g., the response time course) and their network interactions (e.g., gain modulation) (see <cit.> for more extended reviews of the relation between the two and their impact on cortical computation), both of which can alter neural timescales.
Synaptic time constants can be altered by neuromodulators such as acetylcholine (ACh)<cit.>, which can also modify the efficacy of synaptic interactions across the brain in a selective manner<cit.>.
An increase in ACh strengthens the thalamocortical synaptic efficacy by affecting nicotinic receptors.
At the same time, it weakens lateral cortical interactions while increasing the inhibitory drive by affecting muscarinic receptors.
Similarly, the different contribution of glutamatergic receptors (e.g., NMDA and AMPA) in feedforward and recurrent pathways affects the time course of neural population responses in distinct ways<cit.>.
External (e.g., sensory) inputs to a network can also modulate neural timescales through a variety of mechanisms.
Input can bring individual neurons to a high-conductance state (lower resistance), which in turn reduces the membrane time constant<cit.> facilitating the distinction of distant synaptic inputs (for a review, see <cit.>).
Moreover, in the presence of non-linear recurrent interactions, the input can modulate the global state of network dynamics<cit.> and alter activity timescales<cit.>.
The spatial and temporal scales (e.g., size and duration) of the stimulus can also distinctly impact fast and slow neural timescales.
As a simple example, the activity of a network that tracks external signals over time will trivially exhibit the same timescales as those signals.
Furthermore, the spatial extent of stimuli amplifies correlations at fast timescales while reducing them at slow timescales<cit.>.
In spiking networks, populations with fast (slow) intrinsic timescales respond stronger to faster (slower) components of the input<cit.>.
Thus, depending on the spatial or temporal profile of the input, different timescales might dominate the recorded neural activity.
The models discussed in this section suggest potential mechanisms underlying the diversity and flexibility of neural timescales.
However, for these models to be more faithful representations of neural circuits, they need to reproduce not only the experimentally observed timescales but also capture other aspects of neural dynamics (e.g., realistic firing rates).
Indeed, considering additional constraints can help to dissociate between various mechanisms that can explain the same range of neural timescales<cit.>.
In addition, constraining computational models that were not specifically designed to explain neural timescales with realistic timescale measurements can provide new insights into different circuit mechanisms.
For instance, constraining models to produce realistic timescales has been instrumental in developing models explaining the mechanisms underlying noise correlations observed in spiking activity<cit.>.
Given the relevance of neural timescales for brain computations and behavior, checking for realistic timescales (e.g., similar to realistic neural firing rates) in computational models can be used as a test for the plausibility of proposed models, and to contrast different mechanistic models which give rise to similar dynamical features, as we discuss in the next section.
§ COMPUTATIONAL BENEFITS OF THE DIVERSITY AND FLEXIBILITY OF TIMESCALES
As previously mentioned, recent experimental studies have consistently observed that timescales of both single-unit and population dynamics correlate with variables such as behavioral timescales and task performance, thus implicating neural timescales to be a signature of dynamical properties necessary for relevant computations <cit.>.
However, to make causal claims about the necessity and sufficiency of dynamics operating on different timescales for network computation, perturbation experiments are required to assess the functional impact of timescales over and above, e.g., firing rate modulations, which can be challenging in vivo.
Task-performing artificial neural network (ANN) models have recently offered a viable, complementary direction that circumvents the practical challenges of manipulating timescale-related variables in the in vivo brain, while enabling us to probe loss- or gain-of-function in experimental task paradigms.
In computational and cognitive neuroscience, these networks are typically designed to mimic the input-output behavior observed in animal or human participants performing a task<cit.>.
To match the target output, the parameters of ANNs are either “hand-crafted” when possible (e.g., for simple tasks) or “trained” using an optimization algorithm minimizing a loss function (e.g., for larger networks and more complex tasks).
The networks can then be further analyzed and perturbed to generate hypotheses for the mechanisms of computation at hand (as reviewed in, e.g., <cit.>).
As a result, ANNs have been used to ask whether mechanisms shaping dynamics, such as connectivity, are important for computation, and whether they are preserved between artificial and biological networks performing the same task.
Various types of ANNs—including spiking neural networks (SNN), rate-based recurrent neural networks (RNNs), and deep learning networks (DL)—can be leveraged to perform task-relevant computations while we explicitly measure, optimize for, and/or perturb their single-neuron and network timescales (Fig. <ref>a).
In this section, we review two broad categories of timescale-related investigations using ANNs: First, we discuss those that study neural timescales as an emergent characteristic of task-performing networks, including both recent ones employing task-optimized networks, as well as classical works on dynamical properties of “hand-tuned” networks. Second, we discuss recent studies that explicitly optimize timescale-relevant parameters to enhance computational capacity in artificial networks, including biologically plausible spiking and rate networks, and more abstract models in the context of machine learning.
§.§ Emergent timescales in task-performing neural networks
Computational neuroscience has had a long-standing interest in the repertoire of available timescales to elucidate how neural circuits can represent and perform cognitive tasks with complex temporal structures.
Classically, models were “hand-crafted”, i.e., parameters were manually selected or analytically determined to satisfy a priori constraints, and were intended to capture how neuronal circuits implement dynamics necessary for computation, such as delayed persistent activity for working memory<cit.> or slow evidence accumulation for decision-making<cit.>.
To understand the underlying mechanisms, related theoretical investigations focused on how tailored structures in recurrent network connectivity and its eigenspectrum shape the dynamical landscape or phase portrait of the network<cit.> (Box 2), with special interests in, e.g., fixed points, line attractors, and limit cycles (Fig. <ref>b), which could be leveraged for memory<cit.>, input integration<cit.>, and timing<cit.> respectively (see <cit.> for a recent review on attractor models of neural computation).
Crucially, the timescales of the latent dynamical system near and on these low-dimensional manifolds are emergent properties of interest, since they govern, for example, how quickly and how stably a network can integrate new input while remembering or forgetting old ones, filter out fast noisy fluctuations to sense slower signals, and receive inputs of different frequencies.
Concurrently, related research on reservoir computing, including liquid state machines <cit.> and echo state networks <cit.>, similarly considered the activity timescales of RNNs from the perspective of latent dynamical systems, since they require long-term but not divergent (or exploding) dynamics at the edge-of-chaos <cit.>.
While reservoir networks typically have trained input and output weights, the recurrent weights are fixed due to the challenges of propagating error over many time steps, and much attention was dedicated to investigating how they can be thoughtfully initialized to maximize the network's computational capacity (similar to <cit.>).
Particularly relevant was how input-driven chaos in such networks can be tuned to represent a flexible repertoire of timescales (or temporal filters) for time-dependent computations <cit.>.
Box 2. Activity timescales of a linear network
As an illustrative example, we review how to calculate the activity timescale (τ_fit) of a linear recurrent network (i.e., linear dynamical system). Consider the system
dx(t)/dt = Ax(t)
where x(t) is the state vector, e.g., neural activity at time t, and A is a square connectivity matrix.
To acquire the timescales of the system dynamics, we solve for the eigenvalues of the matrix A, λ_i, through the characteristic equation, Au_i = λ_i u_i.
For a stable decaying mode, i.e., non-exploding, the real part of the associated eigenvalue must be negative, and its decay timescale is defined as
τ_i = -1/Re(λ_i).
The dominant activity timescale of the system (i.e., τ_fit) is then typically determined by the mode whose eigenvalue has the largest real part.
Furthermore, for a linear network composed of units with fixed membrane time constant τ_param, the system dynamics follow
τ_param dx(t)/dt = Ax(t).
Assuming τ_param to be the same for all neurons, the activity timescale is scaled accordingly,
τ_i = -τ_param/Re(λ_i).
Extensions to networks with further biological constraints, e.g., random networks obeying Dale's law, can also be found <cit.>.
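A numerical counterpart of Box 2 is sketched below; the connectivity matrix and τ_param are arbitrary illustrative choices, with the leak included in A so that all eigenvalues have negative real part.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau_param = 100, 0.01                                  # 10 ms membrane time constant
A = -np.eye(n) + 0.8 * rng.standard_normal((n, n)) / np.sqrt(n)   # leak plus recurrence

eigvals = np.linalg.eigvals(A)
assert np.all(eigvals.real < 0), "all modes must decay for a stable network"
timescales = -tau_param / eigvals.real                    # tau_i = -tau_param / Re(lambda_i)
print("dominant activity timescale:", timescales.max(), "s")
```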
These earlier studies often considered task constraints from a theoretical and analytical perspective <cit.> (Fig. <ref>c, left), and they motivated recent investigations using task-optimized networks to tackle similar questions in more complex or analytically intractable networks.
The advancement of modern machine learning hardware accelerated network optimization strategies for RNNs (e.g., backpropagation-through-time <cit.> via stochastic gradient descent) and enabled a complementary approach:
instead of handcrafting networks with certain dynamical properties required for a task, the recurrent weights and other parameters of networks can be optimized directly to perform tasks or to mimic experimental recordings <cit.> (Fig. <ref>c, middle and right), and subsequently analyzed to identify the repertoire of neural dynamics important for the required computation (i.e., “opening the black box"<cit.>).
Consistent with earlier works on hand-crafted networks, analyses often reveal fixed points corresponding to different inputs or memories <cit.>, line attractors for tracking continuous environmental or behavioral variables <cit.>, and (quasi-)limit cycles or state space trajectories traversed at different speeds that correspond to the timing of network inputs or outputs <cit.>, while multi-task networks exhibit context- and task phase-dependent composition of the above “computational primitives” <cit.>.
In particular, efficient optimization of computational models has opened up recent investigations into the connection between observed neural activity and the underlying latent dynamical system.
By fitting models of neural population activity with latent states governed by, for example, switching linear dynamics, one could extract the time constants of dynamical objects of interest, such as approximate line attractors representing evidence or internal state accumulation <cit.>.
Moreover, one could directly compare the temporal dynamics of individual neurons' activity in experimental measurements with that of neurons in task-performing network models (Fig. <ref>d), both in rate-based and spiking recurrent neural networks, to dissect cellular, network, and learning-related mechanisms of computations underlying e.g., temporal scaling <cit.>, working memory <cit.>, and other motor or cognitive tasks <cit.>.
These ideas can be further extended to study networks endowed with more sophisticated and biologically realistic constraints, such as Dale's law<cit.> and dendritic compartmentalization <cit.>, hypothesized neural computational mechanisms such as oscillatory phase coding<cit.>, as well as better analytical tractability (e.g., low-rank RNNs<cit.>) or improved expressiveness, computational power, and trainability<cit.>. Targeting the same sought-after dynamical repertoires, “hybrid” approaches can leverage more sophisticated weight initialization or regularization strategies to nudge the network toward line attractors or limit cycles after training <cit.>. Furthermore, since population activity timescales reflect the state of neural dynamics relative to critical transitions, timescales have also been used to study how the state of dynamics in spiking neuromorphic networks affects the performance of tasks with varying complexity<cit.>.
Altogether, these studies provide insights into specific tasks or network architectures underlying computations of interest from the perspective of network dynamics. Leaning on the notion that the geometry of neural population representation is intimately linked to (the timescales of) its dynamics and that the latter may be even more important than the former <cit.>, both classical and recent works lend support to the importance of the diversity of neural timescales for different computations. In other words, the temporal signatures of a dynamical system are a reflection of its phase portrait geometry, and constraining one to satisfy certain computational requirements often means constraining the other as well<cit.>. Nevertheless, it may be possible for networks to acquire similar low-dimensional geometries but vastly different timescales of dynamics. Therefore, we suggest explicit comparisons of in silico and in vivo timescales in task-performing networks as a straightforward yet synergistic research direction to provide explicit evidence for the necessity (or the lack thereof) of different neural timescales.
§.§ Optimization of timescale-related parameters
Another powerful way to use modern ANNs to study the effect of neural timescales on computation is to optimize timescale-related parameters in the network model, such as synaptic, membrane, and adaptation time constants, or more phenomenological variables, such as temporal receptive field size and context length. In a standard RNN, all neurons are given a single fixed time constant that is roughly analogous to the membrane time constant of biological neurons, which determines how quickly the effect of an input on that neuron decays. In reality, neurons across the brain have a range of membrane time constants due to varying properties such as dendritic morphology and the composition of conductances (see Section <ref>). As such, time constants can be individually optimized (similar to the connectivity weight matrices) <cit.>, or even parameterized to be input-dependent, arriving at gated network architectures (e.g., long short-term memory units, or LSTMs). Thus, by comparing to networks that are not endowed with diverse timescale parameters, such studies provide evidence for the potential necessity of different timescales in behaviorally relevant computations.
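As a schematic of the kind of model involved (not any specific study's trained network), the sketch below runs the forward pass of a leaky rate RNN in which each unit has its own time constant; in the studies discussed next, these per-unit time constants would be optimized jointly with the weights rather than fixed at random.

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, n_inputs, dt = 64, 3, 0.01
taus = np.exp(rng.uniform(np.log(0.01), np.log(1.0), n_units))   # 10 ms to 1 s per unit
W_rec = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
W_in = rng.standard_normal((n_units, n_inputs)) / np.sqrt(n_inputs)

def step(h, u):
    """One Euler step of: tau_i dh_i/dt = -h_i + tanh(W_rec h + W_in u)_i."""
    alpha = dt / taus
    return (1 - alpha) * h + alpha * np.tanh(W_rec @ h + W_in @ u)

h = np.zeros(n_units)
for _ in range(500):                     # drive the network with random input
    h = step(h, rng.standard_normal(n_inputs))
print(h[:5])
```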
Several recent studies have shown that spiking networks trained to perform standard machine learning or cognitive tasks (e.g., delayed classification, working memory maintenance, etc.) can benefit from both the heterogeneity and explicit optimization of single-neuron or spike-frequency adaptation time constants <cit.>. Strikingly, the trained networks exhibit similar timescale distributions as membrane time constants<cit.> or activity timescales of individual neurons in analogous brain regions of animals performing the same task<cit.>. Furthermore, time constants do not necessarily need to match that of the task, but both longer and diverse time constants were found to be beneficial <cit.>. Similarly, RNNs with trainable time constants demonstrate improved task performance and memory capacity while the learned time constants mimic the hierarchy of timescales in the data-generating process<cit.>. With more bio-realistic RNNs, such as those with connectivity based on the measured connectome of the fruit fly visual system, training both the connection weights and single-neuron time constants resulted in emergent stimulus tuning similar to that of in vivo recorded neurons in the fly<cit.>.
Remarkably, multi-layer feed-forward deep neural networks trained to simply mimic the input-output relationship of a single morphologically accurate pyramidal neuron recover complex and spatially varying temporal filters <cit.>, further emphasizing the computational capacity provided by multiple timescales, even at the scale of single neurons. A recurrent architecture fitted to the same morphologically accurate pyramidal neuron allows for the explicit implementation of multiple timescales within a neuron's model and enables a single-neuron model to solve tasks with complex, long temporal dependencies <cit.>.
On the other hand, diverse activity timescales can arise from network interactions instead of cellular properties (as discussed in the previous section). A recent study suggests that developing network-mediated diverse and long timescales may be a more robust mechanism for performing long memory tasks rather than trainable time constants <cit.>. Hence, future studies need to further distill the computational role of diverse cellular or network-mediated timescales.
These principles extend to more complex deep learning-based network architectures, providing functional benefits on more difficult tasks. Instead of trainable and fixed time constants, they can be further made to be input-dependent via trained gating weights (similar to gated RNNs in DL, such as GRUs and LSTMs), which improves modeling of sequences with multiple timescales <cit.>. Furthermore, initializing gating parameters to reflect the distribution of timescales in the data improves performance on tasks involving memorizing and operating on long sequences <cit.>. Leveraging this observation, endowing LSTM units with a distribution of effective timescales constructed to match the power-law temporal dependency of natural language improves network performance in language modeling, especially long-range dependencies <cit.>. In parallel, these “chrono initialized" LSTMs improved decoding of brain response to narrative text, as well as estimation of voxel-wise fMRI BOLD timescales along the cortical hierarchy in human subjects<cit.>, uniting potential principles of operation underlying both artificial and biological neural networks.
Finally, recent research in deep learning has shown an intriguing convergence of principles based on thoughtful inductive biases and initialization, as well as self-supervised pre-training, for improving performance on tasks with long-range dependencies <cit.>.
Although RNN-based architectures can model infinitely long temporal dependencies in theory (similar to IIR filters), simpler implementations based on finite-length temporal convolutions appeared to be more effective at acquiring long timescale memories <cit.>, though both architectures learn effective temporal context windows that reflect natural structures in data (e.g., speech <cit.>).
More recent architectures based on deep (linear) state space models (SSMs, e.g., S4 <cit.> and others <cit.>) enjoy the benefits of both recurrence and convolutions, outperforming Transformer-based architectures on long timescale tasks, which are hindered by quadratic complexity in time.
Later extensions capitalize on these insights to derive better parameterization or initialization strategies, resulting in competitive models with simpler implementations, such as Structure Global Convolution <cit.> and Linear Recurrence Units <cit.>.
On the other hand, it has been shown that both Transformer-based and SSM-based architectures can be drastically improved through self-supervised pre-training on the input sequence data, prior to downstream task training (e.g., labeled classification) on the same long-range tasks <cit.>.
These findings suggest that equipping deep learning models with inductive biases appropriate for long timescale tasks can be promising, but the exact mechanisms of these improvements—and whether they are related to intrinsic timescales in biological networks—remain unclear.
In summary, there is a growing interest in convergent approaches to tuning network models to produce emergent task-relevant dynamics and timescales, as well as explicit optimization of time-constant parameters in ANNs. These approaches appear to be a promising and collaborative avenue for research in neuroscience and machine learning aiming to understand and improve time-dependent computations in biological and artificial systems.
§ DISCUSSION
In this paper, we discussed how different computational methods help to turn empirical observations of diverse neural timescales into quantitative and testable hypotheses about brain dynamics:
Different timescale estimation methods can disentangle various components of neural activity, such as modality- or task-specific variables, from intrinsic dynamics, while
mechanistic models constrained by experimentally observed timescales, together with task-performing ANNs, can elucidate potential circuit mechanisms underlying brain dynamics and computations.
Integrating these directions, we here provide an outlook for future research and discuss open questions.
§.§ Methodological suggestions
Given the differences in assumptions and applicability of the various timescale estimation methods we outlined, it is important to be aware of their limitations when interpreting consistencies and differences in existing results, as well as when designing future methodological and empirical research questions. Since there is no single correct method, it would be ideal if each dataset were analyzed using all possible methods—a decidedly unrealistic proposal. Short of that, a systematic and quantitative comparison of different timescale estimation methods would serve as an extremely useful reference for contextualizing current and new results, while guiding the development of novel methods that address shared shortcomings of existing ones. Complementary to method development, neural timescales can serve as a standardized metric for characterizing neural dynamics at multiple scales, for which a curated database of existing estimates across brain regions, recording modalities, species, and task conditions—similar to databases of single-neuron electrophysiological measurements <cit.>—would be invaluable.
§.§ Models and data, an inseparable loop
Computational models suggest that various cellular and network mechanisms could alter timescales in the brain. Therefore, timescale-constrained models can be used as a great tool to provide mechanistic insights into brain dynamics and computations.
It is important to note again that activity timescales of neurons may differ significantly from time-constant parameters implemented in the models (e.g., membrane or synaptic time constants) (Box 1).
Hence, it is crucial to match the activity timescales of the models to those of the data.
For models with fewer mechanisms or free parameters, it is possible to derive the activity timescales analytically<cit.>.
However, in the brain, multiple mechanisms interact together to shape timescales, and studying their interactions analytically is not straightforward.
In such cases, qualitative matching (e.g., generating a similar ordering of the timescale hierarchy<cit.>) or a grid search over multiple model parameters<cit.> is used to fit the model timescales to data.
Nevertheless, such approaches may be inadequate when the parameter space is high-dimensional and the interaction between parameters is unknown. Fitting detailed mechanistic models with high-dimensional parameter space is possible using optimization or inference procedures such as genetic algorithms<cit.>, Approximate Bayesian Computation (ABC)<cit.> and simulation-based inference (SBI)<cit.>.
In particular, since ABC and SBI estimate the posterior distribution of parameters, they identify degeneracies in parameter space and uncover potential relations between different parameters, which helps us better understand how combinations of mechanisms can give rise to observed neural timescales.
Even with often imperfect (or, misspecified) models in neuroscience, when combined with inference methods that appropriately account for such errors<cit.>, fitting mechanistic models to neural activity across multiple modalities<cit.> could provide a holistic understanding of neural timescales across different scales of brain dynamics and may lead to new predictions that can be tested experimentally.
§.§ Timescales seed further synergies between neuroscience and ML
State-of-the-art machine learning tools have been readily adopted to interpret neural and behavioral data over multiple timescales. Conversely, the works discussed in Section <ref> further attempt to build a theoretical bridge between principles of dynamic neural computation and deep and/or recurrent artificial neural networks. Here, we make two broad remarks that can guide future research.
First, viewing network timescales as the temporal signature of the geometry of the network's dynamical landscape, it stands to reason that optimizing the network to achieve a certain task-conducive geometry (e.g., a line attractor or limit cycle) may be reformulated as optimization for a certain repertoire or diversity of timescales, as recent works have demonstrated a relationship between a network's trainability and the stability of its dynamics (i.e., Lyapunov exponent)<cit.>. This has implications for RNN-based machine learning models, as "pretraining" for certain measures of network dynamics can be done without explicit task instructions<cit.>, and can also provide potential insights for how neural circuits organize themselves during early development without input<cit.>.
Second, there is a growing interest in the emerging field of NeuroAI to develop models that not only can perform cognitive and behavioral tasks, but also produce realistic neural activity<cit.>. While developing such models may seem challenging, matching models' activity timescales to that of neural data can serve as a first step in building task-optimized models with realistic activity fluctuations. Timescale constraints may further alter the solution landscape of such models to what can be more relevant for understanding brain computations.
At the same time, modern ML models such as large language models or deep reinforcement learning models can facilitate studying the role of neural timescales in naturalistic tasks occurring over longer timescales than what can be studied in common laboratory-constrained experiments. Indeed, a recent study on freely behaving animals reported a different organization of neural timescales across brain areas than what has been found during resting state<cit.>. Therefore, timescales can serve as a key to open new avenues where ML tools can expedite uncovering mechanisms underlying brain computation.
§.§ Overlooked timescales and outlook
Much of systems and cognitive neuroscience has been based on a specific set of tasks, often with a trial structure between 1 to 10 seconds long, which artificially limits observed neural and behavioral timescale to a small range of the spectrum. For instance, spontaneous animal behavior in natural environments exhibits correlations over a long range of timescales that cannot be measured in common experimental settings <cit.>. On the other hand, processes such as the release of synaptic vesicles occur on a sub-second timescale <cit.> that are not detectable with many measurement tools. Furthermore, different physiological processes may be involved in shaping neural timescales that are often ignored when studying the mechanisms underlying neural timescales. For example, astrocytes have slow intracellular calcium fluctuations that may mediate slow neural modulations <cit.>. Astrocytes may also play a significant role in neural computations, as recently was shown that neuron–astrocyte networks can perform similar computations to complex machine learning models like Transformers <cit.>.
How processes across different timescales interact may be of tremendous interest: do the sub-second dynamics of synaptic vesicles play a role in shaping how, for example, memory engrams are formed over days and months, beyond the mean activity? Conversely, do fluctuations on longer timescales, such as circadian or seasonal rhythms, beyond a designated static state like age or disease condition, impact moment-to-moment neural dynamics? To investigate these questions, we likely need to leverage a combination of different experimental paradigms in the lab and naturalistic settings <cit.>, together with long-term and high temporal resolution neural recordings <cit.>, as well as novel data analysis and modeling tools to parse the interactions between processes spanning different orders of magnitude in timescale.
§.§ Acknowledgments
This work was supported by the German Research Foundation (DFG) through Germany’s Excellence Strategy (EXC-Number 2064/1, PN 390727645) (RZ, JHM, RG) and SFB1233 (PN 276693517) (JHM), the German Federal Ministry of Education and Research (Tübingen AI Center, FKZ: 01IS18039) (AL, JHM, RG), the Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation endowed by the Federal Ministry of Education and Research (RZ, AL), the European Union (ERC, “DeepCoMechTome”, ref. 101089288) (JHM), the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 101030918 (AutoMIND) (RG).
The authors thank Ana Manea and Jan Zimmermann for providing the timescales of fMRI data (<ref>a), Bradley Voytek and the speakers and participants of the Cosyne 2022 workshop “Mechanisms, functions, and methods for diversity of neuronal and network timescales” (https://www.rdgao.com/blog/2022/03/28/) for valuable discussions,
and SciDraw (https://scidraw.io/), Wikipedia (https://en.wikipedia.org/wiki/Dopaminergic_pathways) and Pixel perfect from Flaticon (https://www.flaticon.com/) for providing the basis icons for some of the illustrations.
Conformal Prediction in Dynamic Biological Systems
Alberto Portela
Computational Biology Lab
MBG-CSIC, Spanish National Research Council
Pontevedra, Spain
Julio R. Banga
Computational Biology Lab
MBG-CSIC, Spanish National Research Council
Pontevedra, Spain
Marcos Matabuena
Department of Biostatistics
Harvard University
Cambridge, MA, USA
September 9, 2024
§ ABSTRACT
Uncertainty quantification (UQ) is the process of systematically determining and characterizing the degree of confidence in computational model predictions. In the context of systems biology, especially with dynamic models, UQ is crucial because it addresses the challenges posed by nonlinearity and parameter sensitivity, allowing us to properly understand and extrapolate the behavior of complex biological systems. Here, we focus on dynamic models represented by deterministic nonlinear ordinary differential equations. Many current UQ approaches in this field rely on Bayesian statistical methods. While powerful, these methods often require strong prior specifications and make parametric assumptions that may not always hold in biological systems. Additionally, these methods face challenges in domains where sample sizes are limited, and statistical inference becomes constrained, with computational speed being a bottleneck in large models of biological systems. As an alternative, we propose the use of conformal inference methods, introducing two novel algorithms that, in some instances, offer non-asymptotic guarantees, enhancing robustness and scalability across various applications. We demonstrate the efficacy of our proposed algorithms through several scenarios, highlighting their advantages over traditional Bayesian approaches. The proposed methods show promising results for diverse biological data structures and scenarios, offering a general framework to quantify uncertainty for dynamic models of biological systems. The software for the methodology and the reproduction of the results is available at https://zenodo.org/doi/10.5281/zenodo.13644870.
Contact: mailto:[email protected]@hsph.harvard.edu; mailto:[email protected]@csic.es
§ INTRODUCTION
In the field of systems biology, we utilize mechanistic dynamic models as tools to analyze, predict and understand the intricate behaviors of complex biological processes <cit.>. These models are typically composed of sets of deterministic nonlinear ordinary differential equations, and are designed to provide a quantitative understanding of the dynamics which would be difficult to achieve through other means. In particular, this mechanistic approach offers several advantages over data-driven approaches <cit.>. First, it can generate more accurate predictions and can be applied to a broader range of situations. Second, it provides a deeper understanding of how the system works thanks to its mechanism-based nature, making it easier to interpret the reasons behind its behavior. Finally, it requires less data for training because it is based on established theories and principles that describe the underlying processes. Overall, mechanistic models can help in understanding the dynamics of biological systems, in predicting their behaviour under different conditions, in generating testable hypotheses, and in identifying knowledge gaps.
However, these benefits come with a trade-off. As the number of elements (species) and unknown variables in the system increases, the model becomes significantly more complex in terms of the number of parameters and non-linear relationships. This complexity can make it difficult to interpret the model's results <cit.>, and it undermines identifiability, i.e., the ability to uniquely determine the unknown parameters of a model from the available data. As the complexity increases with more species and unknown parameters, achieving full identifiability and observability becomes more difficult <cit.>. As a result, developing a reliable dynamic mechanistic model can be a demanding and error-prone task, requiring considerable expertise and the use of comprehensive and systematic protocols <cit.>. Additionally, highly detailed models can lead to more uncertain predictions <cit.>. The uncertainty in parameters and their identifiability impacts model predictions; thus, ideally, we should be able to characterize such impact in an interpretable manner, aiming to make useful predictions despite poor identifiability <cit.>. Therefore, quantifying this uncertainty and how it affects different system states, a process known as Uncertainty Quantification (UQ) <cit.>, is an open and fundamental challenge <cit.>.
UQ plays a key role in enhancing the reliability and interpretability of mechanistic dynamic models <cit.>. It helps in understanding the underlying uncertainties in the model parameters and predictions, thereby improving the model’s predictive power and its utility in decision-making processes <cit.>. A lack of proper UQ can result in models that are too confident in their predictions, which can be misleading.
Different approaches for uncertainty quantification and robustness analysis in the context of systems biology have been reviewed elsewhere <cit.>. Roughly speaking, we can distinguish between Bayesian and frequentist methods <cit.>.
Lately, the most predominant approach in the literature is to use Bayesian methods, which treat model parameters as random variables. Bayesian methods can perform well even with small sample sizes, especially when informative priors are used. Frequentist methods often require larger sample sizes to achieve reliable estimates. In practice, Bayesian approaches require parametric assumptions to define likelihood equations and the specification of a prior to derive an approximate posterior distribution of parameters for approximate and analytical methods <cit.>. This process is crucial for model estimation and inference.
However, Bayesian approaches often demand significant computational resources. Moreover, in the particular case of systems of differential equations, it is also typical to encounter identifiability issues, which result in multimodal posterior distributions that are challenging to handle in practice. In fact, although non-identifiabilities poses challenges to both frequentist and Bayesian sampling approaches, the latter can be especially susceptible to convergence failures <cit.>. Prediction profile likelihood methods and variants <cit.> provide a competitive alternative by combining a frequentist perspective with a maximum projection of the likelihood by solving a sequence of optimization problems. However, they can be computationally demanding when a large number of predictions must be assessed.
Although various uncertainty quantification approaches are utilized in the domain of systems biology, comparative assessments of the strengths and weaknesses of state-of-the-art methods remain scarce. <cit.> recently presented a systematic comparison of four methods: Fisher information matrix (FIM), Bayesian sampling, prediction profile likelihood and ensemble modelling. The comparison was made considering case studies of increasing computational complexity. This assessment revealed an interplay between their applicability and statistical interpretability. The FIM method was not reliable, and the prediction profile likelihood did not scale up well, being very computationally demanding when a large number of predictions had to be assessed. An interesting trade-off between computational scalability and accuracy and statistical guarantees was found for the ensemble and Bayesian sampling approaches. The Bayesian method proved adequate for less complex scenarios; however, it faced scalability challenges and encountered convergence difficulties when applied to the more intricate problems. The ensemble approach presented better performance for large-scale models, but weaker theoretical justification.
Therefore there is a clear need of UQ methods with both good scalability and strong theoretical statistical properties. Recently, in the literature of statistics and machine learning research, the use of conformal prediction <cit.> to quantify the uncertainty of model outputs has become increasingly popular as an alternative to Bayesian methods and other asymptotic approximations <cit.>. One of the successful aspects of this methodology in practice is the non-asymptotic guarantees that ensure the coverage of prediction regions is well-calibrated, at least from a global (marginal) perspective, as reviewed by <cit.>. However, to the best of our knowledge, their use in the systems biology and dynamical systems literature is not widespread, despite their expected promising properties for examining and making predictions in complex biological systems.
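For readers unfamiliar with the approach, the sketch below shows the basic split-conformal recipe for an arbitrary fitted mean-regression model; it is background only and is not one of the jackknife-based algorithms proposed in this paper, which are designed to reuse the limited data more efficiently. The toy model and data are hypothetical stand-ins.

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    """Interval with >= 1 - alpha marginal coverage for exchangeable calibration data."""
    residuals = np.abs(y_cal - predict(X_cal))
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))   # finite-sample corrected rank
    q = np.sort(residuals)[min(k, n) - 1]
    mu = predict(X_new)
    return mu - q, mu + q

# Toy usage with a hypothetical pre-fitted mean model (stand-in for any estimator).
rng = np.random.default_rng(0)
X_cal = rng.uniform(0.0, 1.0, (200, 1))
y_cal = 2.0 * X_cal[:, 0] + rng.normal(0.0, 0.1, 200)
predict = lambda X: 2.0 * X[:, 0]
print(split_conformal_interval(predict, X_cal, y_cal, np.array([[0.5]])))
```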
Considering systems biology applications, we must account for the typically limited number of observations. In order to increase the statistical modelling efficiency, we focus on conformal predictions based on the estimation of the conditional mean regression function <cit.>, and the semi-parametric distributional location-scale regression models <cit.>. To exploit the specific structure of location-scale regression models and maximize the limited available information, we propose two algorithms based on the jackknife methodology (see for example <cit.>). Recently, conformal prediction has also been extended to accommodate general statistical objects, such as graphs and functions that evolve over time, which can be very relevant in many biological problems <cit.>.
The main contributions of our work are:
* We propose and explore two conformal prediction algorithms for dynamical systems, specifically designed to optimize statistical efficiency when the measurement error of the biological system is homoscedastic. Specifically:
* The first algorithm focuses on achieving a specific quantile of calibration for each dimension of the dynamical system, making it more flexible when the homoscedasticity assumption is violated in any given coordinate.
* The second algorithm is specially designed for large dynamical systems. The core idea is to perform a global standardization of the residuals, deriving a global quantile of calibration to return the final prediction regions.
* We illustrate both algorithms using several case studies of increasing complexity and evaluate their performance in terms of statistical efficiency and computation time with respect to traditional Bayesian uncertainty quantification methods.
§ METHODOLOGY
§.§ Modeling framework and notation
We consider dynamic models described by deterministic nonlinear ordinary differential equations (ODEs),
ẋ(t) = f(x(t),θ,t)
x(t_0) = x_0
y(t) = g(x(t),θ,t),
in which x(t) ∈ℝ^n_x is the vector of state variables at time t, y(t) ∈ℝ^n_y is the vector of observables at time t, and θ∈ℝ^n_θ is the vector of unknown parameters.
The vector field f: ℝ^n_x×ℝ^n_θ×ℝ↦ℝ^n_x and the mappings g: ℝ^n_x×ℝ^n_θ×ℝ↦ℝ^n_y and x_0: ℝ^n_θ↦ℝ^n_x are possibly nonlinear.
The calibration of ODE models requires the estimation of the parameter vector θ from observations of the output y(t) at n time points, denoted as t_1, t_2, …, t_n. The total number of measurements is n × n_y. In practice, we observe a perturbed multidimensional random variable
ỹ= [ ỹ_1,1 ỹ_1,2 ⋯ ỹ_1,n; ỹ_2,1 ỹ_2,2 ⋯ ỹ_2,n; ⋮ ⋮ ⋱ ⋮; ỹ_n_y,1 ỹ_n_y,2 ⋯ ỹ_n_y,n ], and we may apply a specific data transformation to ỹ to render the underlying model homoscedastic in the transformed space. To be more precise, each random observation in the random matrix ỹ is generated through the probabilistic model introduced below:
h_k(ỹ_k,i,λ) = h_k(y_k(t_i),λ) + ϵ_k(t_i)= h_k(g_k(x(t_i),θ),λ) + ϵ_k,i,
i = 1, …, n; k = 1, …, n_y, λ∈ℝ^n_y,
where, for each coordinate of the dynamical system k = 1, …, n_y, ϵ_k ∈ℝ^n denotes the measurement noise with zero mean, and h_k(·, λ) is an increasing real-valued transformation function depending on a shape parameter λ. A trivial example of such a function is the logarithm, log(·), which is commonly used in the dynamical systems literature due to its equivalence with the log-normal probabilistic model <cit.>.
For identifiability purposes, model (<ref>) assumes that the variance of the random observation ỹ_k,i is solely a function of the regression function g_k(x(t_i), θ). This assumption encompasses specific heteroscedastic cases, such as the log-normal model mentioned earlier, where the signal-to-noise ratio is homoscedastic in the transformed space but not in the original space.
Model (<ref>) is a generalization of the Box-Cox transformation models, adapted for dynamical systems, and is defined in the regression literature by <cit.>. The authors refer to this semi-parametric transformation family as the transform-both-sides (TBS) model.
For simplicity, we assume that the measurement noise is independent across the temporal points t_i, i = 1, …, n, follows a normal distribution, and is homoscedastic over time within each dimension, i.e., ϵ_k,i = ϵ_k(t_i) ∼𝒩(0, σ_k^2), where σ_k denotes the standard deviation of state k of the dynamical system, for k = 1, …, n_y. Here, we also assume that h_k(s,λ) = s for all s ∈ℝ, meaning that h_k is the identity function, or equivalently that we model the ODE system directly from the original observations collected from the dynamical system. For simplicity, throughout this manuscript, we present all the modelling steps of the algorithms directly in terms of the regression function g(·, ·) to aid the reader. The transformed version of the different algorithms introduced here only involves applying the specified data transformation to the original sample ỹ and then running the non-transformed algorithm in the transformed (or image) space.
The maximum likelihood estimate (MLE) of the vector of unknown parameters θ, denoted as θ̂, for any dataset 𝒟_n can be found by minimizing the negative log-likelihood function,
θ̂ = argmin_θ∈ℝ^n_θ ℒ(θ;𝒟_n), where ℒ(θ;𝒟_n) = 1/2∑^n_y_k=1∑^n_i=1[ log(2πσ_k^2) + ( (ỹ_k,i - g_k(x(t_i),θ))/σ_k)^2 ],
which is the predominant optimization approach in the field of dynamical biological systems. Another popular approach, when no external information about the probabilistic mechanism of the random error ϵ is available, is to use the minimization of the mean squared error as the optimization criterion to estimate the parameters of the regression function g.
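As a concrete illustration of this estimation step, the following minimal Python sketch minimizes the Gaussian negative log-likelihood above for a one-dimensional ODE. It is not the authors' implementation: SciPy, the logistic right-hand side, the local Nelder-Mead optimizer, and all function names are our own illustrative assumptions (the study itself relies on a global hybrid Matlab solver, as described in the Supplementary Information).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simulate(theta, t_obs, x0):
    """Integrate the ODE and return the observables at the measurement times.
    Illustrative right-hand side: logistic growth with g = identity."""
    r, K = theta
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K),
                    (t_obs[0], t_obs[-1]), x0, t_eval=t_obs)
    return sol.y                      # shape (n_y, n)

def neg_log_likelihood(theta, t_obs, y_obs, sigma, x0):
    """Gaussian negative log-likelihood, matching the criterion above."""
    y_model = simulate(theta, t_obs, x0)
    z = (y_obs - y_model) / sigma[:, None]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma[:, None] ** 2) + z ** 2)

def fit_mle(t_obs, y_obs, sigma, x0, theta0):
    """Local optimizer stand-in for the global hybrid solver used in the paper."""
    res = minimize(neg_log_likelihood, theta0,
                   args=(t_obs, y_obs, sigma, x0), method="Nelder-Mead")
    return res.x
```

In practice a global or multistart optimizer is preferable for non-convex ODE calibration problems; the local method above is kept only to keep the sketch short.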
§.§ Conformal prediction for dynamical systems
Here, our objective is to present new conformal prediction algorithms for the class of dynamical systems stated above. The key idea is to consider the solution of the dynamical system as the regression function g (for example, the conditional mean function) and the observed biological signals as corrupted by a measurement error ϵ. Then, using the residuals, it is possible to derive prediction regions via different conformal inference strategies. The underlying challenge in this type of regression is that the signal of the time series is observed at only a small number n of time points. Using full conformal methods, which require fitting hundreds or even thousands of models with n+1 observations, is not advisable due to the prohibitive computational cost. This cost arises from the need to obtain model parameters through the mathematical optimization of large and high-dimensional systems of differential equations.
On the other hand, split conformal methods (which fit each model component using random, disjoint splits) are not a good strategy from a statistical efficiency perspective, since typically n < 20 in this type of biological problem. To mitigate the limitations of full and split conformal, we propose two new conformal prediction methods specific to dynamical systems that increase statistical efficiency in different scenarios and are based on jackknife techniques <cit.>.
Given a new random observation Y_n+1= g(x(t_n+1),θ)+ϵ_n+1, independent and identically distributed (i.i.d.) with respect to 𝒟_n (the original sample of n observations collected from the dynamical system), and for a predefined confidence level α∈ [0,1], the goal of predictive inference is to provide a prediction region C^α(·) ⊂ℝ^n_y such that ℙ(Y_n+1∈ C^α(t_n+1)| T=t_n+1) = 1 - α, where the probability is conditioned on the temporal instant t_n+1. We assume for practical purposes that such a prediction region exists and is unique.
Conformal prediction <cit.>
is a general uncertainty quantification framework that, independent of the underlying regression function g employed, provides non-asymptotic marginal (global) guarantees of the type ℙ(Y_n+1∈C^α(t_n+1)) ≥ 1 - α. In this case, the probability ℙ is over the random sample 𝒟_n ∪ (t_n+1,Y_n+1). In the literature, there are three variants of conformal predictions related to the partitioning of the original random sample 𝒟_n <cit.>: full, split, and jackknife conformal.
To determine whether a single prediction for the fixed pair (t_n+1, Y_n+1) is within the prediction region for any confidence level α∈ [0, 1], full conformal prediction requires using all n+1 observations and estimating the regression model g. For each fixed pair (t_n+1, Y_n+1), this involves optimizing the parameter vector θ and evaluating whether the point lies inside or outside the prediction region by numerically approximating the predictions across a large-scale grid of observations, both in the temporal domain and in the states space. Such a procedure is computationally intensive, often requiring the estimation of hundreds of models, making it impractical for applications involving dynamical systems, especially in high-dimensional settings.
As an alternative, split conformal is valid for obtaining prediction regions for any new data point (t_n+1,Y_n+1); it typically uses ⌊ n/2 ⌋ observations to estimate the function g and the remaining n-⌊ n/2 ⌋ to obtain the calibration quantile used to derive the final predictive regions.
Finally, the jackknife approach is an intermediate method for making predictions for any data point (t_n+1,Y_n+1) and deriving the prediction regions without sacrificing statistical efficiency. The jackknife approach requires fitting n predictive models, excluding the i-th observation in the i-th iteration. Here, in order to improve the robustness of the statistical properties of conformal jackknife, we derive two algorithms based on <cit.> and on the jackknife+, a recent jackknife conformal strategy proposed in <cit.>.
§.§ Algorithms
Algorithms <ref> and <ref> describe the core steps of our conformal UQ strategies for dynamical systems. In both algorithms, the first step involves excluding each i-th observation and fitting the regression functions to obtain the jackknife residuals.
In the first algorithm, CUQDyn1, we apply a version of conformal prediction in each coordinate of the dynamical system, introducing flexibility in the modelling in cases where the uncertainty shape varies across dimensions. However, the theoretical convergence rates are often slower and require a larger number of observed data points in comparison with our second algorithm.
To alleviate this issue and create a more efficient algorithm in some special homoscedastic cases, the second algorithm, CUQDyn2, uses the assumption that the model is homoscedastic along each coordinate. In its second step, we standardize the residuals using a prior estimate of the standard deviation. Then, we compute a global calibration quantile. Finally, by re-scaling with the coordinate-specific standard deviation, we obtain the final prediction interval.
Further implementation details (as a Matlab package) are given in the Supplementary Information.
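For intuition, the following Python sketch illustrates the leave-one-out residual step and the two calibration variants just described. It is not the released Matlab package and deliberately simplifies the interval construction to a plain jackknife quantile rather than reproducing the algorithms verbatim; the helper `fit` (a user-supplied routine that calibrates the ODE model on a subset of the data and returns a prediction function) and all names are our own illustrative assumptions.

```python
import numpy as np

def jackknife_conformal(t_obs, y_obs, t_new, fit, alpha=0.1, per_coordinate=True):
    """Jackknife-style conformal regions for an ODE model.

    `fit(t, y)` must calibrate the ODE on the given points and return a callable
    model(t) -> array of shape (n_y, len(t)).  per_coordinate=True mimics the
    coordinate-wise calibration of CUQDyn1; False mimics the globally
    standardized calibration of CUQDyn2.
    """
    n_y, n = y_obs.shape
    resid = np.empty((n_y, n))
    for i in range(n):                              # leave-one-out refits
        keep = np.arange(n) != i
        model_i = fit(t_obs[keep], y_obs[:, keep])
        resid[:, i] = np.abs(y_obs[:, i] - model_i(t_obs)[:, i])

    y_hat = fit(t_obs, y_obs)(t_new)                # point prediction from all data

    if per_coordinate:                              # one calibration quantile per state
        q = np.quantile(resid, 1 - alpha, axis=1)[:, None]
    else:                                           # standardize, calibrate globally, rescale
        s = resid.std(axis=1, ddof=1)[:, None]
        q = np.quantile(resid / s, 1 - alpha) * s
    return y_hat - q, y_hat + q
```

The per-coordinate branch lets the calibration quantile adapt to each state separately, while the global branch pools standardized residuals, which is advantageous when few time points are available and the homoscedasticity assumption holds.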
§.§ Theory
For this theorem, and all results that follow, all probabilities are stated with respect to the distribution of the training data points (t_1, Y_1), …,(t_n, Y_n) and the test data point (t_n+1, Y_n+1) drawn i.i.d. from an arbitrary distribution P, and we assume implicitly that the regression method g is invariant to the ordering of the data (i.e., invariant to permutations). We will treat the sample size n ≥ 2 and the target coverage level α∈[0,1] as fixed throughout.
Theorem: The conformal jackknife algorithms CUQDyn1 and CUQDyn2 satisfy
ℙ(Y_n+1∈C^α(t_n+1)) ≥ 1-2α.
This result is a consequence of <cit.>. The target coverage in the inequality is 1-α, which is attained in most practical cases except for some non-trivial mathematical counterexamples.
§ RESULTS
To assess the performance of the uncertainty quantification methods outlined in the previous section, we apply them to four case studies based on dynamic models of increasing complexity (Table 1 shows a summary): (i) a simple logistic growth model, (ii) a classical Lotka-Volterra model considering 2 species, (iii) the well-known α-pinene kinetic model, widely used in parameter estimation studies, and (iv) the challenging NFKB signalling pathway. For each of these models, we formulated several parameter estimation problems, generating synthetic data sets under different scenarios of data cardinality and noise levels. All problems are fully observed except the NFKB case study. Due to space constraints, details about the models and the different estimation problems are given in the Supplementary Information.
Below, we present and discuss these case studies, for which we have considered synthetic datasets generated according to the noise model described by Equation (<ref>). For simplicity, we assumed the errors to be normally distributed, centered around the noise-free data sample, and we adopted a homoscedastic model, i.e., the variance remains constant across each dimension of the dataset. More specifically,
ỹ_k,i∼𝒩(y_k(t_i), σ_k^2), i = 1, …, n; k = 1, …, n_y,
where σ_k=ϵ·μ_k, with μ_k=∑_i=1^ny_k(t_i)/n capturing the mean value of state k, and ϵ representing the percentage of added noise. The ODE parameters θ are estimated by maximum likelihood for Gaussian data, as specified in Equation (<ref>). Our results are compared with a Bayesian method (STAN), as described in the Supplementary Information.
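A minimal sketch of this data-generation scheme is given below; the function name is ours, and the noise-free trajectories `y_true` are assumed to come from the ODE solver.

```python
import numpy as np

def make_synthetic(y_true, eps, seed=0):
    """Perturb noise-free trajectories y_true (shape n_y x n) as described above:
    sigma_k = eps * mean of state k, additive Gaussian noise, constant over time."""
    rng = np.random.default_rng(seed)
    mu = y_true.mean(axis=1, keepdims=True)   # mean value mu_k of each state
    sigma = eps * mu                          # eps = fraction of added noise (e.g. 0.1)
    return y_true + sigma * rng.standard_normal(y_true.shape), sigma
```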
§.§ Case I: Logistic growth model
To evaluate the performance of our methods on this case study, we considered various scenarios with different noise levels (0%, 1%, 5% and 10%) and dataset sizes (10, 20, 50 and 100 data points). For each combination of noise level and dataset size, we generated 50 different synthetic datasets, totaling 800 unique datasets. By generating multiple datasets for each scenario, we were able to obtain a robust estimate of the methods' behavior and assess their consistency across different realizations of the data.
The comparative analysis of the logistic growth model, as shown in Figure <ref>, highlights the robustness of the proposed methods CUQDyn1 and CUQDyn2 compared to conventional methodologies such as the Bayesian approach implemented with STAN. For a 10-point synthetic dataset with a 10 percent noise level, the predictive regions obtained by both conformal methods showed good coverage without requiring prior calibration of the models, unlike the Bayesian approach. Moreover, both CUQDyn1 and CUQDyn2 yield predictive regions comparable to those generated by the jackknife+ method; however, in this particular case, one of the proposed methods shows superior performance.
In terms of computational efficiency, the conformal methods proved to be marginally faster than STAN, even for a problem of this small size, with differences on the order of a few seconds. This makes them more suitable for real-time applications.
To examine the marginal coverage ℙ(Y_n+1∈C^α(X_n+1)) for α=0.05, 0.1, 0.5 of our first algorithm, CUQDyn1, see Figure <ref> for different signal noise levels and sample sizes. The figure indicates the good empirical performance of our algorithm, achieving the desired nominal level in expectation, as expected from the non-asymptotic guarantees of conformal prediction methods.
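The empirical check behind such a figure can be sketched as follows. The helper simply counts how often held-out observations fall inside their prediction bounds; averaging it over the 50 replicate datasets of each scenario estimates the marginal coverage (names are illustrative and not part of the released code).

```python
import numpy as np

def empirical_coverage(lower, upper, y_heldout):
    """Fraction of held-out observations falling inside their prediction bounds;
    averaged over replicate datasets this estimates P(Y_{n+1} in C^alpha)."""
    return float(np.mean((y_heldout >= lower) & (y_heldout <= upper)))
```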
§.§ Case II: Lotka-Volterra model
For this case study we generated datasets with the same noise levels (0%, 1%, 5% and 10%) as in the previous example and three different sizes (30, 60 and 120 points). Additionally, for each combination of noise level and dataset size, we generated 50 different synthetic datasets, resulting in a total of 600 unique datasets.
Figure <ref> shows the results for a 30-point Lotka-Volterra dataset, indicating that the predictive regions generated by the conformal methods and STAN are similar in terms of coverage. However, as in the previous case, CUQDyn1 and CUQDyn2 offer the advantage of not requiring extensive hyperparameter tuning, while also being more computationally efficient. In this particular example, while the Bayesian method obtains results within a timeframe on the order of minutes, both conformal methods achieve this in a significantly shorter span, on the order of seconds.
§.§ Case III: Isomerization of α-Pinene
The dataset generation procedure for this case study mirrored that used for the Logistic model, employing the same noise levels and dataset sizes. Although we generated synthetic datasets to assess the method's behavior, we illustrated this behavior with a real dataset from <cit.>.
Figure <ref>
shows the resulting regions of the isomerization of α-Pinene by applying the different algorithms to the 9-point real dataset. The results are once again consistent between both conformal algorithms and closely align with the regions obtained using STAN. In terms of computational cost, the conformal algorithms are notably more efficient, requiring less than a minute to compute the regions, whereas the Bayesian approach takes several minutes.
§.§ Case IV: NFKB signaling pathway
The dataset generation process was similar to the one used in the previous case studies. The results of applying different algorithms to a 13-point synthetic dataset are illustrated in the Supplementary Information. As shown, both methods based on conformal inference yield results that are in close agreement with each other. However, in this case, the computational cost difference becomes particularly significant. While the CUQDyn1 and CUQDyn2 algorithms compute the regions within a few minutes, the Bayesian approach requires several hours to achieve the same task.
§ DISCUSSION
In this study, we present two algorithms, CUQDyn1 and CUQDyn2, which use conformal methods to perform uncertainty quantification. These methods allow the computation of prediction regions for nonlinear dynamic models of biological systems. We assume that the signal-to-noise ratio is homoscedastic in the measurements or in some transformation of the original data. We successfully compared the performance of these new methods with Bayesian approaches using a set of problems of increasing complexity. The main conclusions from the numerical results are summarized below.
Our algorithms were significantly faster than the Bayesian method (STAN) for the case studies examined. Our methods, which do not require tuning of hyperparameters, performed well in agreement with the Bayesian approach for smaller case studies and larger datasets (more than 50 temporal points). However, for high-dimensional biological systems, as illustrated with the NFKB case study, our conformal methods exhibited better accuracy, while STAN encountered convergence issues.
Furthermore, our methods, although not based on specified regression models, achieved good marginal coverage due to their non-asymptotic properties. In contrast, Bayesian methods, exemplified by STAN, showed a more critical impact of poor calibration on marginal coverage, especially for small sample sizes, due to the lack of non-asymptotic properties.
Our study also revealed that obtaining good coverage properties with a Bayesian method requires careful tuning of the prior, which can be challenging even for well-known small problems and may be very difficult for new, larger problems arising in real applications. We encountered convergence issues with the MCMC strategy in STAN, likely due to the multimodal nature of posterior distributions and identifiability issues, consistent with previous reports <cit.>.
A primary limitation of our new methods is that the prediction regions might occasionally take negative values when observed states are very close to zero, which may be unrealistic from a mechanistic perspective. This issue is observed in both the STAN Bayesian implementation and the conformal methods presented here. One potential cause is the assumption of homoscedasticity (i.e., consistent signal-to-noise ratio across the entire domain). Moreover, the underlying model may not be correctly specified across all parts of the domain.
A straightforward solution to this issue is to apply a prior data transformation, such as the log-normal transformation, which allows for modeling heteroscedastic scenarios, or a more general family of possible transformations introduced in the model (<ref>). To enhance the usability of our proposed methods within the scientific community, we have made the code and data from this study available in a public repository. Moving forward, we plan to offer updates, including optimized code for high-performance computers, novel validations, and automatic transformation approaches for various error structures in the modeling.
Overall, our study presents a new framework for uncertainty quantification in dynamical models using conformal prediction methods. This framework provides an alternative to classical Bayesian methods. Notably, our new methods are computationally scalable, which is crucial for large biological models. From a mathematical statistics perspective, they offer non-asymptotic guarantees and avoid the technical difficulties of calibrating prior functions necessary in Bayesian statistics. For future work, we suggest exploring conformal quantile algorithms for massive dynamical biological systems <cit.>. These algorithms typically provide better conditional coverage than other conformal algorithms <cit.> and do not require the assumption of symmetric random errors. However, applying quantile conformal algorithms in practice may require collecting more temporal observations of dynamical systems, which might not always be feasible in real-world scenarios.
§ ACKNOWLEDGMENTS
JRB acknowledges support from grant PID2020-117271RB-C22 (BIODYNAMICS) funded by MCIN/AEI/10.13039/01100011033, and from CSIC grant PIE 202470E108 (LARGO). The authors also wish to thank Javier Enrique Aguilar Romero for his assistance with the use of Stan. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
§ APPENDIX - SUPPLEMENTARY MATERIAL
§ MATLAB IMPLEMENTATION OF THE ALGORITHMS
We implemented our CUQDyn1 and CUQDyn2 algorithms in Matlab. Parameter estimations were formulated as the minimization of a least-squares cost function subject to the dynamics (described by the model ODEs) and parameter bounds. These problems are non-convex and were solved using a global hybrid method, enhanced scatter search (eSS), due to its good performance and robustness <cit.>. eSS is available in Matlab as part of the MEIGO optimization toolbox <cit.>. Our code also has dependencies on the Optimization Toolbox and the Parallel Computing Toolbox.
The software for the methodology and the reproduction of the results is available at https://zenodo.org/doi/10.5281/zenodo.13644870https://zenodo.org/doi/10.5281/zenodo.13644870. All computations were carried out on a PC DELL Precision 7920 workstation with dual Intel Xeon Silver 4210R processors.
§ COMPARISON WITH A BAYESIAN METHOD
Bayesian methods are a classical approach for performing automated uncertainty quantifications by estimating the posterior distribution P(θ|𝒟_n), where θ represents the parameter of interest and 𝒟_n = {X_i}_i=1^n denotes the observed data. The key components in Bayesian analysis are the prior distribution P(θ), which encapsulates our initial beliefs about θ, and the likelihood function P(𝒟_n |θ), which represents the probability of observing the data 𝒟_n given the parameter θ.
In many practical scenarios, computing the posterior distribution analytically is challenging. Markov chain Monte Carlo (MCMC) methods provide general and powerful techniques to estimate the posterior distribution by generating samples from it. Notable MCMC algorithms include Metropolis-Hastings and Gibbs sampling.
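For illustration, a bare-bones random-walk Metropolis-Hastings sampler can be sketched as below. This is only a didactic stand-in (STAN itself relies on a Hamiltonian Monte Carlo / NUTS sampler), and all names are our own assumptions rather than part of the study's code.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for log p(theta | D)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                 # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```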
Nowadays, there are general software tools available for implementing Bayesian inference and MCMC methods. One such tool is STAN. To use STAN, one writes a model in the STAN modeling language, which involves defining the data, parameters, and model (i.e., prior and likelihood). STAN can be seamlessly integrated with R through the rstan package <cit.>, allowing users to perform Bayesian analyses within the R environment. The package provides functions to compile STAN models, fit them to data, and extract samples for posterior analysis. Our implementations of the different case studies are also available in the Zenodo link above.
§ CASE STUDIES
§.§ Case I: Logistic growth model
As our initial case study we considered the well-known logistic model <cit.>, governed by a single differential equation with two unknown parameters. This model is frequently used in population growth and epidemic spread modeling.
ẋ = rx(1-x/K).
Here, r represents the growth rate, and K denotes the carrying capacity. The initial condition considered in the generation of the datasets was x(0)=10. Additionally, the values of the parameters used were r=0.1 and K=100. The initial condition is assumed to be known across all case studies considered. Since this logistic model has an analytical solution, it facilitates the comparison of our methods' performance with other established conformal methods for algebraic models, such as the jackknife+ <cit.>.
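Since the model admits a closed-form solution, the noise-free trajectories can be generated directly, as in the short sketch below; the function name and the evenly spaced time grid are our own illustrative choices.

```python
import numpy as np

def logistic_solution(t, r=0.1, K=100.0, x0=10.0):
    """Closed-form solution of x' = r x (1 - x/K), x(0) = x0, with the nominal
    parameter values of this case study; used to generate noise-free trajectories."""
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * np.asarray(t, dtype=float)))

t_grid = np.linspace(0.0, 100.0, 11)   # an evenly spaced design on [0, 100]
x_nom = logistic_solution(t_grid)
```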
To evaluate the performance of our methods on this case study, we considered various scenarios with different noise levels (0%, 1%, 5% and 10%) and dataset sizes (10, 20, 50 and 100 data points). For each combination of noise level and dataset size, we generated 50 different synthetic datasets, totaling 800 unique datasets. By generating multiple datasets for each scenario, we were able to obtain a robust estimate of the methods' behavior and assess their consistency across different realizations of the data.
The comparative analysis of the logistic growth model, as shown in Figure <ref>, highlights the robustness of the proposed methods CUQDyn1 and CUQDyn2 compared to conventional methodologies such as the Bayesian approach implemented with STAN. For a 10-point synthetic dataset with a 10 percent noise level, the predictive regions obtained by both conformal methods showed good coverage without requiring prior calibration of the models, unlike the Bayesian approach. Moreover, both CUQDyn1 and CUQDyn2 yield predictive regions comparable to those generated by the jackknife+ method; however, in this particular case, one of the proposed methods shows superior performance.
In terms of computational efficiency, the conformal methods proved to be marginally faster than STAN, even for a problem of this small size, with differences on the order of a few seconds. This makes them more suitable for real-time applications.
To examine the marginal coverage ℙ(Y_n+1∈C^α(X_n+1)) for α=0.05, 0.1, 0.5 of our first algorithm, CUQDyn1, see Figure <ref> for different signal noise levels and sample sizes. The figure indicates the good empirical performance of our algorithm, achieving the desired nominal level in expectation.
[!htbp]
Numerical results corresponding to the comparative analysis of the Logistic model predictive regions. This table presents the 95% lower and upper predictive bounds (LPB and UPB, respectively) obtained for each time point (t) in a 10-point dataset subjected to 10% noise. The observed data (y) and the true state value (x_nom) are also shown. The results are reported for four methodologies: the proposed CUQDyn1 and CUQDyn2 methods, the original jackknife+ approach, and a Bayesian method implemented with STAN.
Column order: t, y, x_nom, then the (LPB, UPB) pair for each of CUQDyn1, CUQDyn2, Jackknife+, and STAN.
t y x_nom | CUQDyn1: LPB UPB | CUQDyn2: LPB UPB | Jackknife+: LPB UPB | STAN: LPB UPB
0 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01 1.000e+01
10 2.226e+01 2.320e+01 7.698e+00 4.440e+01 7.213e+00 4.357e+01 9.493e+00 4.507e+01 5.545e+00 4.756e+01
20 6.066e+01 4.509e+01 3.092e+01 7.100e+01 3.318e+01 6.954e+01 3.140e+01 7.043e+01 2.904e+01 7.781e+01
30 7.327e+01 6.906e+01 5.593e+01 9.594e+01 5.916e+01 9.552e+01 5.578e+01 9.442e+01 5.324e+01 1.021e+02
40 9.123e+01 8.585e+01 7.284e+01 1.095e+02 7.453e+01 1.109e+02 7.296e+01 1.083e+02 7.015e+01 1.138e+02
50 8.895e+01 9.428e+01 7.816e+01 1.149e+02 8.116e+01 1.175e+02 7.888e+01 1.143e+02 7.681e+01 1.198e+02
60 9.703e+01 9.782e+01 7.997e+01 1.167e+02 8.357e+01 1.199e+02 8.105e+01 1.169e+02 7.905e+01 1.215e+02
70 1.004e+02 9.919e+01 8.055e+01 1.177e+02 8.441e+01 1.208e+02 8.181e+01 1.189e+02 7.917e+01 1.232e+02
80 1.174e+02 9.970e+01 8.073e+01 1.183e+02 8.468e+01 1.210e+02 8.207e+01 1.198e+02 7.982e+01 1.229e+02
90 1.135e+02 9.989e+01 8.079e+01 1.185e+02 8.477e+01 1.211e+02 8.215e+01 1.201e+02 7.953e+01 1.234e+02
100 9.390e+01 9.996e+01 8.081e+01 1.185e+02 8.480e+01 1.212e+02 8.218e+01 1.203e+02 8.060e+01 1.230e+02
§.§ Case II: Lotka-Volterra model
As a second case study, we considered a two species Lotka-Volterra model <cit.>, often referred to as the predator-prey model. This model provides a fundamental framework for studying the dynamics between two interacting species. In its simplest form, it describes the interactions between a predator species and a prey species through a set of coupled differential equations with four unknown parameters:
ẋ_1 = x_1(α-β x_2),
ẋ_2 = -x_2(γ-δ x_1).
Here, x_1 and x_2 represent the populations of the prey and predator, respectively. The parameters α, β, γ and δ are positive constants representing the interactions between the two species. Specifically, these parameters dictate the growth rates and interaction strengths, capturing the essence of biological interactions such as predation and competition. The initial conditions considered in the generation of the datasets were x(0)=(10,5). Additionally, the values of the parameters used were α=γ=0.5 and β=δ=0.02.
For this case study we generated datasets with the same noise levels (0%, 1%, 5% and 10%) as in the previous example and three different sizes (30, 60 and 120 points). Additionally, for each combination of noise level and dataset size, we generated 50 different synthetic datasets, resulting in a total of 600 unique datasets.
Figure <ref>
shows the results for a 30-point Lotka-Volterra dataset, indicating that the predictive regions generated by the conformal methods and STAN are similar in terms of coverage. However, as in the previous case, CUQDyn1 and CUQDyn2 offer the advantage of not requiring extensive hyperparameter tuning, while also being more computationally efficient. In this particular example, while the Bayesian method obtains results within a timeframe on the order of minutes, both conformal methods achieve this in a significantly shorter span, on the order of seconds.
§.§ Case III: Isomerization of α-Pinene
As a third case study, we examined the α-pinene isomerization model. The isomerization process of α-pinene is significant in industry, especially in the production of synthetic fragrances and flavors. These complex biochemical reactions can be effectively modeled using a system of five differential equations with five unknown parameters. The resulting kinetic model has been a classical example in the analysis of multiresponse data <cit.>. The kinetic equations encapsulate the transformation dynamics of α-pinene into its various isomers through a series of reaction steps:
ẋ_1 =-(p_1+p_2)x_1,
ẋ_2 =p_1x_1,
ẋ_3 =p_2x_1-(p_3+p_4)x_3+p_5x_5,
ẋ_4 =p_3x_3,
ẋ_5 =p_4x_3-p_5x_5.
In the equations above, each p_i∈ [0,1], i=1,…,5 represents a different rate of reaction, defining the conversion speed from one isomer to another. The initial conditions considered in the generation of the datasets were x(0)=(100,0,0,0,0). Additionally, the values of the parameters used were p=(5.93e-05, 2.96e-05,2.05e-05,2.75e-04,4.00e-05). The dataset generation procedure for this case study mirrored that used for the Logistic model, employing the same noise levels and dataset sizes. Although we generated synthetic datasets to assess the method's behavior, we illustrated this behavior with a real dataset from <cit.>.
Figure <ref>
shows the resulting regions of the isomerization of α-Pinene by applying the different algorithms to the 9-point real dataset. The results are once again consistent between both conformal algorithms and closely align with the regions obtained using STAN. In terms of computational cost, the conformal algorithms are notably more efficient, requiring less than a minute to compute the regions, whereas the Bayesian approach takes several minutes.
[!htbp]
Parameter estimation comparison for the first three case studies. Since the parameter estimation process is identical for both algorithms proposed in this paper, for simplicity, we will refer to the parameters obtained with them as CUQDyn. For the case study on the isomerization of α-pinene, a real dataset was considered, and thus the nominal parameters for this problem are unknown.
Column order: Logistic (r, K), Lotka-Volterra (α, β, γ, δ), α-Pinene (p_1, p_2, p_3, p_4, p_5).
Method | r K | α β γ δ | p_1 p_2 p_3 p_4 p_5
True 1.000e-01 1.000e+02 5.000e-01 2.000e-02 5.000e-01 2.000e-02 NA NA NA NA NA
STAN 1.152e-01 1.021e+02 4.992e-01 2.004e-02 5.000e-01 2.001e-02 5.998e-05 2.797e-05 1.860e-05 2.826e-04 5.366e-05
CUQDyn 1.112e-01 1.027e+02 4.984e-01 2.001e-02 5.001e-01 2.000e-02 6.307e-05 2.841e-05 1.609e-05 2.729e-04 4.365e-05
§.§ Case IV: NFKB signaling pathway
The Nuclear Factor Kappa-light-chain-enhancer of activated B cells (NFKB) signaling pathway plays a key role in the regulation of immune response, inflammation, and cell survival. This pathway is activated in response to various stimuli, including cytokines, stress, and microbial infections, leading to the transcription of target genes involved in immune and inflammatory responses.
Here we consider the dynamics of this pathway as described by a system of differential equations <cit.>:
d(IKKn)/dt = kprod - kdeg · IKKn - Tr · k1 · IKKn,
d(IKKa)/dt = Tr · k1 · IKKn - k3 · IKKa - Tr · k2 · IKKa · A20 - kdeg · IKKa - a2 · IKKa · IkBa + t1 · IKKaIkBa - a3 · IKKa · IkBaNFkB + t2 · IKKaIkBaNFkB,
d(IKKi)/dt = k3 · IKKa + Tr · k2 · IKKa · A20 - kdeg · IKKi,
d(IKKaIkBa)/dt = a2 · IKKa · IkBa - t1 · IKKaIkBa,
d(IKKaIkBaNFkB)/dt = a3 · IKKa · IkBaNFkB - t2 · IKKaIkBaNFkB,
d(NFkB)/dt = c6a · IkBaNFkB - a1 · NFkB · IkBa + t2 · IKKaIkBaNFkB - i1 · NFkB,
d(NFkBn)/dt = i1 · kv · NFkB - a1 · IkBan · NFkBn,
d(A20)/dt = c4 · A20t - c5 · A20,
d(A20t)/dt = c2 + c1 · NFkBn - c3 · A20t,
d(IkBa)/dt = -a2 · IKKa · IkBa - a1 · IkBa · NFkB + c4a · IkBat - c5a · IkBa - i1a · IkBa + e1a · IkBan,
d(IkBan)/dt = -a1 · IkBan · NFkBn + i1a · kv · IkBa - e1a · kv · IkBan,
d(IkBat)/dt = c2a + c1a · NFkBn - c3a · IkBat,
d(IkBaNFkB)/dt = a1 · IkBa · NFkB - c6a · IkBaNFkB - a3 · IKKa · IkBaNFkB + e2a · IkBanNFkBn,
d(IkBanNFkBn)/dt = a1 · IkBan · NFkBn - e2a · kv · IkBanNFkBn,
d(cgent)/dt = c2c + c1c · NFkBn - c3c · cgent.
In agreement with the scenario considered by <cit.>, we assume that the available measurements are determined by the observation function g: ℝ^15→ℝ^6, which is defined as follows:
g(·) = (NFkBn(·), IkBa(·) + IkBaNFkB(·), A20t(·),
IKKn(·) + IKKa(·) + IKKi(·), IKKa(·), IkBat(·)).
Out of the system of 15 equations, only 6 observables, defined by the function g, are available. The parameter values used in the generation of the datasets are as follows:
a1 = 5e-01, a2 = 2e-01, t1 = 1e-01,
a3 = 1e+00, t2 = 1e-01, c1a = 5e-07,
c2a = 0e+00, c3a = 4e-04, c4a = 5e-01,
c5a = 1e-04, c6a = 2e-05, c1 = 5e-07,
c2 = 0e+00, c3 = 4e-04, c4 = 5e-01,
c5 = 3e-04, k1 = 2.5e-03, k2 = 1e-01,
k3 = 1.5e-03, kprod = 2.5e-05, kdeg = 1.25e-04,
kv = 5e+00, i1 = 2.5e-03, e2a = 1e-02,
i1a = 1e-03, e1a = 5e-04, c1c = 5e-07,
c2c = 0e+00, c3c = 4e-04.
It should be noted that in this problem, which involves 29 unknown parameters and 15 state variables, we only have access to 6 observable outputs. This discrepancy between the number of parameters and the available observables presents a challenge in the context of parameter identifiability, and is very common in systems biology applications. Identifiability refers to the ability to uniquely determine the model parameters based on the available data. When a system lacks identifiability, inferring unique parameter values from observable data becomes challenging, if not impossible. However, as mentioned in the introduction, by characterizing the impact of this lack of identifiability using appropriate uncertainty quantification (UQ) methods, it might still be possible to make useful predictions.
As shown, both methods based on conformal inference yield results that are in close agreement with each other. However, in this case, the computational benefits of our conformal strategies are especially pronounced. The and algorithms can compute the regions in just a few minutes, whereas the Bayesian approach requires several hours to accomplish the same task.
[Aldridge et al., 2006]albridge-burke-lauffenburger-sorger:2006
Aldridge, B., Burke, J., Lauffenburger, D., and Sorger, P. (2006).
Physicochemical modelling of cell signalling pathways.
Nature Cell Biology, 8(11):1195–1203.
[Angelopoulos et al., 2023]angelopoulos2023conformal
Angelopoulos, A. N., Bates, S., Lei, J., Fannjiang, C., Romano, Y., and Candès, E. (2023).
Conformal prediction: A gentle introduction.
Foundations and Trends® in Machine Learning, 16(4):494–591.
[Babtie and Stumpf, 2017]Babtie2017-rs
Babtie, A. C. and Stumpf, M. P. H. (2017).
How to deal with parameters for whole-cell modelling.
Journal of the Royal Society Interface, 14(133).
[Baker et al., 2018]Baker2018
Baker, R. E., Peña, J.-M., Jayamohan, J., and Jérusalem, A. (2018).
Mechanistic models versus machine learning, a fight worth fighting for the biological community?
Biology Letters, 14(5):20170660.
[Barber et al., 2021]10.1214/20-AOS1965
Barber, R. F., Candès, E. J., Ramdas, A., and Tibshirani, R. J. (2021).
Predictive inference with the jackknife+.
The Annals of Statistics, 49(1):486–507.
[Bayarri and Berger, 2004]Bayarri2004
Bayarri, M. J. and Berger, J. O. (2004).
The interplay of Bayesian and frequentist analysis.
Statistical Science, 19(1):58–80.
[Begoli et al., 2019]begoli2019need
Begoli, E., Bhattacharya, T., and Kusnezov, D. (2019).
The need for uncertainty quantification in machine-assisted medical decision making.
Nature Machine Intelligence, 1(1):20–23.
[Box et al., 1973]box1973some
Box, G., Hunter, W., MacGregor, J., and Erjavec, J. (1973).
Some problems associated with the analysis of multiresponse data.
Technometrics, 15(1):33–51.
[Carroll and Ruppert, 1984]carroll1984power
Carroll, R. J. and Ruppert, D. (1984).
Power transformations when fitting theoretical models to data.
Journal of the American Statistical Association, 79(386):321–328.
[Cedersund, 2012]cedersund:2012
Cedersund, G. (2012).
Conclusions via unique predictions obtained despite unidentifiability—new definitions and a general method.
FEBS Journal, 279:3513–3527.
[Cedersund, 2016]cedersund2016prediction
Cedersund, G. (2016).
Prediction uncertainty estimation despite unidentifiability: an overview of recent developments.
In Geris, L. and Gomez-Cabrero, D. (Eds.), Uncertainty in Biology, pages 449–466. Springer.
[Cedersund and Roll, 2009]cedersund-roll:2009
Cedersund, G. and Roll, J. (2009).
Model-based evaluation and comparison of potential explanations for given biological data.
FEBS Journal, 276(4):903–922.
[Coveney et al., 2016]Coveney2016
Coveney, P. V., Dougherty, E. R., and Highfield, R. R. (2016).
Big data need big theory too.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2080):20160153.
[DiStefano III, 2015]distefano2015dynamic
DiStefano III, J. (2015).
Dynamic Systems Biology Modeling and Simulation.
Academic Press.
[Egea et al., 2014]egea2014meigo
Egea, J. A., Henriques, D., Cokelaer, T., Villaverde, A. F., MacNamara, A., Danciu, D.-P., Banga, J. R., and Saez-Rodriguez, J. (2014).
MEIGO: An open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics.
BMC Bioinformatics, 15(1):136.
[Eriksson et al., 2019]Eriksson2019
Eriksson, O., Jauhiainen, A., Maad Sasane, S., Kramer, A., Nair, A. G., Sartorius, C., and Hellgren Kotaleski, J. (2019).
Uncertainty quantification, propagation and characterization by Bayesian analysis combined with global sensitivity analysis applied to dynamical intracellular pathway models.
Bioinformatics, 35(2):284–292.
[Geris and Gomez-Cabrero, 2016]geris2016uncertainty
Geris, L. and Gomez-Cabrero, D., editors (2016).
Uncertainty in Biology: A Computational Modeling Approach.
Springer.
[Guo et al., 2020]guo2020package
Guo, J., Gabry, J., Goodrich, B., and Weber, S. (2020).
Package ‘rstan’.
R package version 2.21.2. Available at <https://cran.r-project.org/web/packages/rstan/>.
[Hass et al., 2016]hass2016fast
Hass, H., Kreutz, C., Timmer, J., and Kaschek, D. (2016).
Fast integration-based prediction bands for ordinary differential equation models.
Bioinformatics, 32(8):1204–1210.
[Hines et al., 2014]hines2014determination
Hines, K. E., Middendorf, T. R., and Aldrich, R. W. (2014).
Determination of parameter identifiability in nonlinear biophysical models: A Bayesian approach.
Journal of General Physiology, 143(3):401–416.
[Hinkley, 1979]Hinkley1979
Hinkley, D. (1979).
Predictive likelihood.
The Annals of Statistics, 7(4):718–728.
[Ingalls, 2013]ingalls2013mathematical
Ingalls, B. P. (2013).
Mathematical Modeling in Systems Biology: An Introduction.
MIT Press.
[Kaltenbach et al., 2009]kaltenbach2009systems
Kaltenbach, H.-M., Dimopoulos, S., and Stelling, J. (2009).
Systems analysis of cellular networks under uncertainty.
FEBS Letters, 583(24):3923–3930.
[Kirk et al., 2015]kirk2015systems
Kirk, P. D., Babtie, A. C., and Stumpf, M. P. (2015).
Systems biology (un)certainties.
Science, 350(6259):386–388.
[Kreutz et al., 2012]kreutz2012likelihood
Kreutz, C., Raue, A., and Timmer, J. (2012).
Likelihood-based observability analysis and confidence intervals for predictions of dynamic models.
BMC Systems Biology, 6(1):120.
[Lei et al., 2018]lei2018distribution
Lei, J., G'Sell, M., Rinaldo, A., Tibshirani, R. J., and Wasserman, L. (2018).
Distribution-free predictive inference for regression.
Journal of the American Statistical Association, 113(523):1094–1111.
[Liepe et al., 2014]liepe2014framework
Liepe, J., Kirk, P., Filippi, S., Toni, T., Barnes, C. P., and Stumpf, M. P. (2014).
A framework for parameter estimation and model selection from experimental data in systems biology using approximate Bayesian computation.
Nature Protocols, 9(2):439–456.
[Linden et al., 2022]Linden2022
Linden, N. J., Kramer, B., and Rangamani, P. (2022).
Bayesian parameter estimation for dynamical models in systems biology.
PLOS Computational Biology, 18(10):e1010651.
[Lipniacki et al., 2004]lipniacki-paszek-brasier-luxon-kimmel:2004
Lipniacki, T., Paszek, P., Brasier, A., Luxon, B., and Kimmel, M. (2004).
Mathematical model of NFκB regulatory module.
Journal of Theoretical Biology, 228:195–215.
[Lugosi and Matabuena, 2024]lugosi2024uncertainty
Lugosi, G. and Matabuena, M. (2024).
Uncertainty quantification in metric spaces.
arXiv preprint arXiv:2405.05110.
[Massonis et al., 2022]Massonis2022
Massonis, G., Villaverde, A. F., and Banga, J. R. (2022).
Improving dynamic predictions with ensembles of observable models.
Bioinformatics, 39(1).
[Matabuena et al., 2024]matabuena2024conformal
Matabuena, M., Ghosal, R., Mozharovskyi, P., Padilla, O. H. M., and Onnela, J.-P. (2024).
Conformal uncertainty quantification using kernel depth measures in separable Hilbert spaces.
arXiv preprint arXiv:2405.13970.
[Mitra and Hlavacek, 2019]Mitra2019
Mitra, E. D. and Hlavacek, W. S. (2019).
Parameter estimation and uncertainty quantification for systems biology models.
Current Opinion in Systems Biology, 18:9–18.
[Mišković and Hatzimanikatis, 2010]Mikovi2010
Mišković, L. and Hatzimanikatis, V. (2010).
Modeling of uncertainties in biochemical reactions.
Biotechnology and Bioengineering, 108(2):413–423.
[Murphy et al., 2024]Murphy2024
Murphy, R. J., Maclaren, O. J., and Simpson, M. J. (2024).
Implementing measurement error models with mechanistic mathematical models in a likelihood-based framework for estimation, identifiability analysis, and prediction in the life sciences.
Journal of the Royal Society Interface, 21(210).
[National Research Council et al., 2012]National_Research_Council2012-wq
National Research Council, Division on Engineering and Physical Sciences, Board on Mathematical Sciences and Their Applications, and Committee on Mathematical Foundations of Verification, Validation, and Uncertainty Quantification (2012).
Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification.
National Academies Press.
[Plank and Simpson, 2024]plank2024structured
Plank, M. J. and Simpson, M. J. (2024).
Structured methods for parameter inference and uncertainty quantification for mechanistic models in the life sciences.
arXiv preprint arXiv:2403.01678.
[Prybutok et al., 2022]Prybutok2022-kz
Prybutok, A. N., Cain, J. Y., Leonard, J. N., and Bagheri, N. (2022).
Fighting fire with fire: Deploying complexity in computational modeling to effectively characterize complex biological systems.
Current Opinion in Biotechnology, 75:102704.
[Puy et al., 2022]Puy2022-yc
Puy, A., Beneventano, P., Levin, S. A., Lo Piano, S., Portaluri, T., and Saltelli, A. (2022).
Models with higher effective dimensions tend to produce more uncertain estimates.
Science Advances, 8(42):eabn9450.
[Quenouille, 1956]quenouille1956notes
Quenouille, M. H. (1956).
Notes on bias in estimation.
Biometrika, 43(3/4):353–360.
[Raue et al., 2013]raue2013joining
Raue, A., Kreutz, C., Theis, F. J., and Timmer, J. (2013).
Joining forces of Bayesian and frequentist methodology: A study for inference in the presence of non-identifiability.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1984):20110544.
[Romano et al., 2019]romano2019conformalized
Romano, Y., Patterson, E., and Candès, E. (2019).
Conformalized quantile regression.
Advances in Neural Information Processing Systems, 32.
[Sesia and Candès, 2020]sesia2020comparison
Sesia, M. and Candès, E. J. (2020).
A comparison of some conformal quantile regression methods.
Stat, 9(1):e261.
[Shafer and Vovk, 2008]shafer2008tutorial
Shafer, G. and Vovk, V. (2008).
A tutorial on conformal prediction.
Journal of Machine Learning Research, 9(3):371–421.
[Sharp et al., 2022]sharp2022parameter
Sharp, J. A., Browning, A. P., Burrage, K., and Simpson, M. J. (2022).
Parameter estimation and uncertainty quantification using information geometry.
Journal of the Royal Society Interface, 19(189):20210940.
[Siegfried et al., 2023]siegfried2023distribution
Siegfried, S., Kook, L., and Hothorn, T. (2023).
Distribution-free location-scale regression.
The American Statistician, 77(4):345–356.
[Simpson and Maclaren, 2023]Simpson2023
Simpson, M. J. and Maclaren, O. J. (2023).
Profile-wise analysis: A profile likelihood-based workflow for identifiability analysis, estimation, and prediction with mechanistic mathematical models.
PLOS Computational Biology, 19(9):e1011515.
[Smith, 2013]Smith2013
Smith, R. C. (2013).
Uncertainty Quantification: Theory, Implementation, and Applications.
Society for Industrial and Applied Mathematics.
[Streif et al., 2016]streif2016robustness
Streif, S., Kim, K.-K. K., Rumschinski, P., Kishida, M., Shen, D. E., Findeisen, R., and Braatz, R. D. (2016).
Robustness analysis, prediction, and estimation for uncertain biochemical networks: An overview.
Journal of Process Control, 42:14–34.
[Tsoularis and Wallace, 2002]Tsoularis2002
Tsoularis, A. and Wallace, J. (2002).
Analysis of logistic growth models.
Mathematical Biosciences, 179(1):21–55.
[Vanlier et al., 2013]vanlier-tiemann-hilbers-vanriel:2013
Vanlier, J., Tiemann, C., Hilbers, P., and van Riel, N. (2013).
Parameter uncertainty in biochemical models described by ordinary differential equations.
Mathematical Biosciences, 246:305–314.
[Villaverde and Banga, 2014]villaverde2014reverse
Villaverde, A. F. and Banga, J. R. (2014).
Reverse engineering and identification in systems biology: Strategies, perspectives, and challenges.
Journal of the Royal Society Interface, 11(91):20130505.
[Villaverde et al., 2019]villaverde2018benchmarking
Villaverde, A. F., Froehlich, F., Weindl, D., Hasenauer, J., and Banga, J. R. (2019).
Benchmarking optimization methods for parameter estimation in large kinetic models.
Bioinformatics, 35(5):830–838.
[Villaverde et al., 2022]Villaverde2022protocol
Villaverde, A. F., Pathirana, D., Fröhlich, F., Hasenauer, J., and Banga, J. R. (2022).
A protocol for dynamic model calibration.
Briefings in Bioinformatics, 23(1):1–12.
[Villaverde et al., 2023]Villaverde2023_comparisonUQ
Villaverde, A. F., Raimúndez, E., Hasenauer, J., and Banga, J. R. (2023).
Assessment of prediction uncertainty quantification methods in systems biology.
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 20(3):1725–1736.
[Wangersky, 1978]wangersky1978lotka
Wangersky, P. J. (1978).
Lotka-volterra population models.
Annual Review of Ecology and Systematics, 9:189–218.
|
http://arxiv.org/abs/2409.02549v2 | 20240904091139 | A Sequential Decision-Making Model for Perimeter Identification | [ "Ayal Taitler" ] | cs.AI | [ "cs.AI" ]
Department of Industrial Engineering and Management
Ben-Gurion University Of The Negev, Israel
A Sequential Decision-Making Model for Perimeter Identification
Ayal Taitler
===============================================================
§ ABSTRACT
Perimeter identification involves ascertaining the boundaries of a designated area or zone, requiring traffic flow monitoring, control, or optimization. Various methodologies and technologies exist for accurately defining these perimeters; however, they often necessitate specialized equipment, precise mapping, or comprehensive data for effective problem delineation. In this study, we propose a sequential decision-making framework for perimeter search, designed to operate efficiently in real-time and require only publicly accessible information. We conceptualize the perimeter search as a game between a playing agent and an artificial environment, where the agent's objective is to identify the optimal perimeter by sequentially improving the current perimeter. We detail the model for the game and discuss its adaptability in determining the definition of an optimal perimeter. Ultimately, we showcase the model's efficacy through a real-world scenario, highlighting the identification of corresponding optimal perimeters.
Keywords: Perimeter identification, Markov decision Process, Reinforcement Learning.
§ INTRODUCTION
In urban transportation systems, effective traffic management is paramount to ensuring safety, reducing congestion, and enhancing overall mobility. A critical aspect of traffic engineering involves accurately identifying perimeters within which traffic conditions are to be monitored, controlled, or optimized. The challenge of perimeter identification in traffic engineering revolves around delineating the boundaries of specific areas or regions where traffic characteristics are of particular interest. These areas may span from city centers and neighborhoods to highway sections and intersections. Precise and fast perimeter identification is crucial for deploying targeted traffic management solutions tailored to the unique needs and challenges of each defined zone.
There are primarily two categorizations for perimeter identification: static and dynamic. Static identification usually focuses on analyzing the connectivity and parameters of the network, forming partitions typically based on optimization or clustering techniques. For instance, jiang2023partitioning (2023) employed a Graph Convolutional Network (GCN) <cit.> to integrate spatial connectivity and traffic variables, subsequently identifying three levels of clusters. liu2019new introduced a weighted average between flow and speed, constructing a MILP (Mixed-Integer Linear Programming) model to identify the major skeleton of the perimeter.
Dynamic identification involves addressing changing traffic measures over time. A popular approach is to update the historical perimeter boundaries based on new incoming data. For example, ji2014empirical applied boundary adjustment using a dynamic mechanism for the city of Shenzhen, China. saeedmanesh2017dynamic identified clusters via the snake clustering algorithm saeedmanesh2016clustering and skeleton via MILP, subsequently refining them through another MILP and dynamic clustering.
In this work, we formulate the perimeter identification problem as a sequential decision-making model based on congestion heat map images that can be obtained in real-time from sources such as Google Maps (GMaps) <cit.>. Our model decomposes the global optimization inherent in the perimeter identification problem into discrete sequential steps, each aimed at guiding a "playing" agent toward the optimal solution. This approach enables the solving agent to comprehend the problem's structure and facilitates the transfer of understanding between identification problems. Additionally, it allows for tracking dynamic changes in the perimeter as part of its gameplay.
§ METHODOLOGY
The Markov Decision Process (MDP) <cit.> is a widely adopted approach for modeling sequential decision-making processes. Here, we model the perimeter identification problem as a game between two players. The first player is the environment, representing the current state of congestion, the existing perimeters, and how they can be altered. The second player is the agent, whose goal is to determine the best possible perimeter. The agent executes a sequence of actions, with each action altering the state of the environment, i.e., the active perimeter.
§.§ MDPs Background
A finite-horizon MDP is a tuple ⟨ S,A,R,T⟩, where S is the state space; A is the action space; T(s'|s,a) is the probability that the system transitions to state s' given state s and action a; and R(s,s') is the immediate reward function. The fact that T(s'|s,a) is a function of s, s' and a only (and not of the previous trajectory of the system) is referred to as the Markov property. We denote the horizon of the process by H.
A policy π:S →𝒫(A) is a mapping from the state space to the set of probability distributions over the action space. The probability of taking action a given state s is denoted by π(a|s). The optimal policy is denoted by π^*. The state–action value function Q^π(s,a) of a policy π, namely the expected total reward starting from a state s, taking action a, and following π afterward, is defined as the value of
π:
Q^π(s,a) = E_π[ ∑_k=0^H r_t+k+1 | s_t=s, a_t=a] = ∑_s'∈ S T(s'|s,a) ( R(s,s') + ∑_a'∈ A π(a'|s') Q^π(s',a') )
The optimal state-action-value function Q^*(s,a) of π^*, namely the expected total reward starting from a state s, taking action a, and acting optimally afterwards is defined as
Q^*(s,a) = ∑_s'∈ S T(s'|s,a) ( R(s,s') + max_a' Q^*(s',a') )
Full evaluation of the optimal state-action value function Q^* requires intractable computation over the whole action and state spaces. Thus, an approximate method to estimate Q, known as Q-learning <cit.>, tries to find the optimal policy by fitting the Q-function with temporal-difference (TD) learning, which at each training iteration moves the Q-function toward the one-step bootstrapped estimate given by the Bellman equation:
Q'(s,a) ← (1-α)Q(s,a) + α(R(s,s') + max_a' Q(s',a'))
where α is referred to as the learning rate.
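For concreteness, one tabular update of this rule can be sketched as follows; here states are assumed to be encoded as hashable objects (e.g., a frozenset of the currently selected intersections), and all names are illustrative rather than taken from the paper's implementation.

```python
from collections import defaultdict

Q = defaultdict(float)   # tabular state-action values, initialized at 0

def q_update(Q, s, a, r, s_next, candidate_actions, alpha=0.1):
    """One temporal-difference update of the rule above (finite horizon, no discount)."""
    best_next = max(Q[(s_next, a2)] for a2 in candidate_actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + best_next)
    return Q
```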
§.§ The Markov Model
To define the perimeter identification model as a sequential model, we will frame it as a search problem. The objective of the search is to locate the intersections, acting as the vertices of the perimeter. Here, we make the assumption that the perimeter shape is convex. This assumption is not overly restrictive, as the search process can be carried out iteratively to identify the perimeter as a combination of convex shapes. The MDP that defines the search problem is as follows:
States S The state of the system comprises the current selection of intersections, which determines a convex hull defining the perimeter. Clearly, intersections within the set residing inside the convex hull do not influence the shape of the perimeter but rather define a distinct state within the state space.
Actions A The actions in the game are straightforward: either add a new intersection to the set of intersections or remove an existing intersection from the current set. Once more, adding a new intersection that lies within the induced convex hull of the current state will transition the problem to a new state, yet it will not alter the actual convex hull defining the perimeter.
Transition T(s'|s,a) The transition function represents the "dynamics" of the game, defining how the system's state will transition from a particular state given an action. Therefore, in this scenario, the transition function merely modifies the set of currently selected intersections. The convex hull induced by the set of intersections is a direct representation of it, hence it is unnecessary for defining the state. However, it may prove useful for a more detailed state representation, as will be discussed in Section <ref>.
Reward R(s,s') The reward represents the effectiveness of the transition, indicating how much closer the optimization has come to achieving the optimal perimeter. Therefore, the reward signifies the additional portion of the perimeter beyond the previous perimeter. Specifically, the reward corresponds to the area added to the perimeter, which is proportional to the congestion in that area. To prevent the optimization from simply covering the entire field of view, penalties for adding non-congested areas are subtracted as a regularization measure for the optimization. The explicit reward expression is given in equation (<ref>).
R(s,s') = V(s')-V(s) = 1/β( ∑_p ∈ CH(s') ( w_p - λ1_{w_p=0} ) - ∑_p ∈ CH(s) ( w_p - λ1_{w_p=0} ) )
where β is a normalization term to scale the reward, e.g., the area of the frame. λ is the designer chosen regularization term for non-congested areas, CH(s) is the convex hull of the heat map of state s, and p represents a pixel belonging to the convex hull.
Lastly, w_p is the weight, or level of congestion, of pixel p.
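A possible implementation of this reward on a rasterized heat map is sketched below; the pixel-weight array `congestion`, the SciPy-based point-in-hull test, and all function names are our own assumptions and not part of the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def hull_value(intersections, congestion, lam, beta):
    """V(s): congestion weight summed over pixels inside the convex hull of the
    selected intersections, minus lam per uncongested pixel, scaled by 1/beta."""
    if len(intersections) < 3:
        return 0.0
    tri = Delaunay(np.asarray(intersections))        # point-in-hull test
    h, w = congestion.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()])
    inside = tri.find_simplex(pixels) >= 0
    wp = congestion.ravel()[inside]
    return (wp.sum() - lam * np.sum(wp == 0)) / beta

def reward(state, next_state, congestion, lam=1.0, beta=None):
    """R(s, s') = V(s') - V(s) for sets of (x, y) intersection pixel coordinates."""
    beta = beta if beta is not None else congestion.size
    return (hull_value(next_state, congestion, lam, beta)
            - hull_value(state, congestion, lam, beta))
```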
§.§ States and Actions Representations
State representation is a critical aspect of this model. In a simple representation of the state as an ordered set of intersections, the playing agent attempts to optimize the perimeter based solely on the reward. That is, all information about the game – such as the appearance of congested areas and the topological changes resulting from adding intersection i to the state – is inferred through the reward feedback. Consequently, altering the target area or even the distribution of congestion yields a different reward signal with the same state and action spaces, requiring a learning agent to essentially start from scratch in finding the optimal perimeter. A more elaborate state representation, capable of capturing topological properties, can be beneficial for generalization or knowledge transfer between different levels of congestion or even cities, facilitating the automatic discovery of perimeters.
The actions in the game play a significant role, both in terms of computational complexity and the quality of the perimeter. Too many actions may lead to an intractable optimization problem, while too few may limit the representation quality of the perimeter by the selected intersections. Additionally, the selected actions should undergo a connectivity analysis to ensure that the identified perimeter is a connected component in the city network. In the context of this work, we only demonstrate the optimization game using a basic set of actions to illustrate the technical merits of the model.
§ RESULTS AND DISCUSSION
[Figure: Toronto downtown heat map.]
We used here Google Maps <cit.> as it gives congestion heat maps over a chosen area map.
The heat map layer is updated online from traffic data collected in real time from sensors and drivers, making it useful for both static and dynamic perimeter identification. Here we focus on identifying a single perimeter and show how it can be easily found for a given heat map using the MDP presented in section <ref>. For the playing agent we used a Q-Learning <cit.> agent implementing the update in equation (<ref>). We look at downtown Toronto, which is a highly congested area; its heat map is given in figure <ref>.
Given the selected area's image, the heat map layer and the intersections can be identified using classical image processing. The information can also be extracted using other tools to get an accurate network of intersections, which serves as the actions in the game. Here, to illustrate the effectiveness of the game and optimization we extracted everything from the image directly.
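As a rough illustration of how such an agent can be set up, the sketch below implements tabular Q-learning over perimeter states, with a state encoded as the frozenset of currently selected intersections and an action that toggles one intersection. The episode structure, hyperparameters, and the reward_fn callable (which would wrap the reward defined above) are illustrative assumptions, not the configuration used for the Toronto experiments.

import random
from collections import defaultdict

def q_learning(n_intersections, reward_fn, episodes=500, steps=50,
               alpha=0.1, gamma=0.9, eps=0.2):
    # Tabular Q-learning over perimeter states. A state is the frozenset of
    # selected intersection indices; an action toggles one intersection
    # (add it if absent, remove it if present).
    Q = defaultdict(float)                                  # Q[(state, action)]
    actions = range(n_intersections)
    for _ in range(episodes):
        state = frozenset()                                 # start from an empty selection
        for _ in range(steps):
            if random.random() < eps:                       # epsilon-greedy exploration
                a = random.randrange(n_intersections)
            else:
                a = max(actions, key=lambda act: Q[(state, act)])
            nxt = frozenset(state - {a}) if a in state else frozenset(state | {a})
            r = reward_fn(state, nxt)                       # R(s, s') from the MDP above
            best_next = max(Q[(nxt, act)] for act in actions)
            Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
            state = nxt
    return Q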
We conducted three games for our Q-Learning agent.
* Conservative game: The objective of this game was to identify the smallest perimeter encompassing solely the congested areas, excluding congested forks with uncongested areas. Therefore, in this game, the agent received significant penalties for including uncongested areas. In this case, the regularization term was set to λ=10.
* Balanced game: The objective here was to strike a balance between incorporating uncongested areas and including congested areas. In essence, the agent could include uncongested areas if doing so enabled it to encompass more of the congestion within the perimeter. The regularization term was set to λ=1 in this case.
* Non-conservative game: The aim here was to incentivize the agent to incorporate all congestion, even if it entailed including non-congested regions. In this case, the regularization term was set to λ=0.1.
In all three games, the initial state was the same random state, chosen to demonstrate the varying evolution based on the optimization requirements. The results for the three games are presented in Figure <ref>. It can be observed that the agent was able to quickly identify the appropriate perimeters for different regularization terms. This highlights the flexibility of the model in easily adapting to various needs.
At this point, it's important to note that the convex hulls found are not the actual perimeters; rather, they are convex hulls with intersections as their vertices. To generate an actual perimeter, a deterministic post-processing step is necessary, based on the network's connectivity and edges of the convex hull.
§ CONCLUSIONS
We have introduced a sequential decision-making model for identifying perimeters in congested regions. This model frames the identification problem as a game, where a player selects the vertices of a convex hull to define the perimeter. Essentially, the proposed game solves the optimization task of finding the optimal perimeter. Additionally, we have introduced a regularization term, serving as a control mechanism for the model designer to adjust and fine-tune perimeter requirements directly into the optimization process.
By formulating the optimization as a sequential model, we enable the utilization of learning tools. This approach facilitates knowledge transfer and generalization of perimeters across different cities. Furthermore, it allows for tracking a dynamic perimeter without the need to solve new optimization problems from scratch.
arXiv:2409.02103v1 [astro-ph.EP] (3 September 2024)

A broad set of solar and cosmochemical data indicates high C-N-O abundances for the solar system

Ngoc Truong* and Christopher R. Glein
Space Science Division, Southwest Research Institute, 6220 Culebra Rd, San Antonio, TX 78238, USA

Jonathan I. Lunine
Department of Astronomy & Carl Sagan Institute, Cornell University, 122 Sciences Dr, Ithaca, NY 14850, USA

*Corresponding author: Ngoc Truong ([email protected])
§ ABSTRACT
We examine the role of refractory organics as a major C carrier in the outer protosolar nebula and its implications for the compositions of large Kuiper belt objects (KBOs) and CI chondrites. By utilizing Rosetta measurements of refractory organics in comet 67P/Churyumov-Gerasimenko, we show that they would make up a large fraction of the protosolar C inventory in the KBO-forming region based on the current widely adopted solar abundances. However, this would free up too much O to form water ice, producing solid material that is not sufficiently rock-rich to explain the uncompressed density of the Pluto-Charon system and other large KBOs; the former has been argued as the most representative value we have for the bulk composition of large KBOs (Barr & Schwamb 2016, Bierson & Nimmo 2019). This inconsistency further highlights the solar abundances problem: an ongoing challenge in reconciling spectroscopically determined heavy element abundances with helioseismology constraints. By employing a new dataset from solar CNO neutrinos and solar wind measurements of C, N, and O, we show that the uncompressed density of the Pluto-Charon system can be reproduced over a wide range of scenarios. We show that a lack of sulfates in Ryugu and Bennu samples implies a lower amount of water ice initially accreted into CI chondrite parent bodies than previously thought. These data are found to be consistent with the solar C/O ratio implied by the new dataset. Our predictions can be tested by future neutrino, helioseismology, and cosmochemical measurements.
§ INTRODUCTION
The Sun’s heavy element abundances play a crucial role in numerous areas of astronomy. For example, they serve as benchmarks for understanding other stars’ elemental compositions (Hinkel et al. 2014; Jofré et al. 2019). Though the abundances of heavy elements (all elements other than H, He) in the Sun are believed to be known to much better accuracy than in other stars, significant uncertainties remain (Basu & Antia 2008). Traditionally, spectroscopic observations of the solar photosphere have been used to estimate the abundances of heavy elements in the top layer of the Sun following the pioneering works of Russell 1929 and Goldberg et al. 1960. However, the analysis of spectral lines to derive solar abundances is subject to uncertainties stemming from radiative transfer models, overlapping of spectral lines between different elements and elemental opacities within the Sun’s interior, among other complications (Jofré et al. 2019). We refer the reader to Hinkel et al. 2014; Allende Prieto 2016; Jofré et al. 2019 for further details on the nuances in determining stellar abundances.
Early spectral analyses often employed 1D, local thermodynamic equilibrium (LTE) radiative transfer models (Anders & Grevesse 1989 - AG89; Grevesse & Sauval 1998 - GS98), which despite their simplicity, agreed well with constraints inferred from helioseismology, such as the base of the convective zone (R_CZ), the sound speed and density profiles, and the He abundance at the top surface of the Sun (Y_S) (Basu & Antia 2008). Since 2005, the introduction of 3D models has significantly decreased the estimated abundances for several key elements (i.e., C, O), thus the relative mass fraction of heavy elements with respect to hydrogen Z/X to 0.0165 ± 0.0011 (Asplund et al. 2005 - AGS05) (here, X, Y, Z are the mass fractions of H, He, and heavy elements, respectively). Monte Carlo simulations involving 10,000 solar models that utilize the solar abundances determination from AGS05 predict a surface He abundance Y_S = 0.2292 ± 0.0037 and R_CZ = (0.7280 ± 0.0037)×R_⊙ (R_⊙: the radius of the Sun) (Bahcall et al. 2005, 2006). These results, however, are inconsistent with observations from helioseismology, in which (Y_S =0.2485 ±0.0034, R_CZ = (0.7133 ±0.0005)×R_⊙, Basu & Antia 2004, 2008). In contrast, those observations can be reproduced in solar models with the older, higher heavy element abundances data from AG89 or GS98, i.e.,Y_S = 0.2425 ±0.0042and R_CZ = (0.7133 ±0.0035)×R_⊙ in models using GS98 data (Basu & Antia 2008). Subsequent 3D solar models by Asplund et al. 2009 – AGSS09 and Asplund et al. 2021 – AGS21 slightly revised the Z/X value upward to be in the range of0.0187 ±0.0009, but this range is still well below older determinations from AG89 (0.0274 ±0.0016) or GS98 (0.0231 ±0.0018). While the disagreement in the more recent datasets is less severe than in AGS05, they are still in conflict with helioseismic data (Serenelli et al. 2009; Vinyoles et al. 2017).
Numerous attempts have been made to reconcile the low-Zsolar models with helioseismic data (Christensen-Dalsgaard et al. 2009; Serenelli et al. 2011; Bailey et al. 2015); however, they often improve some constraints but sacrifice their consistency with others (Haxton et al. 2013; Vinyoles et al. 2017). Among those ideas, opacity adjustments of certain heavy elements (e.g., iron) have been proposed based on transmission opacity experiments as one of the most promising potential solutions to resolve the problem (Bailey et al. 2015; Nagayama et al. 2019). However, a recent radiation burn-through experiment does not support a large increase in the opacity of iron at conditions close to the base of the solar convective zone and suggests that it agrees with current predictions from plasma opacity theory (Hoarty et al. 2023). Alternatively, it has been suggested that the accretion of metal-poor disk gases (due to planetary formation) onto the Sun’s convective envelope might dilute its protosolar composition (Haxton et al. 2013; Kunitomo & Guillot 2021; Kunitomo et al. 2022). While it is an interesting idea attempting to reconcile spectroscopic, helioseismic and neutrino constraints, challenges remain (Haxton et al. 2013). As an example, the timescale for the presence of gas in the protoplanetary disk is typically estimated to be a few Myrs (Wang et al. 2017), whereas the early Sun’s convective boundary moves outward in response to the growing radiative zone over a longer period of ∼30Myr (Haxton et al. 2013). As a result, if the accretion of metal-poor disk gases occured early when most of the solar envelope was still convective, any effect would have been negligible (Haxton et al. 2013). Recent developments in 3D, non-LTE models, such as those by Magg et al. 2022, have revised upwards the abundances of elements like C, N, and O. Nevertheless, further efforts are still needed to reconcile these results with helioseismology observations (Buldgen et al. 2023).
Another source of valuable information on solar abundances is CI carbonaceous chondrites (Cameron 1973, 1982; Lodders 2003). The abundance pattern of many elements, except the most volatile ones (i.e., C, N, O, noble gases), show a close correlation with photospheric abundances (Lodders 2021). However, as Lodders 2020, 2021 has pointed out and quantified, heavy elements have gravitationally settled from the outer convective zone into the Sun’s interior over the lifetime of the Sun; therefore, the bulk composition of the Sun (and thus the rest of the solar system abundances), or protosolar abundances on the so-called astronomical scale (relative to10^12hydrogen atoms), needs to be corrected from the contemporary photospheric abundances. Since the effect is not resolvable for individual elements with current data, in practice, the typical approach that has been taken is to adopt a constant settling factor (usually∼10-23 %) for all heavy elements (except Li). Note, however, that on the cosmochemical scale (normalized to10^6silicon atoms), the contemporary abundances of heavy elements stay the same as the protosolar abundances (Lodders 2020, 2021).
In planetary and exoplanetary sciences, the abundances of heavy elements relative to the protosolar values are used as a diagnostic set to probe the formation conditions of planets (Owen et al. 1999; Öberg et al. 2011, Mousis et al. 2021). In particular, the solar C/O ratio and the resulting speciation of O among various forms of C (i.e.,CO, refractory organics or organic matter, etc.) influences the composition of pebbles/planetesimals, thereby affecting the accretion of volatiles that can be represented in present planetary atmospheres (Pekmezci et al. 2019). The signatures of protosolar nebula chemistry are also likely to have left marks on more primitive bodies that formed in the outer solar system. These bodies include CI chondrites (Kruijer et al. 2017; Desch et al. 2018; Alexander 2019) and Kuiper belt objects (KBOs), leftovers of the era of planet formation. A more oxidized solar nebula, in which C was predominately present asCOandCO_2, has often been invoked to account for the relatively low fraction of water ice initially accreted into CI chondrite parent bodies (∼28-30 % by weight, Alexander 2019). However, this constraint needs to be revisited, as analyses of returned samples from asteroid Ryugu and early data from asteroid Bennu both indicate the apparent absence of sulfates and suggest that CI sulfates are artifacts from the oxidation of indigenous metal sulfides in air (Nakamura et al. 2023; Yokoyama et al. 2023; King et al. 2024; Lauretta et al. 2024). This finding suggests that previous estimates of water ice content initially accreted into CI chondrites based on the presence of sulfates (e.g., Alexander 2019) may represent an upper limit. In light of this new insight, the implications to the protosolar nebula chemistry and protosolar abundances will be reassessed in section 3.2.
The compositions of KBOs could serve as another record of conditions in the early outer solar system. Their densities may reflect the amount of water ice that was present. There may have been a common grain density for mixtures of solids in the KBO-forming region. In this case, while KBOs come in a range of sizes and masses, porosity would explain the density variations between small and large KBOs (Bierson & Nimmo 2019). Some small, low-density KBOs may be fragments of a differentiated parent body, as it was suggested for collisional families like Eris-Dysnomia (Brown et al. 2007). Alternatively, this trend could also be produced by the pebble accretion and streaming instability model if accreted pebbles were relatively rich in silicates (Cañas et al. 2024). The large KBOs are generally rock-rich with a canonical rock mass fraction of 70 % (Nimmo et al. 2017; Ortiz et al. 2017; Bierson & Nimmo 2019; Kiss et al. 2019); they have often been referred to as a representative composition of the solid materials in the outer protosolar nebula (McKinnon & Mueller 1988; Simonelli et al. 1989; McKinnon et al. 2017, 2021). Furthermore, KBOs that were formed via low-velocity collisions, with impact speeds similar to escape speeds, would make large primary/moon companions similar to Pluto/Charon or Orcus/Vanth, and should retain their primordial compositions (Canup 2005; Barr & Schwamb 2016; Canup et al. 2021; McKinnon et al. 2021). In contrast, collisional families like Eris-Dysnomia could be subjected to more energetic collisions that strip away their outer ice shells, leading to an increase in their rock mass fraction and bulk density (Brown et al. 2007; Barr & Schwamb 2016; McKinnon et al. 2021). Neptune’s moon Triton is believed to be a captured Kuiper belt object (Goldreich et al. 1989; McKinnon & Leith 1995; Agnor & Hamilton 2006) with an uncompressed density∼1900 kgm^-3, more rock-rich even than the Pluto-Charon system (McKinnon et al. 1995, Wong et al. 2008). The capture process may have altered its original volatile composition (Goldreich et al. 1989); and subsequent tidal heating could have modified a more ice-rich primordial Triton, potentially from an original Pluto-like density (∼1800 kgm^-3) to the current density (Barr & Schwamb 2016).
Thus, the uncompressed density of the Pluto-Charon system, which was precisely measured by the New Horizons mission (1800 ±20kgm^-3, 3σerror bar, Brozović et al. 2015, McKinnon et al. 2017, Nimmo et al. 2017) has been argued as the most representative value we have for the bulk composition of large KBOs (Barr & Schwamb 2016, Bierson & Nimmo 2019, McKinnon et al. 2021). The rock-rich nature of KBOs has been commonly cited as evidence signifying their formation in a CO-rich protosolar nebula (McKinnon & Mueller 1988; Johnson & Lunine 2005; Wong et al. 2008), assuming a solar C/O ratio of∼0.5, close to the current widely adopted value from AGSS09 (∼0.54). While some previous works considered refractory organics as one of the C carriers in the protosolar nebula (Simonelli et al. 1989; Pollack et al. 1994) that could be inherited by icy moons (Miller et al. 2019; Néri et al. 2020; Reynard & Sotin 2023), quantitative calculations of their effects have not yet been performed in detail for KBOs. As Jupiter-family comets are thought to originate in the Kuiper belt (Levison & Duncan 1997; Volk & Malhotra 2008; Nesvorný et al. 2017), their solid organic abundances may reflect the inventory of refractory organics in the KBO-forming region of the protosolar nebula. The Rosetta mission to comet 67P/Churyumov-Gerasimenko (67P/C-G) – a Jupiter-family comet provided a unique opportunity to sample such primordial materials at low flyby/orbital speeds (<10 ms^-1), likely preserving their original refractory compositions (Fray et al. 2016; Bardyn et al. 2017). In light of the new data from comet 67P/C-G, in the next section, we perform mass balance calculations to quantify the effects of solid organics on the uncompressed densities of KBOs under the current widely adopted solar abundances reported in Lodders 2021 (Table 1).
Element    Present N(E)    ±σ    Protosolar N(E)    ±σ
H 3.09× 10^10 2.52× 10^10
He 2.59× 10^9 1.2× 10^8 2.51× 10^9 1.2× 10^8
C 9.12× 10^6 1.8× 10^5 9.12× 10^6 1.8× 10^5
N 2.19× 10^6 9700 2.19× 10^6 9700
O 1.66× 10^7 1.3× 10^5 1.66× 10^7 1.3× 10^5
Mg 1.03× 10^6 4.4× 10^4 1.03× 10^6 4.4× 10^4
Si 1.00× 10^6 3.4× 10^4 1.00× 10^6 3.4× 10^4
Al 81820 6110 81820 6110
Ca 57239 4500 57234 4500
Ni 48670 2940 48670 2940
Fe 8.72× 10^5 3.8× 10^4 8.72× 10^5 3.8× 10^4
S 4.37× 10^5 2.6× 10^4 4.37× 10^5 2.6× 10^4
Table 1. The current values adopted by the community for the present (left) and protosolar (solar system, right) abundances of major elements that determine the nature of solids formed in the outer solar system (Lodders 2021). These values are based on the cosmochemical abundance scale where the number of silicon atoms is set to one million (n_Si=10^6). The present paper reexamines values for carbon, nitrogen, and oxygen (see Table 2).
§ KBOS UNCOMPRESSED DENSITIES AND THEIR FORMATION: INSIGHTS FROM COMETARY DATA
To assess the effects of the C and O protosolar abundances on the compositions of KBOs, their uncompressed densities are calculated as a function of carbon partitioning (Equation 1, below). Uncompressed densities reflect the densities of the mixture of materials that make up a planetary body, after adjusting for the effect of gravity on the observed density. We model the density here, as it is the most reliable observable that is available for KBOs. We assume that the major C carriers in the outer protosolar nebula were CO, CO_2, and refractory organics (C_100H_104O_30N_3.5); the latter were measured in comet 67P/C-G's dust (Bardyn et al. 2017; Isnard et al. 2019).
On the cosmochemical scale (relative to silicon, Table 1), the molar abundance of refractory organics in the KBO-forming region (n_C-org) is constrained by the ratio of C/Si measured in comet 67P/C-G's dust (5.5_-1.2^+1.4, Bardyn et al. 2017). The C/Si ratio in comet Halley (C/Si ∼ 5) is similar, although the flybys of the Vega-1, Vega-2 and Giotto spacecraft were performed at much higher speeds (Kissel et al. 1986; Kissel & Krueger 1987; Jessberger et al. 1988). Oxygen is partitioned among anhydrous silicates, metal oxides, refractory organics, CO, CO_2 and H_2O (Equations 2 and 3, below). To represent rock, we use a generic composition consisting of the components SiO_2 + MgO + Al_2O_3 + CaO + FeS + Ni, with any remaining iron assumed to be present either as Fe (in metallic alloy), FeO (in silicates) or FeO_1.33 (equivalent to Fe_3O_4, magnetite). In our model, we consider anhydrous silicates (ρ_r ∼ 3360 kg m^-3, Wong et al. 2008); metal sulfide + oxide phases (ρ_met ∼ 4800 kg m^-3, Wong et al. 2008) or metal sulfide + native metal phases (ρ_met ∼ 6300 kg m^-3, based on the cosmochemical abundances of Fe, S, Ni in Table 1). Except in the case of FeO_1.33, we consider these mixtures as primordial materials that formed KBOs to calculate the uncompressed densities. While metallic iron oxidation by water vapor in the protosolar nebula would not be efficient due to sluggish kinetics (Fegley 2000), we adopt FeO_1.33 as an end-member case because Pluto's and Charon's deep interiors may have experienced aqueous alteration during differentiation, and iron metal could have been oxidized to magnetite (McKinnon et al. 2017). Silicate minerals in the interiors of Pluto/Charon today may be hydrated (McKinnon et al. 2017); however, phyllosilicates can be considered as a combination of anhydrous minerals and structural water in hydrated silicate matrices, resulting in a similar overall density for the mixture (Waite et al. 2017). It is therefore appropriate to use the anhydrous equivalent to estimate the water budget and the overall density of the Pluto-Charon system. Following the method described in Johnson et al. 2012, we assume that water ice (ρ_w ∼ 940 kg m^-3, Sloan & Koh 2007) and CO_2 ice (ρ_CO_2 ∼ 1680 kg m^-3, Fard & Smith 2024) are the primary condensed volatile ices in the outer protosolar nebula. Some CO could have been incorporated into KBOs by being trapped in a water ice matrix such as clathrate hydrates (in a cool nebula) or directly condensed as CO ice (in a cold nebula). For both cases, we set the amount of CO incorporated into KBOs to ∼3 % by mole relative to water ice, as typically observed in cometary comae (Harrington Pinto et al. 2022), and the rest of the CO is assumed to be in gaseous form in the protoplanetary disk (Prinn & Fegley 1989; Krijt et al. 2020). We note that this probably represents an endmember scenario, as CO ice does not appear to be abundant on the surfaces of Pluto, Sedna, Gonggong, Quaoar, Eris, and Makemake (Glein & Waite 2018; Emery et al. 2024; Grundy et al. 2024). Since clathrates have similar densities to water ice (Sloan & Koh 2007), we assume that only the latter scenario (where CO is present as a pure ice) would affect the overall uncompressed densities of KBOs. The contribution of CO ice with ρ_CO ∼ 880 kg m^-3 (Luna et al. 2022) for the cold nebula scenario is considered in addition to the above components.
Equations [1], [2], and [3] shown below can be solved together, which gives the atomic abundances of the different elements partitioned among these material components. Finally, the resulting uncompressed density of KBOs can be calculated from the last equation [4]. In the set of equations below, we introduce two observational parameters: r_org, the fraction of protosolar C in refractory organics (r_org = n_C-org / n_C); and r_CO_2, the relative molar abundance of CO_2 to water in comets (∼12 %, including Jupiter-family comets and Oort cloud comets; Harrington Pinto et al. 2022). Total CO in the protosolar nebula, including gaseous CO, CO trapped in water ice, and condensed CO ice, is designated as n_CO.
The mass balance equations are:
[1]: n_C = n_CO + n_C-org + n_CO_2 = n_CO + n_C-org + r_CO_2 n_H_2O

[2a]: n_O = 2 n_Si[SiO_2] + 1.5 n_Al[Al_2O_3] + n_Ca[CaO] + n_Mg[MgO] + n_Fe[FeO] + n_O[org] + n_O[CO] + n_O[CO_2] + n_O[H_2O]

[2b]: n_O = 2 n_Si[SiO_2] + 1.5 n_Al[Al_2O_3] + n_Ca[CaO] + n_Mg[MgO] + 1.33 n_Fe[Fe_3O_4] + n_O[org] + n_O[CO] + n_O[CO_2] + n_O[H_2O]

[3]: n_O[CO_2] = 2 n_CO_2 = 2 r_CO_2 n_H_2O; n_O[CO] = n_CO; n_O[org] = f_O[org] r_org n_C, where f_O[org] is the O/C ratio in refractory organics (∼0.3 for comet 67P/C-G).
The mixture's density can be calculated via:
[4]: ρ = (f_r/ρ_r + f_met/ρ_met + f_org/ρ_org + f_ices/ρ_ices)^-1, in which f_i is the mass fraction of each material component, f_i = μ_i n_i / Σ μ_i n_i (μ_i is the molecular mass of each component). Here, f_ices/ρ_ices is a general term that refers to ices, including water ice, CO_2 ice, and CO ice. The density of the ice mixture is calculated using an equation of the same form as Equation 4.
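As an illustration of how Equations [1]-[4] are combined, the following minimal sketch solves the oxygen balance for the water abundance and evaluates the uncompressed density for the simple case in which iron is retained in metal and FeS (Equation 2a with n_Fe[FeO] = 0) and CO stays in the gas phase apart from the small trapped fraction, which is neglected here for the density. The variable names, the standard molar masses, and the adopted organic density ρ_org (not quoted in this excerpt) are illustrative assumptions.

# Minimal sketch of the KBO mass-balance/density model (Equations 1-4).
# Abundances are per 10^6 Si atoms (cosmochemical scale, cf. Tables 1-2).
AB = {"Si": 1.00e6, "Mg": 1.03e6, "Al": 81820, "Ca": 57239, "Fe": 8.72e5,
      "S": 4.37e5, "Ni": 48670, "C": 1.51e7, "O": 2.07e7}   # Table 2 values

def kbo_density(ab, r_org=0.4, r_co2=0.12, f_o_org=0.3,
                rho_rock=3360., rho_met=6300., rho_org=1700.,   # rho_org assumed
                rho_w=940., rho_co2=1680.):
    # Oxygen locked in anhydrous rock (SiO2, Al2O3, CaO, MgO; Fe kept as metal/FeS)
    o_rock = 2*ab["Si"] + 1.5*ab["Al"] + ab["Ca"] + ab["Mg"]
    n_c_org = r_org * ab["C"]
    # Oxygen balance (Eq. 2a), using n_CO = n_C - n_C_org - n_CO2 and n_CO2 = r_co2 * n_H2O:
    n_h2o = (ab["O"] - o_rock - ab["C"] + (1 - f_o_org)*n_c_org) / (1 + r_co2)
    n_co2 = r_co2 * n_h2o
    # Component masses (g per 10^6 Si atoms), using simple molar masses
    m_rock = 60.08*ab["Si"] + 40.30*ab["Mg"] + 50.98*ab["Al"] + 56.08*ab["Ca"]
    m_met = 87.92*ab["S"] + 55.85*(ab["Fe"] - ab["S"]) + 58.69*ab["Ni"]
    m_org = 18.35 * n_c_org            # mean molar mass per C atom in C100H104O30N3.5
    m_ice = 18.015*n_h2o + 44.01*n_co2
    rho_ice = m_ice / (18.015*n_h2o/rho_w + 44.01*n_co2/rho_co2)   # Eq. 4 applied to the ices
    masses = [m_rock, m_met, m_org, m_ice]
    rhos = [rho_rock, rho_met, rho_org, rho_ice]
    return sum(masses) / sum(m/r for m, r in zip(masses, rhos))    # Equation 4

print(kbo_density(AB, r_org=0.4))   # uncompressed density in kg m^-3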
Our results show a similar trend compared to the previous study by Wong et al. 2008 that the densities of KBOs should decrease with an increase in refractory organics. This trend arises because the oxygen-to-carbon ratio in organics (O/C∼0.3, Bardyn et al. 2017) is less than that inCOorCO_2; therefore, a higher fraction of protosolar C in refractory organics reduces the availability of C forCOandCO_2formation. This would free up more O to form water ice and lower the uncompressed density. As noted above, aqueous alteration of metallic iron to magnetite (i.e., during the differentiation of Pluto/Charon) would take up some amount of oxygen in rock, therefore resulting in a slight increase in the density of the Pluto-Charon system. Nonetheless, even when taking into account the 3σuncertainty in the uncompressed density of the Pluto-Charon system (1800±20 kgm^-3) (Brozović et al. 2015, McKinnon et al. 2017, Nimmo et al. 2017), the effect is not significant to increase the predicted density up to the lower limit of the measured range. In the cold nebula scenario whereCOice is incorporated, the overall density is slightly lower than in scenarios whereCOis trapped in water ice. This difference is due to the lower density ofCOice (∼880 kgm^-3) compared with that of water ice (∼940 kgm^-3). Surface observations of Pluto, however, suggest much lessCOice (CO/H_2O∼4× 10^-6, Glein & Waite 2018) than the amount considered in this calculation (∼3% relative to water, analogous to comets), so this is most likely an endmember scenario. In summary, we suggest that based on current values of the protosolar abundances, the predicted densities would be insufficient to explain the uncompressed density of the Pluto-Charon system (∼1800 kgm^-3) (McKinnon et al. 2017, 2021).
§ REVISED SOLAR ABUNDANCES: NEW CONSTRAINTS FROM SOLAR NEUTRINOS AND SOLAR WIND WITH IMPLICATIONS TO KBOS AND CI CHONDRITES
§.§ Data from the Sun
In Section 1, we discussed the ongoing challenge in reconciling photospheric heavy element abundances with helioseismology constraints. Consequently, additional independent measurements are critical to test these models. Solar neutrinos from the carbon-nitrogen-oxygen cycle were predicted to be generated by the seminal work of Hans Bethe (Bethe 1939) and were recently detected by the Borexino neutrino observatory with a significance of ∼7σ (Agostini et al. 2020a). This opens a new door to constrain the solar abundances of C and N through neutrino flux measurements. These low-energy neutrino fluxes (ϕ(i)) from the CNO cycle in the Sun's interior are sensitive to the Sun's temperature profile, which is controlled by the proton-proton (p-p) chains (Haxton & Serenelli 2008). Consequently, if ϕ(i_1) ∼ T^x1 and ϕ(i_2) ∼ T^x2, one can form the weighted ratio ϕ(i_1)/ϕ(i_2)^(x1/x2) that is nearly independent of temperature (Serenelli et al. 2013). Thus, this analysis process breaks the degeneracy between opacity/composition and its variations that affect the interior's temperature profile (Agostini et al. 2020b). The flux of the ^8B neutrino is highly sensitive to the solar core's temperature (ϕ(^8B) ∼ T^24, Villante & Serenelli 2021); thus, it has been used to probe the Sun's interior temperature (Appel et al. 2022). Because the p-p chains control the temperature profile, the weighted ratio between the ^15O and ^8B neutrino fluxes retains a linear dependence on the total surface abundance of C and N (after accounting for the efficiency of elemental diffusion) (Agostini et al. 2020b, Appel et al. 2022). As a result, this quantity can be directly compared with spectroscopic observations of the photosphere (Appel et al. 2022).
New findings suggest an increase in the total C+N abundance in the photosphere, with (C+N)/H∼ (5.81_-0.94^+1.22)× 10^-4, which display a∼ 2σtension with the low-metallicity solar model (Appel et al. 2022, Basilico et al. 2023). Since C is much more abundant than N in the Sun, most of that increase is likely attributed to the C abundance, with any reasonable changes in N imparting a secondary effect. The large error bars reflect uncertainties in the CNO neutrino fluxes – a challenging measurement given the physical nature of the neutrino. Despite this shortcoming, additional constraints from the water abundances in primitive solar system objects can be considered with appropriate caution to obtain further insights into the likely range of the C abundance. As demonstrated in the previous section (Section 2), the C abundance must be sufficiently higher than the currently adopted value to raise the density of large KBOs into the observational range; however, it must not be too high to generate KBOs with too little water. Here, we adopt an overallσvalue that matches the lower error bar of the solar neutrino measurements (to prevent a lack of water in KBOs); this gives (C+N)/H∼ (5.81_-0.94^+0.94)× 10^-4for our subsequent calculations. As we demonstrate in Section 3.2, this constraint can be refined further by testing a range of plausible C/O ratios that would explain the water abundances in primitive outer solar system objects.
We combine this new result with a recent analysis of element abundances from Genesis solar wind data to estimate additional constraints on the protosolar abundances (Heber et al. 2021). To derive the abundances of N and O, it is necessary to consider fractionation from the photosphere to solar wind. Elemental fractionation is known to correlate with the first-ionization potential (FIP) of the elements; as a result, elements with low-FIP are overabundant in solar wind (Pilleri et al. 2015). As a first-order estimation, here we consider the elemental ratios of two elements with similar FIP values to be similar to their corresponding ratio in the photosphere. For N (FIP∼14.53 eV, Heber et al. 2021), we consider Kr to be the closest analog with a FIP value∼13.99 eV (Heber et al. 2021). In addition to having the most similar FIP to N, the solar Kr abundance can be reasonably well constrained based on the smoothness of the odd-even nuclide patterns and established abundances for neighboring nuclides from CI chondrites (Wiens et al. 1991). As a result, we can use the N/Kr ratio from the Genesis data with the Kr/H ratio from Lodders 2021 to derive the contemporary photospheric abundance of (N/H)∼(0.93 ± 0.18) × 10^-4. While this is slightly higher than the value in Lodders 2021∼ (0.710 ± 0.003) × 10^-4, it is consistent with a recent determination of N abundance from the 3D model by Magg et al. 2022 who estimated (N/H)∼ (0.95 ± 0.18) × 10^-4. Regardless, the N abundance has a minor impact on the overall metallicity. Combining this N/H with the (C+N)/H ratio from the solar neutrino data gives a present-day photospheric value for (C/H)∼ (4.88 ± 0.96) × 10^-4.
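As a simple check on the arithmetic, the following sketch subtracts the nitrogen abundance from the neutrino-based (C+N)/H and propagates the (assumed independent) uncertainties in quadrature; it reproduces the quoted (C/H) ∼ (4.88 ± 0.96) × 10^-4. The variable names are illustrative.

import math

cn, s_cn = 5.81e-4, 0.94e-4     # (C+N)/H from CNO neutrinos (symmetrized error)
n,  s_n  = 0.93e-4, 0.18e-4     # N/H from Genesis N/Kr combined with Kr/H

c   = cn - n                                # present-day photospheric C/H
s_c = math.sqrt(s_cn**2 + s_n**2)           # quadrature error propagation
print(f"C/H = ({c*1e4:.2f} +/- {s_c*1e4:.2f}) x 10^-4")   # -> (4.88 +/- 0.96) x 10^-4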
The next step is to constrain the oxygen abundance. Based on solar wind data from the Genesis mission, the fractionation model by Laming et al. 2017 (JML+17) suggests an increase in O/H compared with commonly adopted values. However, the Genesis data used in JML+2017 have been revised toward a decrease in both H and O fluences (Heber et al. 2021). Since H is much more abundant than O, this leads to an increase in O/H (the data point at HV+21, Fig. 2). Since O and H have similar FIPs (∼13.6 eV, Heber et al. 2021), the fractionation between them from the photosphere to solar wind can be assumed to be minimal. To be more conservative given our assumption of a lack of O/H fractionation, here, we adopt a mid-range value for O that would satisfy both the previous analysis of Lamming et al. 2017 and the revised Genesis data, which gives a present-day O/H∼ (6.68 ± 0.49) × 10^-4. This range also overlaps with the more recent O values determined from spectroscopy by Magg et al. 2022 (Fig. 2). We recognize that our approach in constraining the O abundance is simplistic, which is why it is tested against independent data from KBOs and CI chondrites (see Section 3.2). Future solar neutrino experiments such as the Sudbury Neutrino Observatory + (The SNO+ collaboration 2021) and the Jiangmen Underground Neutrino Observatory – JUNO (Conroy 2024), where the latter is set to commence operations at the end of 2024, will enable further testing. To derive protosolar abundances from present-day photospheric values, we follow the approach described in Lodders 2020, 2021 to account for the settling effect of heavy elements. We converted photospheric abundances to protosolar values using a constant settling factor of∼23% for all heavy elements, as adopted in Lodders 2020, 2021.
While estimates of the solar abundances have varied over time, spectroscopic measurements have shown an upward revision of the protosolar C/O ratio from 0.43±0.05 in AG89 to 0.59±0.08 in AAG21 and 0.62±0.09 in EM+22 (Fig. 3). As noted in AAG21, the Sun has a higher C/O ratio than most solar twins regardless of age, which could be a result of the Sun having migrated rapidly outward since birth from a denser, more heavy element-rich region in the Milky Way (Nieva & Przybilla 2012). In our subsequent calculations, we will adopt the average values of the C and O abundances in Table 2 (thus a C/O∼0.73) as our nominal case and explore its implications for the compositions of KBOs and CI chondrites. We will also test a range of C/O ratios that could satisfy the observational densities from large KBOs (Appendix A). This ratio should be sufficiently above the currently adopted value to raise the density of KBOs to fall within the observed range; however, it must not be too high to produce KBOs with too little water. As we demonstrate in Figs. A1 and A2, the solar C/O is most likely within the range of 0.73±0.10. The upper limit of our proposed C/O ratio is also consistent with the previous work by Pekmezci et al. 2019, who found that a negligible amount of water would exist in planetesimals under a C/O of 0.81, assuming that the entire protosolar C inventory is inCO. Under that assumption and given a C/O of 0.83, our calculation also arrives at a similar result. If some refractory organics contribute to the total protosolar C budget, this will allow the presence of water, though the amount is still insufficient to explain Pluto-Charon's density (Fig. A2).
While C, N and O abundances went down starting with AGS05, the recent model by EM+22 revised those values upward. In the current work, the abundance of O is within the range of values in the classical high-metallicity model by GS98, and the currentZ/Xis coincidentally consistent with values from AG89 and GS98 (Fig. 4). Our relatively highZ/Xmay lead to better agreement with constraints from helioseismology than the current low-metallicity models (AGS05, AGSS09, AGS21). We note that the currentZ/Xdepends on the abundance of neon, and a recent analysis using updated atomic data has revised this value upward since GS98 (Young 2018). As neon does not appear in photospheric spectra and is largely lost in CI chondrites, it has been typically determined from the radiation originating in the high temperature regions of the Sun (i.e., corona, transition region, flares) (Young 2018) or solar wind (Huss et al. 2020). We followed the approach in Lodders 2021 to estimate the Ne abundance based on the Ne/O ratio in the transition region of the quiet Sun (0.24±0.05) from Young 2018 and the oxygen abundance recommended here. The Ne/O ratio has also been revised upward from the adopted value∼0.178±0.035 in GS98 to 0.24±0.05 (Young 2018), which may partly explain the higher value of ourZ/Xcompared with GS98.
We also note that not only the total heavy element abundances but also the ratios of these elements influence constraints from helioseismology, such as the depth of the convective zone and the speed of sound. If the values we recommend here could in future work lead to a better agreement with helioseismic constraints, then the mass fractions of hydrogen, helium, and heavy elements (X, Y, and Z, respectively) can be determined using an X or Y determination from helioseismology, given our Z/X value and X + Y + Z = 1. Previous solar models found that the estimated X for the Sun (0.7389 ± 0.0034) appears to be fairly independent of Z (Basu & Antia 2004, 2008), so we use this value of X to calculate Y and Z. Given Z/X = 0.0251 ± 0.0013 (Table 2), the corresponding values for Y and Z are: Y = 0.2426 ± 0.0035, Z = 0.0185 ± 0.0010. The helium mass fraction Y from our model is similar to the helium abundance from solar models using GS98 heavy element abundances (0.2425 ± 0.0042) (Basu & Antia 2008). Indeed, both values overlap with the surface helium abundance determination from helioseismology (0.2485 ± 0.0034, Basu & Antia 2004, 2008). In order to estimate the corresponding protosolar values, we adopt the protosolar helium mass fraction determination from solar models by Serenelli & Basu 2010, Y_proto = 0.2780 ± 0.0060. Given (Z/X)_proto = 0.0308 ± 0.0016 (Table 2), the corresponding values for X_proto and Z_proto are: X_proto = 0.7004 ± 0.0059, Z_proto = 0.0216 ± 0.0011.
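A short sketch of this bookkeeping, assuming only X + Y + Z = 1 and the quoted Z/X ratios, recovers the central values listed above (uncertainties are omitted for brevity).

# Present-day: X from helioseismology-constrained solar models, Z/X from this work
X = 0.7389
ZX = 0.0251
Z = ZX * X                  # 0.0185
Y = 1.0 - X - Z             # 0.2426
print(f"present: Y = {Y:.4f}, Z = {Z:.4f}")

# Protosolar: Y_proto from Serenelli & Basu 2010, (Z/X)_proto from this work
Y_p = 0.2780
ZX_p = 0.0308
X_p = (1.0 - Y_p) / (1.0 + ZX_p)   # 0.7004
Z_p = ZX_p * X_p                   # 0.0216
print(f"protosolar: X = {X_p:.4f}, Z = {Z_p:.4f}")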
Element    Present N(E)    ±σ    Protosolar N(E)    ±σ
H 3.09× 10^10 2.52× 10^10
He 2.54× 10^9 3.8× 10^7 2.50× 10^9 5.8× 10^7
C 1.51× 10^7 1.8× 10^6 1.51× 10^7 1.8× 10^6
N 2.87× 10^6 6.0× 10^5 2.87× 10^6 6.0× 10^5
O 2.07× 10^7 1.5× 10^6 2.07× 10^7 1.5× 10^6
Mg 1.03× 10^6 4.4× 10^4 1.03× 10^6 4.4× 10^4
Si 1.00× 10^6 3.4× 10^4 1.00× 10^6 3.4× 10^4
Al 81820 6110 81820 6110
Ca 57239 4500 57234 4500
Ni 48670 2940 48670 2940
Fe 8.72× 10^5 3.8× 10^4 8.72× 10^5 3.8× 10^4
S 4.37× 10^5 2.6× 10^4 4.37× 10^5 2.6× 10^4
Table 2. Present (left) and protosolar abundances (right) estimated in this work from solar neutrinos and solar wind data for C, N, and O. The error bar for C is determined by the uncertainties reported from the solar neutrino and solar wind measurements, with additional constraints from a range of C/O ratios that can satisfy the observational densities of large KBOs (Appendix A). As heavy elements have gravitationally settled from the outer convective zone into the Sun’s interior over the lifetime of the Sun, the protosolar abundances relative to 10^12 hydrogen atoms (N(E)/N_H-proto) need to be corrected from the present abundances relative to 10^12 hydrogen atoms (N(E)/N_H-present) (Palme et al. 2014; Lodders 2020, 2021). The atomic ratio between He and H is given by: N(He)/N(H) = Y/4X where the “4” is shorthand for the ratio of the atomic weights of He and H. Here, the present-day mass fractions of hydrogen, helium, and heavy elements (X, Y, and Z, respectively) are: X = 0.7389 ± 0.0034, Y = 0.2426 ± 0.0035, Z = 0.0185 ± 0.0010 and the corresponding protosolar values are: X_proto = 0.7004 ± 0.0059, Y_proto = 0.2780 ± 0.0060, Z_proto = 0.0216 ± 0.0011.
§.§ Independent tests based on KBOs and CI chondrites
We repeat the calculation from Section 2 to see how much the predicted density of Pluto-Charon changes with our new solar C/O ∼ 0.73. This is shown in Fig. 5. A notable difference compared with Fig. 1 is a reduction in the fraction of protosolar C as refractory organics in the KBO-forming region (20-30 AU, McKinnon et al. 2021), which is attributed to the overall increase in the protosolar C abundance. The overall increase of the C/O ratio also results in less O available to form water, thereby favoring the accretion of refractory components over water ice in KBOs. Both effects lead to an increase in the overall densities compared to results in Fig. 1.
Given the updated protosolar abundances and a range for the abundance of refractory organics constrained by comet 67P/C-G data, the predicted densities are found to be consistent with the 3σ uncompressed density of the Pluto-Charon system. Within the given range of refractory organics inventory in protosolar C, our model can also predict KBOs that are more rock-rich than the Pluto-Charon system, up to 1900 kg m^-3- the uncompressed density of Triton today (Wong et al. 2008). Given the origin and evolution of Triton, which may include volatiles that were lost post-capture as discussed in Section 1, Triton’s case can be considered as an upper limit for the uncompressed density of large KBOs. These findings do not depend on the adopted value of the settling factor. As discussed in Section 3.1, heavy element settling in the Sun only affects the protosolar abundances relative to hydrogen; the relative ratios among the heavy elements are approximately constant. Therefore, on the cosmochemical scale (relative to silicon), the contemporary abundances of heavy elements stay the same as the protosolar abundances (Lodders 2020, 2021).
Comets are also ice-rich objects, so in principle, they could provide a related test of the solar oxygen abundance if their total O/Si ratios can be measured. However, the bulk density cannot serve as a reliable indicator since comets contain substantial porosity (Pätzold et al. 2019). Nevertheless, the refractory-to-ice mass ratio (R/I) depends on the elemental abundance of oxygen, and it can be constrained using observational data. Under this updated set of solar abundances data, we predict the R/I ratio within comet nuclei to be approximately ∼1.9-2.5 (Fig. B1). Our mass balance calculation can be seen as a refinement of the model of Greenberg 1998. While comet 67P/C-G is an evolved comet and it is challenging to derive the R/I ratio for the nucleus, our predicted value falls within the range of plausible values (∼1-6) derived by various investigations on Rosetta (Choukroun et al. 2020). The R/I ratio implied by our solar C/O ratio could be further tested by a future comet sample return mission such as CAESAR (Hayes et al. 2020).
The uncompressed densities of the major icy satellites of Jupiter and Saturn (Ganymede, Callisto, Titan), with values near ∼1500 kg m^-3 (Wong et al. 2008), correspond to more ice-rich compositions than in the large KBOs. As discussed in earlier work (e.g., Simonelli et al. 1989, Wong et al. 2008), the range in density among icy satellites requires different carbon chemistry and/or significant fractionation of ice and rock in subnebulae around giant planets, where these regular satellites are thought to have formed (Lunine & Stevenson 1982; Canup & Ward 2002). In principle, a more reducing CH_4-rich (or very organic-rich) subnebula could produce more water-rich materials, as has been suggested by equilibrium chemistry models under conditions in the subnebulae of giant planets (e.g., Lunine & Stevenson 1982; Prinn & Fegley 1989). It has also been hypothesized that the formation of giant planets led to a more oxygen- or water-rich subnebula than the protoplanetary disk, depending on the relative rates at which different types of planetesimals dissolved in the early envelopes of giant planets (Podolak et al. 1988; Simonelli et al. 1989; Mosqueira & Estrada 2003). Alternatively, the outward diffusion and recondensation of water vapor from dehydrated chondritic materials in the Jovian subnebula would allow condensation of significant amounts of ice in the formation region of Ganymede and Callisto (Mousis et al. 2023). For the Saturnian icy satellites, there remains the possibility that they were formed as collisional products of primordial differentiated satellites earlier in Saturn's history (Canup & Ward 2006; Ćuk et al. 2016). As a result, significant redistribution of ice and rock might have occurred, and satellite compositions might reflect the resulting fragments from past collisions.
We further tested the new solar abundances (Table 2) with compositional data from CI chondrites. Since they are thought to have formed in the outer solar system at the greatest heliocentric distance among chondrites (Kruijer et al. 2017; Desch et al. 2018), the inventory of primordial water in their parent bodies can be expected to reflect the protosolar oxygen abundance. In order to quantify the primordial amount of water ice accreted in the CI chondrite parent body, one must account for various sources as many elements like Fe, S, and P can be oxidized by water during aqueous alteration. Previous estimates considered phyllosilicates, carbonates, iron oxides, phosphates, and sulfates as secondary mineral products of aqueous alteration in CI chondrites (Alexander 2019). However, the indigenous nature of sulfates in chondrites has been questioned and a terrestrial origin from oxidation of sulfides has also been proposed (i.e., Gounelle & Zolensky 2001, 2014; Airieau et al. 2005). Recent missions such as Hayabusa-2 and OSIRIS-REx returned samples from asteroids Ryugu and Bennu, respectively, and analyses of these samples provide new insights into the composition of primitive solar system bodies. Analyses of Ryugu samples suggest a similar mineralogy to CI chondrites with abundant phyllosilicates, carbonates, iron sulfide, and magnetite; this indicates low temperature, reducing, alkaline aqueous alteration, rather than the oxidizing conditions required for sulfate formation (Yokoyama et al. 2023). Abundant carbonate and the bulk abundances and isotopic ratios of titanium and chromium suggest that Ryugu formed in the outer solar system, beyond the water andCO_2snowlines (Nakamura et al. 2023; Yokoyama et al. 2023). A notable distinction from CI chondrites is Ryugu’s lack of sulfates and ferrihydrite (Yokoyama et al. 2023). Rather than sulfates, sulfur appears to be present in the form of sulfide in pyrrhotite (Yokoyama et al. 2023; Nakamura et al. 2023). This apparent absence, along with the offset inΔ^17_Obetween Ryugu and CI chondrites suggest that large abundances of sulfates in carbonaceous chondrites are due to terrestrial contamination (Yokoyama et al. 2023). Similar to Ryugu, early analyses of Bennu samples also indicate the presence of phyllosilicates, Fe-sulfides, magnetite, and carbonates with no evidence for sulfate (Lauretta et al. 2024, King et al. 2024). Hence, previous estimates of accreted water ice content in CI chondrites (e.g., Alexander 2019) based on the presence of sulfates may now be seen as too large. This finding highlights the importance of recent asteroid sample return missions in refining our knowledge of the initial compositions of primitive solar system bodies.
In light of the new data from Ryugu and Bennu, we revisited the previous calculations of Alexander 2019 to more accurately constrain the total amount of excess O initially accreted as water ice (O_ex) in the CI parent body. Excess O is defined as the amount of oxygen above what is needed to charge-balance metal cations in accreted anhydrous minerals. Here, we consider water (including interlayer H_2O, and structural H_2O and OH in phyllosilicates) (Yokoyama et al. 2023) and carbonates, iron oxide, and phosphate as products of aqueous alteration (Alexander 2019) to calculate O_ex. Although sulfur is also found in refractory organics in CI chondrites (Alexander et al. 2017) and Ryugu samples (Yamaguchi et al. 2023, Yoshimura et al. 2023, Takano et al. 2024), its abundance in organics is significantly lower than that in iron sulfides (Alexander et al. 2017). Given that sulfur in refractory organics constitutes 1-3 atoms per 100 carbon atoms (Alexander et al. 2017), and considering the abundance of refractory organics in CI chondrites and Ryugu samples (∼2-3 wt%, Yokoyama et al. 2023), sulfur in organics could represent only ∼0.7-4 % of the total sulfur. Therefore, for this calculation, we adopted the approach proposed by Alexander 2019, assuming that the majority of sulfur was initially accreted into chondrites as FeS, with the remaining iron predominantly in metallic form. We calculate the mass of water consumed (m_w) when Fe was fully oxidized to magnetite through the reaction 3Fe + 4H_2O → Fe_3O_4 + 4H_2. Here, m_w = 1.33 μ_w (m_Fe-bulk/μ_Fe − m_S/μ_S) ∼ 39 g kg^-1 of CI chondrites, given m_Fe-bulk = 185 g kg^-1 of CI and m_S = 53.5 g kg^-1 of CI (Table 1 in Alexander 2019). The equivalent amount of O in water consumed by iron oxidation is given in Table 3 by scaling the molar mass of water (18 g mol^-1) to that of oxygen (16 g mol^-1). This amount is much lower than Alexander's 2019 estimate of 71.2 g kg^-1 of CI, which was computed by assuming that a large fraction of sulfur (91 %) was in the form of sulfate, leading to most iron being oxidized to iron oxides.
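A quick numerical check of this mass balance (a sketch, using standard molar masses of 55.85 g mol^-1 for Fe and 32.07 g mol^-1 for S) reproduces the ∼39 g kg^-1 of reacted water and the ∼35 g kg^-1 of O in iron oxide listed in Table 3.

mu_w, mu_Fe, mu_S = 18.015, 55.85, 32.07     # molar masses, g/mol
m_Fe_bulk, m_S = 185.0, 53.5                 # g per kg of CI (Alexander 2019)

n_Fe_free = m_Fe_bulk/mu_Fe - m_S/mu_S       # mol of Fe not locked up in FeS
m_w = (4.0/3.0) * mu_w * n_Fe_free           # water consumed by 3Fe + 4H2O -> Fe3O4 + 4H2
m_O = m_w * 16.00/18.015                     # equivalent O locked into magnetite
print(f"reacted water ~ {m_w:.0f} g/kg CI, O in Fe oxide ~ {m_O:.0f} g/kg CI")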
Yet, the amount of reacted water should be treated as a lower limit for the amount of accreted water because it might be possible that the amount of water that was present exceeded the chemical capacity of the rock. Some unreacted water could have been trapped within the interior cracks and voids between mineral grains and might have been lost during the subsequent evolution of the parent body. While there are various factors that influence pore volume (i.e., degassing of other volatiles, space weathering, impact, etc.), we consider an endmember scenario in which the volume consisting of present-day cracks and voids after aqueous alteration was formerly filled with liquid water to derive an upper limit on the inventory of accreted water (Table 3). For Ryugu samples, this volume fraction is∼12±4% (Tsuchiyama et al. 2024), which we consider the upper value (16%) to derive an upper bound on the amount of water that was physically removed after aqueous alteration. It can be calculated as:
[5]: m_w/m_r = ρ_w-l ϕ / [ρ_r (1 − ϕ)]

Here ϕ is the porosity in present-day cracks and voids, ρ_w-l ∼ 1000 kg m^-3 is the density of liquid water, ρ_r is the dry grain density, taken to be ∼3500 kg m^-3 (Macke et al. 2011), and m_r is the mass of accreted anhydrous rock per kg of CI, ∼700 g kg^-1 (Table 1 in Alexander 2019). The exclusion of sulfates from our calculations, based on their lack of detection in Ryugu and Bennu samples, markedly decreases the estimated O_ex (Table 3). This adjustment results in a roughly ∼30-50 % decrease relative to the previously estimated value of 259.2 g kg^-1 from Alexander 2019.
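For concreteness, here is a minimal evaluation of Equation 5 with the parameter values just quoted (the molar masses are standard values and the function name is illustrative). The second call uses ϕ = 0.5, the total-porosity endmember considered below, and reproduces the ∼178 g kg^-1 of O quoted in the next paragraph.

def o_equiv_pore_water(phi, rho_w=1000.0, rho_r=3500.0, m_r=700.0):
    # Equation 5: water filling pore volume fraction phi, expressed as the
    # equivalent mass of O (g per kg of CI chondrite).
    m_w = m_r * rho_w * phi / (rho_r * (1.0 - phi))   # mass of pore water, g per kg CI
    return m_w * 16.00 / 18.015                       # convert H2O mass to O mass

print(o_equiv_pore_water(0.16))   # upper end of the present-day crack-and-void porosity
print(o_equiv_pore_water(0.50))   # total-porosity endmember, ~178 g/kg CI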
Alternatively, the initial amount of water ice accreted into CI chondrite parent bodies can be constrained based on the volume ratio between liquid water and rock just before the onset of aqueous alteration, assuming that all pore space prior to aqueous alteration was filled with water (Bouquet et al. 2021). While the rocky component today is aqueously altered, the porosity within the phyllosilicate matrix would reflect the volume expansion of anhydrous silicates to phyllosilicates; that volume was previously occupied by liquid water prior to the onset of aqueous alteration. An upper limit case can be considered by assuming that the total porosity within the phyllosilicate matrix and present-day cracks and voids was all filled with water. Measurements from Ryugu samples indicate a total porosity of (42±8) % (Nakamura et al. 2022, Tsuchiyama et al. 2024). For only one CI sample that was measured, the total porosity is slightly lower (∼34.9±2.1 %, Macke et al. 2011), but is consistent within the error bar of Ryugu samples measurements. Using the upper end of the error bar for the total porosity (∼50 %) and Equation 5, we derive an upper limit for the amount of water initially present prior to the onset of aqueous alteration. In this case, the equivalent O initially accreted as water ice is∼178 gkg^-1of CI. This value falls within the range of the total excess O derived by considering the total amount of water consumed during aqueous alteration and the amount of excess water lost post-alteration (Table 3). This consistency between two estimation methods typically used in the literature provides additional support for the results presented here.
Reservoir of O    Abundance (g per kg of CI)
O in carbonate    0 - 4.8 ^(a)
O in phyllosilicate H_2O    107.6 - 118.8 ^(a,b)
O in Fe oxide    35.0
O in phosphate    1.3 ^(a)
O in unreacted H_2O    0 - 31.1
Total excess O (O_ex)    143.9 - 191.0

Table 3. Data used for calculating the total excess O initially accreted as water ice, assuming that the rocky components of CI chondrites were anhydrous with all S in FeS, all remaining Fe in metal, and all P in phosphide. ^(a) Alexander 2019; ^(b) Yokoyama et al. 2023. Since carbonate formation may not require O derived from water (if CO_2 was primordial: e.g., CaO + CO_2 → CaCO_3), we adopt a range for carbonate from 0 to 4.8 g kg^-1 of CI, the latter value as reported in Alexander 2019. For phyllosilicate H_2O, more recent measurements of CI chondrites from Yokoyama et al. 2023 are considered; the datum used in Alexander 2019 is within that range. The average value for O in phosphate was reported in Alexander 2019.
We are now ready to see whether our solar abundances yield a primordial water inventory consistent with the range constrained by CI chondrite and Ryugu data (Table 3). We construct a mass balance model similar to our previous calculations for KBOs but modified to account for CI-specific compositions. Here, we consider a set of accreted materials including anhydrous silicates, native metals, refractory organics, and ices (including water ice and CO_2 ice, as CI chondrites show evidence of having formed beyond the water and CO_2 snowlines; Nakamura et al. 2023). Unlike CO_2, CO is assumed to be a gas in the region of the protosolar nebula where CI chondrites formed (as suggested by their relatively low D/H ratios in water, e.g., Alexander et al. 2012, Piani et al. 2023), and therefore is not directly accreted by these chondrites in solid form. For refractory organics, we consider insoluble organic matter (IOM) found in CI chondrites, which has an average composition of C_100H_80O_20N_4 (Alexander et al. 2017). The carbon content within refractory organics in CI chondrites ranges between 2.0 wt.% and 3.3 wt.% (20-33 g C_org per kg of CI, m_C-org), as reported in Alexander et al. 2007 and Yokoyama et al. 2023. With 107 g of silicon (m_Si) per kg of CI chondrite (Alexander 2019), the abundance of refractory organic carbon (C_org-CI) on the cosmochemical scale (relative to silicon) can be calculated as C_org-CI = m_C-org μ_Si / (m_Si μ_C), where μ_Si and μ_C are the molar masses of Si (28 g mol^-1) and C (12 g mol^-1), respectively. In this calculation, we adopt a relative molar abundance of accreted CO_2 (ice) to water ice (r_CO_2) of 1-4 % based on measured carbonate abundances in CI chondrites (Alexander et al. 2015). Ryugu samples are considered more pristine and free from terrestrial contamination than CI chondrites are (Yada et al. 2022, Nakamura et al. 2023). However, they also underwent modification due to space weathering (Nishiizumi et al. 2022, Yokoyama et al. 2023), as further discussed in Appendix C. Consequently, estimating the CO_2/H_2O ratio based on the total hydrogen and carbonate content remains difficult (Alexander et al. 2015). Given that Ryugu formed beyond the CO_2 snowline and appears to contain more carbonates than CI chondrites (Yokoyama et al. 2023), we adopted a CO_2/H_2O ratio range of 0.10-0.14, as typically observed in comets (Harrington Pinto et al. 2022). This ratio may be refined through ongoing and future studies aimed at measuring the CO_2/H_2O ratio in fluid inclusions (e.g., Tsuchiyama et al. 2021, Zolensky et al. 2022). We solve Equations 1-3 with these parameters to predict from our model the initial amount of water ice in CI chondrite and Ryugu parent bodies relative to silicon (n_H_2O in Equation 2); this value is equivalent to the number of moles expressed as O_ex (n_O_ex).
Figure 6 shows the predicted abundance of water ice accreted into CI chondrite and Ryugu parent bodies based on the new set of solar abundances. A higher C/O ratio as proposed in this work leads to a decrease in water content in planetesimals in the region of the solar nebula where CI chondrites formed; this is consistent with the downward revision of the amount of water ice accreted into CI chondrites, given the most up-to-date constraints from Ryugu and Bennu samples. The revised solar abundances predict the compositions of outer solar system objects ranging from CI chondrites to Kuiper belt objects – representing a vast region of the outer solar system. This consistency further supports the new data from solar neutrinos and solar wind measurements. The present interpretation can be further tested with future data from ongoing analyses of Bennu samples, which early results suggest to be more organic-rich than Ryugu or CI chondrites (Lauretta et al. 2024). Given a bulk C abundance of∼4.6 wt.% in which 90% of the C is in refractory organics (Lauretta et al. 2024), we predict that the total amount of water ice initially accreted into Bennu’s building blocks is∼210gkg^-1. Due to the higher abundance of refractory organics, we suggest that Bennu samples should be more water-rich than CI chondrites and/or contain higher porosity.
Lastly, it has not escaped our attention that the initial water/rock ratio, thus the water abundance itself, in principle can also be inferred by considering oxygen isotopic exchange during the parent body aqueous alteration process (Clayton & Mayeda 1999; Brearley 2006; Fujiya 2018). Even though any extra water would have escaped, its abundance would have left its mark on the oxygen isotopic composition of the altered rock. This method, however, is subject to other complications, most notably the unknown oxygen isotope ratio in the initial water and the poorly defined oxygen isotope ratios in the anhydrous minerals prior to the onset of aqueous alteration. Thus, additional assumptions are required (Clayton & Mayeda 1999). Although some anhydrous grains survived the alteration process, it is not clear that representative oxygen isotope values in CI chondrites can be well-defined due to significant grain-to-grain variations (Leshin et al. 1997, Clayton & Mayeda 1999). Similar to previous findings in CI chondrites (Leshin et al. 1997), Ryugu's anhydrous minerals display a bimodal distribution in oxygen isotope ratios (Kawasaki et al. 2022), which underscores the difficulty. Given these complexities, it may be most useful to flip the problem and instead use our predicted water-to-rock mass ratio to estimate the primordial oxygen isotope ratios in water, which can be expressed as δ_w^18 = 34.0-44.8‰ and δ_w^17 = 17.7-23.4‰ relative to the isotopic standard of Vienna Standard Mean Ocean Water (Appendix D). These predictions could potentially be tested by a future sample return mission from comet Hartley 2, as it has water with a D/H ratio similar to that of CI chondrites (Hartogh et al. 2011).
§ CONCLUDING REMARKS
This work argues for refractory organics as one of the main C carriers in the outer protosolar nebula in the KBO-forming region. Their presence is not a mere compositional detail; it significantly influences the chemical makeup of the nebula and of the building blocks that formed planetary bodies. By utilizing Rosetta COSIMA data for comet 67P/C-G, we show that under the current widely adopted solar abundances, refractory organics would make up a very large fraction of the protosolar C inventory in the KBO-forming region. As a consequence, this would free up more O to form water ice, making the Pluto-Charon system underdense compared to the observed value. By employing an updated set of solar abundances derived from solar neutrinos and solar wind measurements, we obtain a higher C/O that is consistent with the observed Pluto-Charon system value, which has been argued to be the most representative value we have for the bulk composition of large KBOs (Barr & Schwamb 2016, Bierson & Nimmo 2019, McKinnon et al. 2021). Within the range of uncertainties, these values also generally overlap with a recent photospheric determination of heavy element abundances from the 3D, non-LTE radiative transfer model by Magg et al. 2022. An increase in the abundances of heavy elements may support interpretations from helioseismology. Furthermore, from these revised abundances we estimate the refractory-to-ice mass ratio in cometary nuclei to be ∼1.9-2.5, which is within the plausible range from Rosetta. A future comet sample return mission could test this prediction. In addition, we test this new dataset with compositional data from CI chondrites, considering that analyses of Ryugu and Bennu samples both show an apparent absence of sulfates (Nakamura et al. 2023, Yokoyama et al. 2023, King et al. 2024). We show that a lack of sulfates significantly reduces the inferred quantity of water ice accreted into CI chondrite parent bodies. This is consistent with the predicted amount of water ice in planetesimals with the carbon speciation of CI chondrites based on the new dataset of solar abundances.
As the protosolar abundances control the chemistry of the protosolar nebula and the composition of planetary building blocks, this revision will have significant implications for understanding heavy element abundances in the atmospheres of giant planets and the links to their origin and evolution (Helled & Lunine 2014; Teanby et al. 2020; Mousis et al. 2021). For example, the global water abundance in Jupiter’s atmospheric envelope remains uncertain, with crucial implications for Jupiter’s origin (Helled et al. 2022). While interior models and thermochemical modelling generally favor a low water abundance in the envelope (Helled et al. 2022, Cavalié et al. 2023), Juno’s measurements at a single latitude near the equator suggest super-solar enrichments (Li et al. 2020, 2024). Based on a broad set of solar and cosmochemical data, our proposed increase in the solar carbon-to-oxygen (C/O) ratio implies that planetesimals may contain less water ice, which supports the possibility that water is depleted in Jupiter’s interior, or at least less enriched relative to solar than is carbon.
Our predictions can be tested through several future efforts: 1) Upcoming solar neutrino experiments, like those at the SNO+ (Sudbury Neutrino Observatory) and the Jiangmen Underground Neutrino Observatory – JUNO (Conroy 2024), along with helioseismology models; 2) Current analyses of Bennu samples, in which we anticipate finding a higher water content and/or greater porosity compared with CI chondrites; 3) Future comet sample return missions, which should reveal a refractory-to-ice mass ratio of approximately 1.9-2.5 on comets, and potentially provide precise values for the oxygen isotope ratios of primordial water, particularly if the targets of these missions include comet Hartley 2. The foreseeable future holds great promise for new models and data to provide a more coherent understanding of the Sun's internal composition. As the Sun’s heavy element abundances serve as benchmarks for understanding other stars’ elemental compositions, this will have significant implications for understanding the formation and evolution of other stars and planetary systems and, further, for gaining a broader perspective on galactic chemical evolution.
Acknowledgment: N.T. thanks Conel Alexander, Scott Bolton, Rob Ebert, Kelly Miller, Olivier Mousis, Charity Phillips-Lander, Danna Qasim, Darryl Seligman, Xinting Yu and Juno’s Origins Working Group for various discussions, which inspired this current work. N.T. also acknowledges J. Tran Thanh Van and Rencontres du Vietnam for the early introduction and inspiration to neutrino and particle physics. N.T. and C.R.G. acknowledge funding support from SwRI’s Internal Research & Development program (grant 15-R6321) and the Heising-Simons Foundation (grant 2023–4657). We thank the reviewer for helpful suggestions which improved the paper.
§ INFLUENCE OF C/O RATIO ON THE DENSITY OF KBOS
Similar to the calculations presented in Sections 2 and 3.2, here we investigate the influence of different solar carbon-to-oxygen (C/O) ratios on the predicted density of KBOs. Figs. A1 and A2 show the two endmember scenarios, in which the corresponding C/O values (0.63 in Fig. A1 and 0.83 in Fig. A2) reach the lower and upper bounds on the density of large KBOs. This constraint enables refinement of the estimated solar C abundance and solar C/O ratio from the solar neutrino measurements and solar wind data (Fig. 3, main text).
§ PREDICTED REFRACTORY-TO-ICE MASS RATIO IN COMETS
§ RYUGU BULK ABUNDANCES VS CI CHONDRITES
Here, we compile the latest data on the average composition of Ryugu samples for 67 elements (Naraoka et al. 2023, Yokoyama et al. 2023) and compare it with data for CI chondrites from Lodders 2021. The full dataset is available at: <https://doi.org/10.5281/zenodo.13345297>. Among these data, the effects of space weathering on surface and subsurface samples are also evaluated.
Surface samples stored in Chamber A were collected during the first touchdown on Ryugu’s surface on February 21, 2019. Samples in Chamber C were collected near an artificial crater (diameter of ∼ 14 m) on Ryugu’s surface. They are possibly ejecta from the north side of the crater, collected during the second touchdown on July 11, 2019 (Tsuda et al. 2020). Ryugu's surface is bombarded by cosmic rays and UV radiation (Nishiizumi et al. 2022) and experiences thermal cycling during diurnal and seasonal illumination cycles (Kitazato et al. 2021). The ^10Be concentrations in all Chamber C samples are the same but are lower than those of the Chamber A samples, suggesting that they were shielded at a depth of 0.7-1.3 m and were ejected from the lower portions of the crater (Nishiizumi et al. 2022).
Given these considerations, our compiled data mostly consists of analyzed results from the subsurface samples, except for elements where subsurface data have not yet been reported in the literature. Fig. C1 shows the average abundances of elements per 10^6 Si atoms in Ryugu samples and CI chondrites (Lodders 2021). The Ryugu samples are mostly similar to CI chondrites. Ryugu samples are more chemically pristine than CI chondrites as they have not experienced terrestrial contamination, including the alteration of organics and phyllosilicate structures, the adsorption of terrestrial water, and the formation of sulfates and ferric hydroxides (Nakamura et al. 2023, Yokoyama et al. 2023, Takano et al. 2024). However, due to a combination of impact heating, solar heating, solar wind irradiation, and long-term exposure to the ultrahigh vacuum of space, the interlayer water in Ryugu's phyllosilicates may have been partially lost to space (Yokoyama et al. 2023). These factors likely contribute to the differences in H, N, and O between Ryugu samples and CI chondrites.
§ THE INITIAL OXYGEN ISOTOPE ABUNDANCES IN CI CHONDRITE WATER
In this section, we use the classic oxygen isotope exchange model presented in Clayton & Mayeda 1999. In that model, oxygen isotopes can be exchanged during the hydration reaction of olivine and pyroxene to serpentine, written as: Mg_2SiO_4 + MgSiO_3 + 2 H_2O → Mg_3Si_2O_5(OH)_4. Let x be the moles of O in water per mole of O in initial rock, f the fraction of initial rock altered, δ_w^i and δ_w^f the initial and final isotopic compositions of water, δ_r^i the initial isotopic composition of rock (olivine and pyroxene), and δ_s^f the final isotopic composition of rock (serpentine). The isotopic mass balance equation (whose coefficients 7, 2, and 9 count the moles of O per formula unit of the reaction in the initial rock, the consumed water, and the serpentine product, respectively) is:
[C1]: 7x δ_w^i + 7 δ_r^i = (7x - 2f) δ_w^f + 7(1-f) δ_r^i + 9f δ_s^f
Our predicted water-to-accreted rock mass ratio (W/R) can be converted to x (moles of O in water/mole of O in initial rock) as:
[C2]: m_w/m_r = (μ_w n_w)/(μ_r n_r) = 3.5x (μ_w/μ_r)
The factor 3.5 appears as there are 3.5 moles of oxygen per 1 mole of the simplified starting rock. Here, μ_w = 18 g mol^-1, μ_Mg_1.5SiO_3.5 = 120 g mol^-1.
Rearranging Equation [C1], the initial isotopic composition of water can be calculated using:
[C3]: δ_w^i = δ_w^f + (f/7x)(9 δ_s^f - 2δ_w^f -7δ_r^i).
As discussed in Section 3.2, it is challenging to determine the isotopic composition of anhydrous olivine and pyroxene for CI chondrites, as they show large grain-to-grain variations and follow a bimodal distribution (Leshin et al. 1997, Kawasaki et al. 2022). Despite this, a common procedure is to adopt an average value for the initial isotopic composition of the anhydrous grains: δ_r^18 = 4.8 ‰ and δ_r^17 = 1.8 ‰ (Leshin et al. 1997, Clayton & Mayeda 1999). We adopt the rest of the input data as measured in Ryugu samples: for serpentine, δ_s^18 = 18.6 ‰ and δ_s^17 = 9.2 ‰, and for the final water that equilibrated with altered rock, δ_w^18 = 1.0 ‰ and δ_w^17 = 0.3 ‰ (Yokoyama et al. 2023). As almost all of the initial rock was altered in CI chondrites, we assume f = 1. Our model's lower and upper bounds for the water-to-rock mass ratio can be calculated from the initial oxygen accreted as water ice in Table 3; this gives W/R = 0.23-0.30 and the corresponding x = 0.43–0.57. Thus, the initial isotopic composition of water is deduced to have been: δ_w^18 = 34.0-44.8‰ and δ_w^17 = 17.7-23.4‰. These constraints satisfy the lower limit values derived by Clayton & Mayeda 1999 (δ_w^18≥ 24.1 ‰, δ_w^17≥ 15.2 ‰), who attempted to fit a single isotopic composition of a water reservoir to three separate carbonaceous chondrite groups: CI, CM and CR.
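For readers who wish to reproduce these numbers, the chain from the water-to-rock mass ratio to the initial water composition (Equations C2 and C3) is short enough to script directly. The following is a minimal Python sketch using only the input values quoted in this appendix; it is not the authors' code, and the molar mass of the simplified rock is taken as 120 g mol^-1 as stated above.

def x_from_wr(wr, mu_w=18.0, mu_r=120.0):
    # Eq. C2: moles of O in water per mole of O in initial rock, from the
    # water-to-rock mass ratio W/R (3.5 = moles of O per mole of Mg_1.5SiO_3.5)
    return wr * mu_r / (3.5 * mu_w)

def delta_w_initial(delta_w_f, delta_s_f, delta_r_i, x, f=1.0):
    # Eq. C3: initial isotopic composition of the water, in permil
    return delta_w_f + (f / (7.0 * x)) * (9.0 * delta_s_f - 2.0 * delta_w_f - 7.0 * delta_r_i)

for wr in (0.23, 0.30):
    # Eq. C2 applied to the model W/R range (the text quotes x = 0.43-0.57)
    print(f"W/R = {wr:.2f} -> x = {x_from_wr(wr):.2f}")

# delta values in permil: anhydrous rock from Leshin et al. 1997; serpentine and
# final water as measured in Ryugu samples by Yokoyama et al. 2023
for x in (0.43, 0.57):   # rounded endpoints quoted in the text
    d18 = delta_w_initial(delta_w_f=1.0, delta_s_f=18.6, delta_r_i=4.8, x=x)
    d17 = delta_w_initial(delta_w_f=0.3, delta_s_f=9.2, delta_r_i=1.8, x=x)
    print(f"x = {x:.2f}: d18O_w^i = {d18:.1f}, d17O_w^i = {d17:.1f}")
# x = 0.43 gives 44.8 and 23.4 permil; x = 0.57 gives 34.0 and 17.7 permil,
# matching the range quoted above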
§ REFERENCES
Agnor, C. B., & Hamilton, D. P. 2006, Nature, 441, 192
Agostini, M., Altenmüller, K., Appel, S., et al. 2020a, Nature, 587, 577
Agostini, M., Altenmüller, K., Appel, S., et al. 2020b, Eur Phys J C, 80, 1091
Airieau, S. A., Farquhar, J., Thiemens, M. H., et al. 2005, GeCoA, 69, 4167
Alexander, C. M. O., Fogel, M., Yabuta, H., & Cody, G. D. 2007, GeCoA, 71, 4380
Alexander, C. M. O., Bowden, R., Fogel, M. L., et al. 2012, Science, 337, 721
Alexander, C. M. O., Bowden, R., Fogel, M. L., & Howard, K. T. 2015, Meteorit. Planet. Sci., 50, 810
Alexander, C. M. O., Cody, G. D., De Gregorio, B. T., Nittler, L. R., & Stroud, R. M. 2017, Geochemistry, 77, 227
Alexander, C. M. O. 2019, GeCoA, 254, 277
Allende Prieto, C. 2016, Living Rev Sol Phys, 13, 1
Anders, E., & Grevesse, N. 1989, GeCoA, 53, 197
Appel, S., Bagdasarian, Z., et al. 2022, Phys Rev Lett, 129, 252701
Asplund, M., Grevesse, N., & Sauval, A. J. 2005, 336, 25
Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, Annu. Rev. Astron. Astrophys., 47, 481
Asplund, M., Amarsi, A. M., & Grevesse, N. 2021, A&A, 653, A141
Bahcall, J. N., Basu, S., Pinsonneault, M., & Serenelli, A. M. 2005, ApJ, 618, 1049
Bahcall, J. N., Serenelli, A. M., & Basu, S. 2006, ApJS, 165, 400
Bailey, J. E., Nagayama, T., Loisel, G. P., et al. 2015, Nature, 517, 56
Bardyn, A., Baklouti, D., Cottin, H., et al. 2017, MNRAS, 469, S712
Barr, A. C., & Schwamb, M. E. 2016, MNRAS, 460, 1542
Basilico, D., Bellini, G., et al. 2023, Phys Rev D, 108, 102005
Basu, S., & Antia, H. M. 2004, ApJ, 606, L85
Basu, S., & Antia, H. M. 2008, Phys. Rep., 457, 217
Bethe, H. A. 1939, Phys Rev, 55, 434
Bierson, C. J., & Nimmo, F. 2019, Icarus, 326, 10
Bochsler, P. 2007, A&A, 471, 315
Bouquet, A., Miller, K. E., Glein, C. R., & Mousis, O. 2021, A&A, 653, A59
Brearley, A. J. 2006, in Meteorites and the Early Solar System II, ed. Lauretta, D.S, McSween, H. Y. (Tucson, AZ: Univ. Arizona Press)
Brown, M. E., Barkume, K. M., Ragozzine, D., & Schaller, E. L. 2007, Nature, 446, 294
Brozović, M., Showalter, M. R., Jacobson, R. A., & Buie, M. W. 2015, Icarus, 246, 317
Buldgen, G., Eggenberger, P., Noels, A., et al. 2023, A&A, 669, L9
Cameron, A. G. W. 1973, Space Sci Rev, 15, 121
Cameron, A. G. W. 1982, Elemental and Nuclidic Abundances in the Solar System, https://ui.adsabs.harvard.edu/abs/1982ena..conf...23C
Cañas, M. H., Lyra, W., Carrera, D., et al. 2024, Planet. Sci. J, 5, 55
Canup, R. M., & Ward, W. R. 2002, AJ, 124, 340
Canup, R. M. 2005, Science, 307, 546
Canup, R. M., & Ward, W. R. 2006, Nature, 441, 834
Cavalié, T., Lunine, J., & Mousis, O. 2023, Nat Astron, 7, 678
Canup, R. M., Kratter, K. M., & Neveu, M. 2021, in: The Pluto System after New Horizons, ed. Stern, A. et al. (Tucson, AZ: Univ. Arizona Press)
Choukroun, M., Altwegg, K., Kührt, E., et al. 2020, Space Sci Rev, 216, 44
Christensen-Dalsgaard, J., Mauro, M. P. D., Houdek, G., & Pijpers, F. 2009, A&A, 494, 205
Clayton, R. N., & Mayeda, T. K. 1999, GeCoA, 63, 2089
Conroy, G. 2024, Nature, 627, 715
Ćuk, M., Dones, L., & Nesvorný, D. 2016, ApJ, 820, 97
Desch, S. J., Kalyaan, A., & Alexander, C. M. O. 2018, ApJS, 238, 11
Emery, J. P., Wong, I., Brunetto, R., et al. 2024, Icarus, 414, 116017
Fard, K. B., & Smith, I. B. 2024, Icarus, 410, 115895
Fegley, B. 2000, Space Sci. Rev., 92, 177
Fray, N., Bardyn, A., Cottin, H., et al. 2016, Nature, 538, 72
Fujiya, W. 2018, EPSL, 481, 264
Glein, C. R., & Waite, J. H. 2018, Icarus, 313, 79
Goldberg, L., Muller, E. A., & Aller, L. H. 1960, ApJS, 5, 1
Goldreich, P., Murray, N., Longaretti, P. Y., & Banfield, D. 1989, Science, 245, 500
Gounelle, M., & Zolensky, M. E. 2001, Meteorit. Planet. Sci., 36, 1321
Gounelle, M., & Zolensky, M. E. 2014, Meteorit. Planet. Sci., 49, 10
Greenberg, J. M. 1998, A&A, 330
Grevesse, N., & Sauval, A. J. 1998, Space. Sci. Rev, 85, 161
Grundy, W. M., Wong, I., Glein, C. R., et al. 2024, Icarus, 411, 115923
Harrington-Pinto, O. H., Womack, M., Fernandez, Y., & Bauer, J. 2022, Planet Sci J, 3, 247
Hartogh, P., Lis, D. C., Bockelée-Morvan, D., et al. 2011, Nature, 478, 218
Haxton, W. C., & Serenelli, A. M. 2008, ApJ, 687, 678
Haxton, W. C., Robertson, R. G. H., & Serenelli, A. M. 2013, Annu. Rev. Astron. Astrophys., 21
Hayes, A. G., Jr., Nakamura-Messenger, K., Squyres, S. W., et al. 2020, AGU 2020, P089
Heber, V. S., McKeegan, K. D., Steele, R. C. J., et al. 2021, ApJ, 907, 15
Helled, R., Lunine, J. I. 2014, MNRAS, 441, 3
Helled, R., Stevenson, D. J., Lunine, J. I., et al. 2022, Icarus, 378, 114937
Hinkel, N. R., Timmes, F. X., Young, P. A., Pagano, M. D., & Turnbull, M. C. 2014, AJ, 148, 54
Hoarty, D. J., Morton, J., Rougier, J. C., et al. 2023, Phys. Plasmas, 30, 063302
Huss, G. R., Koeman-Shields, E., Jurewicz, A. J. G., et al. 2020, Meteorit. Planet. Sci., 55, 326
Isnard, R., Bardyn, A., Fray, N., et al. 2019, A&A, 630, A27
Jessberger, E. K., Christoforidis, A., & Kissel, J. 1988, Nature, 332, 691
Jofré, P., Heiter, U., & Soubiran, C. 2019, Annu. Rev. Astron. Astrophys. 57, 571
Johnson, T. V., & Lunine, J. I. 2005, Nature, 435, 69
Johnson, T. V., Mousis, O., Lunine, J. I., & Madhusudhan, N. 2012, ApJ, 757, 192
Kawasaki, N., Nagashima, K., Sakamoto, N., et al. 2022, Sci. Adv., 8, 50
King, A. J., Tu, V., Schofield, P. F., et al. 2024, 55th LPSC, The Woodlands TX, 1109
Kiss, C., Marton, G., Parker, A. H., et al. 2019, Icarus, 334, 3
Kissel, J., Brownlee, D. E., Büchler, K., et al. 1986, Nature, 321, 336
Kissel, J., & Krueger, F. R. 1987, Nature, 326, 755
Kitazato, K. et al. 2021, Nat. Astron, 5
Krijt, S., Bosman, A. D., Zhang, K., et al. 2020, ApJ, 899, 134
Kruijer, T. S., Burkhardt, C., Budde, G., & Kleine, T. 2017, PNAS, 114, 6712
Kunitomo, M., & Guillot, T. 2021, A&A, 655, A51
Kunitomo, M., Guillot, T., & Buldgen, G. 2022, A&A, 667, L2
Laming, J. M., Heber, V. S., Burnett, D. S., et al. 2017, ApJ, 851, L12
Lauretta, D. S., Connolly, H., Aebersold, J. E., et al. 2024, Meteorit. Planet. Sci., doi: 10.1111/maps.14227
Leshin, L. A., Rubin, A. E., & McKeegan, K. D. 1997, GeCoA, 61, 835
Levison, H. F., & Duncan, M. J. 1997, Icarus, 127, 13
Li, C., Ingersoll, A., Bolton, S., et al. 2020, Nat Astron, 4, 609
Li, C., Allison, M., Atreya, S., et al. 2024, Icarus, 414, 116028
Lodders, K. 2003, ApJ, 591, 1220
Lodders, K. 2020, in: Oxford Research Encyclopedia of Planetary Science (Oxford Univ. Press)
Lodders, K. 2021, Space Sci Rev, 217, 44
Luna, R., Millán, C., Domingo, M., Santonja, C., & Satorre, M. Á. 2022, ApJ, 935, 134
Lunine, J. I., & Stevenson, D. J. 1982, Icarus, 52, 14
Macke, R. J., Consolmagno, G. J., & Britt, D. T. 2011, Meteorit. Planet. Sci., 46, 1842
Magg, E., Bergemann, M., Serenelli, A., et al. 2022, A&A, 661, A140
McKinnon, W. B., & Mueller, S. 1988, Nature, 335, 240
McKinnon, W. B., & Leith, A. C. 1995, Icarus, 118, 392
McKinnon, W. B., Lunine, J.I., Banfield, D. 1995, in: Neptune and Triton, ed. Cruikshank, D. P. et al. (Tucson, AZ: Univ. Arizona Press)
McKinnon, W. B., Stern, S. A., Weaver, H. A., et al. 2017, Icarus, 287, 2
McKinnon, W. B., Glein, C. R., Bertrand, T., & Rhoden, A. R. 2021, in: The Pluto System after New Horizons, ed. Stern, A. et al. (Tucson, AZ: Univ. Arizona Press)
Miller, K. E., Glein, C. R., & Waite, J. H. 2019, ApJ, 871, 59
Mosqueira, I., & Estrada, P. R. 2003, Icarus, 163, 198
Mousis, O., Lunine, J. I., & Aguichine, A. 2021, ApJL, 918, L23
Mousis, O., Schneeberger, A., Lunine, J. I., et al. 2023, ApJL, 944, L37
Nagayama, T., Bailey, J. E., Loisel, G. P., et al. 2019, Phys Rev Lett, 122, 235001
Nakamura, E., Kobayashi, K., Tanaka, R., et al. 2022, PJA-B, 98, 227
Nakamura, T., Matsumoto, M., Amano, K., et al. 2023, Science, 379, eabn8671
Naraoka, H. et al. 2023, Science, 379, eabn9033
Néri, A., Guyot, F., Reynard, B., & Sotin, C. 2020, EPSL, 530, 115920
Nesvorný, D., Vokrouhlický, D., Dones, L., et al. 2017, ApJ, 845, 27
Nieva, M.-F., & Przybilla, N. 2012, A&A, 539, A143
Nimmo, F., Umurhan, O., Lisse, C. M., et al. 2017, Icarus, 287, 12
Nishiizumi, K. et al. 2022, 53 LPSC, The Woodlands TX, 1777
Öberg, K. I., Murray-Clay, R., & Bergin, E. A. 2011, ApJ, 743, L16
Ortiz, J. L., Santos-Sanz, P., Sicardy, B., et al. 2017, Nature, 550, 219
Owen, T., Mahaffy, P., Niemann, H. B., et al. 1999, Nature, 402, 269
Palme, H. et al. 2014, Treatise on Geochemistry, Elsevier, 2
Pätzold, M., Andert, T. P., Hahn, M., et al. 2019, MNRAS, 483, 2337
Pekmezci, G. S., Johnson, T. V., Lunine, J. I., & Mousis, O. 2019, ApJ, 887, 3
Piani, L., Nagashima, K., Kawasaki, N., et al. 2023, ApJL, 946, L43
Pilleri, P., Reisenfeld, D. B., Zurbuchen, T. H., et al. 2015, ApJ, 812, 1
Podolak, M., Pollack, J. B., & Reynolds, R. T. 1988, Icarus, 73, 163
Pollack, J. B., Hollenbach, D., Beckwith, S., et al. 1994, ApJ, 421, 615
Prinn, R. G. P., & Fegley, B., Jr. 1989, ed. Atreya, S. K. et al., in: Origin and evolution of planetary and satellite atmospheres (Tucson, AZ: Univ. Arizona Press)
Reynard, B., & Sotin, C. 2023, EPSL, 612, 118172
Russell, H. N. 1929, ApJ, 70, 11
Serenelli, A. M., Basu, S., Ferguson, J. W., & Asplund, M. 2009, ApJ, 705, L123
Serenelli, A. M., & Basu, S. 2010, ApJ, 719, 865
Serenelli, A. M., Haxton, W. C., & Peña-Garay, C. 2011, ApJ, 743, 24
Serenelli, A., Peña-Garay, C., & Haxton, W. C. 2013, Phys Rev D, 87, 043001
Simonelli, D. P., Pollack, J. B., Mckay, C. P., Reynolds, R. T., & Summers, A. L. 1989, Icarus, 82, 1
Sloan, E. D., Koh, C. A. 2007, Clathrate Hydrates of Natural Gases (3rd ed.; Boca Raton: CRC Press)
von Steiger, R., & Zurbuchen, T. H. 2016, ApJ, 816, 13
Takano, Y. et al. 2024, Nat. Commun, 15, 5708
The SNO+ collaboration, T. S., Albanese, V., Alves, R., et al. 2021, J Inst, 16, P08059
Teanby, N. A., Irwin, P. G. J., Moses, J. I., & Helled, R. 2020, Philos. Trans. R. Soc. A, 378 , 20190489
Thomas, N., Davidsson, B., El-Maarry, M. R., et al. 2015, A&A, 583, A17
Tsuchiyama, A., Matsumoto, M., Matsuno, J., et al. 2024, GeCoA, 375, 146
Tsuda, Y. et al. 2020, Acta Astronaut, 171
Villante, F. L., & Serenelli, A. 2021, Front Astron Space Sci, 7
Vinyoles, N., Serenelli, A. M., Villante, F. L., et al. 2017, ApJ, 835, 202
Volk, K., & Malhotra, R. 2008, ApJ, 687, 714
Waite, J. H., Glein, C. R., Perryman, R. S., et al. 2017, Science, 356, 155
Wang, H., Weiss, B. P., Bai, X.-N., et al. 2017, Science, 355, 623
Wiens, R. C., Burnett, D. S., Neugebauer, M., & Pepin, R. O. 1991, Geophys. Res. Lett, 18, 207
Wong, M. H., Lunine, J. I., Atreya, S. K., et al. 2008, Rev. Mineralogy. Geochem, 68, 219
Yada, T. et al. 2022, Nat. Astron, 6
Yamaguchi, A. et al. 2023, Nat. Astron, 7
Yoshimura, T. et al. 2023, Nat. Commun, 14, 5284
Yokoyama, T., Nagashima, K., Nakai, I., et al. 2023, Science, 379, eabn7850
Young, P. R. 2018, ApJ, 855, 15
Zolensky et al. 2022, 53 LPSC, The Woodlands TX, 1451
|
http://arxiv.org/abs/2409.02216v1 | 20240903184012 | Classification of generalized Seifert fiber spaces | [
"Fernando Galaz-Garcia",
"Jesús Núñez-Zimbrón"
] | math.GT | [
"math.GT",
"math.DG",
"math.MG",
"53C23, 57S15"
] |
Classification of generalized Seifert fiber spaces
Fernando Galaz-García, Department of Mathematical Sciences, Durham University, United Kingdom
[email protected]
Jesús Núñez-Zimbrón, Facultad de Ciencias, UNAM, Mexico
[email protected]
§ ABSTRACT
We provide a symbolic classification of generalized Seifert fiber spaces, which were introduced by Mitsuishi and Yamaguchi in the classification of collapsing Alexandrov 3-spaces. Additionally, we show that the canonical double branched cover of a non-manifold generalized Seifert fiber space is a Seifert manifold and compute its symbolic invariants in terms of those of the original space.
§ MAIN RESULTS
Generalized Seifert fiber spaces first appeared in the work of Mitsuishi and Yamaguchi on the classification of collapsing Alexandrov 3-spaces <cit.> and generalize Seifert 3-manifolds, sharing a similar structure: they decompose into fibers over a 2-orbifold, with the fibers being either circles or closed intervals. As in the manifold case, small tubular neighborhoods of the circle fibers are standard fibered tori. In contrast, small neighborhoods of the interval fibers are homeomorphic to B(pt), a fundamental non-negatively curved Alexandrov 3-space that serves as a “singular” fibered torus and can be written as the boundary connected sum of two cones over P^2 (see Section <ref> for a detailed definition).
A closed (i.e., compact and without boundary) Alexandrov 3-space is homeomorphic to either a 3-manifold or to a non-orientable 3-manifold with finitely many boundary components homeomorphic to P^2, the real projective plane, that have been capped off with cones over P^2 <cit.>; the latter spaces have also appeared in the literature under the name singular 3-manifolds <cit.>. Conversely, any 3-manifold or singular 3-manifold is homeomorphic to some Alexandrov space <cit.>. A consequence of the surgery procedure introduced in <cit.>, which removes spaces homeomorphic to B(pt) from non-manifold Alexandrov 3-spaces, combined with the geometrization theorem for Alexandrov 3-spaces <cit.>, is that every closed Alexandrov 3-space can be decomposed into hyperbolic pieces and generalized Seifert fiber spaces. Therefore, it is natural to consider a classification of generalized Seifert fiber spaces, as they constitute fundamental components of all Alexandrov 3-spaces and play a central role in their collapse theory <cit.>. Our first main result is such a classification, extending the classical one originally due to Seifert <cit.>.
Let p X→ B be a generalized Seifert fibration with X closed and connected. Then the generalized Seifert fiber space X is determined up to fiber-preserving homeomorphism by the set of invariants
{ b,ε,g, ι, { (α_i, β_i) }_i=1^n }.
For concision, the precise definitions of the symbols in Theorem <ref> are stated in Section <ref>.
Some of these invariants coincide with those appearing in the classification of local circle actions on Alexandrov 3-spaces <cit.>. However, Theorem <ref> is not a special case of the classification of local circle actions on Alexandrov 3-spaces <cit.>, as the canonical “fibration” of the space does not arise from any local circle action (see the discussion after <cit.>). Our strategy to prove Theorem <ref> involves excising small neighborhoods of the interval fibers and gluing appropriate “blocks” to obtain an associated space that does admit a local circle action, and then using the corresponding classification.
Every closed non-manifold Alexandrov 3-space X is the base of a two-fold branched cover πX̃→ X whose total space X̃ is a closed orientable manifold
and whose branching set is the set of non-manifold points of X, i.e., points which have neighborhoods homeomorphic to a cone over P^2 <cit.>. In particular, there is an
orientation reversing (piecewise linear) involution ιX̃→X̃ with only isolated fixed points such that X
is homeomorphic to the quotient X̃ /ι <cit.>. The manifold X̃ is unique up to homeomorphism and we refer to it as the double branched cover of X.
In Theorem <ref>, we show that the canonical double branched cover of a non-manifold generalized Seifert fiber space X is a Seifert manifold X̃ where the Seifert fibration commutes with the double branched cover construction. We also compute the symbolic invariants of X̃ in terms of those of the original space X, finding, roughly speaking, that the invariants of X are doubled in X̃.
Let p X→ B be a generalized Seifert fibration where X is closed, connected, and not a manifold,
and let πX̃→ X be the canonical double branched cover of X. Then there exists a Seifert fibration p̃X̃→B̃ commuting with the double branched cover. Moreover, if X is determined by the invariants
{ b,ε,g, ι, { (α_i, β_i) }_i=1^n },
then X̃ is determined by the invariants
{b , ε, 2g, 0, { (α̃_i, β̃_i) }_i=1^2n},
where (α̃_2k,β̃_2k)=(α_k, β_k) and (α̃_2k-1,β̃_2k-1)=(α_k, β_k) for each k=1,2,…,n.
Our article is organized as follows. In Section <ref>, we recall basic facts on generalized Seifert fibered spaces and their topology, and collect basic facts on the classification of Alexandrov 3-spaces with local circle actions that we will use in the proofs of the main theorems. We prove Theorems <ref> and <ref> in Sections <ref> and <ref>, respectively.
JNZ acknowledges support from PAPIIT-UNAM project number IN101322. JNZ wishes to thank Diego Corro and Luis Jorge Sánchez Saldaña for useful conversations.
§ PRELIMINARIES
To keep our presentation concise, throughout the article we assume the reader to be familiar with the basic theory of Seifert manifolds and Alexandrov spaces of curvature bounded below, particularly in the three-dimensional case. For an introduction to Alexandrov spaces, we refer to the standard references <cit.>. For basic results in dimension three, we refer the reader to <cit.>. For further results on Alexandrov 3-spaces, see <cit.>. Our main reference for the basic theory of Seifert fiber manifolds will be <cit.>.
In this section, we only recall the key aspects necessary for our discussion.
§.§ Alexandrov 3-spaces
Let X be a closed Alexandrov 3-space. The space of directions Σ_x X at each point x in X is a closed Alexandrov 2-space with curvature bounded below by 1 and, by the Bonnet–Myers Theorem <cit.>, the fundamental group of Σ_x X is finite. Thus, Σ_x X is homeomorphic to a 2-sphere S^2 or a real projective plane P^2. A point in X whose space of directions is homeomorphic to S^2 is a topologically regular point. Points in X whose space of directions is homeomorphic to P^2 are topologically singular. We will denote the set of topologically singular points of X by sing(X). The set of topologically regular points is open and dense in X. By Perelman's Conical Neighborhood Theorem <cit.>, every point x in X has a neighborhood that is pointed-homeomorphic to the cone over Σ_x X. Therefore, the set of topologically singular points of X is finite, and X is homeomorphic to a compact 3-manifold with a finite number of P^2-boundary components to which one glues in cones over P^2. If X contains topologically singular points, it is homeomorphic to the quotient of a closed, orientable, topological 3-manifold X̃ by an orientation-reversing involution ιX̃→X̃ with only isolated fixed points. The 3-manifold X̃ is the orientable double branched cover of X (see, for example, <cit.>). One may lift the metric on X to X̃ so that X̃ becomes an Alexandrov space with the same lower curvature bound as X and ι is an isometry with respect to the lifted metric (see <cit.> and <cit.>, and compare with <cit.>). In particular, ι is equivalent to a smooth involution on X̃ considered as a smooth 3-manifold.
§.§ Generalized Seifert fiber spaces
Let us recall the definition of generalized Seifert spaces, introduced by Mitsuishi and Yamaguchi in <cit.>.
We begin with the definition of the space B(pt). Consider the isometric involution (with respect to the flat metric)
α S^1× D^2 → S^1× D^2
(e^iθ,x) ↦ (e^-iθ,-x).
The space B(pt) is then defined as (S^1× D^2)/⟨α⟩, and we denote the projection by π S^1× D^2 → B(pt). There is a natural projection
p B(pt) → K_1(S^1)
induced by (e^iθ,x)↦ x.
Observe that the restriction of α to S^1× S^1 has as quotient the flat Klein bottle K^2, viewed as the non-orientable circle bundle over S^1 (see, for example, <cit.>). Consequently, the fibration on K^2 induced by the restriction p K^2 → S^1 is fiberwise equivalent (i.e., there is a fiberwise homeomorphism) to the fibration induced by the free local circle action (see next section for the definition of a local circle action) on K^2 given by rotating one of the circle factors.
A generalized Seifert fibration of a (topological) 3-orbifold X over a (topological) 2-orbifold B (possibly with non-empty boundaries) is a map f X→ B whose fibers are homeomorphic to circles or bounded closed intervals, and which satisfies the following conditions. For every x∈ B, there is a neighborhood U_x homeomorphic to a 2-disk satisfying the following conditions:
(i) If f^-1(x) is homeomorphic to a circle, then there is a fiber-preserving homeomorphism of f^-1(U_x) to a Seifert fibered solid torus in the usual sense. In this case, we say that f^-1(x) is a C-fiber.
(ii) If f^-1(x) is homeomorphic to an interval, then there exists a fiber-preserving homeomorphism of f^-1(U_x) to the space
B(pt), with respect to the fibration
( B(pt), p^-1(o)) →( K_1(S^1), o).
In this case, we say that f^-1(x) is an I-fiber.
Moreover, for any compact component C of the boundary of B, we require the existence of a collar neighborhood N of C in B such that f|_f^-1(N) is a circle bundle over N. In this context, we say that X is a generalized Seifert fiber space with base B and use the notation X=(B). The base spaces of two generalized Seifert fibered spaces X=(B) and Y=(C) are isomorphic if there exists a homeomorphism between B and C that preserves the fiber type.
As noted in <cit.>, when there are I-fibers, the fibration f is not induced by any local circle action.
§.§ Local circle actions
We briefly summarize the classification of local circle actions on Alexandrov 3-spaces obtained in <cit.>, which will play a key role in our discussion (see <cit.> for the manifold case).
A local circle action on a closed Alexandrov 3-space X is a decomposition of X into (possibly one-point) disjoint, simple, closed curves (the fibers), having a tubular neighborhood which admits an effective circle action whose orbits are the curves of the decomposition. The local circle action is isometric if the circle actions on each tubular neighborhood of the fibers are isometric with respect to the restricted metric.
§.§ Fiber types
The fiber types of a local circle action are in correspondence with the isotropy groups of the fibers viewed as orbits of the isometric circle action on a small tubular neighborhood around them. These types are the following:
* F-fibers are topologically regular fixed-point fibers.
* SF-fibers are topologically singular fixed-point fibers.
* E-fibers are those that correspond to ℤ_k isotropy, acting in such a way that local orientation is preserved.
* SE-fibers correspond to ℤ_2 isotropy, reversing the local orientation.
* R-fibers are fibers that are not F-, SF-, E- or SE-fibers.
The sets of points in F-, SF-, E-, SE- and R-fibers are denoted by F, SF, E, SE and R, respectively. The fiber space X^* is a 2-orbifold. Its boundary consists of the images of F-, SF- and SE-fibers under the natural projection map, while the interior consists of R-fibers and a finite number of E-fibers.
§.§ Building blocks
A closed Alexandrov 3-space X with a local isometric circle action can be decomposed into building blocks of types F, SF, E and SE, defined by considering small tubular neighborhoods of connected components of fibers of the corresponding type. A building block is simple if its boundary is homeomorphic to a torus, and twisted if its boundary is homeomorphic to a Klein bottle. Note that the stratum of R-fibers is an S^1-fiber bundle with structure group O(2).
Let X be a closed, connected Alexandrov 3-space with 2r≥ 0 topologically singular points. Then, the set of isometric local circle actions (up to equivariant equivalence) is in one-to-one correspondence with the set of unordered tuples
{b; ε, g, (f,k_1), (t,k_2), (s,k_3); { (α_i, β_i) }_i=1^n; (r_1,r_2, …, r_s-k_3); (q_1, q_2, …, q_k_3)}.
The definition of the symbols appearing in this result is as follows:
* (ε,k) is the pair that classifies the S^1-bundle of R-fibers according to <cit.>.
* g≥ 0 is the genus of X^*.
* The symbols f, t, k_1, k_2 are non-negative integers such that k_1 ≤ f and k_2≤ t, where k_1 is the number of twisted F-blocks and k_2 is the number of twisted SE-blocks (therefore f-k_1 is the number of simple F-blocks and t-k_2 is the number of simple SE-blocks).
* n is the number of E-fibers and we let { (α_i, β_i)}_i=1^n be the corresponding Seifert invariants (see <cit.> for the definition).
* b is an integer or an integer mod 2 with the following conditions: b=0 if f+t>0 or if ε∈{o_2,n_1,n_3,n_4} and some α_i=2 (see <cit.> for the precise definitions of the o_i, n_j and α_l); b∈{0,1} if f+t=0 and ε∈{ o_2,n_1,n_3,n_4} and all α_i≠ 2. In the remaining cases, b is an arbitrary integer;
* We let s, k_3 be non-negative integers, where k_3≤ s is the number of twisted SF-blocks (thus s-k_3 is the number of simple SF-blocks).
* (r_1, r_2, …, r_s-k_3) and (q_1, q_2, …, q_k_3 ) are (s-k_3)- and k_3-tuples of non-negative even integers corresponding to the number of topologically singular points in each simple and twisted SF-block, respectively.
* The numbers k, k_1, k_2, and k_3 satisfy k_1 + k_2 + k_3 = k.
§ INVARIANTS AND CLASSIFICATION
In this section, we define the equivariant and topological symbols that appear in Theorem <ref> and prove this result.
§.§ Invariants
Let X=(B) be a closed, generalized Seifert fiber space and associate the following information to X:
* The subset X_0 of X obtained by removing all I-fibers and C-fibers with non-trivial isotropy is an S^1-bundle with structure group O(2), so we associate to X the classifying pair (ε,k). In fact, it will follow from the proof of Theorem <ref> that k=ι (where ι is defined below).
* The integer (or integer mod 2 depending on the orientability of B) b is the obstruction to extending a cross-section of the bundle X_0 to X.
* The symbol g stands for the genus of B.
* The symbol ι is the (necessarily finite) number of I-fibers.
* Finally, the ordered pairs (α_i,β_i) are the Seifert invariants of the C-fibers with non-trivial isotropy.
All of these symbols, except ι, are inspired by those assigned to a space with a local circle action, as can readily be seen. In turn, they are subject to the same restrictions on their values as those indicated in Theorem <ref> (except, obviously, ι, which is new); however, not all values are possible under the assumption that X is a generalized Seifert space,
as will follow from the proof of Theorem <ref>.
To summarize, to each generalized Seifert bundle we associate a set of invariants of the form
{ b,(ε,k),g, ι, { (α_i, β_i) }_i=1^n }.
§.§ Classification
If there are no I-fibers, Theorem <ref> reduces, by letting ι=0, to the classification of usual Seifert manifolds, since the classical classification due to Seifert (see <cit.>) is contained in the classification of local circle actions on 3-manifolds of Fintushel, Orlik and Raymond (see <cit.>). Therefore, to prove Theorem <ref>, it is sufficient to consider the case in which the set of I-fibers is non-empty. Note that this implies that sing(X), the set of topologically singular points of X, is non-empty.
To establish Theorem <ref>, we need to prove existence and uniqueness statements as follows: that given a list of symbols as in <ref>, one is able to construct a generalized Seifert fiber space realizing these symbols, and that if two generalized Seifert fiber spaces share the same set of invariants, then there exists a fiber-preserving homeomorphism between them.
§.§ Proof of Theorem <ref>
We first prove the uniqueness statement. Let X_1=(B_1) and X_2=(B_2) be closed and connected generalized Seifert spaces. It is clear that if X_1 is fiberwise homeomorphic to X_2, then B_1 is isomorphic to B_2. We now prove that if there is an isomorphism of fiber spaces φ B_1 → B_2, then X_1 is equivalent to X_2.
Let I^∗_j={x^∗_j1, x^∗_j2, …, x^∗_jι}⊆ B_j for j=1,2. We now choose disjoint closed disk neighborhoods D_1i of each x^∗_1i such that π_1^-1(D_1i)≅ B(pt). Without loss of generality, as φ is fiber-type preserving, we can assume φ(x^∗_1i)=x^∗_2i and denote D_2i= φ(D_1i), i=1,…, ι. Furthermore, we can assume that D_1i is small enough so that π_2^-1(D_2i)≅ B(pt) for all i=1,…,ι.
Now, observe that φ restricts to a fiber-type preserving homeomorphism B_1 ∖⋃_i=1^ιint(D_1i) → B_2 ∖⋃_i=1^ιint(D_2i) and consider for j=1,2
Z_j:= X_j∖⋃_i=1^ιπ_j^-1( int(D_ji) ).
Then ∂ Z_j is a disjoint union of ι copies of Klein bottles K^2. Observe moreover that Z_j is a Seifert manifold with boundary.
We let V_ji be twisted F-blocks for j=1,2, i=1,…, ι and consider the spaces Y_j obtained by gluing each V_ji to each boundary component of Z_j via fiber-preserving homeomorphisms. Note that the spaces Y_j satisfy that sing(Y_j)=∅ as all topologically singular points of X_j must be on I-fibers and not on C-fibers. Thus, Y_j is a closed topological 3-manifold, j=1,2 and it carries a local circle action. Therefore, by the work of Orlik and Raymond <cit.> and Fintushel <cit.>, the Y_j are determined up to fiber-preserving homeomorphism by the same set of invariants, namely
{b,ε,g, (f=ι,k_1=ι), (t=0,k_2=0), {(α_j,β_j) }_j=1^n},
and there is a fiber-preserving homeomorphism Ψ Y_1 → Y_2 which is determined by cross-sections to the local actions away from the exceptional orbits.
Observe now that Ψ(V_1i)⊂ Y_2 is a twisted F-block so that, without loss of generality, we can assume Ψ(V_1i)=V_2i, for each i=1,…,ι. Thus, by restricting Ψ we obtain a fiber-preserving homeomorphism φ Z_1 → Z_2 determined by cross-sections ρ_j∂ ( B_j ∖⋃_i=1^ιint(D_ji))→∂ Z_j, that is, such that
Ψ( ρ_1( ∂ ( B_1 ∖⋃_i=1^ιint(D_1i)))) = ρ_2( ∂ ( B_2 ∖⋃_i=1^ιint(D_2i))).
Furthermore, as the local circle action restricted to ∂ Z_j is equivalent to the “free” local circle action, we can parametrize each component of ∂ Z_j, homeomorphic to the Klein bottle, seen as the non-trivial S^1-bundle over S^1, by equivalence classes (with respect to the involution α as in (<ref>)) of points [e^iθ, e^iη], where e^iθ∈ρ_1( ∂ ( B_1 ∖⋃_i=1^ιint(D_1i))) ≅ S^1 and e^iη parametrizes the fiber direction. Hence, Ψ|_∂ Z_1 is of the form
Ψ|_∂ Z_1([e^iθ, e^iη]) = [Ψ_1(e^iθ), Ψ_2,θ(e^iη)],
where Ψ_1 and Ψ_2,θ are self-homeomorphisms of S^1 for each θ∈ [0,2π).
We now conclude by observing that Ψ can clearly be extended to a fiber-preserving homeomorphism X_1→ X_2 by parametrizing each block by classes of points [e^iθ, re^iη] where r∈ [0,1], and defining
Ψ([e^iθ, re^iη]):= [Ψ_1(e^iθ), rΨ_2,θ(e^iη)].
We now prove the existence part of the theorem. By the classification of spaces with local circle actions <cit.>, there exists a space Y with a local circle action with the invariants
{b,ε,g, (f=ι,k_1=ι), (t=0,k_2=0), {(α_j,β_j) }_j=1^n},
and an orbifold Riemannian metric of curvature bounded below.
In a similar fashion to the uniqueness part of the proof, we take out all ι twisted F-blocks from Y, obtaining a Riemannian orbifold with boundary Z with curvature bounded below and with a local circle action. We now take ι copies V_i of B(pt) with the orbifold metric induced by the flat metric on S^1× D^2. By the collar theorem, the orbifold metrics on Z and V_i are product metrics and thus, we can glue the V_i to Z by ι copies of the cylinder I× K^2 in such a way that the metric on {0}× K^2 coincides with the metric on the i-th boundary component of Z and the metric on {1}× K^2 coincides with the metric on the boundary of V_i. Moreover, this can be done so that the glued space X is a smooth Riemannian orbifold which by compactness has curvature bounded below, and in particular, X is an Alexandrov space admitting a generalized Seifert fiber space structure with the invariants
{ b,(ε,k),g, ι, { (α_i, β_i) }_i=1^n }.
As we pointed out, k=ι so that we remove the superfluous invariant k and uniquely associate (up to fiber-preserving homeomorphism) the list
{ b,ε,g, ι, { (α_i, β_i) }_i=1^n }.
to each generalized Seifert fiber space, as stated in Theorem <ref>.
§ COMPATIBILITY WITH THE DOUBLE BRANCHED COVER
The canonical double branched cover of an Alexandrov space X with sing(X)≠∅ is a fundamental construction which, in dimension 3, de-singularizes the space <cit.>. In this section, we show that the double branched cover of a generalized Seifert fiber space is a Seifert manifold and compute its invariants from the invariants of the original space, thus proving Theorem <ref>.
The following lemma follows directly from the fact that the orientation double cover of a non-orientable manifold can be characterized in the following way: Let M be a non-orientable manifold and pM̃→ M its orientation double cover. If N is an oriented manifold and π N→ M is a double cover with an orientation-reversing non-trivial deck transformation, then (N,π) is isomorphic to (M̃,p), (see for example <cit.>).
The canonical double branched cover of B(pt) is isomorphic to the projection used in the construction of B(pt), that is, S^1× D^2 → B(pt), given by (e^iθ, x)↦ [e^iθ, x].
With this observation in hand, we may prove our second main theorem.
§.§ Proof of Theorem <ref>
By Lemma <ref>, the preimage in S^1× D^2 of the fibers of the standard generalized Seifert fibration on B(pt) (with base K_1(S^1)) defines a (usual) Seifert fibration p̃ S^1× D^2 → D^2, where we have an action of ℤ_2 on the D^2-base by rotations of angle π, with quotient K_1(S^1) (we call the quotient map π̃). In other words, the following diagram commutes
S^1× D^2 →^p̃ D^2
↓_π              ↓_π̃
B(pt) →^p K_1(S^1)
At all other points away from the branching locus, the action of the canonical involution of the double branched cover is free. Therefore, slightly abusing the notation and denoting by πX̃→ X the canonical double branched cover and by p X→ B the generalized Seifert fibration, we have that there exists a 2-orbifold B̃ and a (usual) Seifert fibration p̃X̃→B̃ such that the following diagram commutes
X̃ →^p̃ B̃
↓_π      ↓_π̃
X →^p B
where π̃ is the obvious map. Thus we have shown that, for any closed Alexandrov 3-space X with a non-empty set of topologically singular points admitting a generalized Seifert fibration, its canonical double branched cover X̃ is a Seifert 3-manifold.
Let us denote the symbolic invariants associated to X=(B) as in (<ref>) and those of X̃=(B̃) by
{b̃,ε̃,g̃, ι̃, { (α̃_i, β̃_i) }_i=1^ñ}.
It is then immediately clear that, since sing(X̃)=∅, we have ι̃=0. Let us now point out that near the exceptional C-orbits, the canonical involution of X̃ acts freely and therefore the preimage in X̃ of each exceptional orbit on X consists of two copies of itself. Thus ñ=2n and we can enumerate the Seifert invariants of X̃, for example as (α̃_2k,β̃_2k)=(α_k, β_k) and (α̃_2k-1,β̃_2k-1)=(α_k, β_k) for each k=1,2,…,n. Because the local actions of ℤ_2 on B̃ that give rise to B are orientation preserving, it follows that B̃ is orientable if and only if B is orientable. Therefore, ε̃ = ε. Moreover, it is also clear from the definition of B̃ that g̃=2g. Finally, any section B→ X determines a section B̃→X̃ and vice versa, so that b̃=b.
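To illustrate the bookkeeping in Theorem <ref> on a concrete list (a purely formal, hypothetical example not taken from the results above, and subject to the admissibility constraints on the invariants recalled earlier): if X has invariants { b, ε, g, ι, { (α_i, β_i) }_i=1^2 } = { b, ε, 2, 2, { (3,1), (5,2) } }, then its canonical double branched cover X̃ carries the Seifert invariants { b, ε, 4, 0, { (3,1), (3,1), (5,2), (5,2) } }: the genus doubles, the I-fibers disappear, and each exceptional pair appears twice.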
|
http://arxiv.org/abs/2409.02432v1 | 20240904041842 | Preliminary Insights on Industry Practices for Addressing Fairness Debt | [
"Ronnie de Souza Santos",
"Luiz Fernando de Lima",
"Maria Teresa Baldassarre",
"Rodrigo Spinola"
] | cs.SE | [
"cs.SE"
] |
University of Calgary
Calgary
AB
Canada
[email protected]
CESAR
Recife
PE
Brazil
[email protected]
Università di Bari
Bari
BA
Italy
[email protected]
[email protected]
Virginia Commonwealth University
Richmond
Virginia
USA
§ ABSTRACT
Context: This study explores how software professionals identify and address biases in AI systems within the software industry, focusing on practical knowledge and real-world applications. Goal: We focused on understanding the strategies employed by practitioners to manage bias and their implications for fairness debt. Method: We employed a qualitative research method, gathering insights from industry professionals through interviews and using thematic analysis to explore the collected data. Findings: Professionals identify biases through discrepancies in model outputs, demographic inconsistencies, and training data issues. They address these biases using strategies such as enhanced data management, model adjustments, crisis management, improving team diversity, and ethical analysis. Conclusion: Our paper presents initial evidence on addressing fairness debt and lays the groundwork for developing structured guidelines to manage fairness-related issues in AI systems.
LAY ABSTRACT. This paper explores how software professionals tackle biases in AI systems. We discovered that they identify problems by checking if the AI's outputs match real-world conditions, ensuring it performs well for different groups of people, and investigating biases in the training data. To address these issues, they use various strategies like improving the data quality, regularly updating the AI to adapt to new information, and involving a diverse range of people in the development process. Our findings provide a solid starting point for creating clear guidelines to manage these biases. These guidelines will help ensure that AI systems are not only technically accurate but also fair and equitable for everyone. This research is important for making sure that as AI technology advances, it benefits all users without reinforcing existing inequalities.
Preliminary Insights on Industry Practices for Addressing Fairness Debt
Rodrigo Spinola
=======================================================================
§ INTRODUCTION
The quality of software systems depends on well-informed decisions that software development teams make about the technical aspects of the system, as well as the dynamics of their work interactions <cit.>. In this context, technical debt emerges from poor technical decisions, where shortcuts or suboptimal solutions are adopted to expedite delivery, leading to future maintenance difficulties. Technical debt is defined as the accumulated cost of additional rework caused by choosing an easier, limited solution now instead of a better approach that would take longer <cit.>. Analogous to technical debt, social debt arises from poor social decisions that negatively affect the work environment. This includes ineffective communication, lack of collaboration, and unresolved conflicts, leading to stressed relationships, decreased morale, and a breakdown in teamwork. Social debt is defined as the accrued negative impact on team dynamics and productivity resulting from suboptimal social interactions and management practices <cit.>.
Recently, the rapid growth of artificial intelligence has increased discussions around the importance of fairness in software engineering <cit.>. This scenario has introduced a new set of decisions necessary to deliver high-quality systems. Considering the characteristics of AI-powered systems and their profound societal impact, a new type of debt has emerged within software development: fairness debt <cit.>. While technical debt traditionally relates to the technical aspects of software implementation and social debt focuses on the human dynamics within development teams, the concept of fairness debt extends beyond these boundaries to encompass the broader societal implications resulting from suboptimal decisions and workarounds made in machine learning projects <cit.>.
Fairness debt arises from design, implementation, or managerial choices that prioritize short-term gains while creating a context where future adjustments become costly or impractical, resulting in significant societal consequences. This differentiates software fairness debt from other types of debt. While software fairness debt can exhibit properties similar to traditional debt, such as principal and interest, its resolution typically requires a blend of technical and social strategies aimed at addressing biases in software systems. Repaying fairness debt involves activities such as auditing algorithms for bias, improving team diversity and inclusion, refining dataset management practices, adopting fairness-aware techniques, training professionals to address and recognize bias, and ensuring transparent decision-making processes <cit.>.
Considering that discussions around fairness debt are still in their early stages and acknowledging that many strategies are being used to mitigate the effects of fairness issues caused by biased outcomes in software systems, this study presents initial findings from interviews with software professionals working on AI and machine learning projects to explore the techniques software teams are using to address fairness issues in practice. Our goal is to answer the following research question: RQ. What strategies are industry professionals currently employing to address fairness debt in AI and machine learning projects? Our study makes key contributions to industry practice by identifying techniques and strategies currently used to manage fairness debt, as well as offering recommendations for improving the integration of fairness considerations into software development processes.
§ FAIRNESS DEBT
Fairness is a non-functional requirement and a critical quality attribute for software, especially for systems driven by data processes <cit.>. Software fairness involves the ethical principle of ensuring that software systems, algorithms, and their outcomes are equitable and unbiased across diverse groups of people, regardless of characteristics such as race, gender, ethnicity, or socioeconomic status <cit.>. Imbalances in fairness can arise from various sources throughout the software development lifecycle, influenced by both internal software practices and external factors. These imbalances may originate from the technologies used, the development practices employed, or the interactions among team members.
Fairness debt is defined as a collection of design, implementation, or managerial decisions that, while providing short-term benefits, establish conditions where future adjustments become costly or impractical, resulting in significant societal impacts. It involves various types of biases, including cognitive, design, historical, model, requirement, societal, testing, and training biases, all of which contribute to fairness deficiencies within software systems. The central concern of fairness debt is its impact on society, with profound implications for individuals and communities, ranging from minor inconveniences to severe societal injustices <cit.>.
Examples of fairness debt encompass a range of negative impacts, including the exacerbation of social inequalities, legal challenges, and various forms of discrimination such as ageism, classism, racism, sexism, and xenophobia. Addressing fairness debt involves a multifaceted approach that combines technical and societal perspectives. Technically, it requires the identification and mitigation of biases within data and algorithms to prevent skewed outcomes. From a societal perspective, it involves recognizing the broader implications of these biases and ensuring that software development practices promote equity and inclusivity <cit.>. Figure 1 illustrates this concept in terms of root causes, examples, and effects in society.
§ METHOD
In this study, we interviewed <cit.> a group of professionals involved in developing various AI and machine learning-powered software solutions. Our participants held diverse roles, including designers, software engineers, data scientists, and testers, across four distinct projects. Project A featured an application using deep learning neural networks to translate sign language into Portuguese text in real-time, enabling hearing-impaired individuals to communicate via gestures translated into text. Project B focused on digital twins and prediction models in the petroleum and energy sector. Project C involved using Large Language Models (LLMs) in educational contexts, while Project D utilized computer vision for facial recognition to identify patterns within images of individuals.
To engage professionals with diverse expertise, we employed a combination of sampling strategies <cit.>. Initially, using convenience sampling, we sent invitations to professionals working on these projects, asking those interested to participate in the study and provide their preferred dates and times for interviews. We then employed snowball sampling by asking participants to suggest other professionals who might be interested in joining the study. To further refine our sample, we incorporated theoretical sampling by sending direct invitations to individuals we suspected could provide valuable insights into the problem. This approach enabled us to capture a wide range of perspectives and experiences from professionals.
§.§ Data Collection
During data collection, interviews were conducted in three rounds, starting with seven participants per round and increasing the number in the final round as responses started to reach saturation, i.e., become repetitive. Hence, between June 1 and July 5, 2024, we interviewed a total of 22 professionals. Interviews were conducted online following a pre-established guide, with rounds 1 and 2 including 7 participants each and round 3 comprising 8 participants. The interviews ranged from 23 to 42 minutes, totaling over 3 hours of recorded audio. Three participants could not participate in recorded interviews, so they completed a questionnaire with open-ended questions.
§.§ Data Analysis
For data analysis, we employed thematic analysis <cit.>, a method used to identify and analyze patterns within qualitative data. This approach is widely used in software engineering research and helps identify cross-references among different data sources. After each round of interviews, we systematically reviewed the transcripts to identify key elements that could answer our research question. This iterative process involved coding the data, categorizing the codes into themes, and refining these themes as new data emerged. We continued this process until the themes were well supported by evidence and new interviews no longer yielded new codes. This structured approach allowed us to gather detailed information, synthesize narratives from different participants, and draw conclusions to provide clear and actionable insights for practitioners.
§ FINDINGS
We interviewed 22 software professionals using convenience, snowball, and theoretical sampling methods. These professionals are actively involved in developing AI-powered systems such as deep learning neural networks, prediction models, LLMs, and computer vision for facial recognition. The participants hold diverse roles in software development, including data scientists, programmers, software QA/testers, designers, and software project managers. Additionally, recognizing the critical role of diversity in addressing bias and fairness debt, we included individuals from equity-deserving groups, encompassing non-male professionals, individuals with disabilities, non-white individuals, LGBTQIA+ individuals, and neurodivergent individuals. Understanding their experiences and perspectives is essential for discussing bias in AI development. Table <ref> summarizes the information of our group of participants, and our two main findings are presented below.
§.§ How Do Software Professionals Identify Bias in AI Projects?
Our analysis reveals that software professionals use a range of strategies to identify biases in AI systems. These strategies primarily involve searching for discrepancies between model outputs and real-world conditions, including evaluating the correctness and variability of outputs across demographic groups and assessing the presence of biased training data. Additionally, professionals recognize bias during data collection and preparation, as well as before releasing the system by evaluating algorithmic behavior through testing datasets. More specifically, we translated these strategies into the following practices:
* Mismatch Between Model Output and Reality. Biases are often detected when professionals notice that their algorithms produce results that significantly deviate from what is expected in real-world situations. This occurs when the model's output does not align with actual conditions or practical realities, indicating that the model may have difficulty generalizing from its training data to real-world applications. Such mismatches can reveal underlying issues with how the model processes and interprets data, suggesting that the model might not fully capture the complexities or nuances of the real-world scenarios it is intended to address.
* Inconsistent Performance Across Different Demographics. Biases become evident when a model demonstrates varying levels of performance or accuracy across different demographic groups, such as age, gender, ethnicity, or socio-economic status. For instance, a model might perform exceptionally well for one group but poorly for another. This inconsistency indicates that the model may be inadvertently favoring certain groups over others, which could be a sign of biased training data or unequal representation in the dataset. Identifying such discrepancies helps in understanding and addressing potential biases embedded in the model's design and training process.
* Dependence on Biased Training Data. Biases are identified when there is clear evidence that the training data used to develop the model contains inherent biases or is not representative of the entire population. For example, if the training data overrepresents certain groups while underrepresenting others, the model is likely to reflect and perpetuate these biases in its predictions and decisions. This dependence on biased training data can lead to skewed outcomes and reinforce existing inequalities, making it important to ensure that training data is diverse, representative, and free from bias to develop fair and effective models.
* Algorithmic Behavior During Testing. Biases can be detected by analyzing how the model behaves during the testing phase, where it is evaluated against different test scenarios. Professionals may observe that the model’s performance varies inconsistently across these scenarios, revealing potential biases in how the model handles different types of data or situations. Such inconsistencies during testing can highlight underlying biases in the model's design or training, such as poor handling of underrepresented cases or specific conditions. Understanding these behavioral patterns reveals the importance of a robust testing process, which is essential for identifying and addressing biases to ensure that the model performs fairly and effectively across a wide range of scenarios.
These findings show that software professionals adopt diverse approaches to uncover biases in AI systems. They emphasized that evidence of bias can emerge from various sources, not just the exploration of data. This includes discrepancies between model outputs and real-world conditions, inconsistent performance across different demographic groups, and biases observed during algorithmic testing. Examples of evidence collected from interviews to illustrate these findings are presented in Table <ref>.
§.§ How Do Software Professionals Address Bias in AI Projects?
Our analysis reveals that software professionals employ a variety of strategies to address biases identified in AI system projects post-release. These strategies encompass different aspects of managing and mitigating bias to enhance model performance and fairness. More specifically, we have categorized these strategies as follows:
* Data Enrichment. This strategy focuses on enhancing and diversifying the dataset to better reflect real-world diversity and minimize biases. It involves several key activities, including data cleaning and preprocessing to remove or correct harmful biases within the existing data, acquiring additional data sources to cover different scenarios, and conducting preliminary bias analysis to identify and address potential biases before training the model. By creating a more balanced and representative dataset, this practice helps develop fairer and more effective models.
* Model Adjustment. This strategy centers on continuously assessing and refining the model to manage emerging biases and maintain performance across different conditions. It includes regular model testing with diverse scenarios to detect performance variations and potential biases, retraining the model with new, diverse data to adapt to evolving conditions, and implementing thorough model audits to ensure transparency, accountability, and adherence to fairness standards. This practice ensures the model remains effective and equitable as new data and scenarios are encountered.
* Crisis Response. This strategy involves establishing protocols for managing and communicating about issues that arise during or after the deployment of the AI system. It includes creating clear procedures for addressing and resolving biases or other problems and involving users in the process by keeping them informed about potential issues. These measures help manage unexpected problems and maintain user trust and confidence in the system.
* Diverse Team Integration. This strategy emphasizes the importance of including individuals from varied backgrounds within the software development team to provide a broad range of perspectives. A diverse team ensures that various viewpoints and experiences are considered in the development process. This approach contributes to more comprehensive problem-solving and helps in recognizing potential biases more effectively.
* Ethical and Fairness Review. This strategy involves integrating ethical considerations and systematic bias assessments throughout the software development process. It includes incorporating ethical guidelines to ensure the model adheres to fairness and integrity principles and continuously evaluating the model for biases at various stages to address and mitigate them proactively. This practice ensures the software meets ethical standards throughout its development and deployment.
The diverse strategies used by software professionals to tackle identified biases in AI systems highlight the complexity and multifaceted nature of bias management. From data refinement and model adjustments to crisis management and ethical analysis, each strategy targets different aspects of bias mitigation. By employing this range of strategies, professionals aim to ensure that AI solutions are not only technically sound but also socially responsible. Table <ref> illustrates these findings with quotations extracted from our participants.
§ DISCUSSIONS
Fairness debt represents the long-term costs associated with design, implementation, or managerial decisions that offer short-term advantages but ultimately create conditions where future corrections are difficult or costly. Effectively addressing fairness debt requires recognizing and mitigating biases that originate from various sources, including cognitive, design, historical, model, requirement, societal, testing, and training biases. Understanding these root causes is vital for managing and alleviating fairness debt <cit.>.
Our findings indicate that while professionals in our study recognized several sources of bias, such as cognitive, testing, and training, not all the root causes identified in the literature were explicitly mentioned. Specifically, biases related to historical and requirements were less frequently highlighted. This suggests that although professionals have a broad understanding of bias sources, there remains an opportunity for deeper exploration and comprehension of all aspects of fairness debt.
Furthermore, software professionals apply a range of strategies to address biases, demonstrating a comprehensive approach to managing fairness debt. Their practices span from refining data and adjusting models to managing crises and including diverse perspectives. These strategies aim not only to improve the technical performance of AI systems but also to address the ethical and societal implications of bias.
Answering our RQ, industry professionals employ strategies that can be leveraged to contain the effects of fairness debt in AI and machine learning projects. They focus on enhancing data management through practices like refining datasets, mitigating harmful biases, and acquiring additional data to address gaps. Model management practices include regular testing with diverse scenarios, retraining to adapt to new data, and ensuring transparency through auditing. Effective crisis management involves establishing protocols for addressing and communicating issues as they arise. Additionally, professionals emphasize the importance of including diverse perspectives within development teams and integrating ethical analysis throughout the development process.
These preliminary findings are significant in laying the groundwork for developing detailed guidelines for managing fairness debt. Just as technical debt literature provides established practices for dealing with technical challenges <cit.>, these insights offer the first step toward creating structured methods for addressing fairness debt. Formulating such guidelines will be essential for ensuring that software systems are not only technically proficient but also socially responsible and equitable.
§ CONCLUSIONS
Our study provides valuable insights into how software professionals identify biases in AI systems. We found that professionals focus on detecting various types of biases, including those arising from discrepancies between model outputs and real-world conditions, inconsistencies across demographic groups, and inherent biases in training data. These findings highlight the importance of not only recognizing but also understanding the root causes of bias. By concentrating on specific indicators, such as the alignment between model outcomes and reality and demographic performance variability, professionals are better equipped to identify potential fairness issues early in the development process. This proactive identification helps in addressing biases before they affect the system’s performance and fairness.
In terms of bias management, we highlighted a range of strategies that professionals use to address and mitigate identified biases. These include technical strategies, such as refining data management practices and adjusting models regularly, as well as human-related strategies, such as integrating diverse perspectives within development teams and effectively managing crises. Looking at these results, we conclude that managing bias requires a balanced approach that combines both technical solutions and human insights to ensure that AI systems are not only accurate but also fair and equitable.
Finally, relating this study to the research on fairness debt, our findings demonstrate the need for a more structured approach to managing fairness-related issues and their effects on society. By highlighting the practical strategies currently employed in the industry, we provide the initial foundation for developing guidelines to address fairness debt. We propose that future work should focus on creating these guidelines, building on the insights gathered from this study, and exploring how these strategies can be refined and standardized to enhance both the technical and ethical dimensions of software fairness.
|
http://arxiv.org/abs/2409.02284v1 | 20240903203743 | Biochemical Prostate Cancer Recurrence Prediction: Thinking Fast & Slow | [
"Suhang You",
"Sanyukta Adap",
"Siddhesh Thakur",
"Bhakti Baheti",
"Spyridon Bakas"
] | cs.CV | [
"cs.CV",
"cs.AI",
"68T10",
"I.5.4"
] |
S.You, et al.
Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
Corresponding authors: [email protected], [email protected]
Biochemical Prostate Cancer Recurrence Prediction: Thinking Fast & Slow
Suhang You*1 Sanyukta Adap1 Siddhesh Thakur1 Bhakti Baheti1 Spyridon Bakas*1
September 2024
=================================================================================
§ ABSTRACT
Time to biochemical recurrence in prostate cancer is essential for prognostic monitoring of the progression of patients after prostatectomy, which assesses the efficacy of the surgery. In this work, we proposed to leverage multiple instance learning through a two-stage “thinking fast & slow” strategy for the time to recurrence (TTR) prediction. The first (“thinking fast”) stage finds the most relevant WSI area for biochemical recurrence and the second (“thinking slow”) stage leverages higher resolution patches to predict TTR. Our approach achieves a mean C-index (Ci) of 0.733 (σ=0.059) on our internal validation and Ci=0.603 on the LEOPARD challenge validation set. Post hoc attention visualization shows that the most attentive area contributes to the TTR prediction.
§ INTRODUCTION
In 2020, more than 10 million new male cancer cases were diagnosed, with prostate cancer (PC) ranking second to lung cancer <cit.>. Currently, PC clinical treatment relies on prostatectomy, targeting prolonged life expectancy. However, up to 40% of PC patients experience biochemical recurrence, indicated by rising prostate-specific antigen, within 10 years <cit.>.
The Gleason score <cit.> ranks PC into different risk grades based on morphological features, albeit its limitations lead to recurrence rate differences within the same grade <cit.>.
Recently, deep learning methods <cit.> have targeted biochemical recurrence prediction superior to the Gleason score, relying on the analysis of digitized histological images of tissue microarrays rather than whole slide images (WSIs).
A common solution to analyze WSIs is by partitioning them into smaller patches, notwithstanding the challenge of obtaining patch-level annotations. Along these lines, multiple instance learning (MIL) <cit.> has become prominent in computational pathology for many applications <cit.>, as it encapsulates features from individual patches of the same WSI as a bag <cit.>, reducing the patch-level labeling requirement and transforming it into a weakly-supervised learning problem with known bag/WSI-level labels. Direct risk prediction has been proposed in <cit.> by modeling a Cox layer and recently advanced with MIL in <cit.>, which groups the extracted patch-level features with K-means to improve patch sampling.
Motivated by this recent literature, here we propose a two-stage MIL regression approach to tackle the task of predicting biochemical recurrence in prostate cancer, as part of the LEarning biOchemical Prostate cAncer Recurrence from histopathology sliDes (LEOPARD) Challenge 2024 <cit.>. The proposed two-stage approach follows a “thinking fast & slow” strategy, towards improving the patch sampling/pooling and targeting inference efficiency. Specifically, the 1^st stage aims to rapidly localize the most important WSI area, and the 2^nd stage leverages these important patches and focuses on selecting the most attentive features to predict TTR.
§ MATERIAL
We developed our model using the LEOPARD challenge training set (508 cases). We used all training data for our 2^nd stage (Sec. <ref>), whereas for our 1^st stage (Sec. <ref>) we excluded 30% of the cases by setting the time threshold T=1.65: cases with e_i = 0 (no recurrence) and t_i (follow-up years) < T are removed. For both stages, the split ratios for training, validation, and testing were 64%:16%:20%.
UNI was used as the feature extractor, including its pre-trained weights <cit.>.
Our final model was submitted to the LEOPARD Challenge validation and testing phase. The validation set comprised 49 cases from `Radboud' and 50 cases from external sources. The testing set was hidden from challenge participants.
§ METHODS
Our proposed method consists of two MIL-based stages (Fig. <ref>). The 1^st stage (“thinking fast”) targets classification at a low WSI resolution (≈16mpp, ≈0.625Xmagnification), and the 2^nd stage (“thinking slow”) focuses on regression at a high resolution (≈0.25mpp, ≈40Xmagnification). This approach targets improved patch sampling/pooling and inference efficiency. CLAM <cit.> was used for pre-processing (WSI patching and excluding background).
In the 1^st stage we extracted non-overlapping patches (224×224), whereas patches in the 2^nd stage were of size 2048×2048 with 75% overlap (step size=1024). These were embedded in lower dimensional spaces through UNI and used for classification (recurrence or not) in stage 1, and for TTR regression in stage 2.
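As a concrete illustration of this tiling step, the following sketch enumerates top-left patch coordinates for both stages; the helper name and the assumption that background has already been removed (e.g., by the CLAM segmentation) are ours and do not reflect the authors' released code.

import numpy as np

def patch_grid(region_h, region_w, patch, step):
    # Top-left (y, x) coordinates of (possibly overlapping) square patches
    # covering a tissue region of size region_h x region_w pixels.
    ys = np.arange(0, max(region_h - patch, 0) + 1, step)
    xs = np.arange(0, max(region_w - patch, 0) + 1, step)
    return [(int(y), int(x)) for y in ys for x in xs]

# Stage 1: non-overlapping 224 x 224 tiles; stage 2: 2048 x 2048 tiles with step 1024.
coords_fast = patch_grid(10_000, 10_000, patch=224, step=224)
coords_slow = patch_grid(10_000, 10_000, patch=2048, step=1024)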
§.§ Thinking Fast: MIL Classification at Low Resolution
The 1^st stage intends to facilitate the rapid selection of WSI areas with the largest contribution to the TTR prediction, given a particular time threshold T. Its goal is reduced inference time and increased performance of the proposed approach. The recurrence of the WSI is defined as:
Y|_t = T = 0 if ∑_i = 1^N y_i = 0, and 1 otherwise,
where y_i is the prediction for the i^th WSI patch and Y|_t = T is the prediction for the WSI, at time threshold T. For this classification, we apply the CLAM-SB <cit.> model as the “thinking fast” classifier, which includes the patch loss and the cross-entropy for the WSI.
After prediction, the probability of recurrence for each patch is generated and the top m percent (up to 40%) of patches with the highest attention scores will be assigned 1 in a mask, and 0 otherwise. This mask intends to filter out the less relevant tissue, in preparation for the second stage MIL process.
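A minimal sketch of how such a mask could be derived from per-patch attention scores is given below; the percentile-style thresholding and the variable names are our reading of the text, not the authors' implementation.

import numpy as np

def top_m_percent_mask(attention_scores, m=20.0):
    # 1 for the top m% most attentive patches, 0 for all other patches.
    scores = np.asarray(attention_scores, dtype=float)
    k = max(1, int(np.ceil(scores.size * m / 100.0)))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(scores.size, dtype=np.int64)
    mask[keep] = 1
    return mask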
§.§ Thinking Slow: MIL Regression at High Resolution
Following the work of <cit.>, the “thinking slow” stage is the regression task of predicting the patient risk R for biochemical recurrence. This risk is inversely related to TTR. Thus, the output layer is described by a Cox Proportional Hazard <cit.> (CPH) layer, which is a single node and outputs the logarithmic risk h(S) of a WSI feature embedding S = {f_1,f_2,...,f_N}. The WSI feature embeddings are extracted from patches selected by the mask of the “thinking fast” stage, after being pooled and aggregated for regression.
In the CPH model, the risk R(S) = e^h(S) is estimated by the linear function ĥ_β(S) = β^T · S. In Cox regression, the weights β are optimized by the Cox partial likelihood, which is defined as:
L(β) = ∏_i:e_i = 1e^ĥ_β(S_i)/∑_j:R(t_i) e^ĥ_β(S_j) ,
where e_i is the event status (recurrence: 1, or not: 0) at follow up t_i (in years), and S_i is the WSI embedding. R(t_i) indicates that the patient, whose input is the WSI, is still at risk of recurrence at time t_i. The optimization of Cox partial likelihood is equivalent to minimizing the following negative log partial likelihood function through re-parameterization:
l(β) = - ∑_i:e_i = 1 ( ĥ_β(S_i) - log∑_j:R(t_i) e^ĥ_β(S_j) ).
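For concreteness, a hedged PyTorch-style sketch of this negative log partial likelihood is shown below; it assumes the risk set R(t_i) is formed within a batch from all samples with follow-up time ≥ t_i, ignores ties, and uses a function name of our own choosing rather than the authors' code.

import torch

def cox_neg_log_partial_likelihood(log_risk, time, event):
    # log_risk: (N,) predicted h_beta(S_i); time: (N,) follow-up; event: (N,) 1 = recurrence.
    event = event.float()
    order = torch.argsort(time, descending=True)      # so the risk set is a prefix of the sorted array
    log_risk, event = log_risk[order], event[order]
    log_cum = torch.logcumsumexp(log_risk, dim=0)     # log of the sum over {j : t_j >= t_i}
    ll = (log_risk - log_cum) * event                 # only observed events contribute
    return -ll.sum() / event.sum().clamp(min=1.0)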
In our design, patch embeddings {f_1,f_2,...,f_N} output their corresponding logarithmic risk {r_1,r_2,...,r_N} through the Cox layer. Then, embeddings with top k logarithmic risk are selected as {f_top_1,f_top_1,...,f_top_k} (Fig. <ref>). Among these embeddings, we define the pooling as a self-attention process <cit.>
S ≈ S_top_k = ∑_i = top_1 ^top_k a_i r_i,
a_i=exp{𝐰^⊤tanh (𝐕r_i^⊤)}/∑_j=top_1^top_kexp{𝐰^⊤tanh𝐕 r_j^⊤},
where 𝐰 and 𝐕 are learnable parameters. S_top_k is the top_k embeddings weighted by attention pooling, designed to approximate the WSIs feature embeddings S (Eq. <ref> & <ref>). tanh(·) is an element-wise hyperbolic tangent function, introducing non-linearity. We approximate the TTR using exp(-1 ×log R(S)), since the logarithmic output risk log R(S) is inversely related to TTR.
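The top-k selection and tanh attention pooling described above could be sketched as follows; the module name, feature dimension, and hidden size are illustrative assumptions, and this sketch is not the authors' released implementation.

import torch
import torch.nn as nn

class TopKAttentionCoxHead(nn.Module):
    # Per-patch Cox layer, top-k selection by log-risk, and tanh attention pooling.
    def __init__(self, dim=1024, hidden=256, k=10):
        super().__init__()
        self.cox = nn.Linear(dim, 1, bias=False)   # per-patch log-risk r_i
        self.V = nn.Linear(dim, hidden, bias=False)
        self.w = nn.Linear(hidden, 1, bias=False)
        self.k = k

    def forward(self, feats):                      # feats: (N, dim) patch embeddings of one WSI
        r = self.cox(feats).squeeze(-1)            # (N,)
        k = min(self.k, feats.shape[0])
        top_r, idx = torch.topk(r, k)              # k patches with the highest log-risk
        a = torch.softmax(self.w(torch.tanh(self.V(feats[idx]))).squeeze(-1), dim=0)
        return (a * top_r).sum()                   # WSI-level log-risk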
§.§ Model Training, Evaluation & Selection
We used the Adam optimizer with a learning rate of 1× 10^-4. The weight decay was 1× 10^-5 and the dropout rate was 0.25. Models were trained and evaluated on NVIDIA A100 GPUs during model selection. Our source code is based on the CLAM platform and the tiffslide library.
To select the best trained “thinking fast” model, we set up a 5-fold cross validation with a fixed test set and select the best fold as the model. The metric is the AUC of prediction on biochemical recurrence. For the “thinking slow” model, we use another 5x5-fold nested cross-validation without a fixed test set. In the outer fold, the hold-out set is used for validation of each inner fold. In each inner fold, the hold-out set is used to select the model for validation on the outer fold hold-out set, where the best inner hold-out validation loss is the criterion during training. The metric we used for model selection is the censored concordance index <cit.> (Ci) of the outer hold-out set. In our setting, 25 Ci are calculated for one parameter setting (e.g., top_k = 10 and m = 20%). In the experiments, we evaluated the model with combinations of top_k = {5, 10, 15, 20, 30, 40, 50} and m = {5%,10%,15%,20%,25%,30%,35%,40%}. We select the model parameters by comparing the best mean and standard deviation (σ) of Ci.
For the model submission to the LEOPARD challenge, we randomly split the data into a 10-fold cross-validation without a testing set and used the best model weights from each fold. The final prediction of TTR is calculated by averaging the predicted logarithmic risk from each set of model weights. We select model weights in each fold, based on the best hold-out validation loss, after 40 epochs. This 40-epoch threshold is set by calculating the zero-crossing epoch of the second derivative of the training loss curve to avoid under-training.
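A minimal sketch of this fold-averaging step is given below, assuming each fold has already produced per-slide logarithmic risks; the function name and the placeholder input are ours.

import numpy as np

def ensemble_log_risk_to_ttr(log_risks_per_fold):
    # Average the per-fold logarithmic risks, then map to a TTR-like score via exp(-risk).
    mean_log_risk = np.mean(np.asarray(log_risks_per_fold, dtype=float), axis=0)
    return np.exp(-mean_log_risk)

# e.g., predictions of 10 folds for 5 slides
ttr_scores = ensemble_log_risk_to_ttr(np.zeros((10, 5)))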
§ RESULTS
For the internal data splits (Sec. <ref>), we selected the best performance with parameter settings top_k = 10 and m = 20%. Our proposed approach yielded a mean C-index of 0.733 (σ=0.059) on our test data (i.e., the outer hold-out set), indicating superior performance compared to MAD-MIL <cit.> (0.704 ± 0.058) and AC-MIL <cit.> (0.714 ± 0.056) with regression modifications.
Our inference pipeline container, submitted in the LEOPARD validation phase, yielded a C-index (Ci) of 0.603 (Ci_Radboud=0.616, Ci_external=0.589).
As shown in Fig. <ref> (A), we compare the results of different combinations of top_k over m percentage values (x axis). The upper plots show mean Ci (y axis) of the outer hold-out set, while the lower plots show their corresponding standard deviation. The best overall result was observed for top_k = 30 and m = 10%. We also observed that using a larger area of the WSI for regression does not always achieve better prediction, which in turn proves that a more relevant area of the WSI provides more accurate features for TTR regression and increases inference efficiency. This phenomenon can also be observed for the other two ablation methods, MAD-MIL and AC-MIL (Fig. <ref>(B)), where the selected method demonstrates a better regression prediction across almost all m parameters when fixing other parameters.
§ INTERPRETABILITY
In the first stage, the patch selection criterion is the highest attention score of the attention map (Fig. <ref>), which has served as an interpretability visualization in previous classification works and shows the most attentive area for the stage-one classification. As shown in Fig. <ref>, in our second stage the attention scores are sparsely distributed over the WSI, since only a small portion of patches (10%) is selected. The color-highlighted areas likewise indicate the most attentive regions for TTR regression. In general, our method leverages the attention mechanism for interpretability, but further clinical interpretability remains to be evaluated by clinicians/pathologists.
§ DISCUSSION
In this study, we proposed to leverage MIL through a two-stage “thinking fast & slow” strategy for TTR regression. The first (“thinking fast”) stage finds the area of the WSI most relevant to biochemical recurrence, and the second (“thinking slow”) stage leverages higher-resolution patches to predict the TTR. In the ablation results, we showed that focusing on a more relevant area of the WSI improves both the prediction and the inference efficiency. We also showed that the regression is driven by the attended areas, which contain cancerous tissues. A limitation of our method stems from the CPH model, which predicts relative risk rather than the actual TTR. In the future, we will extend our work to other tumor types.
§ CODE LINK
The source code of our inference pipeline is available at <https://github.com/yousuhang/IU-ComPath-LeoPard>.
|
http://arxiv.org/abs/2409.02379v1 | 20240904015903 | Inertial focusing of spherical capsule in pulsatile channel flows | [
"Naoki Takeishi",
"Kenta Ishimoto",
"Naoto Yokoyama",
"Marco Edoardo Rosti"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Inertial focusing of spherical capsule in pulsatile channel flows
Naoki Takeishi, Kenta Ishimoto, Naoto Yokoyama, Marco Edoardo Rosti
=============================================================================
§ ABSTRACT
We present a numerical analysis of the lateral movement of a spherical capsule in steady and pulsatile channel flows of a Newtonian fluid, for a wide range of oscillatory frequencies. Each capsule membrane, satisfying a strain-hardening characteristic, is simulated for different Reynolds numbers Re and capillary numbers Ca.
Our numerical results showed that capsules with high Ca exhibit axial focusing at finite Re similarly to the inertialess case.
We observe that the speed of the axial focusing can be substantially accelerated by making the driving pressure gradient oscillate in time.
We also confirm the existence of an optimal frequency which maximizes the speed of axial focusing, and which remains the same as that found in the absence of inertia.
For relatively low Ca, on the other hand, the capsule exhibits off-centre focusing, resulting in various equilibrium radial positions depending on Re.
Our numerical results further clarify the existence of a specific Re for which the effect of the flow pulsation on the equilibrium radial position is maximum.
The roles of channel size and viscosity ratio on the lateral movements of the capsule are also addressed.
Keywords: capsule, hyperelastic membrane, inertial focusing, off-centre focusing, pulsatile channel flow, computational biomechanics.
§ INTRODUCTION
In a pipe flow at a finite channel (or particle) Reynolds number Re (Re_p), a rigid spherical particle exhibits migration perpendicular to the flow direction, originally reported by <cit.>,
the so-called “inertial focusing” or “tubular pinch effect”,
where the particles equilibrate at a distance from the channel centreline as a consequence of the force balance between the shear-induced and wall-induced lift forces.
The phenomenon is of fundamental importance in microfluidic techniques such as label-free cell alignment, sorting, and separation techniques <cit.>.
Although the techniques allow us to reduce the complexity and costs of clinical applications by using small amount of blood samples,
an archetypal inertial focusing system requires steady laminar flow over a long channel distance L_f, which can be estimated as L_f = π H/(Re_p f_l), where H is the dimension of the channel (or its hydraulic diameter) and f_l is a non-dimensional lift coefficient <cit.>.
So far, various kind of geometries have been proposed to achieve the required distance for inertial focusing in a compact space, e.g., sinusoidal, spiral, and hybrid channels <cit.>.
Without increasing Re_p, the recent experimental study by <cit.> achieved inertial focusing of 0.5-μm-size particle (Re_p ≈ 0.005) in short channels by using oscillatory channel flows.
Since the oscillatory flows allow a suspended particle to increase its total travel distance without net displacement along the flow direction,
utilizing oscillatory flow can be an alternative and practical strategy for inertial focusing in microfluidic devices.
Recently, <cit.> experimentally investigated the effects of the Womersley number (Wo) on inertial focusing in planar pulsatile flows,
and evaluated the lateral migration (or off-centre focusing) speed on a small and weakly inertial particle for different oscillatory frequencies.
They concluded that inertial focusing is achieved in only a fraction of the channel length (1 to 10%) compared to what would be required in a steady flow <cit.>.
While a number of studies have analysed the off-centre focusing of rigid spherical particles under steady flow by a variety of approaches,
such as analytical calculations <cit.>,
numerical simulations <cit.>,
and experimental observations <cit.>,
the inertial focusing of deformable particles such as biological cells,
consisting of an internal fluid enclosed by a thin membrane,
has not yet been fully described, especially under unsteady flows.
Due to their deformability,
the problem of inertial focusing of deformable particles is more complex than with rigid spherical particles, as originally reported by <cit.>.
It is now well known that a deformable particle at low Re migrates toward the channel axis under steady laminar flow <cit.>.
Hereafter, we call this phenomenon “axial focusing”.
Recent numerical study showed that, in almost inertialess condition, the axial focusing of a deformable spherical capsule can be accelerated by the flow pulsation at a specific frequency <cit.>.
For finite Re (> 1), however, it is still uncertain whether the flow pulsation enhances the off-centre focusing or impedes it (i.e., promotes axial focusing).
Therefore, the objective of this study is to clarify whether a capsule lateral movement at finite Re in a pulsatile channel flow can be altered by its deformability.
At least for steady channel flows,
inertial focusing of deformable capsules including biological cells have been investigated in recent years both by means of experimental observations <cit.> and numerical simulations <cit.>.
For instance, <cit.> experimentally investigated the inertial focusing of various cell types (including red blood cells, leukocytes, and cancer cells such as a cervical carcinoma cell line, breast carcinoma cell line, and osteosarcoma cell line) with a cell-to-channel size ratio 0.1 ≤ d_0/W ≤ 0.8,
using a rectangular channel with a high aspect ratio of W/H ≈ 0.5,
where d_0, W and H are the cell equilibrium diameter, channel width, and height, respectively.
They showed that the cells can be separated according to their size and deformability <cit.>.
The experimental results can be qualitatively described using a spherical capsule <cit.> or droplet model <cit.>.
In more recent experiments by <cit.>, the authors investigated the effect of Re (1 < Re < 40) and capillary number Ca – ratio between the fluid viscous force and the membrane elastic force – (0.1 < Ca < 1) on the lateral equilibrium of bubbles in rectangular microchannels and different bubble-to-channel size ratios with 0.48 ≤ d_0/W ≤ 0.84.
The equilibrium position of such soft particles results from the competition between Re and Ca,
because high Re induce the off-centre focusing,
while high Ca, i.e., high deformability, allows axial focusing.
However, notwithstanding these recent advancements, a comprehensive understanding of the effect of these two fundamental parameters on the inertial focusing has not yet been established.
Numerical analysis more clearly showed that the “deformation-induced lift force” becomes stronger as the particle deformation is increased <cit.>.
Although a number of numerical analyses regarding inertial focusing have been reported in recent years mostly for spherical particles <cit.>,
the equilibrium positions of soft particles is still debated owing to the complexity of the phenomenon.
<cit.> showed that the equilibrium position in a cross section of rectangular microchannel with d_0/H = 0.2 shifts toward the wall as Re increases from 1 to 100.
<cit.> also performed numerical simulations of spherical capsules in a square channel for 0.1 ≤ d_0/H ≤ 0.4 and 5 ≤ Re ≤ 100 without viscosity contrast,
and showed that the equilibrium position was nearly independent of Re.
In a more recent numerical analysis by <cit.>,
simulations of a spherical hyperelastic particle in a circular channel with d_0/D = 0.2 were performed with 100 ≤ Re ≤ 400 and Weber number (We) with 0.125 ≤ We ≤ 4.0,
the latter of which is the ratio of the inertial effect to the elastic effect acting on the particles.
Their numerical results showed that regardless of Re,
the final equilibrium position of a deformable particle is the centreline,
and harder particles (i.e., with lower We) tended to rapidly migrate toward the channel centre <cit.>.
Despite these efforts,
the inertial focusing of capsules subjected to pulsatile flow at finite inertia cannot be estimated based on these achievements.
Aiming for the precise description of the inertial focusing of spherical capsules in pulsatile channel flows,
we thus perform numerical simulations of individual capsules with a major diameter of d_0 = 2a_0 = 8 μm in a cylindrical microchannel with D = 2R = 20–50 μm (i.e., R/a_0 = 2.5–6.25) for a wide range of oscillatory frequencies.
Each capsule membrane, following the Skalak constitutive (SK) law <cit.>, is simulated for different Re, Ca, and size ratios R/a_0.
Since this problem requires heavy computational resources,
we resort to GPU computing,
using the lattice-Boltzmann method (LBM) for the inner and outer fluids and the finite element method (FEM) to describe the deformation of the capsule membrane.
This model has been successfully applied in the past for the analysis of the capsule flow in circular microchannels <cit.>.
The remainder of this paper is organised as follows.
Section 2 gives the problem statement and numerical methods,
Section 3 presents the numerical results for single spherical capsule. Finally, a summary of the main conclusions is reported in Section 4.
A description of numerical verifications is presented in the Appendix.
§ PROBLEM STATEMENT
§.§ Flow and capsule models and setup
We consider the motion of an initially spherical capsule with diameter d_0 (= 2a_0 = 8 μm) flowing in a circular channel diameter D (= 2R = 20–50 μm),
with a resolution of 32 fluid lattices per capsule diameter d_0.
The channel length is set to be 20a_0, following previous numerical study <cit.>.
Although we have investigated in the past the effect of the channel length L and the mesh resolutions on the trajectory of the capsule centroid (see Fig. 7 in <cit.>),
we further assess the effect of this length on the lateral movement of a capsule in Appendix <ref> (figure <ref>a).
The capsule consists of a Newtonian fluid enclosed by a thin elastic membrane, sketched in figure <ref>.
The membrane is modeled as an isotropic and hyperelastic material following the SK law <cit.>, in which the strain energy w_SK and principal tensions in the membrane τ_1 and τ_2 (with τ_1 ≥τ_2) are given by
w_SK/G_s = 1/4( I_1^2 + 2I_1 - 2I_2 + C I_2^2),
and
τ_i/G_s = η_i/η_j[ η_i^2 - 1 + C ( η_i^2 η_j^2 - 1 ) ], for (i, j) = (1, 2) or (2, 1).
Here, w_SK is the strain energy density function,
G_s is the membrane shear elastic modulus,
C is a coefficient representing the area incompressibility,
I_1 (= η_1^2 + η_2^2 - 2) and I_2 (= η_1^2η_2^2 - 1) are the invariants of the strain tensor,
with η_1 and η_2 being the principal extension ratios.
In the SK law (<ref>),
the area dilation modulus is K_s = G_s (1 + 2C).
In this study, we set C = 10^2 <cit.>,
which describes an almost incompressible membrane.
Bending resistance is also considered <cit.>,
with a bending modulus k_b = 5.0 × 10^-19 J <cit.>.
These values have been shown to successfully reproduce the deformation of red blood cells in shear flow <cit.> and the thickness of cell-depleted peripheral layer in circular channels (see Figure A.1 in <cit.>).
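For illustration, the principal tensions of the SK law above can be evaluated directly from the principal stretches; the short sketch below normalises the tensions by G_s (i.e., G_s = 1), uses C = 100 as in the text, and the example stretch values are arbitrary choices of ours.

def sk_principal_tensions(eta1, eta2, Gs=1.0, C=100.0):
    # Principal tensions tau_1, tau_2 of the SK law from the principal stretches eta_1, eta_2.
    def tau(ei, ej):
        return Gs * (ei / ej) * (ei**2 - 1.0 + C * (ei**2 * ej**2 - 1.0))
    return tau(eta1, eta2), tau(eta2, eta1)

# Example: a small shear-like deformation that nearly preserves the local area
t1, t2 = sk_principal_tensions(1.05, 0.96)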
Neglecting inertial effects on the membrane deformation,
the static local equilibrium equation of the membrane is given by
∇_s ·τ̌ + q̌ = 0̌,
where ∇_s (= ( Ǐ - ňň) ·∇) is the surface gradient operator,
ň is the unit normal outward vector in the deformed state,
q̌ is the load on the membrane,
and τ̌ is the in-plane elastic tension that is obtained using the SK law (equation <ref>).
The fluids are modeled with the incompressible Navier–Stokes equations for the fluid velocity v̌:
ρ( ∂v̌/∂ t + v̌·∇v̌)
= ∇·σ̌^f + ρf̌,
∇·v̌ = 0,
with
σ̌^f = -pǏ + μ( ∇v̌ + ∇v̌^T ),
where σ̌^f is the total stress tensor of the flow,
p is the pressure,
ρ is the fluid density,
f̌ is the body force,
and μ is the viscosity of the liquid,
expressed using a volume fraction of the inner fluid I (0 ≤ I ≤ 1) as:
μ = { 1 + ( λ - 1 ) I}μ_0,
where λ (= μ_1/μ_0) is the viscosity ratio,
μ_0 is the external fluid viscosity, and μ_1 is the internal fluid viscosity.
The dynamic condition coupling the different phases requires the load q̌ to be equal to the traction jump ( σ̌^f_out - σ̌^f_in) across the membrane:
q̌ = ( σ̌^f_out - σ̌^f_in) ·ň,
where the subscripts `out' and `in' represent the outer and internal regions of the capsule, respectively.
The flow in the channel is sustained by a uniform pressure gradient ∂ p_0/∂ z (= ∂_z p_0),
which can be related to the maximum fluid velocity in the channel by ∂_z p_0 = -4 μ_0 V_max^∞/R^2.
The pulsation is given by a superimposed sinusoidal function,
such that the total pressure gradient is
∂_z p (t) = ∂_z p_0 + ∂_z p_a sin( 2 π f t ).
The problem is governed by six main non-dimensional numbers, including i) the Reynolds number Re and ii) the capillary number Ca defined as:
Re = ρ D V_max^∞/μ_0,
Ca = μ_0 γ̇_m a_0/G_s = μ_0 V_max^∞/G_sa_0/4 R,
where V_max^∞ (= 2V_m^∞) is the maximum fluid velocity in the absence of any cells,
V_m^∞ is the mean fluid velocity,
and γ̇_m (= V_m^∞/D) is the mean shear rate.
Note that, increasing Re under constant Ca corresponds to increasing G_s, namely, a harder capsule.
Furthermore, we have iii) the viscosity ratio λ, iv) the size ratio R/a_0, v) the non-dimensional pulsation frequency f^∗ = f/γ̇_m, and vi) the non-dimensional pulsation amplitude ∂_z p_a^∗ = ∂_z p_a/∂_z p_0. Given the focus of this study, we primarily investigate the effects of Re, R/a_0, and f^∗.
Representative rigid and largely deformable capsules are considered with Ca = 0.05 and Ca = 1.2, respectively.
When presenting the results, we will initially focus on the analysis of lateral movements of the capsule in effectively inertialess condition (Re = 0.2) for R/a_0 = 2.5, and later consider variations of the size ratio R/a_0, viscosity ratio λ, Reynolds number Re (> 1), and Ca.
We confirmed that the flow at Re = 0.2 well approximates an almost inertialess flow for single- <cit.> and multi-cellular flow <cit.>.
Unless otherwise specified, we show the results obtained with ∂_z p_a^∗ = 2 and λ = 1.
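As a sanity check on the parameter choices, the governing non-dimensional numbers can be evaluated from dimensional inputs as sketched below; the sample values for density, viscosity, and G_s are our own water-like/red-blood-cell-like assumptions and are not taken from the paper.

def nondimensional_numbers(rho, mu0, Gs, R, a0, Vmax):
    # Re, Ca, and the mean shear rate from the definitions in this section.
    D = 2.0 * R
    Vm = 0.5 * Vmax               # mean velocity of the undisturbed Poiseuille flow
    gamma_m = Vm / D              # mean shear rate
    Re = rho * D * Vmax / mu0
    Ca = (mu0 * Vmax / Gs) * a0 / (4.0 * R)
    return Re, Ca, gamma_m

# Illustrative (assumed) values: water-like plasma, R = 10 um, a0 = 4 um, Gs ~ 4 uN/m
Re, Ca, gm = nondimensional_numbers(rho=1.0e3, mu0=1.0e-3, Gs=4.0e-6, R=10.0e-6, a0=4.0e-6, Vmax=1.0e-2)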
§.§ Numerical simulation
The governing equations for the fluid are discretised by the LBM based on the D3Q19 model <cit.>.
We track the Lagrangian points of the membrane material points x̌_m (X̌_m,t) over time,
where X̌_m is a material point on the membrane in the reference state.
Based on the virtual work principle,
the above strong-form equation (<ref>) can be rewritten in weak form as
∫_S û̌·q̌ dS = ∫_S ϵ̌̂̌ : τ̌ dS,
where S is the surface area of the capsule membrane, and û̌ and ϵ̌̂̌ = ( ∇_s û̌ + ∇_s û̌^T )/2 are the virtual displacement and virtual strain, respectively.
The FEM is used to solve equation (<ref>) and obtain the load q̌ acting on the membrane <cit.>.
The velocity at the membrane node is obtained by interpolating the velocities at the fluid node using the immersed boundary method <cit.>.
The membrane node is updated by Lagrangian tracking with the no-slip condition.
The explicit fourth-order Runge–Kutta method is used for the time integration.
The volume-of-fluid method <cit.> and front-tracking method <cit.> are employed to update the viscosity in the fluid lattices.
A volume constraint is implemented to counteract the accumulation of small errors in the volume of the individual cells <cit.>:
in our simulation, the relative volume error is always maintained lower than 1.0 × 10^-3%,
as tested and validated in our previous study of cell flow in circular channels <cit.>.
All procedures were fully implemented on a GPU to accelerate the numerical simulation.
More precise explanations for numerical simulations including membrane mechanics are provided in our previous works <cit.>.
Periodic boundary conditions are imposed in the flow direction (z-direction).
No-slip conditions are employed for the walls (radial direction).
We set the mesh size of the LBM for the fluid solution to 250 nm,
and that of the finite elements describing the membrane to approximately 250 nm (an unstructured mesh with 5120 elements was used for the FEM).
This resolution was shown to successfully represent single- and multi-cellular dynamics <cit.>.
§.§ Analysis of capsule deformation
Later, we investigate the in-plane principal tension T_i (with T_1 ≥ T_2) and the isotropic tension T_iso in the membrane of the capsule.
In the case of a two-dimensional isotropic elastic membrane,
the isotropic membrane tension can be calculated by T_iso = (T_1 + T_2)/2 for the deformed capsule.
The averaged value of T_iso is then calculated as
⟨ T_iso⟩ = 1/ST∫_T∫_S T_iso (x̌_m, t) dS dt,
where T is the period of the capsule motion.
Hereafter, ⟨·⟩ denotes a spatial-temporal average.
Time average starts after the trajectory has finished the initial transient dynamics,
which differs for each case.
For instance, at finite Re conditions, a quasi-steady state is usually attained around the non-dimensional time of γ̇_m t = 200,
and we start accumulating the statistics from γ̇_m t ≥ 400 to fully cancel the influence of the initial conditions.
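A minimal sketch of how the spatial–temporal average could be computed from discrete membrane-element output is given below; the array layout, the area weighting of the surface integral, and the cut-off γ̇_m t ≥ 400 for discarding the transient are our assumptions about the post-processing.

import numpy as np

def averaged_isotropic_tension(T1, T2, areas, t_nd, t_start=400.0):
    # <T_iso>: area-weighted surface average of (T1 + T2)/2, then time average for t* >= t_start.
    # T1, T2, areas: arrays of shape (n_steps, n_elements); t_nd: (n_steps,) values of gamma_m * t.
    T_iso = 0.5 * (T1 + T2)
    surface_avg = (T_iso * areas).sum(axis=1) / areas.sum(axis=1)
    return surface_avg[t_nd >= t_start].mean()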
§ RESULTS
§.§ Axial focusing of the capsule under steady channel flow (Re < 1)
We first investigate the axial focusing of a capsule under steady flow,
which can be assumed to be effectively inertialess (Re = 0.2).
Figure <ref>(a) shows side views of the capsule during its axial focusing in channel of size R/a_0 = 2.5 for different Ca (= 0.05, 0.2, and 1.2).
The capsule, initially placed at r^∗_c = r_c/R = 0.55,
migrates towards the channel centreline (i.e., the capsule centroid tends to r_c = 0) after the flow onset while deforming,
finally reaching its equilibrium position at the centreline, where it attains an axisymmetric shape.
Although the magnitude of deformation during axial focusing depends on Ca,
this process is commonly observed for every Ca.
The time history of the radial position of the capsule centroid r_c is shown in figure <ref>(b).
The results clearly show that the speed of axial focusing grows with Ca.
Interestingly, all trajectories are well fitted by the following empirical expression:
r_c^∗ = βexp(-α t^∗),
where t^∗ (= γ̇_mt) is the non-dimensional time,
and α (> 0) and β are two coefficients that can be found by a least-squares fitting to the plot.
Fits are performed using data between the initial (r_c/R = 0.55) and final states (Δ x_LBM/R ≤ 0.01 for R/a_0 = 2.5),
the latter defined as the time when the capsule centroid is within one mesh size (Δ x_LBM) of the channel axis.
Performing time differentiation of equation (<ref>),
the non-dimensional velocity of the capsule centroid ṙ_c^∗ can be estimated as:
ṙ_c^∗ = -α r_c^∗.
This linear relation (<ref>) may be understood in terms of a shear-induced lift force proportional to the local shear strength. A more detailed description of the relationship between the coefficient α and the lift force on the capsule is provided in Appendix <ref>.
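The least-squares fit of the exponential law above can be reproduced, for instance, with SciPy; the sketch below is illustrative only, and the synthetic trajectory is used merely to check that α and β are recovered.

import numpy as np
from scipy.optimize import curve_fit

def fit_axial_focusing(t_nd, rc_nd):
    # Least-squares fit of r_c* = beta * exp(-alpha * t*) to the trajectory of the capsule centroid.
    model = lambda t, alpha, beta: beta * np.exp(-alpha * t)
    (alpha, beta), _ = curve_fit(model, t_nd, rc_nd, p0=(1.0e-2, rc_nd[0]))
    return alpha, beta

# Synthetic check: recover alpha = 0.01 and beta = 0.55 from a noiseless trajectory
t = np.linspace(0.0, 300.0, 601)
alpha, beta = fit_axial_focusing(t, 0.55 * np.exp(-0.01 * t))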
Figure <ref>(a) shows the coefficient α as a function of Ca.
As expected from figure <ref>(b),
the value of α increases with Ca.
Since the capsule deformability is also affected by the viscosity ratio λ,
its influence on α is also investigated in figure <ref>(b).
At a fixed Ca (= 1.2),
the value of α decreases with λ.
To further prove that α is independent of the initial radial position of the capsule centroid,
additional numerical simulations are performed with a larger channel (R/a_0 = 5) for different r_0/R.
Note that a case with larger channel for constant Re denotes smaller V_max^∞, resulting in smaller G_s (i.e., softer capsule) for constant Ca.
Figure <ref>(a) is one of the additional runs at Ca = 0.2,
where the capsule is initially placed at r_0/R = 0.75.
Figure <ref>(b) is the time history of the radial position of the capsule centroid r_c for different initial positions r_0/R.
We observe that the exponential fitting is still applicable for these runs, with the coefficient α reported in figure <ref>(c).
These results provide a confirmation that α is indeed independent of the initial radial position r_0/R.
Furthermore, the fitting provided in equation (<ref>) is applicable even for a different constitutive law.
Discussion of these results for capsule described by the neo-Hookean model,
which features strain-softening,
is reported in Appendix <ref> (see also figure <ref>).
§.§ Capsule behaviour under pulsatile channel flow
Next, we investigate inertial focusing of capsules at finite Re,
and investigate whether the equilibrium radial position of the capsule can be altered by pulsations of the flow.
Two representative behaviours of the capsule at low Ca (= 0.05) and high Ca (= 1.2) are shown in figure <ref>(a),
which are obtained with f^∗ = 0.02 and Re = 10.
The simulations are started from an off-centre radial position r_0/R = 0.4.
At the end of the migration, the least deformable capsule (Ca = 0.05) exhibits an ellipsoidal shape with an off-centred position (figure <ref>a, left), while the most deformable one (Ca = 1.2) exhibits the typical parachute shape at the channel centreline (figure <ref>a, right).
Detailed trajectories of these capsule centroids r_c/R are shown in figure <ref>(b),
where the non-dimensional oscillatory pressure gradient ∂_z p^∗ (t^∗) (= 1 + 2 sin(2 π f^∗ t^∗)) is also displayed.
The least deformable capsule (Ca = 0.05) fluctuates around the off-centre position r_c/R (≈ 0.2),
and the waveform of r_c/R lags behind ∂_z p^∗ (t^∗).
The capsule with large Ca (= 1.2), on the other hand, immediately exhibits axial focusing, reaching the centreline within one flow period (figure <ref>b).
Therefore, axial and off-centre focusing strongly depend on Ca.
Figure <ref>(c) shows the time history of the isotropic tension T_iso.
The major waveforms of T_iso are synchronised with ∂_z p^∗ in both Ca = 0.05 and Ca = 1.2, thus indicating that the membrane tension spontaneously responds to the background fluid flow.
The Taylor parameter, a classical index of deformation, is described in Appendix <ref> (see figure <ref>).
To clarify whether fast axial focusing depends on the phase of oscillation or not,
an antiphase pulsation (i.e., ∂_z p_a^∗ = -2) is given by ∂_z p^∗ (t^∗) = 1 - 2 sin(2 π f^∗ t^∗).
Time histories of the capsule centroid r_c/R and membrane tension T_iso under such condition are shown in figures <ref>(d) and <ref>(e),
where the case at the same Ca = 1.2 from figures <ref>(b) and <ref>(c) is also superposed for comparison, together with the solution for steady flow.
Here, we define the focusing times T and T_st needed by the capsule centroid to reach the centreline (within one fluid mesh, corresponding to ∼6% of its radius, to account for the oscillations in the capsule trajectory) under pulsatile and steady flows, respectively.
Although the focusing time is decreased by almost 50% with prograde pulsation (∂_z p_a^∗ = 2) compared to that in the steady flow,
the time with antiphase pulsation is decreased by only 1%.
Such a small acceleration under antiphase pulsation arises from the relatively small deformation during the early periods (figure <ref>e).
We thus understand that fast axial focusing relies on the large membrane tension after flow onset,
and our numerical results show even faster axial focusing due to the flow pulsation.
Figure <ref>(a) is the time history of the distance travelled along the flow direction (z-axis) r_z/D.
The distance required to complete the axial focusing (Ca = 1.2) under pulsatile flow increases compared to that in steady flow because the capsule speed along the flow direction increases when the flow pulsation is added;
the circular dots mark the points at which the capsule has completed the axial focusing.
The capsule speed along the flow direction at Ca = 0.05, on the other hand, decreases with the pulsation of the flow.
Figure <ref>(b) shows again the radial position of capsule centroids r_c/R as a function of z/D.
The capsule trajectories obtained for Ca = 1.2 remain almost the same,
while the capsule trajectory for Ca = 0.05 reaches equilibrium within a shorter traveled distance with pulsation.
Following the classification by <cit.>,
our problem is oscillatory dominated,
since the oscillation amplitude is one order of magnitude greater than the steady flow component (i.e., O(sω/u̅^') ∼ 10^1,
where s is the centreline displacement amplitude and u̅^' is the centreline velocity in a steady flow component).
Notwithstanding this, the oscillatory motion was not enough to enhance the inertial focusing,
in terms of the channel length needed,
because the capsule deformation impedes it,
consistently with a previous numerical study (see figure 4a in <cit.>).
We now focus on axial focusing (i.e., cases of relatively high Ca) at finite Re.
As discussed in figure <ref>(d),
a previous study showed that the speed of the axial focusing can be accelerated by the flow pulsation <cit.>.
An acceleration indicator of the axial focusing [1 - T/T_st] at Re = 10 is summarised in figure <ref>,
as a function of f^∗ (= f/γ̇_m),
where the results at Re = 0.2 <cit.> are also superposed.
Although the initial radial position of the capsule r_0/R is slightly different between the two Re,
the focusing time is commonly minimised at a specific frequency in both cases.
Note that the values of the dimensional frequency depend on the estimation of G_s,
which varies with the membrane constitutive laws and which is also sensitive to different experimental methodologies, e.g., atomic force microscopy, micropipette aspiration, etc. <cit.>;
the estimation of the dimensional frequency is therefore not trivial and left for future investigation.
We hereby conclude that capsules with large Ca exhibit axial focusing even at finite Re, and that their equilibrium radial positions are not altered by the flow pulsation.
§.§ Effect of Reynolds number on capsule behaviour under pulsatile channel flow
We now focus on the inertial focusing of capsules at relatively small Ca, and, unless otherwise specified, we show the results obtained for Ca = 0.05.
Figure <ref>(a) shows representative time histories of the capsule centroid during inertial (or off-centre) focusing at Re = 30 and f^∗ = 0.02 for different initial positions of the capsule r_0/R (= 0.1 and 0.4),
where the insets show snapshots of the lateral view of the deformed capsule at various times γ̇_m t (= 60, 75, and 90).
The results clearly show that the equilibrium radial position of the capsule is independent of its initial position r_0
(except when r_0 = 0, for which the capsule remains at the centreline).
Hereafter, each run case is started from a slightly off-centre radial position r_0/R = 0.4 (R/a_0 = 2.5).
For the trajectory at early times (γ̇_m t ≤ 20),
fitting by equation (<ref>) still works.
At quasi-steady state (γ̇_m t > 20),
the capsule centroid fluctuates around an off-centre position r_c/R (≈ 0.3).
Thus, the trajectory of the capsule during inertial focusing can be expressed as
r_c^∗ = βexp(-α t^∗) for t^∗≤ t_ax^∗, and r_c^∗ = r_e^∗ + Δ r_osci^∗ for t^∗ > t_ax^∗,
where t_ax^∗ is the time period during axial focusing,
r^∗_e is the equilibrium radial position of the capsule centroid due to inertia,
and Δ r_osci^∗ is a perturbation due to the oscillatory flow.
Here, the equilibrium radial position is measured numerically by time averaging the radial position of the capsule centroid as r^∗_e = ⟨ r_c^∗⟩ = ( 1/T) ∫_t^∗^t^∗ + T r_c (t^') dt^'.
Figure <ref>(b) shows the time histories of the capsule centroid r_c/R at f^∗ = 0.02 for different Re,
together with those with steady flow.
We observe that the radial positions are greater than those at steady flow for all Re, due to the larger values achieved by the pressure gradient during the pulsation.
However, the actual contribution of the oscillatory flow to the inertial focusing depends on Re.
For instance, for Re ≤ 7, the capsule exhibits axial focusing at steady flow,
but a pulsatile channel flow allows the capsule to exhibit off-centre focusing.
Therefore, the pulsation itself can impede the axial focusing.
Figure <ref>(c) shows the waveforms of r_c/R at the end of the migration (γ̇_m t ≥ 350),
where the instantaneous values are normalised by their respective amplitudes χ_amp and shifted so that each baseline is the mean value χ_m.
Although the delay of r_c/R from the oscillatory pressure gradient ∂_z p^∗ tends to decrease as Re increases,
the overall waveforms of r_c/R well follow that of ∂_z p^∗, as shown in figure <ref>(b).
To quantify the waveform of r_c/R and its correlation to ∂_z p^∗,
we extract the dominant (or peak) frequency f^∗_peak of r_c/R with a discrete Fourier transform,
whose principle and implementation are described in <cit.>,
and the results are shown as a function of Re in figure <ref>(d).
In the cases of Re ≤ 6, the capsule does not exhibit off-centre focusing, and thus the plots are displayed for Re ≥ 7 only.
The value of f^∗_peak collapses on the frequency of ∂_z p^∗ with f^∗ = 0.02 for Re ≥ 7 (figure <ref>d).
The transition from the axial focusing to the off-centre focusing thus requires a synchronisation, induced by capsule deformability, between the capsule centroid and the background pressure gradient.
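The extraction of the dominant frequency described above could be implemented with a plain discrete Fourier transform, as sketched below; the signal names, sampling interval, and the choice to simply take the largest non-zero-frequency bin are our assumptions and may differ in detail from the cited implementation.

import numpy as np

def dominant_frequency(rc_nd, dt_nd, t_nd, t_start=350.0):
    # Peak (non-zero) frequency of r_c/R, in units of the mean shear rate, after the transient.
    x = rc_nd[t_nd >= t_start]
    x = x - x.mean()                         # remove the mean so the zero-frequency bin is negligible
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=dt_nd)
    return freqs[np.argmax(spectrum[1:]) + 1]

# Example: a centroid signal oscillating at f* = 0.02, sampled every 0.5 shear-rate units
t = np.arange(0.0, 1000.0, 0.5)
f_peak = dominant_frequency(np.sin(2.0 * np.pi * 0.02 * t), 0.5, t, t_start=0.0)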
Figures <ref>(a) and <ref>(b) show the time average of the radial position or equilibrium position ⟨ r_c⟩/R and the isotropic tension ⟨ T_iso⟩, respectively, as a function of Re,
where the error bars represent the standard deviation (SD) during a period.
Overall, both these values nonlinearly increase with Re,
with the mean values in the oscillating flows always greater than those in steady flows.
The curves show steep increases for Re ≤ 10,
followed by more moderate increases for Re > 10;
this general tendency is the same in steady and pulsatile flows.
The effect of the flow pulsation is maximised at moderate Re (= 7),
in which the axial focusing is impeded by the pulsatile flow (figure <ref>a).
The results also show that small fluctuations of the capsule radial position (SD(r_c/R) < 10^-2) are accompanied by large fluctuations of the membrane tension (SD(T_iso) > 10^-1).
§.§ Effect of oscillatory frequency on capsule behaviour under pulsatile channel flow
Finally, we investigate the effect of the oscillatory frequency f^∗ on the equilibrium radial position ⟨ r_c⟩/R at Re = 10,
with the results summarised in Figure <ref>(a),
where those at steady flow are also displayed at the point f^∗ = 0.
The results clearly suggest that there exists a specific frequency to maximise ⟨ r_c⟩/R, independently of Re.
Interestingly, this effective frequency (f^∗ = 0.05) is close to or slightly larger than the one maximising the axial focusing speed (see figure <ref>).
Compared to steady flow,
the equilibrium radial position ⟨ r_c⟩/R at the effective frequency was enhanced by 640% at Re = 7, 40% at Re = 10, 13% at Re = 20, and 7.6% at Re = 30.
The contribution of the oscillatory flow to the off-centre focusing becomes negligible at higher frequencies:
the trajectory of the capsule centroid at the highest frequency considered (f^∗ = 5) collapses onto that obtained with steady flow.
Figure <ref>(b) shows the time average of the isotropic tension ⟨ T_iso⟩ as a function of f^∗.
The values of ⟨ T_iso⟩ decrease as f^∗ increases because of the reduction of the shear stress when moving closer to the channel centreline (i.e., small ⟨ r_c⟩/R).
The results of large capsule deformation at relatively small frequencies are consistent with a previous numerical study by <cit.>,
who showed that at high frequency a neo-Hookean spherical capsule undergoing oscillating sinusoidal shear flow cannot adapt to the flow changes,
and only slightly deforms, consistently with predictions obtained by asymptotic theory <cit.>. Thus, capsules at low frequencies exhibit an overshoot phenomenon,
in which the peak deformation is larger than its value in steady shear flow.
By increasing channel diameter D (= 2R = 30 μm, 40 μm, and 50 μm),
we also investigate the effect of the size ratio R/a_0 (= 3.75, 5, and 6.25) on the equilibrium radial position ⟨ r_c⟩/R.
Figure <ref>(a) is the time history of r_c/R for different size ratios R/a_0 at Re = 30, and f^∗ = 0.02,
where the trajectories obtained with the steady flow are also displayed.
All run cases are started from r_0/R = 0.4.
The equilibrium radial positions increase with R/a_0,
while the contribution of the oscillatory flow to ⟨ r_c⟩/R, as well as its fluctuation, becomes small.
This is quantified in figure <ref>(b),
where ⟨ r_c⟩/R is shown as a function of the size ratio R/a_0.
Although the equilibrium radial position ⟨ r_c⟩/R increases with R/a_0,
indicating that the dimensional equilibrium radial position ⟨ r_c⟩ also increases with R,
the isotropic tension ⟨ T_iso⟩/G_s decreases as shown in figure <ref>(c).
This is because the distance from the capsule centroid to the wall (R - ⟨ r_c⟩) increases with R, resulting in lower shear stress.
Oscillatory-dependent off-centre focusing is summarised in figure <ref>(d),
where the results are obtained with different channel size R/a_0 and different Re (= 10 and 30).
The result shows that oscillatory-dependent off-centre focusing is impeded as Re increases.
It is known that rigid particles align in an annulus at a radius of about 0.6R for Re = DV/ν = O(1) <cit.>,
and shift to larger radius for larger Re <cit.>,
where V is the average axial velocity <cit.>.
Our numerical results show that capsules with low deformability (Ca = 0.05) are still in ⟨ r_c⟩/R ∼ 0.5 even for the largest channels (R/a_0 = 6.25; R = 25 μm) and Reynolds numbers (Re = 30), both in the steady and pulsatile flows (figure <ref>b).
Therefore, off-centre focusing is impeded even at such small particle deformability.
This result is consistent with previous numerical study about a spherical hyperelastic particle in a circular channel with R/a_0 = 5 under steady flow for 100 ≤ Re ≤ 400 and 0.00125 ≤ We ≤ 4 <cit.>.
There, the authors showed that the particle radial position is ⟨ r_c⟩/R ∼ 0.5 at the highest Re (= 400) and lowest We (= 0.00125).
Our numerical results further show that the contribution of the flow pulsation to the off-centre focusing decreases as the channel size R/a_0 increases (figures <ref>b and <ref>d) because of the low shear stress acting on the membranes (figure <ref>c).
In other words, a large amplitude is required for oscillation-induced off-centre focusing at high Re and in large channels.
Throughout our analyses, we have quantified the radial position of the capsule in a tube based on the empirical expression (<ref>).
We have provided insights about the coefficient α (> 0) in r_c^∗ = βexp(-α t^∗), which potentially scales the lift force and depends on shape, i.e., capillary number Ca and viscosity ratio λ.
§ CONCLUSION
We numerically investigated the lateral movement of spherical capsules in steady and pulsatile channel flows of a Newtonian fluid, for a wide range of Re and oscillatory frequency f^∗.
The roles of size ratio R/a_0, viscosity ratio λ, and capillary number Ca on the lateral movement of the capsule have been evaluated and discussed.
The first important question we focused on is whether a capsule lateral movement at finite Re in a pulsatile channel flow can be altered by its deformability.
The second question is whether equilibrium radial positions or traveling time are controllable by oscillatory frequency.
Our numerical results showed that capsules with high Ca still exhibit axial focusing even at finite Re (e.g., Re = 10), and that their equilibrium radial positions cannot be altered by flow pulsation.
However, the speed of axial focusing at such high Ca is substantially accelerated by making the driving pressure gradient oscillating in time.
We also confirmed that there exists a most effective frequency (f^∗≈ 0.02) which maximises the speed of axial focusing, and that it remains the same as that in almost inertialess condition.
For relatively low Ca, on the other hand, the capsule exhibits off-centre focusing, resulting in an equilibrium radial position ⟨ r_c⟩/R which depends on Re.
There also exists a specific frequency to maximise ⟨ r_c⟩/R, which is independent of Re.
Interestingly, such effective frequency (f^∗≈ 0.05) is close to that for axial focusing.
Frequency-dependent inertial focusing requires a synchronisation between the radial centroid position of the capsule and the background pressure gradient, resulting in periodic and large membrane tension, which impedes axial focusing.
Such synchronisation abruptly appears at O(Re) = 10^0,
and shifts to an almost perfect synchronisation as Re increases.
Thus, there is almost no contribution of flow pulsation to ⟨ r_c⟩/R at relatively low Re (≤ 5) or large Re (≥ 30),
while the contribution of the pulsation to ⟨ r_c⟩/R is maximised at moderate Re (≈ 7),
for which the capsule exhibits axial focusing in steady flow.
For constant amplitude of oscillatory pressure gradient,
oscillatory-dependent inertial focusing is impeded as Re and channel diameter increase,
and thus relatively large oscillatory amplitude is required in such high Re and large channels.
Given that the speed of inertial focusing can be controlled by oscillatory frequency,
the results obtained here can be utilized for label-free cell alignment/sorting/separation techniques, e.g., for circulating tumor cells in cancer patients or precious hematopoietic cells such as colony-forming cells.
§ ACKNOWLEDGEMENTS
This research was supported
by the Okinawa Institute of Science and Technology Graduate University (OIST) with subsidy funding to M.E.R. from the Cabinet Office, Government of Japan.
The presented study was partially funded by Daicel Corporation.
K.I. acknowledges the Japan Society for the Promotion of Science (JSPS) KAKENHI for Transformative Research Areas A (Grant No. 21H05309) and the Japan Science and Technology Agency (JST), FOREST (Grant No. JPMJFR212N).
§ CONFLICTS OF INTEREST
The authors report no conflict of interest.
§ NUMERICAL SETUP AND VERIFICATION
To show that the channel length is adequate for studying the behaviour of a capsule that is subject to inertial flow,
we have tested the channel length L (= 20a_0 and 40a_0),
and investigated its effect on the radial positions of the capsule centroids.
The time history of the radial position of the capsule centroid r_c is compared between these different channel lengths in figure <ref>, where the centroid position r_c is normalised by the channel radius R.
The results obtained with the channel length L used in the main work (= 20 a_0) are consistent with those obtained with twice longer channel (L = 40 a_0).
§ LIFT FORCE ON A CAPSULE IN A POISEUILLE FLOW
We consider an object immersed in a Poiseuille flow,
assuming that the flow is in the (steady) Stokes regime and that the object size is much smaller than the channel size.
We also neglect any boundary effects acting on the object.
Let y be the position relative to the channel centre.
Due to the linearity of the Stokes equation, the object experiences a hydrodynamic resistance proportional to its moving velocity, given by
f_1^L = - ξ_1 ẏ.
Note that the drag coefficient ξ_1 > 0 is only determined by the viscosity and the shape (including the orientation) of the particle.
We then consider the effects of the background Poiseuille flow.
We have assumed that the channel size is much larger than the particle size, and hence the background flow to the particle is well approximated by a local shear flow with its local shear strength,
γ̇= -2 V_max^∞/R^2 y.
In the presence of the background shear,
the shear-induced lift force in general appears,
and this is proportional to the shear strength <cit.>,
f_2^L = -ξ_2 γ̇= 2 ξ_2 V_max^∞/R^2 y,
where the coefficient ξ_2 is again only determined by the viscosity and the shape.
The force balance equation on the y-direction therefore reads f_1^L + f_2^L = 0.
If we introduce a new shape-dependent coefficient, α, as
α = (2 ξ_2/ξ_1) V_max^∞/R^2,
we obtain the evolution equation for the position y as
ẏ = - α y.
This equation is easily solved if α is constant and the result is the exponential accumulation to the channel centre, consistent with the numerical results.
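As a simple numerical check of this argument (illustrative Python, assuming a constant coefficient α), an explicit Euler integration of the force balance reproduces the exponential relaxation used in the trajectory fit of equation (<ref>):

import numpy as np

def integrate_lift_balance(y0, alpha, dt=1e-3, n_steps=10000):
    # Integrate ydot = -alpha * y and compare with the analytical solution.
    y = np.empty(n_steps)
    y[0] = y0
    for k in range(1, n_steps):
        y[k] = y[k - 1] - alpha * y[k - 1] * dt
    t = dt * np.arange(n_steps)
    return t, y, y0 * np.exp(-alpha * t)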
§ NEO-HOOKEAN SPHERICAL CAPSULE
The NK constitutive law is given by
w_NH/G_s = 1/2( I_1 - 1 + 1/(I_2 + 1)).
Figure <ref>(a) shows side views of the capsule during its axial focusing at each time for different Ca (= 0.05, 0.1, and 0.2).
Other numerical settings (Re, initial position r_0/R, and viscosity ratio λ) are the same as described in <ref>.
Even at relatively small Ca (= 0.2),
the NH-capsule exhibits large elongation after flow onsets,
resulting in fast axial focusing.
The trajectory and fitting for it at each Ca are shown in figure <ref>(b),
where the result at the highest Ca (= 1.2) obtained with SK law described in figure <ref>(b) is also superposed.
The results suggest that equation (<ref>) still works even for NH-spherical capsules, although the applicable ranges of Ca are relatively small compared to those described by the SK law.
§ TAYLOR PARAMETER
The SK-spherical capsule deformation is quantified by the Taylor parameter D_12, defined as
D_12 = | a_1 - a_2 |/(a_1 + a_2),
where a_1 and a_2 are the lengths of the semi-major and semi-minor axes of the capsule, and are obtained from the eigenvalues of the inertia tensor of an equivalent ellipsoid approximating the deformed capsule <cit.>.
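In practice, D_12 can be estimated from the membrane node positions; the following illustrative Python sketch (not from the original code) treats the nodes as a unit-mass point cloud, a simplification of the surface-integral definition, and takes the largest and smallest semi-axes of the equivalent ellipsoid:

import numpy as np

def taylor_parameter(nodes):
    # nodes: (N, 3) array of membrane node coordinates.
    x = nodes - nodes.mean(axis=0)
    r2 = np.sum(x**2, axis=1)
    inertia = r2.sum() * np.eye(3) - x.T @ x          # inertia tensor of the point cloud
    eig = np.linalg.eigvalsh(inertia)
    axes = np.sqrt(np.maximum(eig.sum() / 2 - eig, 0.0))   # semi-axes up to a common factor
    a1, a2 = axes.max(), axes.min()
    return abs(a1 - a2) / (a1 + a2)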
Figure <ref> shows the time history of D_12 at Re = 10, R/a_0 = 2.5, and f^∗ = 0.02.
Differently from what is observed for the isotropic tension shown in figure <ref>(c),
the off-centred capsule exhibits large D_12,
which well responds to the oscillatory pressure ∂_z p^∗.
Thus, the magnitude of D_12 is strongly correlated with the capsule radial position (and the consequent shear gradient).
Figures <ref>(a–c) show the time average of D_12.
Overall, these results exhibit trends comparable to those of ⟨ T_iso⟩, previously shown in figures <ref>(b), <ref>(b), and <ref>(c).
Despite the similarities,
the nearly axisymmetric capsule shape,
typical of large Ca,
exhibits a small D_12 (figure <ref>a),
so the capsule membrane state in pipe flows cannot be easily estimated from the deformed shape alone.
This is why we use the isotropic tension T_iso as an indicator of membrane deformation.
|
http://arxiv.org/abs/2409.03473v1 | 20240905123527 | Purification of Gaussian States by Photon Subtraction | [
"Kun Zhang",
"Huijun Li",
"Jietai Jing",
"Nicolas Treps",
"Mattia Walschaers"
] | quant-ph | [
"quant-ph",
"physics.optics"
] |
Institute of Nonlinear Physics and Department of Physics, Zhejiang Normal University, Jinhua, 321004 Zhejiang, China
State Key Laboratory of Precision Spectroscopy, Joint Institute of Advanced Science and Technology, School of Physics and Electronic Science, East China Normal University, Shanghai 200062, China
[email protected]
Institute of Nonlinear Physics and Department of Physics, Zhejiang Normal University, Jinhua, 321004 Zhejiang, China
State Key Laboratory of Precision Spectroscopy, Joint Institute of Advanced Science and Technology, School of Physics and Electronic Science, East China Normal University, Shanghai 200062, China
[email protected]
State Key Laboratory of Precision Spectroscopy, Joint Institute of Advanced Science and Technology, School of Physics and Electronic Science, East China Normal University, Shanghai 200062, China
CAS Center for Excellence in Ultra-intense Laser Science, Shanghai 201800, China
Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China
Laboratoire Kastler Brossel, Sorbonne Université, CNRS, ENS-Université PSL, Collège de France, 4 place Jussieu, F-75252 Paris, France
[email protected]
Laboratoire Kastler Brossel, Sorbonne Université, CNRS, ENS-Université PSL, Collège de France, 4 place Jussieu, F-75252 Paris, France
§ ABSTRACT
Photon subtraction can enhance entanglement, which for pure states induces a decrease in the purity of reduced states. In contrast, by analyzing the purities of Gaussian states before and after subtracting a single photon, we prove that the purity of a Gaussian state can also be increased by less than 20%. On the one hand, it reveals that photon subtraction can reduce entanglement, and on the other hand, it reveals that it can achieve a limited amount of Gaussian state purification. Through the analysis of some examples, we demonstrate the inherent mechanism and applicable scope of photon-subtraction-based purification. In a multimode system, we find that photon subtraction can increase entanglement and purify some of the reduced states simultaneously. We thus present purification through the suppression of Gaussian noise as a new application for photon subtraction in continuous-variable quantum information processing.
Purification of Gaussian States by Photon Subtraction
Mattia Walschaers
September 9, 2024
=====================================================
§ INTRODUCTION
Quantum information based on continuous-variable (CV) optics exploits quadratures of the electromagnetic fields to encode, transmit, and process quantum information <cit.>. One of the key advantages of this approach is that the quantum information processing tasks can be implemented in a deterministic unconditional way using easily implemented Gaussian entangled states and Gaussian operations <cit.>.
Several protocols such as quantum teleportation <cit.> and quantum key distribution <cit.> have already been demonstrated experimentally.
The noise caused by dissipation and decoherence is the main source of errors in information processing <cit.>. For a long time, there has been a constant pursuit of high-purity or low-noise states to transmit information more faithfully <cit.>.
Gaussian operations can effectively suppress non-Gaussian noise by driving non-Gaussian states into Gaussian states <cit.>.
However, the most common types of noise, such as fluctuation noise, thermal noise, and scattering noise, are Gaussian and cannot be suppressed by local Gaussian operations <cit.>. For this reason, a method called entanglement distillation has been proposed to combat noise by increasing entanglement, rather than by directly suppressing noise.
This method aims to extract a small number of highly entangled states from a large number of weakly entangled states by means of non-Gaussian operations and classical communication <cit.>.
In addition to the required number of copies of the initial entangled state <cit.>, the main characteristic is the selection of non-Gaussian operations <cit.>.
Photon subtraction (PS) is a typical non-Gaussian operation used in entanglement distillation, and has driven breakthrough experimental demonstrations of distillation protocols due to its practical feasibility in the laboratory <cit.>.
Moreover, an improved version of the PS-based distillation protocol has been proposed by adding additional local Gaussian operations, such as displacement <cit.> and squeezing <cit.>.
On the other hand, the characteristic of PS improving bipartite entanglement has been analyzed through different measurement methods <cit.>, such as logarithmic negativity <cit.> and von Neumann entropy <cit.> for two-mode states, and Rényi-2 entropy <cit.>
and purity <cit.> for multimode states.
However, for mixed states, PS usually reduces the purity of the global entangled state. In other words, unlike for pure states, PS-based distillation does not devote all of its effect to increasing entanglement; part of it may instead generate additional noise. This calls for a reconsideration of the practical value of PS-based distillation.
Purity is a good indicator of the amount of noise in a state, because both Gaussian and non-Gaussian noise lead to a decrease in purity. Based on purity, this paper studies the feasibility of PS directly combating Gaussian noise. We reveal that, under certain conditions, PS can increase the purity of Gaussian states, which means that PS can be used to develop PS-based purification protocols. Furthermore, we rigorously prove an upper limit on the purity increase for any Gaussian state, which indicates that PS-based purification has a performance ceiling.
Through the analysis of single-mode Gaussian states, we reveal the purification mechanism of suppressing noise in one direction from the perspective of phase space. Through the analysis of an entangled pure state with three modes, we find that PS-based distillation and PS-based purification can coexist. These works pave the way for purifying CV multimode Gaussian states.
The structure of the paper is as follows. In Sec. <ref>, we first review the general methods for calculating the purity of a CV state, and then prove the general condition and upper bound for the increase in purity of any Gaussian state upon subtracting one photon. In Sec. <ref>, we demonstrate some examples of the PS-based purification scheme. We conclude in Sec. <ref>.
§ PROOF OF PURIFICATION UPPER BOUND
§.§ Purity of CV quantum state
In CV quantum optics, the Wigner function is a representation of an arbitrary quantum state in phase space. A quantum state ρ̂^G that can be represented by a Gaussian Wigner function is called a Gaussian state, which is usually written as <cit.>
W^G(β⃗) = exp[-1/2(β⃗-α⃗_0)^tV^-1(β⃗-α⃗_0)]/(2π)^m√(det(V)),
where β⃗=(x_1, ⋯, x_m, p_1, ⋯, p_m)^t∈ℝ^2m is a vector of
amplitude and phase quadratures. x_i and p_i are the eigenvalues of the quadrature operators x̂_i and p̂_i, which are related to the associated annihilation operator â_i and creation operator â_i^† as x̂_i=â_i^†+â_i, and p̂_i = i(â^†_i-â_i). α⃗_0 and V are referred to as the displacement vector and the covariance matrix. For a Gaussian state, the purity can be obtained directly from the covariance matrix as μ=1/√(|V|), but this is not true for non-Gaussian states. It is worth noting that all information about a quantum state ρ̂ can be obtained through its Wigner function W(β⃗). For instance, the purity of ρ̂ can be obtained by integrating its Wigner function as follows <cit.>
μ = tr(ρ̂^2)=(4π)^m∫_ℝ^2m|W(β⃗)|^2 d^2mβ.
Similarly, the variance of the amplitude quadrature of mode-k can be obtained by
Δ^2x̂_k
= ∫_ℝ^2m x_k^2 W(β⃗) d^2mβ - [∫_ℝ^2m x_k W(β⃗) d^2mβ]^2.
This procedure of extracting quantum state information is still effective for non-Gaussian states.
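As a concrete illustration (Python sketch for a single mode, m = 1; not taken from the original work), the purity of a Gaussian state can be evaluated either directly from the covariance matrix or by integrating the Wigner function on a grid, and the two routes agree:

import numpy as np

def purity_gaussian(V):
    # Purity of a Gaussian state from its covariance matrix.
    return 1.0 / np.sqrt(np.linalg.det(V))

def purity_from_wigner(V, alpha0, extent=12.0, n=401):
    # Numerical evaluation of mu = (4*pi)^m * int |W|^2 for m = 1.
    xs = np.linspace(-extent, extent, n)
    X, P = np.meshgrid(xs, xs, indexing='ij')
    d = np.stack([X - alpha0[0], P - alpha0[1]], axis=-1)
    Vinv = np.linalg.inv(V)
    W = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, Vinv, d)) \
        / (2 * np.pi * np.sqrt(np.linalg.det(V)))
    return 4 * np.pi * np.sum(W**2) * (xs[1] - xs[0])**2

For example, a single-mode thermal state with covariance V = n·𝟙 gives μ = 1/n with either route, while the grid-based integral remains applicable to non-Gaussian Wigner functions such as the photon-subtracted one below.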
Here, we are interested in photon-subtracted non-Gaussian states, whose Wigner functions can be accurately described through a phase-space representation method <cit.>, yielding the comprehensive expression <cit.>
W^-(β⃗)= [‖X V^-1(β⃗ - α⃗) - α⃗_g‖^2 + tr(V_g - X V^-1 X^t)
-2 ] × W^G(β⃗)/[‖α⃗_g‖^2 + tr(V_g) - 2],
where the label g indicates the mode in which the photon is subtracted.
X = G^t(V-𝟙), where 𝟙 is the identity matrix. Here, G is a 2m × 2 matrix whose two columns are the basis vectors g⃗^(x) and g⃗^(p) associated with the two phase space axes of mode g:
G = [ | |; g⃗^(x) g⃗^(p); | | ].
Analogously, V_g = G^tVG, and the displacement vectors are α⃗ and α⃗_g = G^tα⃗. We can now use standard techniques of Gaussian integrals to evaluate the purity with Eq. (<ref>) from W^-(β⃗).
By comparing the purities of a Gaussian state before and after PS, we can analyze the effect of PS on any Gaussian state.
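For completeness, a direct transcription of Eq. (<ref>) can be sketched as follows (illustrative Python; reading the squared quantities as Euclidean norms squared is an assumption of our transcription, and the phase-space displacement α⃗ is a real 2m-vector):

import numpy as np

def wigner_subtracted(beta, V, alpha, G, W_gaussian):
    # Photon-subtracted Wigner function at phase-space point beta, given the
    # Gaussian covariance V, displacement alpha, mode matrix G (2m x 2) and the
    # value W_gaussian of the Gaussian Wigner function at beta.
    Vinv = np.linalg.inv(V)
    X = G.T @ (V - np.eye(V.shape[0]))
    Vg = G.T @ V @ G
    alpha_g = G.T @ alpha
    num = (np.sum((X @ Vinv @ (beta - alpha) - alpha_g)**2)
           + np.trace(Vg - X @ Vinv @ X.T) - 2.0)
    den = np.sum(alpha_g**2) + np.trace(Vg) - 2.0
    return num * W_gaussian / den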
§.§ Upper bound of relative purity
Given the vast array of Gaussian states, it is impractical to investigate all Gaussian states one by one. To derive general conclusions and features, a research methodology that comprehensively encompasses all Gaussian states is required.
In reference <cit.>, we have provided a general expression of relative purity
μ^-/μ=1/2 +[1/2(∑_i=1^mÑ_i/n_i)^2+
1/2|α_g|^4+|∑_i=1^mk_il_i(n_i^2-1)/(2n_i)|^2
+α_g^∗^2(∑_i=1^mk_il_i(n_i^2-1)/(2n_i))+α_g^2(∑_i=1^mk_i^∗l_i^∗(n_i^2-1)/(2n_i))
+|α_g|^2∑_i=1^mN_i]/(∑_i=1^mN_i+|α_g|^2)^2,
where μ and μ^- are the purities of the Gaussian state before and after the PS operation, respectively. N_i=|k_i|^2(n_i+1)/2+|l_i|^2(n_i-1)/2 and Ñ_i =|k_i|^2(n_i+1)/2-|l_i|^2(n_i-1)/2, where n_i is the Gaussian noise factor (n_i=1 represents vacuum). Here, l_i, k_i, and α_g are the complex coefficients of the unitary operator D̂Û transforming the operator â_g; specifically,
Û^†D̂^†â_gD̂Û = α_g+∑_i^mk_iâ_i^†+l_iâ_i,
where D̂ and Û come from the thermal decomposition ρ̂^G=D̂Û⊗_i=1^mρ̂_i^thÛ^†D̂^† of Gaussian state ρ̂^G<cit.>, where ρ̂_i^th is a single-mode thermal state. According to the property of the Bogoliubov transformation, k_i and l_i are subject to the following constraint
∑_i=1^m|l_i|^2-|k_i|^2=1 and ∑_i=1^mk_il_i^*=-∑_i=1^mk_i^*l_i.
When α_g=0, it always holds that μ^-/μ⩽1 (see Appendix <ref>). This means that for any Gaussian state with α_g=0, PS can only reduce purity, which also implies an increase in entanglement entropy if we consider the initial Gaussian state as a subsystem of a larger entangled state. As an irreversible operation, it is natural for PS to exhibit this behaviour. After some necessary mathematical processing, we have rigorously proven that the lower bound of Eq. (<ref>) is 1/2 <cit.>. Here, what we are interested in is whether the upper bound on the relative purity can exceed one.
To address this inquiry, we rewrite the relevant complex parameters as α_g=α̃_g e^iφ, k_i=k̃_ie^iϕ_i, and l_i=l̃_ie^iθ_i, respectively. α̃_g, φ, k̃_i, ϕ_i, l̃_i, and θ_i are real numbers. Through the derivation in appendix <ref>, we obtain the necessary and sufficient conditions for μ^-/μ⩾1
∑_i=1^mk̃_il̃_i(n_i^2-1)/(2n_i)cos(2φ-ϕ_i-θ_i)>0,
and
α̃_g^2 ⩾ [(∑_i=1^mN_i)^2 -(∑_i=1^mÑ_i/n_i)^2-2|∑_i=1^mk_il_i(n_i^2-1)/(2n_i)|^2] / [4∑_i=1^mk̃_il̃_i(n_i^2-1)/(2n_i)cos(2φ-ϕ_i-θ_i)].
Eq.(<ref>) takes the equal sign corresponding to μ^-/μ=1.
In addition, by making the angle factors satisfy 2φ=2nπ+ϕ_i+θ_i for all i, we also obtain an upper bound function f(α) that satisfies μ^-/μ⩽ f(α). This is a reachable bound because the adjustment of the angular factors does not violate the constraint condition Eq. (<ref>).
f(α) can be expressed as
f(α)=1+1/2[x^2-y^2+1/2z^2+2α z]/(y+α)^2,
where x=∑_i=1^mÑ_i/n_i, y=∑_i=1^mN_i, z=|∑_i=1^m2k̃_il̃_i(n_i^2-1)/(2n_i)|^2, and α=α̃_g^2.
Furthermore, by taking the derivative of f(α) with respect to α,
we obtain the maximum value of f(α)
max(f(α)) =1+z^2/[2(y^2-x^2+2yz-z^2/2)].
and its upper open bound
max( f(α)) < 1+1/[5+8ζ/(1-ζ)],
where ζ∈(0,1]. The right side of Eq. (<ref>) approaches its supremum of 1.2 as ζ→0. This means that the relative purity always satisfies μ^-/μ<1.2; that is, the purity increase achieved by PS never reaches 20% of the original Gaussian state purity.
Therefore, Gaussian states can be purified by PS, but the ability of purification is limited.
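These bounds are easy to verify numerically; the following illustrative Python sketch (parameter ranges chosen only to respect y ⩾ x > 0, z > 0 and y - x > z, as used in Appendix B) evaluates f(α) at the optimal α and confirms that the result never reaches 1.2:

import numpy as np

def f_alpha(alpha, x, y, z):
    # Reachable upper-bound function f(alpha) of Eq. (<ref>).
    return 1.0 + 0.5 * (x**2 - y**2 + 0.5 * z**2 + 2.0 * alpha * z) / (y + alpha)**2

def max_f(x, y, z):
    # Maximum over alpha, attained at alpha* = (y^2 - x^2 + y z - z^2/2) / z.
    alpha_star = (y**2 - x**2 + y * z - 0.5 * z**2) / z
    return f_alpha(alpha_star, x, y, z)

rng = np.random.default_rng(0)
vals = []
for _ in range(10000):
    y = rng.uniform(1.0, 10.0)
    x = rng.uniform(0.001, 0.99) * y
    z = rng.uniform(1e-6, y - x)
    vals.append(max_f(x, y, z))
print(max(vals))   # stays below 1.2, approaching it as x/y -> 0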
§ ANALYSIS OF PURIFICATION CASES
§.§ PS-based Purification of Single-mode Gaussian state
We here consider a single-mode state ρ̂_g that can be characterized by a single-mode
covariance matrix V_g=diag(n_gs,n_g/s) and a displacement α_g=α̃e^iφ.
Since this state has only one mode, PS can only be applied to this mode. According to Section <ref>, it can be deduced that k̃_g=(s-1)/(2√(s)) and l̃_g=(s+1)/(2√(s)), which means that the angular factors satisfy ϕ_g=θ_g=0.
As shown in FIG. <ref>, we draw several curves of relative purity as a function of the parameters α̃, φ, n_g and s.
Each point on the curves corresponds to a Gaussian state and its photon-subtracted state. As shown in FIG. <ref>(a), we can always obtain the optimal relative purity f(α) at φ=nπ; this result is consistent with Eq. (<ref>).
As shown by the red dashed curve and the blue dashed curve in FIG. <ref>(b), when the angle factor takes a mismatched value φ=2π, no matter how large the displacement distance α̃ is,
the purity will never increase.
When the squeezing parameter s=10 and the matching angle factor φ=0 are taken, we can always find the maximum value max(f(α)) by adjusting the displacement distance α̃, as shown by the red circle in FIG. <ref>(b).
Note that these two solid curves share the same maximum value max(f(α)), which is close to 1.2. When s→∞, ζ→0,
and max(f(α)) can approach 1.2 arbitrarily closely. In short, the range of Gaussian states that can be purified by PS is limited, and the degree of purification varies with the initial Gaussian state.
In order to better understand the PS-based purification,
we now analyze in detail the case corresponding to the red circle on the red solid line in FIG. <ref>(a). According to Eq. (<ref>), the variances of the amplitude and phase quadrature before and after PS have the relation
Δ^2q̂_g^- ≈ 0.85Δ^2q̂_g, and Δ^2p̂_g^- = Δ^2p̂_g.
This shows that PS suppresses noise in the q_g-direction and has no effect in the p_g-direction. In other words, we reduce the anti-squeezing while keeping the squeezing unchanged. As shown in FIG. <ref>, comparing the two Wigner functions reveals that PS makes the peak of the Wigner function higher and slightly displaces it along the q_g-direction; in other words, the suppression of noise is evident.
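The effect of matched and mismatched phases can also be reproduced directly from Eq. (<ref>); the following illustrative Python sketch evaluates the single-mode relative purity (the parameter values are chosen only for illustration and are not those of the figure):

import numpy as np

def relative_purity(n, k, l, alpha_g):
    # Single-mode (m = 1) evaluation of mu^-/mu from Eq. (<ref>).
    N  = abs(k)**2 * (n + 1) / 2 + abs(l)**2 * (n - 1) / 2
    Nt = abs(k)**2 * (n + 1) / 2 - abs(l)**2 * (n - 1) / 2
    c  = k * l * (n**2 - 1) / (2 * n)
    num = (0.5 * (Nt / n)**2 + 0.5 * abs(alpha_g)**4 + abs(c)**2
           + 2 * np.real(np.conj(alpha_g)**2 * c) + abs(alpha_g)**2 * N)
    return 0.5 + num / (N + abs(alpha_g)**2)**2

s, n_g = 10.0, 2.0                              # squeezed thermal state (illustrative)
k_t, l_t = (s - 1) / (2 * np.sqrt(s)), (s + 1) / (2 * np.sqrt(s))
print(relative_purity(n_g, k_t, l_t, 2.85))     # phi = 0 (matched): ratio > 1, purification
print(relative_purity(n_g, k_t, l_t, 2.85j))    # phi = pi/2 (mismatched): ratio < 1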
§.§ PS-based Purification of Multimode Gaussian state
When it comes to multimode states, the number of parameters increases with the number of modes. To achieve purification, besides adjusting parameters, directly changing the mode on which the PS operation is performed is also a good alternative.
As an example, we consider a three mode Gaussian entangled pure state, which is expressed as the following state vector
|ψ⟩_123=Ŝ_3Ŝ_2Ŝ_1D̂|0,0,0⟩,
which can be formed by connecting three vacuum states using one displacement operation D̂ and three two-mode squeezing operations Ŝ_i(i=1,2,3), as shown in FIG. <ref>(a). We arrange a displacement operation D̂ here so that the three modes can generate a small displacement. This is necessary because without displacement, photon subtraction will not increase the purity of any state, which has been strictly proven in Appendix <ref>.
We perform the PS operation on the three modes separately, as shown by the red, green, and blue circles in FIG. <ref>(a). Then we calculate the relative purity of each mode under each PS operation and plot it in FIG. <ref>(b). Multimode systems have a rich entanglement structure. In terms of bipartite entanglement, a system with three modes has three types of bipartite entanglement, namely the entanglement between each of the three modes and its remaining subsystem. Since the global state |ψ⟩_123 is a pure state, and PS preserves this global purity, the increase (decrease) in the purity of a mode directly indicates a decrease (an increase) in bipartite entanglement between the mode and the remaining subsystem.
When one photon is subtracted from mode-1, the relative purities of the three modes are all greater than one, as shown by the red circles in FIG. <ref>(b). This means that the entanglement of all bipartite entangled structures is reduced. On the contrary, when one photon is subtracted from mode-2, the relative purities of the three modes are all less than one, as shown by the green diamonds in FIG. <ref>(b). This means that the entanglement of all bipartite entangled structures increases. When we subtract one photon from mode-3, the purities of both mode-1 and mode-2 increase, while the purity of mode 3 decreases, as shown by the blue pentagrams in FIG. <ref>(b). This implies that the bipartite entanglement between mode-3 and its subsystems increases, while the bipartite entanglement between mode-1 or mode-2 and their respective remaining subsystems decreases.
Interestingly, as the entanglement between mode-3 and its remaining subsystem increases, the purities of mode-1 and mode-2 in the remaining subsystem increase. This means that PS can purify the modes of a reduced subsystem while increasing the entanglement of the global system; in other words, PS-based distillation and purification can be achieved simultaneously in one system.
§ CONCLUSION
This paper finds that PS can be used to develop purification protocols for Gaussian states, and studies the properties of such protocols. Firstly,
based on the general expression of relative purity in Eq. (<ref>), we prove that the purity of a Gaussian state can be increased by less than 20%, and provide the sufficient and necessary conditions for purity increase. These results directly reflect the performance and implementation conditions of PS-based purification protocols. Then, based on the analysis of single-mode Gaussian states, we use phase space to visually demonstrate the physical mechanism of PS-based purification. That is to suppress Gaussian noise along a single direction in phase space. Finally, with the help of a three-mode entangled state, we demonstrate a strategy of achieving different purification effects by changing the photon subtracted mode. And we find that PS-base distillation and purification can coexist in one system.
In general, for a pure bipartite entangled system, setting aside the threshold point μ^-/μ=1, PS either purifies or distills it, which leads to either reducing or increasing the entropy of the subsystem.
This paper reveals a special case that lies between pure purification and pure distillation,
in which the entanglement of a global bipartite system can be enhanced, while the noise of some modes in the subsystems can be suppressed.
In a sense, this paper demonstrates the flexibility and plasticity of PS in multimode systems, although whether impure systems can have similar properties
is still an open question.
§ ACKNOWLEDGMENT
K.Z. and J.J. acknowledge financial
support through the National Natural Science Foundation of China (12225404, 12434014, 11874155); Innovation Program of Shanghai Municipal Education Commission (2021-01-07-00-08-E00100); Program of Shanghai Academic Research Leader (22XD1400700); Natural Science Foundation of Shanghai (17ZR1442900); Minhang Leading Talents (201971); the 111 project (B12024).
H.L. acknowledges financial support from the
Natural Science Foundation of Zhejiang Province of China (LZ22A050002) and the National Natural Science Foundation of China (12074343).
M.W. and N.T. acknowledge financial support from the ANR JCJC project NoRdiC (ANR-21-CE47-0005), the Plan France 2030 through the project OQuLus (ANR-22-PETQ-0013), and the QuantERA II project SPARQL that has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 101017733.
§ WHEN Α=0
Let α=0, Eq. (<ref>) becomes
μ^-/μ
= 1/2+[1/2(∑_i=1^mÑ_i/n_i)^2+|∑_i=1^mk_il_i(n_i^2-1)/(2n_i)|^2]/(∑_i=1^mN_i)^2.
where N_i =|k_i|^2(n_i+1)/2+|l_i|^2(n_i-1)/2 and Ñ_i =|k_i|^2(n_i+1)/2-|l_i|^2(n_i-1)/2.
For any complex number α_g, it always holds that (α_g+α_g^∗)^2⩾0 and (α_g-α_g^∗)^2⩽0, thus the value range of α_g^2+α_g^∗^2 is
-2|α_g|^2⩽α_g^2+α_g^∗^2⩽2|α_g|^2.
Then we get
|∑_i=1^mk_il_i(n_i^2-1)/2n_i|^2 ⩽
(∑_i=1^m√(|k_i|^2|l_i|^2)(n_i^2-1)/2n_i )^2.
Bring Eq. (<ref>) into Eq. (<ref>)
μ_^-/μ_ ⩽ 1/2+1/2(∑_i=1^mÑ_i/n_i)^2+(∑_i=1^m√(|k_i|^2|l_i|^2)(n_i^2-1)/2n_i )^2/(∑_i=1^mN_i)^2
= 1/2+1/2(∑_i=1^mÑ_i/n_i)^2+1/2(∑_i=1^m√(|k_i|^2|l_i|^2)(n_i^2-1)/n_i )^2/(∑_i=1^mN_i)^2.
We know that |k_i|^2+|l_i|^2/2⩾√(|k_i|^2|l_i|^2), then
(∑_i=1^m√(|k_i|^2|l_i|^2)n_i^2-1/n_i )^2/2⩽(∑_i=1^m (|k_i|^2+|l_i|^2)n_i^2-1/n_i) ^2/8.
Due to the constraints of Eq. (<ref>) and (|k_i|^2+|l_i|^2)1/n_i⩽ (|k_i|^2+|l_i|^2)n_i,
the right side of Eq. (<ref>) is less than or equal to zero. This means that
(∑_i=1^mÑ_i/n_i)^2+1/2(∑_i=1^m√(|k_i|^2|l_i|^2)(n_i^2-1)/n_i )^2/(∑_i=1^mN_i)^2⩽ 1,
which means that it always holds that μ^-/μ⩽ 1.
§ WHEN Α≠0
Now we rewrite Eq. (<ref>) as follows
μ^-/μ=1 +1/2[(∑_i=1^mÑ_i/n_i)^2+
1/2|∑_i=1^m2k_il_in_i^2-1/2n_i|^2
+2α_g^∗^2(∑_i=1^mk_il_i(n_i^2-1)/2n_i)+2α_g^2(∑_i=1^mk_i^∗l_i^∗(n_i^2-1)/2n_i)
-(∑_i=1^mN_i)^2]/(∑_i=1^mN_i+|α_g|^2)^2.
It is not difficult to see that the second term on the right side of Eq. (<ref>) must be greater than or equal to zero to ensure that μ^-/μ⩾1, that is,
2α_g^2(∑_i=1^mk_i^∗l_i^∗(n_i^2-1)/2n_i)+2α^∗_g^2(∑_i=1^mk_il_i(n_i^2-1)/2n_i)
+(∑_i=1^mÑ_i/n_i)^2+1/2|∑_i=1^m2k_il_in_i^2-1/2n_i|^2⩾ (∑_i=1^mN_i)^2.
For convenience, the parameters can be rewritten as α_g=α̃e^iφ, k_i=k̃_ie^iϕ_i, and l_i=l̃_ie^iθ_i, where we set k̃_i⩾ 0, l̃_i⩾ 0, and α̃⩾ 0. The complex angle factors can control the positive or negative signs of these parameters, which do not affect the generality of Eq. (<ref>). According to Eq. (<ref>),
when taking 2φ=2nπ+ϕ_i+θ_i for all i, it can be found that (∑_i=1^mÑ_i/n_i)^2 and (∑_i=1^mN_i)^2 remain unchanged, while 2α_g^2(∑_i=1^mk_i^∗l_i^∗(n_i^2-1)/2n_i)+2α^∗_g^2(∑_i=1^mk_il_i(n_i^2-1)/2n_i) and 1/2|∑_i=1^m2k_il_in_i^2-1/2n_i|^2 can reach their maximum values. Thus, an optimal value of μ_^-/μ_ can be obtained as
μ^-/μ ⩽ 1+1/2[x^2-y^2+1/2z^2+2α z]/(y+α)^2
=f(α),
where x=∑_i=1^mÑ_i/n_i, y=∑_i=1^mN_i, z=|∑_i=1^m2k̃_il̃_i(n_i^2-1)/(2n_i)|^2, and α=α̃^2. f(α) can be understood as the reachable boundary function of μ^-/μ. Because for any μ^-/μ, we can always obtain an optimal value by adjusting the complex angle factor. It is theoretically feasible because it does not violate the constraints of Eq. (<ref>).
Thus, the derivative of f(α) with respect to α is
f(α)^'=df(α)/dα = y^2-x^2+yz-1/2z^2-zα/(y+α)^3.
When α=(y^2-x^2+yz-z^2/2)/z, we get that
f(α)^'=0, and f(α)^''<0.
So the maximum value of f(α) is
max(f(α)) =1+z^2/[2(y^2-x^2+2yz-z^2/2)].
Due to y⩾ x>0, z>0, and y-x>z, max(f(α)) follows the following constraint
max(f(α))<1+1/[5+8x/(y-x)].
Since y⩾ x>0, we can introduce a variable ζ such that x=ζ y, where ζ∈(0,1]. Since x>0, ζ cannot reach zero.
Thus Eq. (<ref>) becomes
max( f(α)) < 1+1/[5+8ζ/(1-ζ)]
=f(ζ).
Since f(ζ)^'<0, f(ζ) asymptotically approaches its supremum of 1.2 as ζ→0.
This means that any relative purity must satisfy μ^-/μ<1.2.
|
http://arxiv.org/abs/2409.03685v1 | 20240905163921 | View-Invariant Policy Learning via Zero-Shot Novel View Synthesis | [
"Stephen Tian",
"Blake Wulfe",
"Kyle Sargent",
"Katherine Liu",
"Sergey Zakharov",
"Vitor Guizilini",
"Jiajun Wu"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.CV",
"cs.LG"
] |
View-Invariant Policy Learning via Zero-Shot Novel View Synthesis
September 9, 2024
==================================================================
§ ABSTRACT
Large-scale visuomotor policy learning is a promising approach toward developing generalizable manipulation systems. Yet, policies that can be deployed on diverse embodiments, environments, and observational modalities remain elusive.
In this work, we investigate how knowledge from large-scale visual data of the world may be used to address one axis of variation for generalizable manipulation: observational viewpoint.
Specifically, we study single-image novel view synthesis models, which learn 3D-aware scene-level priors by rendering images of the same scene from alternate camera viewpoints given a single input image.
For practical application to diverse robotic data, these models must operate zero-shot, performing view synthesis on unseen tasks and environments.
We empirically analyze view synthesis models within a simple data-augmentation scheme that we call VISTA to understand their capabilities for learning viewpoint-invariant policies from single-viewpoint demonstration data.
Upon evaluating the robustness of policies trained with our method to out-of-distribution camera viewpoints, we find that they outperform baselines in both simulated and real-world manipulation tasks.
Videos and additional visualizations are available at <https://s-tian.github.io/projects/vista>.
§ INTRODUCTION
Figure: We aim to learn policies that generalize to novel viewpoints from widely available, offline single-view RGB robotic trajectory data.
A foundation model for robotic manipulation must be able to perform a multitude of tasks, generalizing not only to different environments and goal specifications but also to varying robotic embodiments. A particular robotic embodiment often comes with its own sensor configuration and perception pipeline. This variety is a major challenge for current systems, which are often trained and deployed with carefully controlled or meticulously calibrated perception pipelines. One approach to training models that can scale to diverse tasks as well as perceptual inputs is to train on a common modality, such as third-person RGB images, for which diverse data are relatively plentiful <cit.>.
A challenge in using these data is that policies learned by current methods struggle to generalize across perceptual shifts for single RGB images. In this paper, we study one ubiquitous and practically challenging shift: when the camera viewpoint is altered. Prior studies have found that policies trained on RGB images collected from fixed viewpoints are consistently unable to generalize to visual inputs from other camera poses <cit.>.
Existing approaches to learning viewpoint invariance include training using augmented data collected at scale in simulation <cit.> or physically varying camera poses when collecting large-scale real robot datasets <cit.>. However, these strategies require resolving the additional challenges of sim-to-real transfer and significant manual human effort, respectively.
In this work, we leverage the insight that 3D priors can be obtained by generative models from large-scale (potentially robot-free) data and used to make robot policies more robust to changes in camera pose. We take a simple data augmentation approach to this problem by sampling views from a 3D-aware image diffusion novel view synthesis (NVS) model during policy training time. In training on these augmented views, the policy becomes robust to images from out-of-distribution camera viewpoints. We refer to this approach as VISTA.
VISTA has several advantages. First, it can leverage large-scale 2D image datasets, which are more diverse than existing robotic interaction datasets with explicit 3D observations. Second, if in-domain robotic data is available, performance may be further improved via finetuning. Third, neither depth information nor camera calibration is required. Fourth, no limitations are placed on the form of the policy. While we focus on imitation learning, VISTA can also be applied to other robotic learning paradigms. Lastly, policy inference time is not impacted, as we do not modify inference behavior.
We first investigate the performance of a diffusion-based novel view synthesis model, ZeroNVS <cit.>, when applied using our data augmentation scheme, and perform an empirical analysis of its performance with respect to various viewpoint distributions. Then, we investigate how finetuning an NVS model with in-domain data of robotic tasks can improve downstream policy robustness for held-out tasks. Finally, we show that these models can be used to learn viewpoint-robust policies from real robotic datasets. We demonstrate the potential for NVS models trained on large diverse robotic data to provide these priors across robot tasks and environments, finding that finetuning ZeroNVS models on the DROID dataset <cit.> can improve downstream real-world policy performance.
§ RELATED WORK
Learning viewpoint-robust robotic policies. Learning deep neural network policies that can generalize to different observational viewpoints has been discussed at length in the literature. One set of approaches effectively augment the input data to a learned policy or dynamics model with additional 2D images rendered from differing camera viewpoints. These renderings are often obtained from simulators <cit.> or by reprojecting 2D images <cit.>. Augmenting training with simulator data can improve robustness on simulation environments, but these methods must then address the challenge of sim-to-real transfer for deployment on real systems. In this work, we study methods for learning invariant policies directly using robot data from the deployment setting, including real robot trajectories. Existing work <cit.> performs view augmentation of real-world wrist camera images; however, this is performed with the goal of mitigating covariate shift as opposed to improving camera pose robustness, and requires many views of a static scene to generate realistic novel views.
Another line of work forms explicit 3D representations of the scene such as point clouds or voxels to leverage equivariance properties <cit.>, or projects from these representations to 2D images originating from canonical camera poses <cit.>. While these approaches have been shown to be robust to novel camera views <cit.>, they require well-calibrated camera extrinsics, which can be practically challenging and time-consuming to collect, and are not present in all existing datasets (for example, the majority of datasets in Open X-Embodiment <cit.> do not provide camera extrinsics).
Rather than rely on explicit 3D representations, a related body of work learns latent representations that are robust to variations in camera pose. These methods often use view synthesis or contrastive learning as a pretraining or auxiliary objective <cit.>, and also often require accurate extrinsics, can be computationally expensive to run at inference time, or impose restrictive requirements on the latent space that can make multi-task learning challenging.
A technique that has shown promise in reducing the observational domain gap in robotic learning settings is the use of wrist-mounted or eye-in-hand cameras as part of the observation space <cit.>. However, this does not obviate the need for third-person observations as it only provides information local to the gripper. We corroborate in our experiments that wrist-mounted camera observations are helpful but not solely sufficient for learning viewpoint-robust policies, and further that the use of wrist cameras can yield improvements orthogonal to the use of augmentation for third-person views.
Single-image novel view synthesis. Single-image novel view synthesis methods aim to reconstruct an image from a target viewpoint given a single image from a source viewpoint. One set of methods for novel view synthesis infers neural radiance fields from one or a few images <cit.>.
Another recent line of work trains diffusion models on images to perform novel view synthesis, and then distills 3D representations from these models <cit.>. These approaches have been extended to scene-level view synthesis <cit.>, making them amenable to robotic manipulation settings.
They have been largely developed, trained, and evaluated on large video datasets; however, to our knowledge, their application in robotic policy learning remains relatively unexplored.
Generative image models in robotics.
Pretrained image generation models have been applied in the context of robotic manipulation via semantic data augmentation <cit.>, where the goal is for the policy to better generalize to unseen backgrounds or objects as opposed to camera viewpoints. Similar generative models have also been applied to improve cross-embodiment transfer of policies <cit.> and as high-level planners using an image subgoal interface <cit.>. Overall, these methods address different challenges and are largely complementary to our method.
§ PRELIMINARIES
§.§ Problem Statement
Figure: Random samples from the two considered evaluation viewpoint ranges.
The techniques we discuss can be flexibly applied to many visuomotor policy learning settings; however, for a systematic and computationally constrained evaluation, we choose to study them in the context of visual imitation learning.
We frame each robotic manipulation problem as a discrete-time partially observed Markov decision process (POMDP), with state space 𝒮, action space 𝒜, transition function P, reward function R, and observation function 𝒪. This observation function maps states and actions into the observation space conditioned on extrinsic parameters E. We assume access to a dataset 𝒟 consisting of M expert demonstrations τ_0:M:
τ_i = { (o_0, a_0, …, o_t, a_t, …, o_T, a_T) }
where T is the total number of timesteps in a particular demonstration. Concretely, the observation o consists of both low-dimensional observations in the form of robot proprioceptive information, as well as RGB image observations o_I ∈ℝ^H × W × 3 captured by a fixed third-person camera with extrinsics E_orig.
The objective is to learn a policy π(a | o) that solves the task, where observed images o_I are captured by a camera with extrinsics E_test sampled from a distribution ℰ_test. Critically, we do not assume access to the environment or the ability to place additional sensors at training time.
§.§ Zero-Shot Novel View Synthesis from a Single Image
We define the single-image novel view synthesis (NVS) problem as finding a function ℳ(I_context, f, E_context, E_target) that, given an input image I_context∈ℝ^H × W × 3 of a scene captured with camera extrinsics (e.g., camera pose) E_context and simplified intrinsics represented by a field of view f, renders an image of the same scene as I_context, captured with camera extrinsics E_target.
To extend this setting to zero-shot novel view synthesis, we further assume that the image I_context depicts a robotic task that was not seen when training ℳ. As we will describe in Section <ref>, we conduct experiments on both models that have never seen robotic data, as well as models finetuned on robotic data from simulated pre-training tasks and large-scale, real-world data.
§ LEARNING VIEW INVARIANCE WITH ZERO-SHOT NOVEL VIEW SYNTHESIS MODELS
In this section, we describe VISTA, the data augmentation scheme for view-invariant policy learning that we study in the remainder of the paper. The method is summarized in Figure <ref>.
To learn viewpoint-invariance, some prior works augment experience with images rendered from virtual cameras in simulation <cit.>. However, we wish to learn viewpoint-invariant policies directly from existing offline datasets, which could be from inaccessible simulated environments or data collected in the real world. Furthermore, many robotic datasets do not contain the multiview observations or depth images needed for 3D reconstruction. Thus, we explore using single image novel view synthesis methods to perform augmentation.
Concretely, given a single-image novel view synthesis model ℳ(I_context, f, E_context, E_target), VISTA uses ℳ to replace each frame of a demonstration trajectory with a synthesized frame with independently randomly sampled target extrinsics E_target∼ℰ_train. That is, we independently replace each observation-action pair (o, a) with (ℳ(o_I, f, E_context, E_target), a). For the sake of systematic evaluation, in our simulated experiments, we assume knowledge of both the initial camera pose E_context and the target distribution ℰ_target. However, the novel view synthesis models we study use only the relative poses between E_context and E_target; absolute poses are not required and are not used in real-world experiments. We assume that the field of view is known.
VISTA has several appealing properties. First, while methods that form explicit 3D representations must either use multi-view images or assume static scenes when performing structure-from-motion, it avoids the computational expense of 3D reconstruction and takes advantage of the fact that a scene is static at any slice in time. Second, VISTA does not add additional computational complexity at inference time, as the trained policy's forward pass remains the same. Lastly, VISTA inherits improvements in the modeling and generalization capability of novel view synthesis models.
We center our analysis around a particular novel-view synthesis model, ZeroNVS <cit.>. ZeroNVS is a latent diffusion model that generates novel views of an input image given a specified camera transformation. It is initialized from Stable Diffusion <cit.> and fine-tuned on a diverse collection of 3D scenes, therefore achieving strong zero-shot performance on a wide variety of scene types. Moreover, as a generative model, it tends to generate novel views which are crisp and realistic, mitigating the domain gap between generated and real images.
Although ZeroNVS provides reasonable predictions even in zero-shot settings, we found that it also has failure modes that generate images that appear to contain extreme close-ups of objects in the scene, potentially due to poor extrinsics in the training dataset. To partially address these scenarios, we simply reject and resample images that have a perceptual similarity (LPIPS) <cit.> distance larger than a value η from the input image, which we found to slightly improve performance.
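For concreteness, the per-frame augmentation, including the rejection step, can be sketched as follows (illustrative Python pseudocode; function and field names are placeholders and not the released implementation):

def augment_demonstrations(demos, nvs_model, fov, E_context, sample_target_pose,
                           lpips_fn, eta, max_tries=5):
    # Replace each observation image with a synthesized view from an independently
    # sampled camera pose; actions and other observation fields are left unchanged.
    augmented = []
    for demo in demos:
        new_demo = []
        for obs, action in demo:
            E_target = sample_target_pose()            # E_target ~ E_train
            img = nvs_model(obs["image"], fov, E_context, E_target)
            for _ in range(max_tries):
                # Reject implausible syntheses (e.g. extreme close-ups) and resample.
                if lpips_fn(img, obs["image"]) <= eta:
                    break
                E_target = sample_target_pose()
                img = nvs_model(obs["image"], fov, E_context, E_target)
            new_demo.append((dict(obs, image=img), action))
        augmented.append(new_demo)
    return augmented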
While many techniques for imitation learning have been proposed,
as a strong baseline that fits our computational budget, we use the implementation of behavior cloning with a Gaussian mixture model output from robomimic <cit.> in our simulated experiments. In real-world experiments, we instead train diffusion policies <cit.> due to their success in learning policies for real robots.
For additional implementation details and pseudocode of VISTA, please see Appendix <ref>.
§ EXPERIMENTAL ANALYSIS
In this section, we perform empirical analyses to answer the following questions:
* Can policies trained with data augmented by novel view synthesis models trained on large-scale out-of-domain datasets improve robustness to novel viewpoints? How do these models compare to existing alternatives?
* Can finetuning novel view synthesis models on robotic data improve the performance of VISTA when applied to unseen tasks with larger viewpoint changes?
* How do methods providing augmented third-person viewpoints interact with strategies for reducing the observational domain gap, such as adding wrist-mounted cameras?
* Can VISTA be applied to learn policies on real robots directly using real-world data? How does finetuning view synthesis models on diverse real robot data affect downstream performance?
Simulated experimental setup. We perform simulated experiments using the robomimic framework built on the MuJoCo simulator <cit.>, with additional tasks introduced in MimicGen <cit.>. For the Lift and Nut Assembly tasks, we use the proficient-human expert demonstration datasets from robomimic for training. For the remainder of the tasks, we train using the first 200 demonstrations of D0 datasets and evaluate using the D0 environment variants as defined by MimicGen.
To contextualize the results, we introduce the following baseline methods:
* Single view. To represent the performance of a model trained without view-invariance, this performs behavioral cloning using only the source demonstration dataset.
* Simulator (oracle). As an upper bound of the performance of per-frame random augmentation for learning viewpoint invariance, this baseline directly uses the simulator to render novel viewpoints.
* Depth estimation + reprojection. This baseline follows the augmentation scheme described in Section <ref>.
It synthesizes novel views from RGB images using a three-stage pipeline.
Because we do not assume access to depth, it first performs metric depth estimation using an off-the-shelf model <cit.>. Next, it lifts the RGBD information into a 3D point cloud with a pinhole camera model and then renders the point cloud into an RGB image at the target camera extrinsics. Finally, because this reprojection often produces partial images, we perform inpainting of “holes” and outpainting of image boundaries using a pretrained diffusion model <cit.>. A sketch of the lift-and-reproject step is given after this list.
* PixelNeRF. To evaluate differences between novel view synthesis models, we evaluate a method that performs per-frame viewpoint augmentation using a PixelNeRF <cit.> model trained on the same mixture of datasets as ZeroNVS. PixelNeRF uses a convolutional encoder to condition a neural radiance field <cit.>, which is then rendered from the novel viewpoint.
Further details regarding baseline implementations and hyperparameters are in Appendix <ref>.
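The core lift-and-reproject step of the depth estimation + reprojection baseline can be sketched as follows (illustrative Python; it assumes known intrinsics K and an estimated depth map, omits z-buffering, and leaves out the separate depth-estimation and diffusion-inpainting models):

import numpy as np

def reproject(rgb, depth, K, T_src_to_tgt):
    # Lift an RGB-D image to a point cloud with a pinhole model and splat it into
    # the target view; pixels left unfilled are returned in a mask for inpainting.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)           # source-camera points
    pts_tgt = T_src_to_tgt[:3, :3] @ pts + T_src_to_tgt[:3, 3:4]    # target-camera points
    proj = K @ pts_tgt
    uv = (proj[:2] / proj[2:]).round().astype(int)
    out = np.zeros_like(rgb)
    mask = np.ones((H, W), dtype=bool)
    valid = (proj[2] > 0) & (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
    out[uv[1, valid], uv[0, valid]] = rgb.reshape(-1, 3)[valid]
    mask[uv[1, valid], uv[0, valid]] = False                        # remaining holes
    return out, mask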
Q1. Performance of pre-trained novel view synthesis models.
First, we seek to evaluate the performance of view synthesis models that rely on large-scale, diverse pretraining.
We test a distribution of test camera poses, denoted perturbations, that is representative of incremental changes, for instance, those of a physical camera drifting over time or subject to unintentional physical disturbance.
Specifically, this distribution is parameterized by a random translation Δ t∼𝒩(0, diag(σ_t^2)) and rotation around a uniformly randomly sampled axis, where the magnitude is sampled from 𝒩(0, σ_r^2). Samples from this range are visualized in Figure <ref>, and example observations are in Appendix <ref>.
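A minimal sketch of how such perturbed test poses can be sampled (illustrative Python; extrinsics are represented here as 4×4 homogeneous matrices, an assumption of the sketch) is:

import numpy as np
from scipy.spatial.transform import Rotation

def sample_perturbed_pose(E_orig, sigma_t, sigma_r, rng=None):
    # Gaussian translation plus a rotation of Gaussian magnitude about a random axis.
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    dR = Rotation.from_rotvec(rng.normal(0.0, sigma_r) * axis).as_matrix()
    E = E_orig.copy()
    E[:3, :3] = dR @ E_orig[:3, :3]
    E[:3, 3] = E_orig[:3, 3] + rng.normal(0.0, sigma_t, size=3)
    return E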
The results are presented in Table <ref>. First, we note that the oracle simulator augmentation scheme is able to reclaim a significant portion of policy performance that is lost by only training on the original data (single view). We find that the depth estimation + reprojection method is able to consistently provide modest improvements to the performance on test viewpoints. Among the fully neural methods, PixelNeRF does not synthesize views with sufficient fidelity, and causes even a drop in performance compared to not doing augmentation. We thus omit this baseline in further evaluations. However, we find that a pretrained ZeroNVS model, despite (likely) having never seen an image of a robotic arm during training, is able to improve novel view performance even further.
Q2. Effect of finetuning view synthesis models on in-domain data.
Next, we investigate whether finetuning these novel view synthesis models on in-domain data can yield improved performance when applied to unseen tasks.
To test this, we study a more challenging distribution of camera poses with a real-world analogue to constructing another view of a given scene. We first compute a sphere centered at the robot base and containing the initial camera pose. We then sample camera poses on the sphere at the same z height and within a 90° azimuthal angle of the starting viewpoint. The radius of the sphere is further randomly perturbed with Gaussian noise with variance σ_r^2. We call this distribution quarter circle arc, with samples shown in Figure <ref> and more details in Appendix <ref>.
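The position sampling for this distribution can be sketched as follows (illustrative Python; we interpret the 90° window as ±45° about the original azimuth, and camera orientation, e.g. re-aiming at the workspace, is handled separately — both are assumptions of the sketch):

import numpy as np

def sample_quarter_arc_position(cam_pos, base_pos, sigma_r, rng=None):
    # Sample a camera position on a sphere centred at the robot base and passing
    # through the original camera, keeping the same z height; the radius is
    # perturbed with Gaussian noise of standard deviation sigma_r.
    rng = np.random.default_rng() if rng is None else rng
    offset = cam_pos - base_pos
    z = offset[2]
    radius = np.linalg.norm(offset) + rng.normal(0.0, sigma_r)
    azimuth = np.arctan2(offset[1], offset[0]) + rng.uniform(-np.pi / 4, np.pi / 4)
    rho = np.sqrt(max(radius**2 - z**2, 0.0))      # horizontal distance at the same z
    return base_pos + np.array([rho * np.cos(azimuth), rho * np.sin(azimuth), z])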
We finetune the ZeroNVS model on a multi-view dataset generated using eight MimicGen tasks: , , , , , , , and . Additional finetuning details can be found in Appendix <ref>. We then test the model, denoted ZeroNVS (MimicGen), when used for augmentation on datasets of held-out tasks, which we categorize below by their relationship with the finetuning tasks.
* Unseen Object: Tasks contain objects that are not present in any finetuning tasks.
* Shared Object: Tasks contain objects that are present in one or more finetuning tasks, but in the context of a set of different objects or scenes.
* X-Embodiment: The same task is present in the finetuning data, but is performed by a different robot (Rethink Sawyer instead of Franka Panda).
Quantitative results are presented in Table <ref>. In this more challenging setting, improvements from the depth estimation + reprojection baseline are much more limited, likely because many requested novel viewpoints are outside the original camera's viewing frustum. The pretrained ZeroNVS model yields modest improvements on all tasks. We see the best performance when using the model finetuned on the MimicGen data, often doubling the success rate of the next best method.
Qualitatively, as seen in Figure <ref>, we find that the ZeroNVS model finetuned on MimicGen data produces higher fidelity images, particularly with respect to the robotic arm's appearance.
[Figure: Performance of novel view–augmented policies when provided with additional wrist camera observations, which are consistent between train and test settings. We find as per expectation that wrist observations improve performance across the board, as they are agnostic to third-person viewpoint. These improvements complement those of view augmentation methods.]
Q3. Use of wrist-mounted cameras to reduce domain gap.
Wrist-mounted cameras are a popular and effective approach to improving visuomotor policy performance and reducing domain shift due to changes in visual observations <cit.>. In this experiment, we examine the effect of using wrist-camera observations in conjunction with augmented third-person views. The results are shown in Figure <ref>. We see that adding wrist camera observations slightly improves performance on the threading task for all augmentation techniques, suggesting that methods for view-invariance for third-person views can be complementary to the use of wrist cameras. For the task, the performance of a policy using solely wrist observations, which are unperturbed at test time, is 58%. This is better than even our strongest policy using a third-person view augmentation model. However, the performance of wrist-camera-only policies may be limited for many tasks <cit.>.
For instance, in , the oracle augmentation + wrist camera policy achieves a 73% success rate using the original third-person viewpoint.
Q4. Performance on real robots.
We further investigate the performance of when training policies on real-world data.
Critically, we also seek to validate whether finetuning NVS models on large-scale real multi-view robotic data can improve performance for real-world policies.
To test this, we first finetune a ZeroNVS model on a subset of the DROID <cit.> dataset, which contains over 75k trajectories of a variety of tasks in diverse environments. We randomly sample a subset of 3000 trajectories in DROID and sample 10 random timesteps within each trajectory as “scenes” for finetuning, using the four external views from two stereo cameras as paired data.
We then collect a dataset consisting of 150 expert demonstrations on a Franka Emika Panda robot for the task from a single camera viewpoint. We train diffusion policies <cit.> on this data following the configuration in DROID <cit.> with four policy variants as follows: Original data + wrist uses the third-person camera view and wrist view as policy inputs. ZeroNVS aug + wrist additionally performs augmentation on the third-person view using the ZeroNVS model from the original paper. ZeroNVS (DROID) + wrist uses the NVS model finetuned on DROID data instead. Finally, representing an alternative approach that sidesteps the viewpoint shift problem entirely by only using the wrist camera, which is always fixed relative to the end effector, wrist only is a baseline that does not use third-person camera inputs. We evaluate these policies when the external observations are captured from the original viewpoint and four novel views (see Figure <ref>).
Full finetuning and real world experimental setup details are in Appendix <ref>.
The results, presented in Table <ref>, indicate that is also effective in improving policy viewpoint robustness in the real world settings. Further, we see a performance gain from finetuning the NVS model on a large, diverse robotic dataset. In contrast, the policy trained on the single viewpoint data struggles to reliably reach toward the cup under viewpoint changes.
§ CONCLUSION AND LIMITATIONS
Limitations. While is effective at improving the viewpoint robustness of policies, it does have certain limitations. First, the computational expense of generating novel views can be significant. Second, augmenting views during policy training can increase training time and therefore computational expense. Third, sampling views during data augmentation requires some distribution of poses from which to sample. This distribution must cover the reasonable space of views expected at deployment time. Fourth, single-image novel view synthesis models often perform poorly when the novel view is at a camera pose that differs dramatically from the original camera pose, and this limits the distribution from which views may be sampled during data augmentation.
Conclusion. In this paper, we presented , a simple yet effective method for making policies robust to changes in camera pose between training and deployment time. Using 3D priors from single image view synthesis methods trained on large-scale data, performs data augmentation to learn policies invariant to camera pose in an imitation learning context. Experiments in both simulated and real world environments demonstrated improved robustness to novel viewpoints of our approach over baselines, particularly when using view synthesis models finetuned on robotic data (though applied zero-shot with respect to tasks). There are a number of promising directions for future work, but of particular interest is studying the performance of this data augmentation scheme at scale across a large dataset of robotic demonstrations.
We thank Hanh Nguyen and Patrick “Tree” Miller for their help with the real robot experiments, and Weiyu Liu, Kyle Hsu, Hong-Xing “Koven” Yu, Chen Geng, Ruohan Zhang, Josiah Wong, Chen Wang, Wenlong Huang, and the SVL PAIR group for helpful discussions. This work is in part supported by the Toyota Research Institute (TRI), NSF RI #2211258, and ONR MURI N00014-22-1-2740. ST is supported by NSF GRFP Grant No. DGE-1656518.
Please see our project website at <https://s-tian.github.io/projects/vista> for code implementations, pretrained model weights, videos, and additional visualizations.
§ DETAILS OF MODEL IMPLEMENTATIONS
All novel view synthesis methods that we consider generate novel views at a resolution of 256 × 256 given RGB images at a resolution of 256 × 256. The synthesized images are later downsampled for policy training. To clarify how these models are used in a policy learning pipeline, we provide pseudocode of the algorithm in Algorithm <ref>.
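Concretely, the way these synthesizers plug into policy training can be sketched as follows; the function names (sample_target_pose, synthesize_view) and the dictionary-based transition format are our own placeholders, not the actual implementation:

def augment_dataset(dataset, synthesize_view, sample_target_pose, n_aug=1):
    """Offline novel-view data augmentation (simplified sketch).

    dataset: list of transitions, e.g. {"obs": rgb_256x256, "cam_pose": 4x4, "action": a}.
    synthesize_view(obs, src_pose, tgt_pose) -> novel RGB image (e.g. a ZeroNVS wrapper).
    sample_target_pose(src_pose) -> target camera pose drawn from the augmentation range.
    """
    augmented = []
    for t in dataset:
        for _ in range(n_aug):
            tgt_pose = sample_target_pose(t["cam_pose"])
            novel_obs = synthesize_view(t["obs"], t["cam_pose"], tgt_pose)
            # Action labels are unchanged: only the viewpoint of the observation moves.
            augmented.append({**t, "obs": novel_obs})
    return augmented

Policies are then trained on the augmented transitions (optionally mixed with the originals) using the standard imitation learning pipeline, with images downsampled as usual.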
§.§ ZeroNVS
ZeroNVS is a latent diffusion model that generates novel views of an input image given a specified camera transformation. It is initialized from Stable Diffusion and then fine-tuned on a diverse collection of 3D scenes, and therefore achieves strong zero-shot performance on a wide variety of scene types. Moreover, as a generative model, it tends to generate novel views which are crisp and realistic, mitigating the domain gap between generated and real images. This distinguishes ZeroNVS from methods such as PixelNeRF <cit.>, which are trained with regression-based losses and tend to produce blurry novel views even for small camera motion.
We use the implementation and pretrained checkpoint provided by <cit.>.
As mentioned in Section <ref>, although ZeroNVS largely produces reasonable views even zero-shot, it can sometimes produce images with significant visual artifacts. To filter these out, we reject and resample images that have a LPIPS <cit.> distance larger than a hyperparameter η from the input image. We set η = 0.5 for all simulated experiments and η = 0.7 for real experiments. We do not extensively tune this hyperparameter. If the model fails to produce an image with distance < η after 5 tries, the original image is returned.
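A sketch of this rejection step, where generate_view stands in for one ZeroNVS sampling call and lpips_distance for an LPIPS metric (both placeholders):

def synthesize_with_rejection(image, target_pose, generate_view, lpips_distance,
                              eta=0.5, max_tries=5):
    """Resample novel views until the LPIPS distance to the input falls below eta."""
    for _ in range(max_tries):
        candidate = generate_view(image, target_pose)
        if lpips_distance(image, candidate) < eta:
            return candidate
    return image  # fall back to the original image if no sample passes the threshold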
ZeroNVS also requires as input a scene scale parameter. To determine the value of the scene scale for simulated experiments, we perform view synthesis using the pretrained ZeroNVS checkpoint on a set of 100 test trajectories for the and environments, and compute the LPIPS score between the ZeroNVS rendered images and ground truth simulator renders for values {0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95}. We find that the lowest error across the tasks is achieved at 0.6 and thus use 0.6 for all environments, including the real robot experiments.
While the behavior of the ZeroNVS model is somewhat sensitive to scene scale, we believe this may be alleviated by selecting a wider viewpoint randomization radius at training time, which is corroborated by our real robot experiments.
When sampling, we perform 250 DDIM steps and use a DDIM η of 1.0. We use a field of view (FOV) of 45 degrees for simulated experiments (obtained from the simulator camera parameters) and FOV of 70 degrees for the real world experiments (obtained from the Zed 2 camera datasheet).
Sampling the diffusion model for NVS is roughly similar to sampling from the vanilla Stable Diffusion model;
it takes on average 8.7 seconds to generate a single 256 × 256 image with ZeroNVS using these settings on a single NVIDIA RTX 3090 GPU.
§.§ Depth estimation + Reprojection baseline
This baseline represents a geometry-based approach that leverages depth estimation models trained on large-scale, diverse data.
First, we use ZoeDepth (ZoeD_NK) <cit.>, an off-the-shelf model, to perform metric depth estimation on the input RGB image. Next, we deproject the images into pointclouds using a pinhole camera model. We rasterize an image from the points using the Pytorch3D point rasterizer <cit.>, setting each point to have a radius of 0.007 and 8 points per pixel. Finally, we use a publicly available Stable Diffusion inpainting model (<https://huggingface.co./runwayml/stable-diffusion-inpainting>) to inpaint regions that are empty after rasterization. We use 50 denoising steps as per the defaults.
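The geometric core of this pipeline can be sketched as below; estimate_depth, rasterize_points, and inpaint are placeholders for ZoeDepth, the point rasterizer, and the inpainting model, respectively, and the exact conventions (camera frames, hole masking) are assumptions on our part:

import numpy as np

def reproject_view(rgb, K, T_src_to_tgt, estimate_depth, rasterize_points, inpaint):
    """Depth estimation + reprojection baseline (simplified sketch).

    rgb: (H, W, 3) source image; K: (3, 3) pinhole intrinsics.
    T_src_to_tgt: (4, 4) rigid transform from the source to the target camera frame.
    """
    H, W, _ = rgb.shape
    depth = estimate_depth(rgb)                              # (H, W) metric depth
    # Deproject every pixel to a 3D point in the source frame: X = z * K^{-1} [u, v, 1]^T.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    pts_src = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
    # Move the colored point cloud into the target camera frame.
    pts_h = np.concatenate([pts_src, np.ones((pts_src.shape[0], 1))], axis=1)
    pts_tgt = (pts_h @ T_src_to_tgt.T)[:, :3]
    colors = rgb.reshape(-1, 3)
    # Rasterize the points from the target view, then fill the holes by inpainting.
    novel, hole_mask = rasterize_points(pts_tgt, colors, K, image_size=(H, W))
    return inpaint(novel, hole_mask)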
It takes on average 2.8 seconds to generate a single 256 × 256 image with this baseline on a single NVIDIA RTX 3090 GPU.
§.§ PixelNeRF
For PixelNeRF <cit.>, we use the implementation from the original authors at <https://github.com/sxyu/pixel-nerf>. We use a pretrained model trained on the same datasets as ZeroNVS <cit.>.
It takes on average 5.8 seconds to generate a single 256 × 256 with PixelNeRF on a single NVIDIA RTX 3090 GPU.
§ SIMULATED EXPERIMENTAL DETAILS
Here we provide details regarding the simulated experimental setup. As a high level goal, we aim to minimize the differences between our setup and existing robotic learning pipelines to demonstrate how this augmentation technique can be generally and easily applied across setups.
§.§ Simulation Environments and Datasets
Our simulated experiments use environments created in the MuJoCo simulator and packaged by the <cit.> and MimicGen <cit.> frameworks.
For the Lift, PickPlaceCan, and Nut Assembly tasks, the training datasets are the Proficient-Human datasets for those tasks from <cit.> and consist of 200 expert demonstrations each. For all MimicGen tasks (Threading, Hammer Cleanup, Coffee, Stack) the datasets consist of the first 200 expert trajectories for the “core” MimicGen-generated datasets, downloaded from <https://github.com/NVlabs/mimicgen_environments>.
§.§ Details for Training and Test Viewpoints
We use the same distribution of viewpoints at training time for augmenting the dataset and when testing the policies. Note, however, that images generated by novel view synthesis models are not guaranteed to actually be from the target viewpoint – only the oracle that uses the simulator to render the scene satisfies this.
Because there are no widely adopted settings for testing robotic policies on novel views, and because the effect of a particular view distribution is highly environment dependent, the hyperparameters for the view distributions were selected by hand by the authors to approximate reasonable distributions that a robot learning practitioner may encounter in practice. We hope these distributions may also serve as reasonable testing settings for evaluating future methods on these tasks.
§.§.§ Perturbations
This set of viewpoints is representative of incremental changes, for instance, that of a physical camera drifting over time or subject to unintentional physical disturbance.
Specifically, this distribution is parameterized by a random translation Δ t∼𝒩(0, diag(σ_t^2)) and rotation around a uniformly randomly sampled 3D axis, where the magnitude is sampled from 𝒩(0, σ_r^2). Specifically, we set σ_t = 0.03 m and σ_r = 0.075 rad. Samples of observations taken from viewpoints drawn from this distribution are shown in Figure <ref>.
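For reference, sampling such a perturbed pose might look like the following sketch (whether the random rotation composes on the left or the right of the original orientation is an implementation detail not specified here, so this is an assumption):

import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_perturbed_pose(T_cam, sigma_t=0.03, sigma_r=0.075, rng=None):
    """Perturb a 4x4 camera pose by a small random translation and rotation."""
    rng = rng or np.random.default_rng()
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.normal(0.0, sigma_r)                  # rotation magnitude (radians)
    delta_R = R.from_rotvec(angle * axis).as_matrix()
    T_new = T_cam.copy()
    T_new[:3, :3] = delta_R @ T_cam[:3, :3]           # assumed left-composition
    T_new[:3, 3] += rng.normal(0.0, sigma_t, size=3)  # translation offset (meters)
    return T_new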
§.§.§ Quarter Circle Arc
This is a more challenging distribution of camera poses with a real-world analogue to constructing another view of a given scene. We first compute a sphere centered at the robot base and containing the initial camera pose. We then sample camera poses on the sphere at the same z height and within a 90° azimuthal angle of the starting viewpoint. The radius of the sphere is further randomly perturbed with Gaussian noise with variance σ_r^2. Specifically, the radius of the sphere is 0.7106 m for all simulated environments, which is the distance between the camera and the robot base in the Lift task, and σ_r = 0.05 m.
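A sketch of sampling a camera position from this distribution (we assume the 90° window is symmetric about the original azimuth and that the camera is re-oriented to look at the robot base afterwards):

import numpy as np

def sample_arc_position(base_pos, cam_pos, sigma_r=0.05, max_az=np.deg2rad(90.0), rng=None):
    """Sample a camera position on a sphere centered at the robot base.

    Keeps the original z height, perturbs the sphere radius by N(0, sigma_r^2),
    and offsets the azimuth within max_az of the original viewpoint.
    """
    rng = rng or np.random.default_rng()
    rel = cam_pos - base_pos
    radius = np.linalg.norm(rel) + rng.normal(0.0, sigma_r)
    azimuth = np.arctan2(rel[1], rel[0]) + rng.uniform(-max_az, max_az)
    z = rel[2]
    rho = np.sqrt(max(radius ** 2 - z ** 2, 0.0))   # horizontal distance at the same height
    return base_pos + np.array([rho * np.cos(azimuth), rho * np.sin(azimuth), z])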
§.§ Finetuning ZeroNVS on MimicGen Datasets
We finetune the ZeroNVS model on datasets from the MimicGen data of 8 tasks: , , , , , , , and . For each environment, we take the first 200 trajectories of the “core” MimicGen dataset for that task with the maximum initialization diversity (e.g. if is available in variants D0, D1, and D2, we take D2) and simulate 10 random viewpoints from the quarter circle arc distribution for each image in the dataset. We supply this as training data to ZeroNVS, using the training settings from the original ZeroNVS paper but changing the optimizer from AdamW to Adam and decreasing the learning rate to 2.5e-5, and decreasing the batch size to 512 due to computational constraints. We finetune the model for 5000 steps using four NVIDIA L40S GPUs. This takes approximately 16 hours of wall clock time.
§.§ Policy Learning
We use the same policy training settings for all simulated experiments, taken from the behavior cloning implementation in . The output of the policy network is a Gaussian mixture model. A brief overview of hyperparameters, corresponding directly to configuration file keys, is listed in Table <ref>. Note that we do not tune these hyperparameters and simply use them as sensible defaults. We train each policy using a single NVIDIA TITAN RTX GPU.
Because we generate the augmented dataset prior to performing policy learning, the computational cost of training policies is split between augmented dataset generation and policy learning. In our experiments these take fairly similar durations (around 10-20 hours for dataset generation and 15 hours for policy learning, varying slightly by task); however, to achieve this we parallelize data augmentation across 10 GPUs. This roughly doubles the total wall clock time required to train the policies.
§ REAL WORLD EXPERIMENTAL DETAILS
Next we provide details regarding the real world experimental setup. As a high level goal, we aim to minimize the differences from existing robotic learning pipelines to demonstrate how this augmentation technique can be generally and easily applied across setups.
§.§ Real World Robot Setup
We use a Franka Research 3 (FR3) robot in our real world experiments. The hardware setup is otherwise a replica of that introduced by <cit.>. Specifically, the robot is mounted to a mobile desk base (although we keep it fixed in our experiments) and two ZED 2 stereo cameras provide observations for the robot. An overview of the real-world robot setup is shown in Figure <ref>.
[Figure: Experimental setup for real robot evaluation. Here we show the testing setup for one particular novel camera view, camera 5. The original camera view that data was collected using is shown by the orange arrow. We use the left camera of each stereo pair.]
We use a Meta Quest 2 controller (also as per the DROID hardware setup) to collect teleoperated expert demonstrations. We collect 150 human expert demonstrations of the task, randomizing the position of the cup and saucer after each task. Each demonstration trajectory lasts approximately 15 seconds of wall clock time.
When performing evaluations, we score task completion based on two stages: 1) Reaching the cup in a grasp attempt based on determination by a human rater and 2) Completing the task, which means that the cup is above and touching the surface of the saucer at some point during the trajectory.
§.§ Finetuning ZeroNVS on the DROID Dataset
To finetune ZeroNVS on the DROID dataset, we first collect a random subset of 3000 trajectories from the DROID dataset. Then, for each trajectory, we uniformly randomly sample 10 timestamps from the duration of the video, and consider the trajectory frozen at each of those times as a “scene”. Thus, we effectively have 30000 scenes. For each scene, we extract 4 views, which correspond to stereo images from the two external cameras. Although the DROID dataset does contain wrist camera data, we do not use it, as the wrist camera poses are much more challenging for synthesizing novel views.
We then perform depth estimation for each image using a stereo depth model. We then center crop all images to be square, and resize them to 256 × 256 to fit the existing ZeroNVS models. We obtain camera extrinsics from the DROID dataset, and use simplified intrinsics assuming a camera FOV of 68 degrees for all cameras, which we obtained from a single randomly sampled camera in the dataset. In reality, the FOV differs slightly for each camera due to hardware differences, and slightly better results may be obtained by using per-camera intrinsics.
As in the simulated finetuning experiments, we again use the training settings from the original ZeroNVS paper but change the optimizer from AdamW to Adam and decrease the learning rate to 2.5e-5, and decrease the batch size to 512 due to computational constraints. We use 29000 scenes for training and 1000 for validation. As an attempt to reduce overfitting, we mix in a single shard of 50 scenes each from the CO3D and ACID datasets which are sampled for each training sample with probability 0.025 each. DROID data is sampled with probability 0.95. We did not extensively validate the effect of this data mixing due to computational cost of finetuning the model repeatedly, and it is likely unnecessary. We finetune the model for 14500 steps using four NVIDIA L40S GPUs. This takes approximately 50 hours of wall clock time.
§.§ Policy Learning
Training augmentation viewpoints. For the real world experiments, we do not have access to the test viewpoint distribution. To sample viewpoints for ZeroNVS data augmentations for these experiments, we sample from a distribution parameterized in the same way as the “perturbation” range in the simulated experiments, but with a vastly increased variance in translation and rotation magnitude intending to cover a wide range of possible test viewpoints.
This distribution is parameterized by a random translation Δ t∼𝒩(0, diag(σ_t^2)) and rotation around a uniformly randomly sampled 3D axis, where the magnitude is sampled from 𝒩(0, σ_r^2). Specifically, we set σ_t = 0.15 m and σ_r = 0.375 rad.
[Figure: Results for the ablation over the number of augmented transitions per original dataset transition. Overall, we see modest performance improvements from increasing the amount of augmented data, across both the oracle and the learned NVS model.]
Policy learning. For policy learning on the real robot, we train diffusion policies <cit.>. Specifically, we use the implementation from the evaluation in the DROID paper <cit.> with language conditioning removed. The input images are of size 128 × 128, and both random color jitter and random crops (to 116 × 116) are applied to the images during training. We train all policies for 100 epochs (50000 gradient steps), using 2 NVIDIA RTX 3090 or RTX A5000 GPUs.
§ ADDITIONAL EXPERIMENTS
§.§ Increasing Number of Augmented Transitions
In the experiments presented in Section <ref>, we perform data augmentation via novel view synthesis by doing offline preprocessing of the dataset, augmenting and replacing each transition with a single augmented transition. However, many random data augmentation strategies for neural network training perform augmentation “on-the-fly”, applying augmentations to each particular batch. This increases the effective dataset size. Augmentation with novel view synthesis methods is too computationally expensive to apply per batch with our computational budget, but we are still interested in understanding how the performance of trained policies is affected by increasing the number of augmented trajectories for each original dataset trajectory.
For the task with viewpoints sampled from the “quarter circle arc” distribution, we trained policies on datasets containing 1, 2, 3, 4, and 5 augmented transitions per dataset transition for the simulator (oracle) and ZeroNVS (MimicGen finetuned) models.
The results are shown in Figure <ref>. We find that increasing the number of augmented transitions per original dataset transition yields modest improvements with both models, although there is a surprising dip in performance when using 5 augmented transitions for the ZeroNVS (MimicGen finetuned) model.
§.§ Multi-View Masked World Models Comparison
Here we conduct a baseline comparison to the multi-view masked autoencoding (MV-MAE) method from Multi-View Masked World Models (MV-MWM) <cit.>. While MV-MWM has a similar motivation to our work, it has a very different problem setting compared to ours: they assume training-time access to a multi-view dataset, while we assume only access to an offline dataset of trajectories captured from a single viewpoint. Thus we adapt the imitation learning approach described in the MV-MWM work to our finetuning experimental setup. Specifically, we train a MV-MAE, using all hyperparameters from <cit.> on the finetuning dataset from Experiment Q2. Of particular note, we use a higher rendering resolution for this baseline (96 × 96 compared to 84 × 84 for policies trained using our novel view augmentation) to match the image resolution used by <cit.>.
We train for 5000 steps, performing early stopping as we observe overfitting by monitoring the validation loss on datasets for the , , , and tasks. We then freeze the pre-trained encoder and use it to train policies on single-view datasets, testing on the quarter circle arc test view distribution. We show the results in Table <ref>. Our approach significantly outperforms the multi-view masked autoencoding method.
§ ADDITIONAL QUALITATIVE RESULTS
§.§ Real World Novel View Synthesis
In Figure <ref>, we provide additional qualitative results of novel views synthesized for the real world task. We show synthesized images for views sampled from the training view distribution described in Appendix <ref>.
§.§ Saliency Analysis of Learned Policies
To understand how different combinations of observation viewpoints qualitatively affect learned policies, we additionally conduct an analysis of saliency maps of learned policies trained on third-person views only compared to combined third-person and wrist cameras.
Specifically, we visualize saliency maps of convolutional layers of both simulated and real-world policies using GradCAM++ <cit.> in Figure <ref>. We find that wrist camera observations tend to have salient features at locations corresponding to objects nearby or underneath the gripper. Policies with only third-person camera views as input tend to have more salient features corresponding to the robotic arm or gripper itself.
For GradCAM++, we choose the target layer to be the common choice <cit.> of the last convolutional layer in ResNet18 or ResNet50 for simulated and real policies respectively. For the policies the target model output is the mean of the output Gaussian mixture model action distribution with the highest probability of being selected (largest logit). For the real-world policies the target model output is the mean of the output denoised action sequence. We visualize saliency maps on data from viewpoints sampled from the same test distributions as in Experiment Q2 (quarter-circle arc) and Experiment Q4 respectively.
|
http://arxiv.org/abs/2409.03513v1 | 20240905132350 | Bias correction of posterior means using MCMC outputs | [
"Yukito Iba"
] | stat.ME | [
"stat.ME",
"cond-mat.dis-nn"
] |
The Institute of Statistical Mathematics,
10-3 Midori cho, Tachikawa City, Tokyo 190-8562, Japan.
[email protected]
§ ABSTRACT
We propose algorithms for addressing the bias of the posterior mean when used as an estimator of parameters. These algorithms build upon the recently proposed Bayesian infinitesimal jackknife approximation (<cit.>) and can be implemented using the posterior covariance and third-order combined cumulants easily calculated from MCMC outputs. Two algorithms are introduced: The first algorithm utilises the output of a single-run MCMC with the original likelihood and prior to estimate the bias. A notable feature of the algorithm is its ability to estimate the definitional bias (<cit.>), which is crucial for Bayesian estimators. The second algorithm is designed for high-dimensional and sparse data settings, where a “quasi-prior” for bias correction is introduced. The quasi-prior is iteratively refined using the output of the first algorithm as a measure of the residual bias at each step. These algorithms have been successfully implemented and tested for parameter estimation in the Weibull distribution and logistic regression in moderately high-dimensional settings.
Bias correction of posterior means using MCMC outputs
Yukito Iba
September 9, 2024
=====================================================
Keywords.
Bayesian statistics;
Markov Chain Monte Carlo;
bias correction;
infinitesimal jackknife;
posterior cumulant
§ INTRODUCTION
This paper presents algorithms for estimating and correcting the bias of the posterior mean when used as an estimator of parameters. We discuss two algorithms, both of which are based on posterior covariance and third-order combined cumulants.
The first algorithm utilizes the output of a single run of MCMC with the original likelihood and prior. A simple formula can be applied to any likelihood and prior, and the estimator is automatically computed from the posterior samples; no model-specific analytical calculation is required.
The second algorithm is designed for high-dimensional and sparse data settings, where a “quasi-prior” for bias correction is introduced. The quasi-prior is improved iteratively using the output of the first algorithm as a measure of the residual bias at each step. Typically, after several MCMC runs, the bias correction by the first algorithm yields satisfactory results; it is important to note that we do not need to entirely remove the bias through this iteration. Experiments show that the second algorithm is effective, for example, in models with 20 or 60 parameters.
These algorithms heavily rely on the recently proposed idea of the Bayesian infinitesimal jackknife approximation (Bayesian IJK, <cit.>). While frequentist covariance is represented by the posterior covariance in <cit.>, our algorithm estimates bias using posterior cumulants; we also refer to a related idea suggested in <cit.>. The proposed algorithms, however, accommodate the following novel features:
* The definitional bias (<cit.>) is included in the proposed algorithms; this is essential for the Bayesian estimators.
* Iterative tuning of the quasi-prior is introduced in the second algorithm. This remarkably improves the performance of the algorithm in high-dimensional and sparse-data settings.
Details of (a) and (b) are discussed in the remainder of the paper. Here we provide a simple example of the definitional bias. Consider a binomial likelihood with a beta prior Beta(α,β) on the success probability q, and the posterior mean
[q] = (X+α)/(n+α+β)
as an estimator of q, where n and X are the numbers of trials and successes, respectively. In this context, the definitional bias b_0 is expressed as
b_0 = ([X]+α)/(n+α+β) - q_0 = (nq_0+α)/(n+α+β) - q_0,
where [X] denotes the population average of X. Since the estimator [q] is linear in X, the bias caused by the non-linearity of the estimator is zero. The definitional bias b_0 is often disregarded in discussions based on the von Mises expansion or the nonparametric bootstrap; in many theoretical analyses, the “bias” of an estimator is defined as the deviation from θ_0+b_0 rather than from the population value θ_0 of the parameter. However, numerical experiments indicate that the definitional bias b_0 can be significantly large for posterior mean estimators even when seemingly noninformative priors are used. This contrasts with the case of the maximum likelihood estimator, where b_0 is always 0. In this study, we introduce a simple method to estimate b_0 using the posterior covariance. The proposed estimator may fail in sparse data settings, but this issue is addressed by the use of the quasi-prior and its iterative improvement, as employed in the second algorithm.
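For instance, with the illustrative values n=30, a uniform Beta(1,1) prior, and q_0=0.1 (chosen arbitrarily for illustration), the definitional bias is already non-negligible:

n, alpha, beta, q0 = 30, 1.0, 1.0, 0.1        # illustrative values only
b0 = (n * q0 + alpha) / (n + alpha + beta) - q0
print(b0)                                      # 0.025: the "flat" prior still biases the posterior mean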
The rest of the paper is organized as follows: In Sec. <ref>, we introduce the proposed algorithms. In sections <ref> and <ref>, illustrative examples are presented for these algorithms. In Sec. <ref>, the proposed algorithms are derived from the jackknife approximation to the bias. Sec. <ref> provides a summary and conclusion. Appendix <ref> defines the bias and definitional bias by means of the von Mises expansion and Appendix <ref> derives the jackknife bias correction formula.
§ PROPOSED METHOD
§.§ Settings and notation
Let us denote a sample of size n from the population G as X^n=(X_1,…, X_n).
The posterior distribution is defined by
p(θ| X^n ) =
exp{∑_i=1^nℓ(X_i; θ)}p(θ) /∫exp{∑_i=1^nℓ(X_i;θ')}p(θ') dθ',
where p(θ) is a prior density on θ=(θ_1,⋯, θ_K) and ℓ(x;θ)=log p(x |θ) is log-likelihood of the model.
Our objective is to correct the frequentist bias in the posterior mean defined by
[A] = ∫ A(θ)p(θ| X^n) dθ
of given statistics A(θ). We also define the posterior covariance and a third-order combined posterior cumulant of the arbitrary statistics A(θ), B(θ), and C(θ) as
[A(θ),B(θ)] =
[(A(θ)-[A(θ)])( B(θ)-[B(θ)])],
[A(θ),B(θ), C(θ)] =
[(A(θ)-[A(θ)])(B(θ)-[B(θ)])(C(θ)-[C(θ)])].
§.§ Algorithm 1
In the algorithm 1, the bias b(A;n) is estimated using the formula
b̂(A;n)= - ∑_i=1^n [A(θ), ℓ(X_i; θ)]+1/2∑_i=1^n [A(θ), ℓ(X_i; θ), ℓ(X_i; θ)].
In practice, the posterior covariance and cumulants in the above formula are estimated from posterior samples obtained from a single MCMC run. The resultant algorithm is summarized as follows:
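In code form, a minimal sketch might look as follows, assuming S posterior draws θ^(s) and the n×S matrix of pointwise log-likelihoods ℓ(X_i;θ^(s)) are available; the function below is our illustration, not part of any package:

import numpy as np

def bias_hat(a_draws, loglik_draws):
    """Algorithm 1: estimate the bias of the posterior mean of A(theta).

    a_draws:      shape (S,)   -- A(theta^(s)) over S posterior draws.
    loglik_draws: shape (n, S) -- ell(X_i; theta^(s)) for observation i and draw s.
    Returns (b0_hat, b2_hat, b_hat) with b_hat = b0_hat + b2_hat.
    """
    a_c = a_draws - a_draws.mean()
    l_c = loglik_draws - loglik_draws.mean(axis=1, keepdims=True)
    b0_hat = -(a_c * l_c).mean(axis=1).sum()            # -sum_i Cov[A, ell_i]
    b2_hat = 0.5 * (a_c * l_c ** 2).mean(axis=1).sum()  # (1/2) sum_i Cum[A, ell_i, ell_i]
    return b0_hat, b2_hat, b0_hat + b2_hat

The bias-corrected point estimate is then the posterior mean of A(θ) minus b_hat.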
§.§ Algorithm 2
In algorithm 2, simultaneous bias correction of all parameters is essential. For simplicity, here we restrict ourselves to the simultaneous bias correction of the original parameters (θ_1,⋯, θ_K); more general cases, including an arbitrary statistic A(θ), may be dealt with by a change of parameters.
As a basis of the algorithm, we introduce a modified posterior distribution defined by
p_λ(θ| X^n ) =
exp{∑_i=1^nℓ(X_i; θ)-∑_k=1^K λ_k θ_k }p(θ) /∫exp{∑_i=1^nℓ(X_i;θ')-∑_k=1^K λ_k θ'_k}p(θ') dθ',
where λ=(λ_1,⋯,λ_K) is a vector of constants. An average [ ], covariance [ ], and third-order combined cumulant [ ] with respect to the distribution (<ref>) are defined in the same manner as [ ], [ ], and [ ] are defined above, respectively. Using them, we define
b̂(θ_k;n;λ)= - ∑_i=1^n [θ_k, ℓ(X_i; θ)]+1/2∑_i=1^n [θ_k, ℓ(X_i; θ), ℓ(X_i; θ)] ,
and C(θ_k,θ_k^';n;λ)=[θ_k, θ_k^'].
Hereafter, the vector whose components are b̂(θ_k;n;λ) is denoted as b̂(θ;n;λ), while the matrix whose (k,k^') component is C(θ_k,θ_k^';n;λ) is expressed as C(θ;n;λ).
The proposed procedure is an iterative improvement of the constants λ=(λ_1,⋯,λ_K), which aims to reduce the bias b̂(θ;n;λ) estimated in each step. Let us denote the value of λ at step m as λ^(m) and define Δλ^(m) as the vector that solves the linear equation
C(θ;n;λ^(m)) Δλ^(m) = b̂(θ;n;λ^(m)).
Using Δλ^(m), the value of λ is updated as
λ^(m+1) = λ^(m) + δ Δλ^(m),
where 0< δ≤ 1 is a constant in the algorithm, the value of which is chosen to prevent oscillation of λ. Since C(θ;n;λ^(m)) is the matrix of posterior covariances between parameters, the above equation has a unique solution unless the parameters are unidentifiable or extremely correlated.
An essential point is that we need not entirely remove the bias by the iteration procedure. Our strategy is to reduce the magnitude of the bias towards the range where the approximation (<ref>) is sufficiently accurate, then estimate it using (<ref>).
The resultant algorithm is summarized as follows:
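In code form, one iteration might look like the following sketch, where run_mcmc(lam) is a placeholder for sampling the λ-modified posterior (<ref>) and returning parameter draws and pointwise log-likelihoods:

import numpy as np

def update_lambda(lam, run_mcmc, delta=0.2):
    """One step of Algorithm 2: refine the quasi-prior coefficients lambda.

    run_mcmc(lam) -> (theta_draws: (S, K), loglik_draws: (n, S)) under the modified posterior.
    Returns the updated lambda and the bias estimate at the current lambda.
    """
    theta_draws, loglik_draws = run_mcmc(lam)
    S = theta_draws.shape[0]
    theta_c = theta_draws - theta_draws.mean(axis=0, keepdims=True)
    l_c = loglik_draws - loglik_draws.mean(axis=1, keepdims=True)
    cov_ik = (l_c @ theta_c) / S           # (n, K): Cov_lambda[theta_k, ell_i]
    cum_ik = (l_c ** 2 @ theta_c) / S      # (n, K): Cum_lambda[theta_k, ell_i, ell_i]
    b_hat = -cov_ik.sum(axis=0) + 0.5 * cum_ik.sum(axis=0)
    C = np.cov(theta_draws, rowvar=False)  # posterior covariance matrix of theta
    dlam = np.linalg.solve(C, b_hat)       # solve C * dlam = b_hat
    return lam + delta * dlam, b_hat

After a few such iterations, the residual bias is estimated once more with Algorithm 1 and subtracted from the posterior means of the final run.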
§.§ Definition of b̂_0 and b̂_2
Since the first and second terms in (<ref>) have a considerably different nature, the contribution of each term is checked separately in the following numerical experiments. Hereafter, we denote the first and second terms in (<ref>) as
b̂_0(A;n)= - ∑_i=1^n [A(θ), ℓ(X_i; θ)],
b̂_2(A;n)= 1/2∑_i=1^n [A(θ), ℓ(X_i; θ), ℓ(X_i; θ)],
which can be abbreviated as b̂_0(A) and b̂_2(A), or even b̂_0 and b̂_2, respectively. By an abuse of notation, the corresponding components of b̂(A;n;λ) are also expressed by the same symbols, such as b̂_0 and b̂_2.
In appendix <ref>, we define two components b_0(A;n) and b_2(A;n) of the bias based on the von Mises expansion; b̂_0(A;n) and b̂_2(A;n) are considered as an estimator of b_0(A;n) and b_2(A;n), respectively.
§ EXAMPLE: WEIBULL FITTING
Here we test the algorithm 1 for the parameter estimation of the Weibull distribution, the probability density of which is given by
p^wb(x |λ, γ)=γ/λ (x/λ )^γ-1exp [- ( x/λ)^γ ], x ≥ 0.
This likelihood is not an exponential family in the shape parameter γ. An improper prior uniform on [0,∞) is assumed for each of the parameters λ and γ. Experiments below are performed with 5000 sets of artificial data of sample size n=30; they are generated from the Weibull distribution with λ=1.0 and γ=0.8.
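For concreteness, the inputs needed by the Algorithm 1 sketch above can be assembled for this experiment as follows (the posterior draws themselves would come from any MCMC sampler for the flat-prior posterior, which is not shown; the function name is ours):

import numpy as np

def weibull_loglik(x, lam, gam):
    """Pointwise Weibull log-likelihood log p^wb(x | lambda, gamma)."""
    return (np.log(gam) - np.log(lam) + (gam - 1.0) * (np.log(x) - np.log(lam))
            - (x / lam) ** gam)

rng = np.random.default_rng(0)
x = 1.0 * rng.weibull(0.8, size=30)   # one artificial data set (lambda=1.0, gamma=0.8)
# theta_draws: array of shape (S, 2) holding posterior draws of (lambda, gamma) from MCMC.
# loglik_draws[i, s] = weibull_loglik(x[i], theta_draws[s, 0], theta_draws[s, 1])
# a_draws = theta_draws[:, 0] (or [:, 1]); these feed directly into bias_hat above.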
The results are presented in Fig.<ref>. The first and second panel of Fig.<ref> correspond to the scale parameter λ and shape parameter γ, respectively. In each panel, from left to right, results for b̂_2, b̂_0, and b̂_0+b̂_2 are shown. The horizontal dotted line colored red indicates the true value of the bias estimated from the average over the sets of artificial data.
Fig.<ref> indicates that the bias is mostly explained by b̂_0 in the case of the scale parameter λ, while it is predominantly due to b̂_2 in the case of the shape parameter γ. This exhibits the importance of both b̂_0 and b̂_2 in the estimation of the bias. Fig.<ref> also shows that the proposed estimator b̂_0+b̂_2 provides reasonable estimates in both cases, although a small but systematic deviation is observed for the shape parameter; it may be related to the highly nonlinear nature of the estimator of γ.
§ EXAMPLE: LOGISTIC REGRESSION
Here we consider the logistic regression in a moderately high-dimensional settings, where the algorithm 2 is effectively applied. Let us consider the logistic regression for data Y^n=(Y_i), Y_i ∈{0,1} as:
p(Y^n;a)=∏_i=1^n q_i^Y_i(1-q_i)^1-Y_i, logq_i/1-q_i= ∑_s=1^N_p a_s x_is,
where a_s, s=1,⋯,N_p is regression coefficients and x_is, s=1,⋯,N_p are explanatory variables corresponding to Y_i. We consider the following settings:
* The model comprises N_p=3m explanatory variables. Here, we consider the cases of N_p=60 and N_p=21.
* Artificial data of sample size n are generated using the same model. The values of the regression coefficients (a_s) are 10, -5, and 0 in each of the three groups of size m, respectively.
* The values of the explanatory variables are randomly drawn from the normal distribution with mean 0 and variance 1/n. When we consider the effect of confounding between explanatory variables, the values are instead sampled from a multivariate normal distribution whose off-diagonal covariance elements are set to ρ/n. A data-generation sketch following these settings is given after this list.
* We set δ, which defines the scale of the increments of the λ's, to 0.2 throughout the experiments; an adaptive choice of δ, as well as an adequate stopping criterion for the algorithm, is important and left for future study.
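A data-generation sketch following these settings (a straightforward reading of the list above; the function name is ours):

import numpy as np

def make_logistic_data(m=20, n_per_param=10, rho=0.0, rng=None):
    """Artificial data: N_p = 3m coefficients equal to 10, -5, 0 in groups of size m."""
    rng = rng or np.random.default_rng()
    n_p = 3 * m
    n = n_per_param * n_p
    a = np.concatenate([np.full(m, 10.0), np.full(m, -5.0), np.zeros(m)])
    # Covariance of explanatory variables: 1/n on the diagonal, rho/n off-diagonal.
    cov = np.full((n_p, n_p), rho / n) + np.eye(n_p) * (1.0 - rho) / n
    x = rng.multivariate_normal(np.zeros(n_p), cov, size=n)
    q = 1.0 / (1.0 + np.exp(-x @ a))
    y = rng.binomial(1, q)
    return x, y, a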
In figures <ref>–<ref>, medians of the estimated regression coefficients a_s, s=1,⋯, N_p over 30 trials are plotted; each of these 30 trials employs a different set of Y_i and x_is. The true values of a_s are shown as horizontal red lines.
Fig. <ref> presents a result of algorithm 1, where the number of parameters is N_p=60 and the sample size n is set as 10× N_p=600. From left to right, the original estimates (posterior means) and the estimates corrected by b̂_2, b̂_0, and b̂_0+b̂_2 are presented; the rightmost panel corresponds to the output of algorithm 1. While the original estimates [a_s] in the leftmost panel have considerable bias, algorithm 1 provides reasonably corrected values of a_s. In addition, the results in the second and third panels indicate that both b̂_2 and b̂_0 contribute to the bias correction in algorithm 1.
Fig. <ref> presents another result, where the sample size n reduces to 5× N_p=300 with a corresponding change of the variance of explanatory variables. The first row of Fig. <ref> presents the results of the algorithm 1. In contrast to the case n=600, the rightmost panel indicates that the proposed estimator b̂_0 +b̂_2 fails to correct the bias. Results in the second and third panel indicate that adequate corrections are not achieved even when we select one of the terms b̂_0 and b̂_2 and discard the other. Numerical attempts of evaluating the accuracy of b̂_0 and b̂_2 separately (not shown here) suggest that the majority of the error in a sparse-data case comes from b̂_0.
On the other hand, the second row of Fig. <ref> presents the result of algorithm 2 for the same sets of artificial data. As the rightmost panel shows, algorithm 2 successfully corrects the bias after only three iterations of updating the λ's using (<ref>) and (<ref>). We stress that the bias is not entirely removed even after updating the λ's; this is evident in the leftmost panel of the second row, where the posterior means defined with the updated values of the λ's are presented. This means that the role of the bias correction terms -∑_k=1^K λ_k A_k(θ) is to reduce the bias and make the bias correction by algorithm 1 effective.
Thus far, we have assumed that the values of the explanatory variables x_is are independently generated for each s; this assumption will rarely hold in a real-world problem. In Fig.<ref>, we present the results for artificial data with N_p=21 and n=5× N_p=105, where the values of the explanatory variables x_is are correlated in the artificial data sets; off-diagonal elements of the covariance matrix are set as 0.5/N_p. The results are essentially the same as those in Fig.<ref>: algorithm 1 fails to correct the bias, as shown in the rightmost panel of the first row, while algorithm 2 successfully corrects the bias, as shown in the rightmost panel of the second row.
§ DERIVATION
§.§ Settings and regularity conditions
The posterior distribution p_w(θ ;X^n) with weights (w_1,w_2,…,w_n) of the observations X^n=(X_1,X_2,… X_n) is defined by:
p_w(θ ;X^n) =
exp{∑_i=1^n w_i ℓ(X_i; θ)}p(θ) /∫exp{∑_i=1^n w_i ℓ(X_i;θ')}p(θ') dθ'.
The average of A(θ) over the distribution p_w(θ ;X^n) is expressed as ^w[A(θ)]. Hereafter w=1 is used as an abbreviation of w_i=1, i=1,…,n.
We also introduce the following notation for a fourth-order combined cumulant:
[A(θ),B(θ)] = ^w [(A(θ)-^w[A(θ)])(B(θ)-^w[B(θ)])^3 ] - 3 ×^w [(A(θ)-^w[A(θ)])(B(θ)-^w[B(θ)]) ] ×^w [(B(θ)-^w[B(θ)])^2 ].
We assume the following conditions (C1) and (C2). Hereafter [ ] denotes a population average over an observation X_1; 𝒲^(-1) is the set of weight vectors defined by 0 ≤ w_1 ≤ 1 and w_i=1 for i≠ 1.
(C1) [ sup_w∈𝒲^(-1) | [A(θ),ℓ(X_1;θ)] | ] = o(1/n^2)
(C2) There exists an integrable function M(θ) such that for all w∈𝒲^(-1),
| p(θ) exp( w_1 ℓ(X_1;θ) + ∑_k≠ 1 ℓ(X_k;θ) ) A^l(θ) {ℓ(X_1;θ)}^m | ≤ M(θ), 0 ≤ l ≤ 1, 0 ≤ m ≤ 3.
§.§ Local case sensitivity formulae
Let us introduce local case sensitivity formulae as a basic tool for the following derivation. The first-order local case sensitivity formula represents the first-order derivative of ^w[A(θ)] using the posterior covariance as
∂/∂ w_i ^w[A(θ)] |_w=1 = [A(θ), ℓ(X_i;θ)].
In a similar manner, the special cases of the second- and third-order formulae required here are expressed as
∂^2/∂ w_i^2 ^w[A(θ)] |_w=1 = [A(θ), ℓ(X_i;θ), ℓ(X_i;θ)],
∂^3/∂ w_i^3 ^w[A(θ)] = [A(θ), ℓ(X_i;θ)],
where the last one is defined for an arbitrary 0 ≤ w ≤ 1. The proof of these formulae is straightforward when an exchange of integration and differentiation is allowed under the condition (C2); it is presented in <cit.> in a generic form; (<ref>) is also proved in <cit.>.
§.§ Bayesian IJK for the bias
The algorithm 1 is derived as a Bayesian IJK approximation to the jackknife bias correction formula:
b_jack= (n-1) (1/n∑_i=1^n ^-i[A] -[A] ) = ∑_i=1^n (^-i[A] -[A] ) +o_p(1/n).
This formula is well known, but we give a derivation in the appendix <ref>. In the jackknife method, the number of observations is changed from n to n-1 by the deletion of an observation; this is essential for dealing with the definitional bias. Seemingly more sophisticated sample-reuse approaches, such as the bootstrap method, may fail in dealing with the definitional bias when the number of observations is kept constant before and after disturbing the data.
Setting w_i=0 and w_j=1 (j≠ i) in the weighted posterior (<ref>) and considering a Taylor expansion of the posterior expectation with respect to w_i-1, we obtain
^-i[A] = [A] - ∂/∂ w_i ^w[A(θ)] |_w=1 + 1/2 ∂^2/∂ w_i^2 ^w[A(θ)] |_w=1 - 1/6 ∂^3/∂ w_i^3 ^w[A(θ)] |_w=w*,
where 0 ≤ w* ≤ 1. On the other hand, the expression (<ref>) and condition (C1) give
1/6∑_i=1^n . ∂^3/∂ w_i^3^w[A(θ)] |_w=w* =o_p(1/n).
Substituting (<ref>) in (<ref>) and use (<ref>), b_jack is expressed as
b_jack= - ∑_i=1^n ∂/∂ w_i ^w[A(θ)] |_w=1 + 1/2 ∑_i=1^n ∂^2/∂ w_i^2 ^w[A(θ)] |_w=1 + o_p(1/n).
Now we use (<ref>) and (<ref>) in the expansion (<ref>), which completes the derivation as
b_jack= - ∑_i=1^n [A(θ), ℓ(X_i;θ)]
+1/2∑_i=1^n [A(θ), ℓ(X_i;θ), ℓ(X_i;θ)]
+ o_p(1/n).
§.§ Derivation of the algorithm 2
Algorithm 2 can be considered as an iterative improvement of the residual bias based on the following representation of the derivative
∂/∂λ_k [θ_k^'] = -[θ_k, θ_k^'],
where [ ] denotes the covariance with respect to the distribution (<ref>). The expression (<ref>) is obtained by a direct computation when an exchange of integration and differentiation is allowed under appropriate regularity conditions. The idea behind (<ref>) is the same as that of the local case sensitivity formulae; applications of similar formulae have a long history in statistics (<cit.>).
Let us define λ^* as a value of λ that gives zero bias for all parameters θ_k; such a value might not exist, but here we introduce it as a hypothetical target of the bias reduction procedure:
b̂_k(θ;n;λ^*)=0.
If we define Δλ^(m)=λ^*-λ^(m), where λ^(m) is the value of λ at step m, we can expand (<ref>) around λ^(m) as
b̂_k(θ;n;λ^(m)) + ∑_k^'∂/∂λ_k^'[θ_k] Δλ^(m)_k^' + o(Δλ^(m)) = 0.
Using (<ref>) and ignoring the residual terms, this is expressed as
∑_k^'[θ_k, θ_k^'] Δλ^(m)_k^' = b̂_k(θ;n;λ^(m)).
In vector form, this gives the equation (<ref>).
§ SUMMARY AND FUTURE PROBLEMS
We proposed algorithms for estimating and correcting the bias of the posterior mean as an estimator of parameters. Two algorithms are introduced, both of which rely on posterior covariance and cumulants derived from MCMC outputs, eliminating the need for additional analytical calculations. The first algorithm is based on a Bayesian infinitesimal jackknife approximation and successfully estimates the bias, including the definitional bias (<cit.>), using the result of a single MCMC run. The second algorithm involves an iterative improvement of a quasi-prior based on the output of the first algorithm; it is shown to be effective in high-dimensional and sparse settings for logistic regression.
In addition to utilizing the recently emerging Bayesian infinitesimal jackknife approximation (<cit.>), the second algorithm is characterized by a hybrid approach that combines bias estimation and correction. A significant aspect of this strategy is that the “bias reduction terms” do not need to completely eliminate the bias. Instead, their role is to reduce the bias sufficiently so that the bias correction by the first algorithm becomes effective. However, the use of the first algorithm within the second algorithm requires justification of the first algorithm in cases where the quasi-prior is non-negligible. Preliminary analysis suggests that the success of this approach may depend on the use of a quasi-prior whose logarithm is linear in θ, but further analysis and experimentation are required for a full understanding.
There are a number of subjects left for future studies: First, more theoretical analysis is needed along with the development of an appropriate stopping criterion for the algorithm. Additionally, the relationship between the proposed method and existing approaches, such as the Firth method for bias correction of the maximum likelihood estimator (<cit.>), should be explored. Finally, applications to large-scale real-world data and further tests using artificial data are necessary to assess the potential and limitations of the proposed method.
§ ACKNOWLEDGEMENTS
I would like to thank Keisuke Yano for the fruitful discussions and valuable advice.
§ VON MISES EXPANSION
Here we introduce the von Mises expansion and provide an interpretation of the definitional bias b_0 in this context. Let us define a formal posterior for an arbitrary distribution F as
p(θ ; F,n ) =
exp{n ∫ℓ(x; θ) dF(x) }p(θ) /∫exp{n ∫ℓ(x; θ') dF(x) }p(θ') dθ',
where F substitutes for the empirical distribution Ĝ_n in (<ref>); here as usual the empirical distribution is defined by the sum dĜ_n=(1/n)∑_i=1^n δ_X_i of the point measures δ_X_i concentrated on observations X^n=(X_1,X_2, …, X_n).
An estimator T^A(F, n) of A(θ) is defined as a posterior mean as
T^A(F, n)=∫ A(θ)p(θ;F,n) dθ.
Specifically, T^A(Ĝ_n,n)=[A(θ)]. When F=G, T^A(G,n) defines an “ideal value at sample size n” for the estimator; this does not necessarily coincide with the “true value” (or projection) defined as A_0=lim _n →∞T^A(G,n) and can have some bias even for a consistent estimator.
Now that we introduce von Mises expansion (<cit.>) as
T^A(Ĝ,n)=T^A(G,n)+1/n∑_i=1^n T^A_1(X_i;G)
+1/2n^2∑_i=1^n ∑_j=1^n T^A_2(X_i,X_j;G)+o_p (1/n ),
where T^A_1(X_i;G) and T^A_2(X_i,X_j;G) are influence functions for the true distribution G. These influence functions are assumed to satisfy
[T^A_1(X_i;G)]=0, _X_i[T^A_2(X_i,X_j;G,n)]=_X_j[T^A_2(X_i,X_j;G,n)]=0,
where _X_i means frequentist expectation over X_i, keeping other components of X^n fixed.
The dependence of T^A_1 and T^A_2 on the sample size n is omitted here; we will find that it is crucial in the first term T^A(G,n) for our purpose of estimating the bias. If we take the frequentist expectation for both sides and use (<ref>), it gives:
[T^A(Ĝ,n)]= [T^A(G,n)]+1/2n^2∑_i=1^n [T^A_2(X_i,X_i;G)]+o (1/n ).
If we restrict ourselves to a consistent estimator and regard lim_n →∞ T^A(G,n) as the true value of A, the bias of the estimator T^A(Ĝ,n) is given by:
b_0(A,n)=[T^A(G,n)]-lim_n →∞ T^A(G,n) ,
b_2(A,n)=1/2n^2∑_i=1^n [T^A_2(X_i,X_i;G)],
b(A,n)= [T^A(Ĝ,n)]-lim_n →∞ T^A(G,n)=b_0(A,n)+b_2(A,n)+o (1/n ),
where b_0(A,n), b_2(A,n), and b(A,n) may be abbreviated as b_0, b_2, and b, respectively.
It is natural to estimate b_2 using b̂_2 defined by (<ref>), because the formula (<ref>) indicates that the second-order influence function T^A_2(X_i,X_i;G) can be represented by the third-order posterior cumulant. To be precise, we need to consider relations in (<ref>),
_X_i[T^A_2(X_i,X_j;G,n)]=_X_j[T^A_2(X_i,X_j;G,n)]=0,
imposed on the second-order influence function. However, the corresponding correction appears to have little effect in examples. If we consider b̂_2 as an estimator of b_2, the remaining part b̂_0 of the proposed estimator should be regarded as an estimator of b_0; we can confirm this in the example of the binomial likelihood.
§ JACKKNIFE BIAS CORRECTION
First, we assume the bias b of the estimator Â(X^n) of statistics A is asymptotically proportional to the inverse of the sample size n as:
[Â(X^n)]=A_0+c/n+r(n),
where A_0 is the “true value” of A defined as the limit lim_n→∞[Â(X^n)]; c is a constant and the residual term r(n) is assumed to be of order O(1/n^1+α), α>0.
Let us consider the modified data in which observation i is removed, and denote the corresponding estimator by Â(X^-i). Since the sample size of X^-i is n-1 and each component comes from the same population as for X^n, (<ref>) indicates
[Â(X^-i)]=A_0+c/n-1+r(n-1).
From (<ref>) and (<ref>), we obtain the desired result as
[ ∑_i=1^n (Â(X^-i) - Â(X) ) ]=n (c/n-1-c/n ) +o(1/n)
= c/n+o(1/n).
|
http://arxiv.org/abs/2409.03086v1 | 20240904212002 | Intersecting Liminality: Acquiring a Smartphone as a Blind or Low Vision Older Adult | [
"Isabela Figueira",
"Yoonha Cha",
"Stacy M. Branham"
] | cs.HC | [
"cs.HC",
"H.5.0; K.4.2"
] |
University of California, Irvine
Irvine
CA
USA
[email protected]
University of California, Irvine
Irvine
CA
USA
[email protected]
University of California, Irvine
Irvine
CA
USA
[email protected]
§ ABSTRACT
Older adults are increasingly acquiring smartphones. But acquiring smartphones can be difficult, and little is known about the particular challenges of older adults who are additionally blind or losing their vision. We shed light on the social and technical aspects of acquiring smartphones with vision loss, based on deep qualitative interviews with 22 blind or low vision (BLV) older adults aged 60 and over. Through our grounded theory analysis, we found that BLV older adults experience liminality as they acquire smartphones and transition through re-acquiring smartphones as they become blind, and they can transition through liminality by participating in mutual aid within the blind community. We contribute the notion of “Intersecting Liminality,” which explains the marginalizing experience of simultaneously transitioning through vision loss, aging, and technology acquisition. We contend that Intersecting Liminality can serve as a framework that centers the dynamic nature of disability to help our community generate a more nuanced understanding of technology acquisition and more effective assistive interventions.
[500]Human-centered computing Accessibility theory, concepts and paradigms
[300]Human-centered computing Empirical studies in accessibility
[500]Human-centered computing HCI theory, concepts and models
[300]Social and professional topics People with disabilities
[300]Social and professional topics Seniors
[Teaser figure: (A) An older adult who has acquired a smartphone (B) gets pushed into liminal space as they acquire blindness and struggle to transition out (blue striped arrows pointing away from clouds, e.g., attending classes, accepting blindness). (C) The Intersecting Liminalities of aging, technology acquisition, and blindness make it difficult to escape (red arrows pointing towards clouds, e.g., lack of Android classes at local centers, constantly learning, daunting learning curve).]
Intersecting Liminality: Acquiring a Smartphone as a Blind or Low Vision Older Adult
Stacy M. Branham
September 2024
====================================================================================
§ INTRODUCTION
As of 2024, 76% of the 56 million older adults aged 65 and older in the USA <cit.> use a smartphone <cit.>.
We know that mobile technology acquisition can be challenging for older adults due to interface complexity <cit.> and a lack of instruction manuals <cit.>, for example.
In response, interventions have been proposed to support older adults in learning smartphones <cit.> and locating features on mobile interfaces <cit.>.
However, what about those who additionally experience disability, specifically vision loss, in older age?
In 2017, in the USA, of those over the age of 65, approximately 4.2 million people were low vision, and another 800,000 were blind[In this paper, we use the term low vision to indicate visual acuity of ≤20/40, and blind to indicate visual acuity of ≤20/200] <cit.>.
For blind people, “mastering these devices is an arduous and long task” <cit.>, since accessing a smartphone entails using features that are often buried in the phone, unfamiliar to the general population, and difficult to configure <cit.>. Despite this growing body of work on blind and low vision (BLV) older adults and smartphones <cit.>, there is insufficient research on how older adults who are BLV acquire smartphones.
Acquiring disability and becoming older have been described as life transitions <cit.>. For example, adults can experience multiple transitions such as changes in health and retirement status as they enter later life <cit.>, and maintaining a positive attitude has been found to support the transition into disability <cit.>.
With life transitions, people can experience liminality, as they do not completely belong to either the pre- or post-transition states. The word liminal is defined as “relating to a transitional or initial stage of a process” and “occupying a position at, or on both sides of, a boundary or threshold” <cit.>.
There have yet to be reports of the liminal experiences of those who go through transitions of acquiring blindness and smartphones in older age, and how experiencing these multiple transitions simultaneously introduces additional complexities in identity shifts during smartphone acquisition.
To address this gap, we conducted a qualitative study to answer the following research questions:
* How do older adults who are BLV acquire smartphones?
* How do older adults who are transitioning into BLV acquire smartphones?
We report on in-depth semi-structured interviews with 22 participants who were 60 years of age or older, who identified as being blind or low vision, and who used a smartphone. Through a constructivist grounded theory analysis <cit.>, using theoretical sampling and abductive reasoning, we read our data through a lens of life transition and liminality, which revealed three main themes:
(1) Older adults who have early life blindness[In this paper, we define early life BLV to mean blindness or low vision that was acquired before the age of 60. We use early life BLV, early life blindness, and early life vision loss interchangeably throughout the paper.] experience liminality as they acquire smartphones,
(2) Older adults with later life blindness experience liminality as they re-acquire smartphones they already owned while acquiring blindness, and
(3) BLV older adults transition through liminality by engaging in mutual aid with the blind community as they acquire smartphones.
Based on our findings, we present Intersecting Liminality, a framework for articulating the ways in which multiple liminalities–between younger and older, between sightedness and blindness, and between smartphone non-user and user–can intersect and compound (section <ref>).
We argue that in order to holistically understand how older adults with disabilities acquire technology, and to design effective interventions, the multiple liminalities they experience must be considered as inextricably linked.
Our study, to the best of our knowledge, contributes the following:
* Intersecting Liminality, a framework for articulating how the multiple liminalities of aging, acquiring blindness, and acquiring a smartphone intersect and compound
* A discussion of smartphone acquisition through the lens of life transitions and liminality
* Recommendations for how the field of HCI can use this framework
§ RELATED WORK
Scholars have studied older adults and technology for decades from many perspectives, such as life course sociology that focuses on life stages <cit.>, lifespan psychology that focuses on the individual
<cit.>, social gerontology <cit.>, medicine <cit.>, feminist theory <cit.>, and more.
Narratives about aging describe a growing digital divide <cit.> and a lack of technological savvy <cit.>. Additionally, older adults increasingly face social isolation <cit.>
and digital exclusion <cit.> as communication moves increasingly online.
While older adults have traditionally been considered passive users of technology <cit.>, recent research has challenged this perspective, considering older adults as actively passive users <cit.>, online content creators <cit.>, and caregivers <cit.>. At the same time, there has been a shift in discourses around older adulthood from foregrounding deficits <cit.>–lack of capability–to foregrounding assets <cit.>.
In this work, we draw on perspectives from social gerontology, namely the concepts of life transitions and liminality, and we concur that older adults are active and capable in their social and technical lives.
§.§ Life Transitions, Liminality, and Technology
Across the life course, a person experiences many life transitions such as becoming an adult, getting married, growing a family, etc. <cit.>.
As one goes through a life transition, they go through a liminal period. van Gennep <cit.> proposed that life transitions comprise a three step process beginning with separating from a previous identity or environment, followed by a liminal or transitional period, and ending with being incorporated into a new social context. Turner <cit.>
further theorized liminal space from an anthropological perspective. As a person goes through a transition, they are separated from social structure and enter a state of ambiguity in which one becomes a “blank slate” to prepare for learning new knowledge from their new group as one incorporates <cit.>. He unpacks what it means to experience liminality, and how there is a symbolism attached to the “liminal persona” that can take on both bizarre and negative connotations, since one is viewed as other and being outside of known social groups during the transition between them <cit.>.
Becoming an older adult and acquiring disability have both been identified as life transitions that carry with them the features of liminality described above. As individuals age, they experience changes in various aspects of life such as health, retirement status, financial status, housing situation, and amount of social interaction <cit.>. These constitute a process of becoming, rather than a state of being older, one in which individuals can resist, move through, or occupy liminal space indefinitely <cit.>. In the latter case, those who are moving from the third to fourth age (i.e., the oldest old) can be said to experience “persistent liminality”
as they grow increasingly isolated and housebound with age <cit.>. Similarly, acquiring disability can be conceptualized as an identity transition from a non-disabled to a disabled person, a process in which a “new sense-of-self” comes into being. This can comprise multiple stages including denial and grieving, building a social support system, successfully being reintegrated into the community, and finally accepting disability and returning to a quality life <cit.>.
There has been increased attention on liminality in social computing research <cit.>. Researchers have conceptualized liminality in terms of physical spaces <cit.>, digital spaces <cit.>, and as being between learning and mastery <cit.>. Researchers have also studied how people appropriate technologies during the process of life transitions <cit.> and how online communities such as social media support people experiencing liminality during life transitions <cit.>. For example, Haimson <cit.> detailed how multiple social media sites work as “social transition machinery” to support people experiencing life transitions <cit.>. Semaan et al. <cit.> found that social media helped veterans experiencing multiple life transitions develop “identity awareness” as they received support from other veterans to reintegrate into civil society <cit.>.
Researchers have also called for designing technology for people who are socially isolated and thus in a perpetual liminal space, since they are in between or outside of communities and not integrated into social practices and patterns <cit.>.
We adopt Turner's process approach to liminality that centers becoming and being othered.
Liminality has not yet been conceptualized or applied in accessible computing in the field of HCI, as virtually no work has documented the experiences of disabled people experiencing life transitions.
§.§ Older adults' smartphone acquisition and adoption
Scholars have studied older adults' technology adoption from multiple angles, such as adopting social media <cit.>, online shopping <cit.>, online banking <cit.>,
communication technologies <cit.>, wearables for health management <cit.>, voice assistants <cit.>, and smart mobile devices <cit.>.
Several studies mentioned that social support is crucial for adopting computing technology, but also presents challenges, such as living far from family <cit.>.
Several factors can affect older adults' adoption of technology in addition to age <cit.>, including identity <cit.>, confidence <cit.>, and stigma associated with disability or aging <cit.>.
Prior work regarding older adults and smartphone adoption has focused on motivations to adopt a smartphone (e.g., the desire for independence) <cit.> and learning preferences (e.g., the desire for instruction manuals) <cit.>. One popular framework for explaining how technology is adopted is the technology acceptance model (TAM) <cit.>, which has been extended for older adults <cit.>. However, these frameworks focus on usability, leading researchers to question whether they are too rigid to apply to older adults <cit.>. For example, in a study of older adults adopting a tablet-based communication technology, Neves et al. conclude that TAM is limited as a theoretical model since it neglects the interplay of social context, human agency (individual choices), and inherent properties of technology <cit.>.
While a great deal of work focuses on how BLV people use or access mobile devices (e.g., <cit.>),
few papers have focused on BLV people's adoption of smartphones in our research community.
In 2015, Rodrigues et al. <cit.> studied how five novice blind people adopt, learn, and use an Android smartphone over an 8-week period. Following the adoption period, they found that although learning smartphones was a long and difficult process, participants were resilient and willing to keep learning, since they valued the benefits the smartphone could provide.
Very little research has highlighted the combination of aging, blindness, and smartphone acquisition, with three notable exceptions.
Piper et al. <cit.> studied older adults over the age of 80, some with vision loss, and found that initial setup and learning of smart mobile devices, mostly tablets, was difficult due to lack of access to one-on-one support. In another study, Piper et al. <cit.> studied how older adults with later life vision loss learn and use computing technology to communicate, where two participants were smartphone users.
Only one work focused specifically on smartphone acquisition challenges of BLV older adults–on a small scale <cit.>.
Our study adds to this corpus by being the first to document the unique experiences of older adults who are blind or losing their vision as they acquire a smartphone.
§ METHODS
§.§ Participants
We recruited 22 participants who identified as being blind or having low vision, who use a smartphone, and who are aged 60 or over. Participants were recruited from local independent living centers, online forums about technology, and snowball sampling.
To maintain confidentiality of our participants' information, we replaced their names with IDs. Participants' ages ranged from 60 to 83 years. Seven participants were women, and 15 were men. Out of 22 participants, six identified as having lost vision later in life at age 60 or later (IDs begin with L), and 16 had been blind since earlier in life (IDs begin with E). Of the six participants with later life vision loss, all used smartphones in some capacity prior to losing vision, although two described re-learning the smartphone from scratch after vision loss. Most participants (n = 18) used iPhone. Of these 18, three participants used a combination of iPhone and BlindShell[BlindShell is a smartphone designed for blind people. <https://www.blindshell.com>]. Four participants used Android. Most participants (n = 18) had been using their smartphones for at least five years and had used several models of smartphones in the past. Refer to <ref> for details.
§.§ Procedure
We conducted audio-recorded, semi-structured interviews with 22 participants over a phone call or Zoom, according to their preferences. Interviews were conducted over a period of 10 months in 2023. Each interview lasted between 50 and 96 minutes (average of 76 minutes, total of over 26 hours). Prior to the interview, we emailed participants a study information sheet for review and acquired verbal consent at the beginning of the interview. Participants also filled out a pre-interview demographic survey. Participants were compensated at a rate of $40 per hour in the form of a gift card. The study was approved by the researchers' Institutional Review Board (IRB).
Interview questions covered topics such as smartphone opinions, accessibility challenges experienced, how the smartphone was used for social communication, and kinds of resources they utilized when learning smartphones. After the first five interviews, the first author engaged in theoretical sampling <cit.> by adding more questions about aging and social communication on the smartphones and by asking targeted questions about losing vision with a smartphone to participants with later life vision loss. Later in the study, the researchers also specifically recruited more Android users.
§.§ Analysis
All interviews were transcribed by either the researchers or professional transcribers. The transcripts were checked for quality and accuracy by the researchers. We followed a constructivist grounded theory method <cit.>, starting with open coding on each of the transcripts. The first two authors open coded data in parallel and met frequently to discuss emerging themes, such as the importance of social support, social exclusion on the smartphone, differences in expectations vs. reality regarding the smartphone, etc. They engaged in focused coding together, frequently meeting to discuss ideas emerging across participant data. As the authors compared focused codes from participants with later life and early life blindness, we recognized the concept of identity transition and revisited the literature to review the concepts of life transition and liminality. We decided to take an abductive approach to the remaining analysis, in such a way that these extant theoretical concepts earned their way into our final narrative <cit.>. All three authors engaged in theoretical sorting through diagramming <cit.>, and in particular, affinity diagramming <cit.> using FigJam <cit.>, to streamline the core ideas into axial codes. The headings and subheadings of our Findings map to the axial and focused codes of our analysis (see <ref>).
§ FINDINGS
We found that BLV older adults experience liminality as they acquire a smartphone, experiencing social exclusion as a result (section <ref>). Older adults who become blind later in life experience multiple liminalities as they both acquire blindness and re-acquire smartphones they used when sighted (section <ref>). Finally, BLV older adults transition through liminality via mutual aid with the blind community as they (re)acquire smartphones (section <ref>). We see especially in sections <ref> and <ref> that BLV older adults experience compounding marginalization, rooted in the multiple liminalities of acquiring blindness, acquiring a smartphone, and aging; from this, we propose Intersecting Liminality, a framework we elaborate in the Discussion (section <ref>).
§.§ Liminality of Acquiring a Smartphone as a Blind Older Adult
§.§.§ Becoming a Smartphone User
As BLV older adults switch from a feature phone to a smartphone, they may experience conflicting understandings of their own identities as smartphone users, finding themselves "stuck" in transition.
From Unimaginable to Indispensable
Many could not imagine themselves as smartphone users, especially since stories about smartphones didn't align with their needs. Multiple participants originally questioned how they could navigate a touchscreen without the tactile buttons characteristic of their feature phones. Additionally, E3 recounted how you're only told you need to get an iPhone (E3), but she did not understand clearly why a blind person should get an iPhone or how it could support her as a blind person: Never the why... If they do a why, it's just you can't relate to it because it's like, well, how does that benefit me as a blind person? (E3). But, once they acquired a smartphone, they perceived their smartphones as indispensable in their daily lives. Multiple participants (e.g., E4 and E12) described the smartphone as something that, once you did it, you kinda wondered, `How did I ever live without this?' (E4) and even feeling lost (E8) without their smartphones.
E1 was reluctant to get a smartphone since she never wanted to be the type of person who runs around with their phone all the time. Always on the phone talking (E1). However, now I cannot go anywhere without my phone, traveling to the doctor or anywhere because I need my phone (E1).
Once BLV older adults acquired a smartphone, their identities shifted from skepticism to not being able to imagine life without one. The transition to smartphone user is not as simple as it seems. Participants worked hard to learn their smartphones, achieving different levels of mastery. While attempting to learn, participants are repeatedly dragged back into a state of ambiguity.
Perpetual State of Learning
Our participants described that acquiring a smartphone posed a big learning curve, after which they were trapped in a state of endless learning, due to accessibility challenges and the smartphone's vast depth–always the beginner, never the expert.
Many participants described hearing they should get an iPhone, but these stories neglected to mention both an inaccessible setup process, often requiring someone sighted to make it useful for a blind person (E9) and a difficult (E1) learning curve. For instance, E3 described tripping up on specialized terminology: sometimes I don’t even know the terminology. Like it will say something about... the rotor, do something with the rotor. I'm like, `What the hell is the rotor?!' (laughs) I don't know what that is! (E3). After the large learning curve, participants described smartphones as still having depth like an ocean (E8), resulting in unlimited (E8) learning. E3 described that there’s just so much that I don't know that I should know. ... [although] I've had this phone a little over a year (E3).
Regardless, E2 still encouraged others to get a smartphone, due to its ability to support being blind–just don't try to learn everything on it (E2). For some BLV older adults, learning a smartphone felt insurmountable due to gesture challenges. E9 described not feeling successful at using a touch screen phone and thus switched to BlindShell: I think there’s two different types [of smartphones]. And I am unable to use one of them with any great degree of success. And that is the type that is a flat screen type (E9).
Thus, participants felt like they were neither a novice nor an expert, and were stuck learning, relearning, and re-acquiring the same smartphone, where it's difficult to reach a state of mastery. E8 described feeling both like an expert and like he was still learning everything on the smartphone: Well, kind of both. I feel those things I've been using I think I learned but I have a lot to learn (E8). Similarly, E6 described having been at it for, ... let's see, five, six years. ... I feel like I'm pretty good with it. Yet I still have issues with iPhone. But I am learning (E6).
Back to Square One
Even if participants felt proficient at using their smartphones, thus being past the transition phase to smartphone user, unforeseen external factors such as unreliable smartphone accessibility pushed them into a liminal state.
Sometimes, participants questioned their ability to use their smartphones due to issues of unknown origins. When issues or glitches (E4) occurred, such as iOS and app updates (as described by nearly all participants), some described themselves as not smart enough to figure out how to do a workaround (E3). Additionally, since sometimes you don't know if it's a bug (E13), participants may fault their abilities for causing the issue:
Is it by design, sort of broken by design? Well, we don't know yet, because your ability to use it is in question, because you just found it. ... So, I wait until I'm sure that whatever I’m doing isn't the problem. (E13)
As a result, some users were compelled to, out of necessity, become a more of a technology geek than your natural inclinations (E7), to deal with smartphone issues.
§.§.§ Social Exclusion
Most participants described largely using their phones for social connection and communication. However, having to conform to less accessible communication preferences of sighted people (i.e., text messaging rather than phone calls) to stay connected with friends and family amplified social isolation and diminished typical support systems, pulling BLV older adults back into liminality as smartphone users.
“Just Call Me” and “Just Let It Go”
Participants described being socially excluded from conversations through breakdowns in digital communication (e.g., text messaging) when conversing with younger, sighted family and friends.
Voice calls were the preferred mode of communication for our participants, who were familiar with this functionality from feature phones and landlines. Yet, staying socially connected often meant participating in group SMS chats that posed accessibility barriers. Since emojis and images litter (E1) group chats, text conversations tended to sound meaningless and be difficult to follow. For example, during holidays, when group chats exploded with text messages containing emojis such as firecracker, skyrocket, [gibberish] (E10), participants described preferring to be left out of group chats, since they couldn't fully participate anyway: My phone is just going on, and on, ... for nothing. ... thankfully, they've kind of dropped me out of that thing, so I don't have to through the holidays, or family parties, or whatever. ... Just call me (E10).
As close family tended to better understand and accommodate disability in smartphone communication, accessibility issues most commonly arose out of the disability awareness gap with more casual friends and the generational gap with younger relatives. E1 explained that people don't understand your level of vision impairment. (sighs) They send it anyway. E6 described having to text, especially with younger relatives: When I have a nephew or a niece having a birthday or whatever I used to just text them because... they don’t answer the phone. They'd rather do text (E6). Her close relatives understood her access needs, but others did not, causing her to be excluded: My yoga class and my other relatives, nephews and nieces, they all communicate back and forth. ... they just do their own thing. Ok fine, I just delete them (E6).
Like E10 and E6, many others described having to accept that they could not participate in certain conversations. Disheartened, they described reluctantly having to just move on (E1) and just let it go (E1). They ultimately felt devalued, powerless, and resigned to being left behind: It took me a while, but I just accept it. ... I'm glad that they communicate with one another. I'm just the auntie; what am I? I'm not a cousin, like their cousins. No big deal (E6). Similarly, E11 put[s] it aside. ... I've got more control over my life than getting upset over something that I don't have control over.
Although BLV older adults described wanting to participate in group conversations via text, many felt forced to not participate and ultimately were excluded.
Speaking Different Smartphone Languages
On top of the social exclusion of socializing on the phone with friends and loved ones, BLV older adults experienced a communication barrier when talking about the smartphone with sighted users. This precluded them from giving assistance to or getting help from typical support systems. E7 shared that he and his wife always have this communication barrier (E7) when he wants to help her with her smartphone since his smartphone interface with a screen reader is different:
She’ll say `how do you do whatever.' And I'll say `I can't explain it to you, because you're not using VoiceOver, and I don't understand how to do it.' ... Because VoiceOver verbalizes everything. But when there's a button that says `delete,' I always say, `look to the delete button.' But, visually there isn't a delete button, it’s a trash can. (E7)
Additionally, when E3 wanted to get assistance from a sighted person on her phone, she described having to take the VoiceOver off. ... That doesn't help me. (laughs) But that's the only way they can help me because they don't know VoiceOver (E3). Thus, she was barred from following along with the sighted helper. The way around this communication barrier is learning both the sighted and blind version of the smartphone, which participants described having to do, out of necessity, or we'll just be that blind person in the corner who everybody feels sorry for. Which I am not interested in being (E7). Thus, BLV older adults turn to the blind community for support (section <ref>).
§.§ Liminality of Re-Acquiring a Smartphone While Acquiring Blindness as an Older Adult
As older adults experience vision loss later in life, they go through additional liminality as their ability and identity shift from sighted to blind. They experience breakdowns in social life and support systems for technology that can lead to them feeling stuck and isolated. Six participants
experienced later life vision loss.
§.§.§ Becoming Blind
The transition from being sighted to blind in later life derailed older adults' lives and caused an identity shift or crisis, as they experienced changes in their vision, social circles, support systems, and daily lives in general. Accepting blindness and embracing the blind community was one way of moving forward in life, while not accepting it could result in being stuck in more precarious liminal space between sighted and blind.
Being “Reborn” into Isolation
Some participants described having to relearn everything in life again, including how to walk, balance, navigate in the world, and interact with people as they acquire a new disabled identity. For instance, L6 equated sight loss to being reborn because he had to learn how to walk again. ... to now feel my way around with a cane, so I have to walk slower, and carefully. ... to learn to use my ears more (L6). Participants expected that they would be done learning at their age: I didn't think that at my age, in my fifties and sixties, ... oh gosh it would be nice to ... stop working and kind of relax. But it's like, now it's back to grade school (L2). However, as they acquired blindness, they described having to put a lot of effort into any of it. Any of the accessibility things in order to recover your life (L2). On top of relearning physical navigation, L3 described re-entering society with a new disabled identity: People do treat you differently. ... It was different to see the world through the perspective of a disabled person. Because until you've been there you can't conceptualize it (L3).
Older adults experiencing vision loss become isolated from their social circle. L3 described having a social group that is pretty small. But I talk to the people at the [local independent living center] and other classmates (L3). He shared that not being able to get in a car (L3) reduced in-person visits with family. L1 also described losing contact with friends due to their passing or self isolating due to feeling sort of embarrassed with the situation that I'm in (L1). L1 and L3 described being homebound (L3) due to their vision loss, leading to further isolation. While L3 goes on weekly walks, L1 shared, I really don't go anywhere. I usually stay in my apartment, and people come here (L1).
Support systems that previously worked for smartphone learning no longer worked after vision loss. As participants continued acquiring blindness, their family's ability to support them declined, since they didn't understand assistive technology:
my son and my daughter are a lot more tech savvy than myself, but ... not in these accessibility things. They don't have any specialty in accessibility, or special knowledge there. ... I don't always have somebody (L5).
Accepting Blindness and the Blind Community
Participants described needing to accept blindness in order to move forward in life. L3 characterized vision loss as a process of acceptance: Losing your sight is ... sort of like the seven steps. Where first there's denial and then acceptance, and so it took a while, just to get used to (L3). For L6, accepting blindness was an active endeavor and was the key to re-joining society and continuing to live:
You can't give up, though. ... I don't have a choice. It's either this, or the graveyard, and I don't wanna die yet; I'm not ready. I'm not going to sit at home and become a couch potato, so, I'm going to get out and walk around and join the rest of the world. (L6)
L6, like other participants, perceived accepting blindness and, in particular, accepting the blind community, as critical to overcoming social isolation that can stem from vision loss:
Being blind, it can be a lonely life, if you let it be. And by having the phone, it allows you to be able to open up your life to other people, and let other people into your life. ... You've got to get to know your [blind] community. Can't be a hermit. (L6)
We see here and in the sections that follow that the supports provided by the blind community to successfully adopt a smartphone further connect the blind older adult to the rest of the world (L6). Moreover, adoption of the accessibility features of a smartphone is a form of acceptance of blindness.
This is precisely why L1 was unwilling to adopt the smartphone or other assistive technologies. L1 described that learning these devices would feel like giving up hope of regaining vision:
If I try to have somebody come in and teach me how to use a cane, it's like I'm giving up that goal of mine, which is to get my vision back, ... so I'm not pushing the other resources available like taking classes on how to use a computer. (L1)
Further, he described a compounding sense of hopelessness that, even if he did accept his blind identity, he might never be able to achieve smartphone adoption:
Even if I accepted the fact that I'll be blind forever, it's very time consuming to go to these places [blind organizations] and learn things ... so if I try to really spend my time really trying to learn all these different things, its like I'm giving up my hope. (L1)
L1 is resigned to residing in liminal space, as a man who is neither sighted nor blind and who is no longer able to use his adopted smartphone. The finality of "blindness" and the foreign complexity of assistive technology make transition seem impossible. If L6 is right, and connecting with the blind community to learn the smartphone is the way out of isolation, then L1 has lost much.
§.§.§ Burden of Re-Acquiring the Smartphone
Our participants reported having to relearn how to use their existing smartphone or switch to another phone altogether. We refer to this as "re-acquiring" the smartphone, to emphasize the additional labor and the push away from technological confidence and back into a liminal state of learning experienced by older adults who are acquiring blindness.
Learning to Walk Again
While L5 chose to switch to Apple's iPhone when he acquired vision loss, and did so with relative ease using resources provided by local organizations, this was not the case for the other five participants who lost vision later in life. Four participants (L2, L3, L4, L6) each kept the same smartphone they had previously. Although they had in-depth knowledge of the smartphone system as sighted people, these four participants each needed to relearn the smartphone after acquiring blindness, thus re-acquiring their smartphone.
Relearning the smartphone posed a daunting (L2) task, one akin to learning how to walk all over again (L6) or even a language that is different with my phone, using VoiceOver and finger gestures (L6).
Further, since L3 was just basically using his iPhone for emails, text and some web surfing (L3) when he acquired blindness, he described a double labor of relearning the smartphone and learning accessibility functionality:
There was a very steep learning curve, not only in terms of the accessibility functionalities. But the iPhone in general. My wife is already telling me, ‘you use about two percent of the capacity of that thing.’ ... so I had to learn both the regular stuff about iPhones. And the accessibility. ... I’m still learning things as we speak. I had a class today and it’s going to continue for a while. (L3)
Aware of the massive effort ahead, some participants described proactively preparing to use their smartphone once their vision loss progressed. While L4 had to learn things the hard way since his blindness occurred very suddenly (L4), L2 could prepare his smartphone for accessibility since he acquired blindness gradually: I actually set up my voice assistant before I lost my sight completely when I knew things were going that way (L2). L3 had low vision, but he shared that becoming blind was a possibility for me. So I want to learn [VoiceOver] as much as possible so should that day ever come, I’m not starting from square one (L3).
For these participants, challenges of age and vision loss intersected, resulting in a feeling of having to learn forever or racing against time to regain technical fluency. Participants noted that it can take older adults longer to learn things like [technology] (L6), and vision loss means you have to memorize everything. ... you have to repeat it over and over and over again (L2).
Paired with a sense that one is nearing the end of life, there was a sense of urgency to make it to the other side of the learning curve:
I have kind of a bit of a hard charging attitude because of my age. ... I've got a couple of years to learn this stuff and after that I need to already know it. I don't want to spend the rest of my life in the learning mode of these technologies and techniques. (L2)
To save time, older adults experiencing vision loss tended to want to keep their current smartphone instead of switching. L2, a long-time Android user, talked about this in terms of optimizing learning resources, especially from the point of view of time:
Each one of those things takes a lot of effort [to learn]. Braille. Victor reader. TalkBack. All of it. O&M and stuff. So I'm trying to minimize – preserve my learning resources in my brain by not having to learn the iPhone. I wish they would quit trying to say switch to iPhone because that’s what we know. Don't want to. (L2)
We see from the examples above how (re)learning the smartphone was not only a matter of intense labor, but it also marked a regression back into liminal space that our participants had already struggled and succeeded to break out of.
Resisting Conversion
Though others quickly locked into a smartphone platform after and even before their impending vision loss, L1 explored numerous alternative smartphone models, which hindered him from learning any one of them more deeply. When he began to lose vision, he purchased an iPhone 4, since he had heard that the iPhone is much better using Siri or the accessibility feature, and I should get an iPhone (L1). From blind organizations, he received several phones over time: iPhone 6, iPhone SE, BlindShell Classic, and BlindShell Classic 2. L1 described using these four smartphones simultaneously, each for different purposes, rather than adopting a single model, since he didn't feel like any one phone was reliable enough, even when people around him suggested he learn one technology:
I would prefer to have an iPhone and the BlindShell Classic 2. ... one of the women that helps me with most of the phone says I should just use the BlindShell, make phone calls on it, do everything on it, but I’m not sure if it’s good enough to do that yet. I just haven’t gotten into it enough. (L1)
The BlindShell phone is specifically designed for older BLV users, and it offers much reduced functionality as compared to the mainstream iPhone. With reduced functionality comes simplification that greatly enhances learnability and usability. Yet, adopting a phone branded for "the blind" that trades off functionality that sighted people take for granted on smart devices is entangled in the acceptance of blind identity, which L1 actively fights. As a result, L1 lacks mastery of any device and, ironically and somewhat tragically, is trapped in a state of reduced access.
§.§ Transitioning via Mutual Aid
Blind community support propels BLV older adults away from the ambiguity of acquiring blindness (section <ref>) as well as acquiring a smartphone (section <ref>). Among knowledgeable peers, they experience a sense of collective belonging and steady forward progress toward digital competency. However, some BLV older adults, especially those with later life vision loss, experience challenges gaining access to and benefiting from these resources.
§.§.§ Going for the Phone, Gaining Community
Participants described seeking resources to learn how to use their smartphone as a BLV person and inadvertently finding the blind community.
Most resources–classes, webinars, “tech talks,” etc.–supporting BLV smartphone adoption were provided by blind advocacy groups, whose instructors had lived experience and were therefore the best source of accurate information: [My instructor was] very much familiar with how to teach [VoiceOver gestures] because she was a blind person (E1).
These resources were not only a source of basic phone know-how, but they also kept students up-to-date about the latest phone features: [I attend] Tech Talk class where [the instructor will] look up different features that are coming out (E4). In this way, the classes represented a step away from liminal space toward phone mastery. For L3, who was less conversant with basic phone features and jargon, these classes helped him to talk to my daughters in the same language regarding technology (L3). In other words, by working with the BLV community, L3 was better able to tap into his traditional social networks–i.e., family–for technical phone support.
§.§.§ Cycle of (non)Acceptance
A key hindrance to BLV older adults' adoption of smartphones was the necessity of accepting one's blindness;
without doing so, they were disconnected from the blind community and unable to tap its resources. L3 recounted when he underwent this pivotal transition:
So here's the transition. ... I thought it was temporary. My sight's going to come back. And reality hit me. And for a while I just stopped using the damn thing [smartphone]. Because I couldn't see it. And it was just way more trouble than it was worth. ... it was the [local blind organization] that really got me going, and oh boy what classes have I taken. (L3)
Once tapped into the BLV community, smartphone learners could grow their skills, and some even became knowledgeable instructors and identity ambassadors themselves. E7, who was a teacher in the blind community, described supporting neighbors new to losing vision in a low vision support group with smartphone re-acquisition.
Beyond imparting technical skills, he instilled in them that blindness was not a loss. We [people who were born blind] had it all the time (E7). He endeavored to shift their perspectives on blindness beyond the I've lost my vision, I got to figure out how to get some of it back (E7).
We can see how engaging with the blind community around one's disability and smartphone adoption identities can kick into motion a virtuous circle that lifts one out of liminality. L4 describes this inflection point from learner to teacher as his pathway out of the hole of vision loss:
I'll be teaching it [class] again, ... because I enjoy it. ... I'm a big believer in volunteering, and helping other people. It's just something that ... to me is one of the secrets to, or strategies for, digging your way out of a hole that you might find yourself in when you lose your vision. (L4)
However, a vicious cycle can also occur.
When people who acquire vision loss are unable to accept their disability identity, they are unable to tap the resources of the community and unable to fully acquire and master their smartphone. This can lead to dire consequences for their social integration and mental well-being:
The first kink in the thing to make it better is getting a stream of knowledge that only comes out of the blind service delivery and the blind community systems. And if you can get that, terrific. If you can't, then you feel like you're all alone and there's nobody there to help you. And then the depression can really set in. (E7)
§.§.§ The Hole of Later Life Vision Loss
Older adults with later life vision loss felt excluded from the existing technology support system in the blind community since blind organization classes did not support their needs, introducing friction in their transition into blindness.
For some participants who were avid smartphone users before acquiring blindness, the classes were too introductory. L6 described taking an entry-level VoiceOver course that was more like iPhone 101 (L6). The instructor demonstrated how to navigate to the settings menu, but then proceeded to go through every category that was in the settings, and describe them and what they do (L6). The next lecture covered Siri ... and [had] nothing to do with VoiceOver (L6). Suffice to say, L6 didn't get much out of that class (L6).
E7 explained that the lack of classes that adequately address needs of those with later life vision loss was a matter of economics: funding allocation favors younger BLV adults who are still part of the workforce.
There was a particular lack of support for Android users with vision loss.
Although half of (L2) the students in L2's classes–especially the ones that have become non-sighted later in life–tend to have Android (L2), he described feeling pressured to switch to Apple (L2) since most blind organization classes were about iPhone.
Android classes offered by blind organizations were both scarce and not taught by Android users, which diminished their utility for older adults with vision loss. One Android instructor confessed that ... he's only picking up this Android to teach this class (L2). Another instructor taught TalkBack through the lens of an iPhone:
He doesn't use Android. He uses iPhone. ... so he really didn't know how to use it [TalkBack]. And he said, `Well I'll learn, but let me tell you how it works on iPhone.' ... And I'm thinking, `I don't have an iPhone. I have my Android. I want to hear about how it works on Android. Talk to me about TalkBack.' (L2)
L2 described feeling unable to advocate for his needs in the blind community since he was new to the blind identity: I sort of feel like a newcomer and like I believe in advocating for myself but I also don’t want to appear too brash. Because this is my new community, and I have to take care of it. (chuckle) (L2).
Due to the dominance of the iPhone in the blind community resulting in fewer support opportunities, older adult Android users in particular turned to online resources. L2 described, I've had to look for other [online] sources other than [the local blind organization]. Like Hadley[<https://hadleyhelps.org/workshops/listen-with-talkback-series/android-talkback-talkback-basics>] who has the course on Androids. And different podcasts, such as the Blind Android Users Podcast[<https://blindandroidusers.onpodium.co/>] is very good (L2).
Finally, older adults with later life vision loss who were homebound experienced increased exclusion from local resources.
For L1, in-person support is necessary not just with the phone, but with almost everything. (chuckles) (L1). He shared that he didn't go through any training (L1) for his smartphones since blind organization locations were too far away, and he felt that online resources like YouTube tutorials or manuals were too difficult, inaccessible, and incomplete to use on his own. Thus he found himself in a resource deadlock.
Without access to the blind community, older adults with later life vision loss must rely on familiar resources, such as their family. However, sighted family members may not necessarily be able to support them due to their visual understanding of smartphones:
If you're in a community that doesn't understand the blindness field, and you're trying to get a smartphone, you're going to try to do everything visually. And you’re going to lean on your son, your daughter, your relative, whatever. And hope that they know what’s going on. And mostly they don't because they only know how to operate it visually. (E7)
Trying to use the smartphone visually hinders older adults with vision loss from seeking blind community support, reducing their ability to relearn their smartphone with blindness.
§ DISCUSSION
§.§ Intersecting Liminality
We present Intersecting Liminality, a framework that articulates the socio-technological experiences of people who are in transition to inhabiting multiple (marginalized) identities, such as disability, older age, and technology use (refer to <ref>). Our data reveal that for some older adults, the experiences of navigating through multiple liminalities simultaneously are fundamentally different from the experiences of navigating them individually. We argue that these liminalities intersect and their effects compound for older adults as they acquire blindness, re-acquire their smartphones, and also continue to age.
We found that participants were experiencing three concurrent life transitions of age, disability, and technology use, each with its own liminal space.
Moreover, participants struggled to move through these spaces, as many internal and external forces pushed them back into liminality.
Prior literature documents some of the challenges and barriers to learning and mastery of smartphones and ICTs for blind adults <cit.> and blind older adults <cit.>.
However, our novel reading of these challenges, through the lens of multiple life transitions and liminalities, enables us to view these phenomena in a new light. For example, we now understand that being blind while acquiring a smartphone can be othering and isolating because one's ambiguous identity lies outside the mainstream social practices. We also now understand how acquiring a smartphone while being blind and older can feel like being stuck in “persistent liminality” <cit.> because one cannot rely on traditional social supports and has to learn amidst other challenges such as aging.
Leveraging a framing of life transitions to understand technology acquisition can help us make sense of challenges encountered and strategies for successful adoption.
We found that challenges with acquisition may have less to do with accessibility and more to do with resistance to accepting a BLV identity.
Yet, prior work on acquiring technology with a disability tends to not include participant perspectives about acquiring disability at the same time as acquiring the technology <cit.>.
In addition to accepting a BLV identity, surprisingly, we found that those who were already smartphone users but who had newly acquired vision loss had the unique experience of having to re-acquire their smartphone.
Participants experienced a shift in identity, going from feeling tech-savvy to feeling like a novice, like being back in grade school (e.g., L2), due to having to relearn the smartphone as a blind person. Our participants with later life blindness already have a smartphone, but they need to learn it in a completely new way. They already have a mental model of the features that are available on their smartphone, but they don't have a mental model of the language used to access it as a blind user.
Additionally, in our field, we conceptualize adopting technology as an action a person does once,
such as adopting an Android smartphone for the first time <cit.>. When we include re-acquisition of the smartphone as a legitimate form of acquisition, we can imagine new interventions to support the relearning of technology, which may occur differently and more than once due to life transitions.
We argue that identity is consequential to acquiring technology and that we should consider re-acquisition a type of acquisition.
Thus, we recommend that researchers include people who are going through disability transitions, such as Franz et al. <cit.>, in studies and explore how disability transitions and changes in identity interact with technology use.
The Intersecting Liminality framework stitches our findings together. Consider L2's life transitions as depicted in <ref>. L2 was a sighted Android smartphone user throughout his late 50s and 60s (Figure 1A). When he acquired blindness in his early 70s, he could no longer access his phone
(Figure 1B).
Accepting his blindness,
he proactively sought to educate himself about how to use his Android's accessibility features, including taking classes at local blind advocacy organizations (Figure 1B, striped blue arrows pointing away from liminality cloud). Yet, he found that the blind community predominantly used iOS; even his instructor was an iOS user with minimal Android knowledge (Figure 1C, solid red arrows pointing towards liminality cloud).
He found himself trapped in a state of constant learning, akin to that which he experienced in early life.
L2's experience differed from those with early life blindness, who grew up “natively” within the blind community and with iOS VoiceOver. We see how L2 not only experienced liminality as a blind man who does not fit among other blind people (blindness axis), but he is also trapped in a state of persistent technology learning (smartphone axis), marking a slide back into the struggles of youth and never reaching the leisure of retirement (aging axis). To exit liminal space along the aging axis would require additionally exiting liminal space along the other two axes. When liminalities intersect, they interact and amplify the forces pushing one back into ambiguity, obscurity, and otherness, potentially ensnaring one in liminal space.
Intersecting Liminality is akin to the concept of intersectionality <cit.>, as we begin to theorize the complex intersections and interactions between various identity-based processes. In Crenshaw's foundational work, she argues that when people–in particular black women–inhabit multiple marginalized identities, they experience gender discrimination and race discrimination in a compounding way, not separately: “race and gender converge so that the concerns of minority women fall into the void between concerns about women's issues and concerns about racism” (p. 1282) <cit.>. Intersecting Liminality adds dynamism to this model, attending to the transitions between identities, rather than figuring identities as static states. As such, it can account for the forward and backward progress through identity space (e.g,. moving from sighted to blind, and back again in denial) and explore the duality and vacancy of diametrically opposed identities (e.g., the state of being both novice and expert smartphone user, or neither).
Our work aligns with a growing body of intersectionality literature in HCI that calls for greater attention to nuanced and multiple marginalized identities, including class <cit.> and disability <cit.>, which are consequential for technology research and design.
§.§ Putting Intersecting Liminality to Use
Although we studied BLV older adults adopting smartphones, we believe that Intersecting Liminality can be used to explore multiple simultaneous life transitions. Additionally, Intersecting Liminality can help explain findings from prior work and be used for future work with people acquiring disability identities other than blindness and technologies other than smartphones.
Take for instance Franz et al.’s work with older adults with ability changes and their perceptions of and adoption of mobile device accessibility features <cit.>. Although participants with vision loss had more difficulty using their smartphones, Franz et al. <cit.> found that before their study, many of them did not use accessibility features on their mobile devices, likely since they did not identify as disabled. The participants who did use accessibility features were disabled from birth. When the researchers followed up with participants a few weeks after the initial interview to see if they were still using accessibility features demonstrated in the first interview, many participants did not adopt accessibility features, citing reasons such as forgetting how to use features and lack of social support. That the smartphones’ accessibility features were difficult to adopt by participants experiencing age- and disability-related challenges suggests compounding effects from multiple liminalities.
As researchers and designers of assistive and accessible technologies move forward, it is important to consider that disability identity is not static; people with disabilities experience a rich range of other concurrent life transitions. These transitions are not incidental but are in fact essential to the ways in which disabled people acquire, adopt, and interact with technology. When we fail to account for the holistic, dynamic lived identities of people with disabilities at their intersections, the consequences can range from inconvenient–as when the technology prevents a BLV person from seeking support from sighted family and friends–to socially isolating–as when the technology forces them back into liminal space.
§ LIMITATIONS
This study was carried out in the United States of America, thus our findings may not be applicable to other cultures or geographic areas. While 16 of our participants were blind earlier in life, only 6 had later life vision loss. Additionally, we recruited BLV older adults who use smartphones, which excludes experiences of BLV older adults who attempted to acquire a smartphone but did not continue to use one. Most (n=18) of our participants were iPhone users, while few used Android or BlindShell. This is not unexpected, since the iPhone tends to be much more prevalent among blind smartphone users <cit.>.
Further, since our study focuses on the experiences of BLV older adults, we are unable to make comparisons with other populations, such as BLV adults and sighted older adults. Future research should explore if Intersecting Liminality is still a productive lens in the context of, for example, texting versus voice calling preferences of these populations.
§ CONCLUSION AND FUTURE WORK
In this study, we investigated how BLV older adults acquire smartphones and what resources they used to support the process of acquisition. We conducted qualitative semi-structured interviews with 22 BLV older adults, and reading through the lens of liminality, we found three main themes: (1) older adults who are blind experience liminality as they acquire smartphones, (2) later life BLV older adults experience liminality as they re-acquire a smartphone while acquiring blindness, and (3) engaging in mutual aid with the blind community supports transition between identities and out of liminality. We corroborate findings from previous literature on the challenges of technology adoption for BLV older adults. We add novel insights by analyzing our data from perspectives of life transitions and liminality (e.g., acquisition challenges stemming from resisting the acceptance of BLV identity). Finally, we introduce a framework of Intersecting Liminality, which we recommend be applied in future work in HCI, in studies where multiple intersecting identities are involved.
§ ACKNOWLEDGMENTS
xxx
|
http://arxiv.org/abs/2409.02570v1 | 20240904094039 | Ricci curvature and normalized Ricci flow on generalized Wallach spaces | [
"Nurlan Abiev"
] | math.DG | [
"math.DG",
"math.DS",
"53C30, 53C44, 53E20, 37C10"
] |
Ricci curvature and normalized Ricci flow on generalized Wallach spaces
N. A. Abiev Institute of Mathematics NAS KR, Bishkek, prospect Chui, 265a, 720071, Kyrgyzstan
[email protected]
§ ABSTRACT
We proved that the normalized
Ricci flow does not preserve the positivity of Ricci curvature of Riemannian metrics
on every generalized Wallach space with a_1+a_2+a_3≤ 1/2,
in particular on the spaces SU(k+l+m)/SU(k)×SU(l)×SU(m) and Sp(k+l+m)/Sp(k)×Sp(l)×Sp(m), independently of k, l and m.
The positivity of Ricci curvature is preserved for all original metrics with Ric>0 on generalized Wallach spaces with a_1+a_2+a_3> 1/2 if the conditions 4(a_j+a_k)^2≥ (1-2a_i)(1+2a_i)^-1 hold for all {i,j,k}={1,2,3}.
We also established that the spaces SO(k+l+m)/SO(k)×SO(l)×SO(m) satisfy the above conditions for max{k,l,m}≤ 11; moreover, additional conditions were found to keep Ric>0 in cases where max{k,l,m}≤ 11 is violated.
Similar questions have also been studied for all other generalized Wallach spaces given in the classification of Yuriĭ Nikonorov.
Key words and phrases:
generalized Wallach space,
invariant Riemannian metric, normalized Ricci flow, sectional curvature,
Ricci curvature, dynamical system.
2010 Mathematics Subject Classification: 53C30, 53C44, 53E20, 37C10.
N. A. Abiev
September 9, 2024
=====================
§ INTRODUCTION
In this paper we continue our studies <cit.> related to the evolution of invariant Riemannian metrics on a class of homogeneous Riemannian spaces called generalized Wallach spaces (GWS) under the normalized Ricci flow (NRF)
∂/∂ t g(t) = -2 Ric_g + 2 g(t) S_g/n,
introduced in <cit.> by Hamilton, where Ric_g is the Ricci tensor and S_g is the scalar curvature of a one-parameter family of Riemannian metrics g(t) on a given Riemannian manifold of dimension n.
According to <cit.>
a generalized Wallach space is
a homogeneous almost effective compact space G/H with a compact semisimple
connected Lie group G and its closed subgroup H
(with corresponding Lie algebras 𝔤 and 𝔥)
such that
the isotropy representation of G/H admits a decomposition into a direct sum
𝔭=𝔭_1⊕𝔭_2⊕𝔭_3 of three
Ad(H)-invariant irreducible modules
𝔭_1, 𝔭_2 and 𝔭_3
satisfying
[𝔭_i,𝔭_i]⊂𝔥 for each i∈{1,2,3},
where 𝔭 is an orthogonal complement of 𝔥 in 𝔤 with respect to a bi-invariant inner product ⟨· ,·⟩=-B(· ,·)
on 𝔤 defined by the Killing form
B(· ,·) of 𝔤.
For any fixed bi-invariant inner product
⟨·, ·⟩ on the Lie algebra 𝔤 of the Lie group G,
any G-invariant Riemannian metric g on G/H can be determined by an Ad (H)-invariant inner product
(·, ·)= x_1⟨·, ·⟩|_𝔭_1+
x_2⟨·, ·⟩|_𝔭_2+
x_3⟨·, ·⟩|_𝔭_3,
where x_1,x_2,x_3 are positive real numbers (see <cit.> and references therein for details).
We complete a brief description of these spaces recalling that one
of their important characteristics is that
every generalized Wallach space can be described by a triple of real numbers
a_i:=A/d_i∈ (0,1/2], i=1,2,3, where d_i=dim(𝔭_i) and A is some positive constant (see <cit.> for details).
In <cit.> classifications of singular (equilibria) points of (<ref>) and their parametric bifurcations are studied on generalized Wallach spaces
and other classes of homogeneous spaces.
The interesting facts are that equilibrium points are exactly Einstein metrics and vice versa; moreover, the number of singular points and their types (hyperbolic, semi-hyperbolic, nilpotent or linearly zero) largely determine the evolution of Riemannian metrics under NRF. In <cit.> we initiated the study of new aspects related to the evolution of positively curved Riemannian metrics on GWS.
Historically the classification of homogeneous spaces admitting
homogeneous metrics with positive sectional curvature
was obtained as a result of a joint effort of
Aloff and Wallach <cit.>,
Bérard Bergery <cit.>, Berger <cit.>,
Wallach <cit.> and Wilking <cit.>.
Note that Bérard Bergery's classification in the odd-dimensional case
was recently revisited by Wolf and Xu in <cit.> and
a new version of the classification with refined proofs was
suggested by Wilking and Ziller in <cit.>.
The classification of homogeneous spaces which admit
metrics with positive Ricci curvature was obtained by Berestovskiĭ in <cit.>,
see also <cit.>
where (in particular) homogeneous Riemannian manifolds of positive sectional curvature
and positive Ricci curvature are described.
Nowadays an interesting question is to understand whether
the positivity of sectional and/or Ricci curvature of Riemannian metrics
on a given Riemannian manifold is preserved when the metrics are evolved by the Ricci flow.
Cheung and Wallach
showed in Theorem 2 in <cit.> non-maintenance of the sectional curvature for
certain metrics on the Wallach spaces
W_6:=SU(3)/T_max,
W_12:=Sp(3)/Sp(1)×Sp(1)×Sp(1),
W_24:=F_4/Spin(8)
which are the only even-dimensional homogeneous spaces
admitting metrics with positive sectional curvature according to <cit.>.
Böhm and Wilking gave an example of metrics of positive sectional curvature
which can lose even the positivity of the Ricci curvature.
They proved in Theorem 3.1 in <cit.> that
the (normalized) Ricci flow deforms some metrics of positive sectional curvature
on W_12 into metrics with mixed Ricci curvature. An analogous result was obtained for W_24 in Theorem 3 of <cit.>.
In <cit.> we considered similar questions
on a class of generalized Wallach spaces with coincided parameters a_1=a_2=a_3:=a∈ (0,1/2)
(see Section <ref> for definitions).
Note that a complete classification of generalized Wallach spaces
was obtained by Nikonorov <cit.>.
Chen et al. in <cit.> obtained similar results with simple G.
It should be also noted that
the Wallach spaces W_6, W_12 and W_24 are exactly
particular cases of generalized Wallach spaces with a_1=a_2=a_3=a
equal to 1/6, 1/8 and 1/9 respectively.
The following theorems were proved using the apparatus of dynamical systems and
asymptotic analysis (by generic metrics we mean metrics satisfying x_i≠ x_j≠ x_k≠ x_i, i,j,k∈{1,2,3}):
On the Wallach spaces W_6, W_12, and W_24, the normalized Ricci flow
evolves all generic metrics with positive sectional curvature into metrics with mixed sectional curvature.
Using the same tools the following results were obtained about the Ricci curvature:
Let G/H be a generalized Wallach space with a_1=a_2=a_3=:a, where a∈ (0,1/4)∪(1/4,1/2).
If a<1/6, then
the normalized Ricci flow
evolves all generic metrics with positive Ricci curvature into metrics with mixed Ricci curvature.
If a∈ (1/6,1/4)∪(1/4,1/2), then the normalized Ricci flow evolves all generic metrics into metrics with positive Ricci curvature.
Let G/H be a generalized Wallach space with a_1=a_2=a_3=1/6.
Suppose that it is supplied with the invariant Riemannian metric (<ref>) such that x_k<x_i+x_j for all indices with {i,j,k}={1,2,3},
then the normalized Ricci flow on G/H with this metric as the initial point, preserves the positivity of the Ricci curvature.
The normalized Ricci flow on a generalized Wallach space G/H with
a_1=a_2=a_3=1/4
evolves all generic metrics into metrics with positive Ricci curvature.
Recently, new results have appeared on the evolution of positively curved Riemannian metrics under the Ricci flow. Among them, Bettiol and Krishnan showed in <cit.> that the Ricci flow does not preserve positive sectional curvature of Riemannian metrics on S^4 and ℂP^2.
Some extended results on positive intermediate curvatures for an infinite series of homogeneous spaces
Sp(n+1)/Sp(n-1)×Sp(1)×Sp(1)
was obtained in <cit.>
by González-Álvaro and Zarei,
where the space W_12 is included as the first member corresponding to n=2.
Cavenaghi et al. proved in <cit.> that SU(3)/T_max admits metrics of positive intermediate
Ricci curvature losing positivity under Ricci flow.
They established similar properties for
a family of homogeneous spaces
SU(m+2p)/SU(m)×SU(p)
×SU(p)
as well.
In this paper we consider the generalized Wallach spaces
SO(k+l+m)/SO(k)×SO(l)×SO(m),
SU(k+l+m)/SU(k)×SU(l)×SU(m)
and
Sp(k+l+m)/Sp(k)×Sp(l)×Sp(m),
and other ones listed in Table 1
in accordance with <cit.>,
where GWS n means a generalized Wallach space with (𝔤, 𝔥) positioned in line n.
From the geometrical point of view only a countable number of natural triples (k,l,m),
which correspond to actual generalized Wallach spaces,
can be interesting for us.
Although not every triple (a_1, a_2, a_3)∈ (0,1/2)^3
corresponds to an actual generalized Wallach space,
we will consider the problem in a more general context,
for arbitrary reals a_1, a_2, a_3∈ (0,1/2),
which allows us to involve fruitful methods of continuous mathematics.
Such an approach already proved fruitful in <cit.>, where a special algebraic surface was introduced,
defined by a polynomial of degree 12 in the three real variables
a_1, a_2 and a_3, which provides degenerate singular points of a three-dimensional
dynamical system obtained as a reduction of the normalized Ricci flow on generalized Wallach spaces.
Topological characteristics of that surface, such as the number of connected components, were found in <cit.>, and a deeper structural analysis was given in <cit.>.
It should be also noted that in <cit.>
results of Theorem <ref> were generalized
to the case a∈ (0,1/2) considering (<ref>) as an abstract dynamical system.
In the present paper we extend Theorems <ref> – <ref> to the case
of arbitrary a_1, a_2, a_3∈ (0,1/2).
Our main results are contained in Theorems <ref> – <ref> and Corollary <ref>.
Let G/H be a generalized Wallach space with a_1+a_2+a_3≤ 1/2.
Then the normalized Ricci flow
evolves some metrics (<ref>) with positive Ricci curvature into metrics with mixed Ricci curvature.
Let G/H be a generalized Wallach space with a_1+a_2+a_3> 1/2.
* The normalized Ricci flow (<ref>) evolves all
metrics (<ref>) with positive Ricci curvature into metrics with positive Ricci curvature if
θ≥max{θ_1, θ_2, θ_3},
* At least some metrics (<ref>) with positive Ricci curvature can be evolved into metrics with positive Ricci curvature, if θ≥max{θ_1, θ_2, θ_3} fails,
where θ:=a_1+a_2+a_3-1/2∈ (0,1),
θ_i:=a_i-1/2+√((1-2a_i)(1+2a_i)^-1)/2,
i=1,2,3.
Note that θ≥max{θ_1, θ_2, θ_3} is equivalent to the condition 4(a_j+a_k)^2≥ (1-2a_i)(1+2a_i)^-1 announced in the abstract
for all {i,j,k}={1,2,3}.
For the spaces
SO(k+l+m)/SO(k)×SO(l)×SO(m) (GWS 1 in Table 1)
which satisfy a_1+a_2+a_3>1/2 we proved
the following special theorem under the assumptions k≥ l≥ m>1
based on symmetry.
The following assertions hold for SO(k+l+m)/SO(k)×SO(l)×SO(m),
k≥ l≥ m>1 (see also Table 2):
* The normalized Ricci flow (<ref>) evolves all metrics (<ref>) with Ric>0
into metrics with Ric>0 if either k≤ 11 or one of
the conditions 2<l+m≤ X(k) or l+m≥ Y(k) is satisfied for
k∈{12, 13,14,15,16},
* The normalized Ricci flow (<ref>) evolves some metrics (<ref>) with Ric>0 into metrics with Ric>0 if
X(k)<l+m<Y(k) for k≥ 12.
* There exists a finite number of GWS 1 on which any metric with Ric>0 maintains Ric>0 under NRF (<ref>).
But there are infinitely (countably)
many GWS 1 on which Ric>0 can be kept
at least for some metrics with Ric>0,
where
X(k)=2k(k-2)/(k+2+√(k^2-12k+4))-k+2,
Y(k)=2k(k-2)/(k+2-√(k^2-12k+4))-k+2 and
2<X(k)<Y(k) for all k≥ 12.
Theorems <ref> and <ref> imply
Under the normalized Ricci flow (<ref>) the positivity of the Ricci curvature
* is not preserved on
GWS 2, 3, 5, 7, 15;
* is preserved on
GWS 4, 6, 8, 9, 10, 11, 12, 13, 14.
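The inequalities in Theorems <ref> and <ref> are elementary to check numerically. The following sketch (plain Python, not part of the original paper; all helper names are ours) tests the criterion θ≥max{θ_1, θ_2, θ_3} for GWS 1, using a_i=k_iθ/2 with θ=1/(k+l+m-2) as in the proof of Theorem <ref>, and prints the thresholds X(k), Y(k) for 12≤ k≤ 16.

```python
from math import sqrt

def theta_i(a):
    """theta_i = a_i - 1/2 + (1/2)sqrt((1-2a_i)/(1+2a_i))."""
    return a - 0.5 + 0.5 * sqrt((1 - 2 * a) / (1 + 2 * a))

def preserves_all_gws1(k, l, m):
    """True iff theta >= max theta_i for SO(k+l+m)/SO(k)xSO(l)xSO(m)."""
    s = k + l + m - 2
    theta = 1.0 / s
    return all(theta >= theta_i(ki / (2.0 * s)) for ki in (k, l, m))

def X(k):
    d = sqrt(k * k - 12 * k + 4)
    return 2 * k * (k - 2) / (k + 2 + d) - k + 2

def Y(k):
    d = sqrt(k * k - 12 * k + 4)
    return 2 * k * (k - 2) / (k + 2 - d) - k + 2

print(preserves_all_gws1(2, 2, 2))      # True  (max{k,l,m} <= 11)
print(preserves_all_gws1(15, 14, 13))   # True  (l+m = 27 >= Y(15) = 26)
print(preserves_all_gws1(14, 7, 4))     # False (X(14) < l+m < Y(14))
for k in range(12, 17):
    print(k, round(X(k), 3), round(Y(k), 3))   # e.g. X(12) = 5, Y(12) = 10
```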
§ PRELIMINARIES
In <cit.> the following explicit expressions
Ric_g= r_1 Id|_𝔭_1+
r_2 Id|_𝔭_2+
r_3 Id|_𝔭_3
and
S_g=d_1 r_1+d_2 r_2+d_3 r_3
were found for the Ricci tensor Ric_g and
the scalar curvature S_g of the metric (<ref>)
on a generalized Wallach space G/H,
where
r_i:=1/(2x_i)+(a_i/2)(x_i/(x_jx_k)-x_k/(x_ix_j)-x_j/(x_ix_k))
are the principal Ricci curvatures,
d_i are the dimensions of the modules 𝔭_i such that d_1+d_2+d_3=n and
a_i are real numbers in the interval (0,1/2] for {i,j,k}={1,2,3}.
Substituting the above mentioned expressions for Ric_g and
S_g into (<ref>) it can be reduced
to the following system of three ordinary differential equations
studied in <cit.>:
dx_1dt = f_1(x_1,x_2,x_3), dx_2dt=f_2(x_1,x_2,x_3), dx_3dt=f_3(x_1,x_2,x_3),
where x_i=x_i(t)>0 (i=1,2,3), are parameters of the invariant metric (<ref>) and
f_1(x_1,x_2,x_3) = -1-a_1x_1 ( x_1/(x_2x_3)- x_2/(x_1x_3)- x_3/(x_1x_2))+x_1B,
f_2(x_1,x_2,x_3) = -1-a_2x_2 ( x_2/(x_1x_3)- x_3/(x_1x_2) - x_1/(x_2x_3))+x_2B,
f_3(x_1,x_2,x_3) = -1-a_3x_3 ( x_3/(x_1x_2)- x_1/(x_2x_3)- x_2/(x_1x_3))+x_3B,
B:=(∑_i=1^3 1/(a_ix_i)- ∑_{i,j,k}={1,2,3}, j<k x_i/(x_jx_k))
(∑_i=1^3a_i^-1)^-1.
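For readers who wish to experiment numerically, here is a minimal sketch of the system (<ref>) (plain Python, not from the paper; a crude RK4 step is used instead of a proper ODE solver). It also illustrates the first-integral property of the function Vol recalled just below: its value stays essentially constant along the computed trajectory.

```python
def rhs(x, a):
    """Right-hand side (f_1, f_2, f_3) of the reduced system above."""
    B = (sum(1.0 / (a[i] * x[i]) for i in range(3))
         - sum(x[i] / (x[(i + 1) % 3] * x[(i + 2) % 3]) for i in range(3))
         ) / sum(1.0 / ai for ai in a)
    f = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        f.append(-1.0 - a[i] * x[i] * (x[i] / (x[j] * x[k])
                                       - x[j] / (x[i] * x[k])
                                       - x[k] / (x[i] * x[j])) + x[i] * B)
    return f

def rk4(x, a, h):
    k1 = rhs(x, a)
    k2 = rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k1)], a)
    k3 = rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k2)], a)
    k4 = rhs([xi + h * ki for xi, ki in zip(x, k3)], a)
    return [xi + h * (p + 2 * q + 2 * r + s) / 6
            for xi, p, q, r, s in zip(x, k1, k2, k3, k4)]

def vol(x, a):
    return x[0] ** (1 / a[0]) * x[1] ** (1 / a[1]) * x[2] ** (1 / a[2])

a = (0.25, 0.25, 0.25)              # e.g. GWS 1 with k = l = m = 2
x = [1.10, 0.90, 1.05]
v0 = vol(x, a)
for _ in range(2000):
    x = rk4(x, a, 1e-3)
print(x, vol(x, a) / v0)            # the ratio stays close to 1
```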
Recall that Vol=c is a first integral of the system (<ref>) for any c>0,
where
Vol:=x_1^(1/a_1)x_2^(1/a_2)x_3^(1/a_3)
(see <cit.>). The surface Vol=1, which corresponds to parameters of unit volume metrics, is of particular importance; denote it by Σ.
The case Vol=c could actually be reduced to the case c=1
using homogeneity of the functions f_i and autonomy property in (<ref>).
This could be made by introducing new variables x_i=x_i√(c) and t=τ√(c), then
we get a new system of ODEs in new functions x_i(τ) but of the same form as the original one (<ref>) (see also <cit.>).
Therefore we can consider metrics with Vol=1 without loss of generality.
Being a first integral of the system (<ref>)
the identity Vol=1 allows to reduce (<ref>) to the following system of two ODEs
(see details in <cit.>):
dx_1dt = f_1(x_1,x_2), dx_2dt=f_2(x_1,x_2),
where
f_i(x_1,x_2):=f_i(x_1,x_2, φ(x_1,x_2)),
φ(x_1,x_2):=x_1^-a_3/a_1x_2^-a_3/a_2, i=1,2.
Introduce surfaces Λ_i in (0,+∞)^3 defined by the equations
λ_i:=a_i(x_i^2-x_j^2-x_k^2)+x_jx_k=0 (equivalent to r_i=0)
for {i,j,k}={1,2,3}. Denote by R the set
of points (x_1,x_2,x_3)∈ (0,+∞)^3 satisfying the inequalities λ_i>0 for all i=1,2,3.
Then Ric_g>0 is equivalent to (x_1,x_2,x_3)∈ R.
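In computations it is convenient to test Ric_g>0 through the functions λ_i rather than the principal curvatures r_i; a small illustrative helper (our own sketch, not from the paper) might look as follows.

```python
def lambdas(x, a):
    """lambda_i = a_i(x_i^2 - x_j^2 - x_k^2) + x_j x_k; note that
    2 x_1 x_2 x_3 r_i = lambda_i, so lambda_i > 0 iff r_i > 0."""
    out = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        out.append(a[i] * (x[i] ** 2 - x[j] ** 2 - x[k] ** 2) + x[j] * x[k])
    return out

def in_R(x, a):
    return all(v > 0 for v in lambdas(x, a))

a = (5 / 26, 2 / 13, 3 / 26)      # parameters used in one of the planar examples below
print(in_R((1.0, 1.0, 1.0), a))   # True: every lambda_i = 1 - a_i > 0
print(in_R((1.0, 1.0, 10.0), a))  # False: lambda_1, lambda_2 < 0 (mixed Ricci curvature)
```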
§ AUXILIARY RESULTS
Introduce the parameter
ω:=(a_i^-1+a_j^-1+a_k^-1)^-1.
We will often use real roots
m_i=m(a_i):=(1-√(1-4a_i^2))/(2a_i) and
M_i=M(a_i):=(1+√(1-4a_i^2))/(2a_i)
of the quadratic equation
t^2-a_i^-1t+1=0 which depend on the parameter a_i.
Clearly M_i=m_i^-1.
We also need the following obvious inequalities for all a_i∈ (0,1/2):
0<a_i<m_i<2a_i<1<1/2a_i<M_i<1/a_i.
§.§ Structural properties of some surfaces and curves
related to the set of metrics Ric>0
For an arbitrary generalized Wallach space with a_1,a_2,a_3∈ (0,1/2) the set R
is bounded by the conic surfaces
Λ_1, Λ_2 and Λ_3,
each consisting of two connected components.
Moreover, for all a_i,a_j∈ (0,1/2)
the cones Λ_i and Λ_j
intersect each other along two straight lines:
the first of them is defined by x_i=x_j, x_k=0,
and the second one has an equation
x_i=up, x_j=vp, x_k=p>0, {i,j,k}={1,2,3},
where
[ u=Φ(a_i,a_j):= ϕ(a_i,a_j) if a_i≠ a_j, and a_j if a_i=a_j;
v=Ψ(a_i,a_j):= ψ(a_i,a_j) if a_i≠ a_j, and a_i if a_i=a_j ]
with ϕ(a_i,a_j):=(1/2)· (4a_i^2-1+√(Δ_ij))/(a_i^2-a_j^2)· a_j>0,
ψ(a_i,a_j):=(1/2)· (4a_j^2-1+√(Δ_ij))/(a_j^2-a_i^2)· a_i>0 and
Δ_ij:=(1-4a_i^2)(1-4a_j^2)>0.
See Fig. <ref> for illustrations.
The inequality λ_i=a_i(x_i^2-x_j^2-x_k^2)+x_jx_k>0 is equivalent to
x_i^2>T:=x_j^2-a_i^-1x_jx_k+x_k^2.
Any x_i>0 satisfies λ_i>0 if T≤ 0.
If T>0 then λ_i>0 is equivalent to x_i> √(T).
Considering T>0 as a quadratic inequality with respect to x_j
which admits solutions 0<x_j<m_ix_k or x_j>M_ix_k
we conclude that the set R is bounded by the coordinate planes x_1=0, x_2=0, x_3=0
and by two components of the cone Λ_i,
defined by the same equation x_i=√(T) in two different domains
{x_k>0, 0<x_j<m_ix_k} and
{x_k>0, x_j>M_ix_k} of the coordinate plane (x_k,x_j).
Due to symmetry analogous assertions can be obtained for the surfaces
Λ_j and Λ_k using permutations of indices.
Thus we established that R is bounded by the cones Λ_1, Λ_2 and Λ_3.
To find common points of the surfaces Λ_i and Λ_j
consider the system of equations
λ_i=a_i(x_i^2-x_j^2-x_k^2)+x_jx_k=0 and
λ_j=a_j(x_j^2-x_i^2-x_k^2)+x_ix_k=0.
We can pass to a new system
a_i(u^2-v^2-1)+v=0, a_j(v^2-u^2-1)+u=0,
where x_i=ux_k, x_j=vx_k.
It follows then a_iu+a_jv-2a_ia_j=0 or
v=a_ia_j^-1(2a_j-u).
Substituting this expression into one of the previous equations,
we obtain a quadratic equation with respect to u:
(a_i^2-a_j^2)u^2-(4a_i^2-1)a_ju+(4a_i^2-1)a_j^2=0.
The case a_i=a_j leads to a solution u=a_j. It follows then v=a_i.
Assume that a_i≠ a_j.
The case a_i>a_j. Then 4a_i^2-1+√(Δ_ij)>0 and
the latter quadratic equation admits a positive root u^(1),
equal to ϕ(a_i,a_j) shown in (<ref>).
Another root u^(2), associated with the summand -√(Δ_ij)
is out of our interest because of
4a_i^2-1-√(Δ_ij)=-(1-4a_i^2+√(Δ_ij))<0
independently on a_i and a_j.
Substituting u^(1)=ϕ(a_i,a_j)
into
v^(1)=a_ia_j^-1(2a_j-u^(1))
gives v^(1)=ψ(a_i,a_j) in (<ref>).
The case a_i<a_j. In this case u^(1)>0 since 4a_i^2-1+√(Δ_ij)<0.
The second root u^(2) we reject again although it is positive,
because the corresponding v^(2) occurs negative.
Summing a_jλ_i=0 and a_iλ_j=0
we can find another family of solutions x_i=x_j, x_k=0
of the system of λ_i=0 and λ_j=0
which will also be useful in sequel.
It should be noted that there are also negative solutions of the system of λ_i=0 and λ_j=0
non permissible for our aims.
Lemma <ref> is proved.
Assume that 0<a_j<a_i<1/2. Then u and v in (<ref>)
satisfy 0<û < 1 < v̂,
where û:=u a_j^-1 and v̂:=v a_i^-1.
For a_i≠ a_j both of the inequalities
û<1 and v̂>1 are equivalent to
√(Δ_ij)<1-2(a_i^2+a_j^2), which, after squaring (admissible since 1-2(a_i^2+a_j^2)>0), is equivalent to
(a_i-a_j)^2(a_i+a_j)^2>0.
Lemma <ref> is proved.
Denote by Λ_ij and Λ_ji connected components of the cones Λ_i and Λ_j respectively,
which contain the common straight line (<ref>).
For components of Λ_i and Λ_j meeting each other along x_i=x_j, x_k=0
accept notations Λ_ik and Λ_jk respectively.
Then each curve r_i formed as Σ∩Λ_i
consists of two connected components
r_ij:=Σ∩Λ_ij and r_ik:=Σ∩Λ_ik.
By analogy r_j:=Σ∩Λ_j=r_ji∪ r_jk, where
r_ji:=Σ∩Λ_ji and r_jk:=Σ∩Λ_jk.
For an arbitrary generalized Wallach space with a_1,a_2,a_3∈ (0,1/2)
each curve r_i=Λ_i ∩Σ
consists of two connected components r_ij and r_ik, which are smooth curves
and can be represented by parametric equations
[ x_i=ϕ_i (t):=t^(-ω a_j^-1)(√(t^2-a_i^-1t+1))^(ω(a_j^-1+a_k^-1));
x_j= ϕ_j (t):= tx_k;
x_k= ϕ_k (t):=(√(t^2-a_i^-1t+1))^-1x_i ]
defined for t∈ (0,m_i) and t∈ (M_i,+∞) respectively,
where ω=(a_i^-1+a_j^-1+a_k^-1)^-1 and {i,j,k}={1,2,3}.
In addition,
[ lim_t→ 0+ x_i=lim_t→ 0+ x_k = +∞, lim_t→ 0+ x_j =0,
lim_t→ m_i- x_j= lim_t→ m_i- x_k = +∞, lim_t→ m_i- x_i=0,; lim_t→ M_i+ x_j= lim_t→ M_i+ x_k = +∞, lim_t→ M_i+ x_i=0,
lim_t→ +∞ x_i=lim_t→ +∞ x_j = +∞, lim_t→ +∞ x_k =0. ]
Substituting x_j=tx_k into the equation a_i(x_i^2-x_j^2-x_k^2)+x_jx_k=0
of Λ_i and solving the quadratic equation
(t^2-a_i^-1t+1)x_k^2=x_i^2
with respect to x_k>0, we obtain the third expression in (<ref>).
Then the expression for x_i is obtained using
Vol=x_i^1/a_ix_j^1/a_jx_k^1/a_k=1.
As it follows from Lemma <ref>, the curve r_i=Σ∩Λ_i
consists of two components r_ij=Σ∩Λ_ij and r_ik=Σ∩Λ_ik. This fact is also clear from
t^2-a_i^-1t+1=(t-m_i)(t-M_i)>0,
t∈ (0,m_i)∪ (M_i,+∞),
where m_i=m(a_i) and M_i=M(a_i) are functions given in (<ref>).
The components r_ij and r_ik are connected curves
being continuous images of the connected sets (0,m_i) and (M_i, +∞).
Moreover since x_i=ϕ_i(t), x_j=ϕ_j(t) and x_k=ϕ_k(t)
are differentiable functions of t∈ (0,m_i) ∪ (M_i,+∞),
the components r_ij and r_ik are smooth curves.
Recall that analogous parametric equations
can be obtained for the curves
r_j=Σ∩Λ_j and r_k=Σ∩Λ_k
using permutations of the indices i,j and k in (<ref>).
Let us study behavior of the curves r_ij and r_ik
in neighborhoods of the boundaries of the intervals (0,m_i) and (M_i, +∞).
Clearly x_i(t)→ +∞ as t→ 0+ and x_i(t)→ 0
as t→ m_i- or t→ M_i+.
Analogously
x_i=ϕ_i(t)∼ t^-ω a_j^-1 t^ω(a_j^-1+a_k^-1)
=t^ω a_k^-1→ +∞
as t→ +∞.
Since 1-ω(a_j^-1+a_k^-1)>0,
we
easily can find the limits in (<ref>) for
x_j=ϕ_j(t) = t^1-ω a_j^-1(√((t-m_i)(t-M_i)))^-1+ω(a_j^-1+a_k^-1)
and
x_k=ϕ_k(t)= t^-ω a_j^-1(√((t-m_i)(t-M_i)))^-1+ω(a_j^-1+a_k^-1).
Lemma <ref> is proved.
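As a quick sanity check of the parametrization (<ref>) one can verify numerically that it indeed produces points with λ_i=0 and Vol=1; a short sketch (our own code, with the hypothetical names phi and lam) is given below.

```python
from math import sqrt

def phi(t, a, i=0):
    """Point (x_1, x_2, x_3) of the curve r_i for the parameter value t."""
    j, k = (i + 1) % 3, (i + 2) % 3
    w = 1.0 / (1 / a[i] + 1 / a[j] + 1 / a[k])          # omega
    S = sqrt(t * t - t / a[i] + 1)
    xi = t ** (-w / a[j]) * S ** (w * (1 / a[j] + 1 / a[k]))
    xk = xi / S
    xj = t * xk
    x = [0.0, 0.0, 0.0]
    x[i], x[j], x[k] = xi, xj, xk
    return x

def lam(x, a, i):
    j, k = (i + 1) % 3, (i + 2) % 3
    return a[i] * (x[i] ** 2 - x[j] ** 2 - x[k] ** 2) + x[j] * x[k]

a = (5 / 26, 2 / 13, 3 / 26)
for t in (0.05, 0.15, 6.0, 20.0):                       # t in (0, m_1) or (M_1, +inf)
    x = phi(t, a, i=0)
    vol = x[0] ** (1 / a[0]) * x[1] ** (1 / a[1]) * x[2] ** (1 / a[2])
    print(round(lam(x, a, 0), 12), round(vol, 12))      # ~0 and ~1 respectively
```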
For each pair of the curves r_i and r_j
corresponding to an arbitrary generalized Wallach space with a_1,a_2,a_3∈ (0,1/2)
the curve I_k defined by the equations
x_i=x_j=τ>0, x_k=τ^-a_k(a_i^-1+ a_j^-1),
{i,j,k}={1,2,3},
does not intersect their components r_ik and r_jk,
but approximates them as accurately as desired at infinity,
at the same time I_k intersects both r_i and r_j on their components
r_ij and r_ji. In addition, I_k can not intersect the curve r_k,
moreover, r_k moves away from I_k in both directions
as τ→ 0+ and τ→ +∞.
See Fig. <ref> for illustrations.
I_k is exactly the curve that belongs to the surface Σ and have a projection x_i=x_j on the coordinate plane x_k=0. Substituting x_i=x_j=τ into the equation
x_i^1/a_ix_j^1/a_jx_k^1/a_k=1 of Σ we find x_k given in (<ref>).
Substituting the expressions (<ref>) into the functions
λ_i=a_i(x_i^2-x_j^2-x_k^2)+x_jx_k
and
λ_j=a_j(x_j^2-x_i^2-x_k^2)+x_ix_k
(recall that the cones Λ_i and Λ_j are defined by λ_i=0 and λ_j=0
respectively) we obtain
λ_i=λ_i(τ)=(1-a_iτ^(-1-a_k(a_i^-1+ a_j^-1)))/(2τ), λ_j=λ_j(τ)=(1-a_jτ^(-1-a_k(a_i^-1+ a_j^-1)))/(2τ).
In what follows that I_k intersects the curve r_i=Σ∩Λ_i at the unique point with coordinates
x_i^0=x_j^0=τ_0, x_k^0=τ_0^-a_k(a_i^-1+ a_j^-1)
where
τ_0=a_i^(1+a_k(a_i^-1+a_j^-1))^-1
is the unique root
of the equation λ_i(τ)=0.
Using Lemma <ref>, write out the equation
τ_0=x_j^0=tx_k^0=tτ_0^-a_k(a_i^-1+ a_j^-1)
admitting the unique root
t=t_k:=τ_0^1+a_k(a_i^-1+a_j^-1)=a_i,
at which the mentioned point I_k∩ r_i belongs to r_i.
The inequalities 0<t_k=a_i<m_i (see (<ref>)) imply
that I_k must intersect r_i on its component r_ij.
Since such an intersection with r_i happens only once,
we conclude that I_k can not intersect the component r_ik.
Moreover, the value of the limit
lim_τ→ +∞λ_i=1/2lim_τ→ +∞(τ^-1-a_iτ^-2-a_k(a_i^-1+ a_j^-1))=0
shows that r_ik can be approximated by I_k at infinity as accurately as desired.
From the analysis of the function λ_j=λ_j(τ), similar conclusions will follow regarding the components r_ji and r_jk of the curve r_j.
For the curve r_k we have
λ_k(τ)=(a_kτ^(-2a_k(a_i^-1+ a_j^-1))+(1-2a_k)τ^2)/(2τ^(2-a_k(a_i^-1+ a_j^-1))).
The equation λ_k(τ)=0 can not admit any solution
due to positiveness of the numerator of the latter fraction.
Therefore I_k never cross r_k.
Moreover, they move away from each other as much as desired in both directions:
lim_τ→ +∞λ_k(τ)=
a_k/2lim_τ→ +∞τ^-2-a_k(a_i^-1+ a_j^-1)+
1-2a_k/2lim_τ→ +∞τ^a_k(a_i^-1+ a_j^-1)=0+∞=+∞,
lim_τ→ 0+λ_k(τ)=
a_k/2lim_τ→ 0+τ^-2-a_k(a_i^-1+ a_j^-1)+
1-2a_k/2lim_τ→ 0+τ^a_k(a_i^-1+ a_j^-1)=+∞+0=+∞.
Lemma <ref> is proved.
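A direct numerical check of the intersection point found in the proof may be reassuring; the following short script is our own ad hoc sketch, not part of the paper.

```python
# Verify that on the curve I_k the value tau_0 = a_i^{1/(1 + a_k(1/a_i + 1/a_j))}
# gives lambda_i = 0 (the point I_k ∩ r_i), while lambda_k stays positive on I_k.
a_i, a_j, a_k = 5 / 26, 2 / 13, 3 / 26
s = a_k * (1 / a_i + 1 / a_j)            # exponent in x_k = tau^{-s}
a = (a_i, a_j, a_k)

def point_on_Ik(tau):
    return (tau, tau, tau ** (-s))

def lam(x, a, i):
    j, k = (i + 1) % 3, (i + 2) % 3
    return a[i] * (x[i] ** 2 - x[j] ** 2 - x[k] ** 2) + x[j] * x[k]

tau0 = a_i ** (1.0 / (1.0 + s))
print(lam(point_on_Ik(tau0), a, 0))               # ~0: the unique point I_k ∩ r_i
for tau in (0.2, 1.0, 5.0, 50.0):
    print(lam(point_on_Ik(tau), a, 2) > 0)        # True: I_k never meets r_k
```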
For any generalized Wallach space with a_1,a_2,a_3∈ (0,1/2) the following assertions hold:
* For each pair of the curves r_i and r_j (respectively r_k)
their components r_ij and r_ji (respectively r_ik and r_ki)
admits exactly one common point P_ij (respectively P_ik)
with coordinates x_i=α p^0, x_j=β p^0, x_k=p^0
(respectively x_i=γ q^0, x_k=δ q^0, x_j= q^0),
where
[ p^0:=α^-ω a_i^-1β^-ω a_j^-1, α:=Φ(a_i,a_j), β:=Ψ(a_i,a_j),; q^0:=γ^-ω a_i^-1δ^-ω a_k^-1, γ:=Φ(a_i,a_k), δ:=Ψ(a_i,a_k) ]
with the functions Φ and Ψ defined in (<ref>),
{i,j,k}={1,2,3},
whereas their other components r_ik and r_jk
(respectively r_ij and r_kj)
are disjoint approximating each other at infinity as close as desired
(see Fig. <ref>).
* The values t_ij and t_ik of the parameter t,
which correspond to the points P_ij
and P_ik at the parametrization (<ref>),
can be found by the formula
t_ij=β, t_ik=δ^-1.
In addition a_i≤ t_ij<m(a_i) and M(a_i)<t_ik≤ a_i^-1
for all a_i, a_j, a_k∈ (0,1/2).
(1)
By definition r_i=Λ_i ∩Σ and r_j=Λ_j ∩Σ.
It follows then r_i∩ r_j=(Λ_i∩Λ_j)∩Σ, and hence,
the problem of finding r_i∩ r_j
reduces to searching for possible common points
of the straight line Λ_i∩Λ_j with the surface Σ.
As it was proved in Lemma <ref>,
the intersection Λ_i∩Λ_j equal to Λ_ij∩Λ_ji is the straight line
x_i=α p, x_j=β p, x_k=p.
Substituting these expressions into the equation
x_i^1/a_ix_j^1/a_jx_k^1/a_k=1 of Σ we obtain
the following equation regarding to the parameter p:
α^1/a_iβ^1/a_j p^1/a_i+1/a_j+1/a_k=1.
This yields coordinates
x_i=α p^0, x_j=β p^0, x_k=p^0 of an unique point
which belong to r_ij∩ r_ji=(Λ_ij∩Λ_ji)∩Σ
for α, β and p^0 exposed in (<ref>). Let us denote that point by P_ij.
Thus {P_ij}=r_ij∩ r_ji=r_i∩ r_j.
Repeating the same reasoning for the curves r_i and r_k, we find coordinates
x_i=γ q^0, x_k=δ q^0, x_j= q^0
of the unique point P_ik∈ r_ik∩ r_ki=r_i∩ r_k.
In order to highlight disjoint components of the curves r_i and r_j,
recall another common line x_i=x_j, x_k=0
of the cones Λ_i and Λ_j
(more precisely, the common line of their components Λ_ik and Λ_jk),
mentioned in Lemma <ref>.
The components r_ik and r_jk are situated on opposite sides of I_k, therefore
they can not admit a common point. Moreover, r_ik and r_jk approximate
the same curve I_k at infinity by Lemma <ref>, therefore at infinity they
approximate each other.
(2) Assume that r_i is parameterized as in Lemma <ref>.
Let t=t_ij be a value of t which corresponds to the point P_ij
on r_ij.
The coordinates of P_ij are x_i=α p^0, x_j=β p^0, x_k=p^0
as we established.
Then the equalities ϕ_j(t)=β p^0, ϕ_k(t)=p^0 and
ϕ_j(t)=tϕ_k(t) in (<ref>) easily imply
t_ij=β:=Ψ(a_i,a_j).
Now we claim that t_ij∈ [a_i, m_i) for all a_i, a_j∈ (0,1/2).
Indeed, if a_i=a_j for i≠ j then t_ij=a_i obviously.
If a_i≠ a_j then assume a_i>a_j without loss of generality.
Then Lemma <ref> implies t_ij>a_i.
The inequality t_ij<m_i is equivalent to
(1-4a_j^2)a_i^2-a_i^2√(Δ_ij)<(a_i^2-a_j^2)
(1-√(1-4a_i^2))
⇔ a_i^2-a_j^2<a_i^2√(1-4a_j^2)-a_j^2√(1-4a_i^2) ⇔ (a_i^2-a_j^2)^2<(a_i^2√(1-4a_j^2)-a_j^2√(1-4a_i^2))^2
⇔ √(Δ_ij)<1-2(a_i^2+a_j^2)
⇔
(a_i-a_j)^2(a_i+a_j)^2>0.
Thus t_ij∈ (a_i, m_i) for a_i a_j.
Coordinates x_i=γ q^0, x_k=δ q^0, x_j= q^0 of the point P_ik
can be clarified by the same way.
The value
t_ik=δ^-1=1/Ψ(a_i,a_k)
is obtained as an unique root of the equation q^0=ϕ_j(t)=tϕ_k(t)=tδ q^0 (see (<ref>)).
Analogously t_ik∈(M_i, a_i^-1].
Indeed if a_i=a_k then t_ik=Ψ(a_i,a_k)^-1=a_i^-1>M_i
due to (<ref>).
If a_i≠ a_k then assume a_i>a_k.
The inequality t_ik<a_i^-1 is clear from Lemma <ref>.
The inequality t_ik>M_i
can be established in similar way as above.
Lemma <ref> is proved.
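The coordinates of P_ij and the parameter values t_ij, t_ik are easy to evaluate numerically. The sketch below (our own helper functions Phi and Psi, not from the paper) reproduces, for instance, the quantities used for the pair (r_1, r_3) in the first planar example below.

```python
from math import sqrt

def Phi(ai, aj):
    if ai == aj:
        return aj
    d = sqrt((1 - 4 * ai ** 2) * (1 - 4 * aj ** 2))
    return 0.5 * (4 * ai ** 2 - 1 + d) / (ai ** 2 - aj ** 2) * aj

def Psi(ai, aj):
    if ai == aj:
        return ai
    d = sqrt((1 - 4 * ai ** 2) * (1 - 4 * aj ** 2))
    return 0.5 * (4 * aj ** 2 - 1 + d) / (aj ** 2 - ai ** 2) * ai

a1, a2, a3 = 5 / 26, 2 / 13, 3 / 26         # parameters of the first planar example
gamma, delta = Phi(a1, a3), Psi(a1, a3)
w = 1 / (1 / a1 + 1 / a2 + 1 / a3)          # omega
q0 = gamma ** (-w / a1) * delta ** (-w / a3)
print(gamma, delta)                          # ~0.1123, ~0.1974
print(gamma * q0, q0, delta * q0)            # coordinates of P_13
print(1 / delta)                             # t_13 ~ 5.0666, lies in (M_1, 1/a_1]
```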
It is advisable to remove extra pieces of the curves r_1, r_2 and r_3,
retaining after their pairwise intersections.
In fact those extra pieces do not belong to the boundary of the domain R.
Let us consider the example of the curve r_1 which
intersects r_2 on the unique point
P_12(α p^0, β p^0, p^0) known from Lemma <ref>, where α, β and p^0
are defined by formulas (<ref>) and (<ref>) at i=1, j=2.
An analysis of (<ref>) and (<ref>) shows that
the interval [t_12, m_1) corresponds to the extra piece
of r_12 behind the point P_12,
where t_12 can be found by (<ref>).
Therefore (0,t_12] should be taken as an interval of parametrization
for the component r_12
instead of the original interval (0,m_1).
Similarly, the interval (M_1,+∞) corresponding to r_13
before its intersection with r_31 at the point
P_13(γ q^0, q^0, δ q^0), will be reduced to [t_13, +∞),
where γ, δ and q^0 are defined by formulas (<ref>)
and (<ref>) at i=1, j=3.
§.§ Normals to cones
and the vector field associated with the differential system
To study the behavior of trajectories of the system (<ref>)
we need the sign of the standard inner product
( V, ∇λ_i)=f_i ∂λ_i/∂ x_i+f_j ∂λ_i/∂ x_j+f_k ∂λ_i/∂ x_k
on an arbitrarily chosen curve r_i,
where ∇λ_i is the normal to the surface Λ_i evaluated
along r_i and
V=(f_1,f_2,f_3) is the vector field associated with the differential system (<ref>).
For any generalized Wallach space with a_1,a_2,a_3∈ (0,1/2)
for every i=1,2,3 the normal ∇λ_i of the surface Λ_i
is directed inside R
at every point of the curve r_i.
Consider Λ_i ⊂∂(R).
It suffices to consider points of r_i.
Using Lemma <ref> we obtain
∂λ_i/∂ x_i=
a_i(t-m_i)(t-M_i)/tx_i^2>0
for all a_1,a_2,a_3∈ (0,1/2) and all t∈ (0,m_i)∪ (M_i, +∞),
where m_i and M_i are roots of t^2-a_i^-1t+1=0 known from (<ref>).
This means that the normal
∇λ_i:=(∂λ_i∂ x_1, ∂λ_i∂ x_2, ∂λ_i∂ x_3)
is directed into R at every point of the curve r_i on the surface Λ_i⊂∂(R),
because of points of R lie above the surface Λ_i in the coordinate system (i,j,k)
according to Lemma <ref>.
Note also that
∂λ_i/∂ x_j=
-√((t-m_i)(t-M_i))/2tx_i^2 (2a_it-1), ∂λ_i/∂ x_k=
√((t-m_i)(t-M_i))/2tx_i^2 (t-2a_i).
Since the inequalities
0<m_i<2a_i<1/2a_i<M_i hold
for all a_i∈ (0,1/2) (see (<ref>)),
we have ∂λ_i/∂ x_j>0 and
∂λ_i/∂ x_k<0 for t∈ (0,m_i).
Respectively, ∂λ_i/∂ x_j<0 and
∂λ_i/∂ x_k>0
on the component r_ik corresponding to the interval (M_i, +∞).
Similar reasonings can be repeated for Λ_j and Λ_k.
Everything that was established here
also holds for the corresponding working“ subintervals
(0,t_ij]⊂ (0,m_i) and
[t_ik, +∞)⊂ (M_i,+∞)
for t_ij and t_ik evaluated in Lemma <ref>.
Lemma <ref> is proved.
For an arbitrary generalized Wallach space with a_1,a_2,a_3∈ (0,1/2)
for every i=1,2,3 and for all t∈ (0,m_i) ∪ (M_i,+∞)
the sign of the inner product ( V, ∇λ_i)
evaluated at an arbitrary point of the curve r_i
coincides with the sign of the polynomial
h(t)=c_0t^4+c_1t^3+c_2t^2+c_1t+c_0
with coefficients
[ c_0:=a_i^2(1-2a_i+2a_j+2a_k)(2a_i+2a_j+2a_k-1),; c_1:=a_i(1-2a_i)^3-4a_i(4a_i^2+1)(a_j+a_k)^2,; c_2:=(16a_i^4+16a_i^2+1)(a_j+a_k)^2+2a_i(1-a_i)(1-2a_i)^2. ]
As calculations show ( V, ∇λ_i) can be represented by
the following expression:
[ ( V, ∇λ_i)=f_i ∂λ_i/∂ x_i + f_j ∂λ_i/∂ x_j + f_k ∂λ_i/∂ x_k;
= a_i/(2 (a_ia_j + a_ia_k + a_ja_k)x_i^2 x_j^2 x_k^2)·[
(a_i+a_j)(a_j+a_k)(a_k+a_i)(x_j^4+x_k^4-x_i^4); + a_k(a_i+a_j)(x_i^2-x_j^2)x_ix_j+a_j(a_i+a_k)(x_i^2-x_k^2)x_ix_k
-2a_i(a_j+a_k)(x_j^2+x_k^2)x_jx_k; +{(2a_ia_k+a_ia_j+a_ja_k-a_j)x_j+(2a_ia_j+a_ia_k+a_ja_k-a_k)x_k} x_ix_jx_k; + (a_j+a_k)(2a_i^2-2a_ia_j-2a_ia_k-2a_j a_k+1)x_j^2x_k^2]. ]
Substituting the parametric equations
x_i=ϕ_i(t), x_j=ϕ_j(t), x_k=ϕ_k(t), t∈ (0,m_i)∪ (M_i, +∞),
of the curve r_i found in Lemma <ref>
into the latter expression for ( V, ∇λ_i) we obtain
( V, ∇λ_i)=(F-G)/(2a_itx_i^2),
where
F := (a_j+a_k)(t-2a_i)(2a_it-1) and
G := (1-2a_i) a_i (t+1)√(t^2-a_i^-1t+1).
The inequalities (<ref>) imply F>0 and G>0
for all t∈ (0,m_i)∪ (M_i, +∞) and all a_i, a_j, a_k∈ (0,1/2).
Therefore for t∈ (0,m_i)∪ (M_i, +∞)
the sign of ( V, ∇λ_i)
coincides with the sign of the following fourth degree polynomial
in t:
h(t):=F^2-G^2=(a_j+a_k)^2(t-2a_i)^2(2a_it-1)^2 -
(1-2a_i)^2a_i(t+1)^2(a_it^2-t+a_i).
Collecting its similar terms
we obtain the polynomial (<ref>) with coefficients in (<ref>).
Lemma <ref> is proved.
Since t=0 is not a root of h(t)=0
introducing a new variable y:=t+t^-1 the equation h(t)=0
can be reduced to a quadratic equation
p(y):=c_0 y^2+c_1 y+(c_2-2c_0)=0.
Two different real roots
t_1=(y_0-√(y_0^2-4))/2∈ (0,m_i) and
t_2=t_1^-1=(y_0+√(y_0^2-4))/2∈ (M_i, +∞)
of the equation h(t)=0 correspond to a given root y_0∈(a_i^-1, +∞)
of the equation p(y)=0.
The proof follows from the properties of the functions
x ↦(1-√(1-4x^2))/(2x) (increasing) and
x ↦(1+√(1-4x^2))/(2x) (decreasing) for 0<x<1/2.
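The reduction to p(y) makes the roots of h(t) easy to compute in concrete cases. The short sketch below is our own illustration (not part of the paper); for the Wallach space W_12, i.e. a_1=a_2=a_3=1/8, it returns two roots, roughly t_1 ≈ 0.114 ∈ (0,m_i) and t_2 ≈ 8.76 ∈ (M_i,+∞), in agreement with Theorem <ref> since a_1+a_2+a_3 = 3/8 < 1/2.

```python
from math import sqrt

def h_roots(ai, aj, ak):
    """Positive roots of h(t), obtained through p(y) and y = t + 1/t."""
    mu = aj + ak
    c0 = ai ** 2 * (1 - 2 * ai + 2 * mu) * (2 * ai + 2 * mu - 1)
    c1 = ai * (1 - 2 * ai) ** 3 - 4 * ai * (4 * ai ** 2 + 1) * mu ** 2
    c2 = (16 * ai ** 4 + 16 * ai ** 2 + 1) * mu ** 2 \
         + 2 * ai * (1 - ai) * (1 - 2 * ai) ** 2
    # p(y) = c0 y^2 + c1 y + (c2 - 2 c0); the case c0 = 0 is linear
    if c0 == 0:
        ys = [-(c2 - 2 * c0) / c1]
    else:
        D = c1 ** 2 - 4 * c0 * (c2 - 2 * c0)
        if D < 0:
            return []
        ys = [(-c1 - sqrt(D)) / (2 * c0), (-c1 + sqrt(D)) / (2 * c0)]
    ts = []
    for y in ys:
        if y > 1 / ai:                    # only such y yield roots t of h
            r = sqrt(y * y - 4)
            ts += [(y - r) / 2, (y + r) / 2]
    return sorted(ts)

print(h_roots(1 / 8, 1 / 8, 1 / 8))       # roughly [0.114, 8.760]
```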
In an arbitrary generalized Wallach space with a_1,a_2,a_3∈ (0,1/2)
the following assertions hold for the coefficients c_n=c_n(a_1,a_2,a_3), (n=0,1,2), of the polynomial p(y) in (<ref>):
c_2>0 and c_2-2c_0>0 for all a_1,a_2,a_3∈ (0,1/2);
c_0=0 and c_1<0 if a_1+a_2+a_3=1/2;
c_0<0 if a_1+a_2+a_3<1/2;
c_0>0 and c_1<0 if a_1+a_2+a_3>1/2.
The inequality c_2>0 follows from 16a_i^4+16a_i^2+1>0 (see (<ref>)).
Introducing a new variable μ:= a_j+a_k we obtain the inequality
c_2-2c_0=(1+4a_i^2)^2μ^2+2a_i(1-2a_i)^2>0
true for all a_i,a_j,a_k∈ (0,1/2).
The case a_1+a_2+a_3=1/2. Replacing a_j+a_k by 1/2-a_i in (<ref>),
we obtain
c_0=0 and c_1=-2(1+2a_i)(1-2a_i)^2a_i^2<0.
The case a_1+a_2+a_3<1/2. Clearly c_0<0.
The case a_1+a_2+a_3>1/2. Obviously c_0>0.
The coefficient c_1 can be considered as a function
c_1=W(μ):=a_i(1-2a_i)^3-4a_i(1+4a_i^2)μ^2
of the independent variable μ= a_j+a_k.
Clearly W(μ) admits two zeros μ=-μ_0 and μ=μ_0, where
μ_0:=((1-2a_i)/2)√((1-2a_i)/(1+4a_i^2)).
Since 0<μ_0<1/2-a_i,
we have W(μ)|_{μ=1/2-a_i}<0.
Then W(μ)<0 for all μ>1/2-a_i meaning that
c_1<0 at a_i+a_j+a_k>1/2. Lemma <ref> is proved.
§ PROOFS OF THEOREMS <REF> – <REF> AND COROLLARY <REF>
It suffices to consider the system (<ref>) on its invariant surface Σ
defined by the equation Vol:=x_1^1/a_1x_2^1/a_2x_3^1/a_3=1.
Hence we are interested in parameters of metrics which belong to the domain Σ∩ R bounded by
the curves r_i=Σ∩Λ_i for i=1,2,3.
In order to establish behavior of trajectories of the system (<ref>)
with respect to that domain we will study their interrelations (namely, the number of intersection points) with curves forming the boundary of that domain.
§.§ The case a_1+a_2+a_3≤ 1/2
Examples <ref> and <ref>
illustrate Theorem <ref>.
It suffices to consider the border curve r_i only.
Similar reasoning can be repeated for the curves r_j and r_k
by simple permutations of indices {i,j,k}={1,2,3} in (<ref>).
Let us consider the cases a_1+a_2+a_3<1/2 and a_1+a_2+a_3=1/2 separately.
The case a_1+a_2+a_3<1/2.
For a_1+a_2+a_3<1/2 the discriminant
c_1^2+8c_0^2-4c_0c_2
of the quadratic equation p(y)=c_0 y^2+c_1 y+(c_2-2c_0)=0 is positive,
because c_0<0 and c_2>0 by Lemma <ref>.
Hence p(y)=0 admits two different real roots.
We claim that actually a single root of p(y)=0 can belong to (a_i^-1, +∞).
Indeed
p(a_i^-1)=(a_k+a_j)^2(1-2a_i)^2(1+2a_i)^2>0
independently on a_i,a_j,a_k.
It is clear also
p(+∞):=lim_y → +∞ p(y)=-∞
due to c_0<0.
Therefore p(y_0)=0 for some unique y_0∈ (a_i^-1, +∞).
No root of the quadratic polynomial p(y)
can be contained in the interval [0,a_i^-1]
since p(0)=c_2-2c_0>0 by the same Lemma <ref>.
Then another root of p(y)=0 distinct from y_0 must be negative
and is not of interest providing negative roots to h(t)=0.
Therefore the polynomial h(t)=c_0t^4+c_1t^3+c_2t^2+c_1t+c_0 of degree 4 admits two
positive roots t_1∈ (0,m_i) and t_2∈ (M_i, +∞)
which correspond to y_0 according to Lemma <ref>.
Note also that h(t) has a ∩“-shaped graph for t>0 and
h(t)<0 for t∈ (0,t_1)∪ (t_2,+∞)
and h(t)>0 for t∈ (t_1, m_i)∪ (M_i,t_2).
Then according to Lemma <ref> we have
sign( V, ∇λ_i)=
-1, t∈ (0,t_1),
0, t=t_1,
+1, t∈ (t_1, m_i)
sign( V, ∇λ_i)=
+1, t∈ (M_i, t_2),
0, t=t_2,
-1, t∈ (t_2, +∞)
respectively on r_ij and r_ik.
To reach our aims it is quite enough to show that
trajectories of the system (<ref>)
will leave the domain R
across some part of some component of the curve r_i.
Take, for example, the second component r_ik of r_i
(a similar analysis can be given for the component
r_ij of r_i).
As it follows from the formulas above
( V, ∇λ_i)>0 for t∈ (M_i,t_2).
This means that on every point of a part of the curve r_ik
which corresponds to (M_i,t_2) under the parametrization (<ref>)
the associated vector field V of the system (<ref>)
forms an acute angle with the normal ∇λ_i
of the surface Λ_i⊂∂(R)
(in fact its component Λ_ik).
Since the normal ∇λ_i is directed inside R at every point of r_i
due to Lemma <ref>, we conclude that
trajectories of (<ref>) enter R at t∈ (M_i,t_2).
For t∈ (t_2, +∞) clearly
( V, ∇λ_i)<0 meaning
that V forms an obtuse angle with the normal ∇λ_i.
Thus we established that trajectories of the system (<ref>)
originated in R will leave R at least through a part of the curve r_ik and never cross or touch r_ik again for all t>t_2.
This means that there exist metrics that lose the positivity of the Ricci curvature in finite time.
Theorem <ref> is proved for the case a_1+a_2+a_3<1/2.
The case a_1+a_2+a_3=1/2.
Obviously we deal with the linear function p(y)=c_1 y+c_2 since
c_0=0, c_1<0 and c_2>0 by Lemma <ref>.
Moreover, the only root y_0 of p(y)=0 is positive.
Since p(a_i^-1)>0 due to (<ref>)
and
p(+∞):=lim_y → +∞ p(y)=-∞
due to c_1<0 we have y_0∈ (a_i^-1, +∞).
Lemma <ref> implies that the polynomial h(t)=(c_1t^2+c_2t+c_1) t of degree 3 has two roots t_1∈ (0,m_i) and t_2∈ (M_i, +∞) which correspond to y_0.
We are in the same situation as in the considered case a_1+a_2+a_3<1/2.
Further reasoning is similar.
Theorem <ref> is proved.
According to Remark <ref> we had to consider
intervals (0, t_ij]⊂ (0,m_i)
and [t_ik, +∞)⊂(M_i,+∞) in Theorem <ref>
instead of (0, m_i) and (M_i,+∞)
cutting of the extra pieces of the curves r_i, r_j and r_k
that remain after their pairwise intersections.
Although such a replacing does not cancel the existence
of trajectories leaving R in any way, and therefore does not affect
the proof of Theorem <ref>,
we get a little clarification to the qualitative picture of
incoming trajectories.
Such a picture depends on the relative position of t_1 and t_ij within the interval (0,m_i), for instance, if the component r_ij is considered.
If t_ij< t_1 then h(t)<0 on all useful“
part of r_ij corresponding to t∈ (0,t_ij], hence h(t) changes its sign only at the
tail“ of the curve r_ij behind the point P_ij.
Therefore ( V, ∇λ_i)<0 for all t∈ (0,t_ij]
and hence every point on r_ij
emits trajectories towards the exterior of R. The interval (0,t_ij] above
should be replaced by (0,t_ij) if t_ij=t_1.
In the case t_ij>t_1 trajectories
come into R through the part of r_ij
corresponding to t∈ (t_1,t_ij]⊂ (t_1, m_i)
and leave R through its part corresponding to
t∈ (0, t_1).
Analogously, if t_ik<t_2 then trajectories enter R for
t∈ [t_ik, t_2)⊂ (M_i,t_2) and leave R for
t>t_2.
If t_ik>t_2 (respectively t_ik=t_2) then outgoing of trajectories from R
happens through every point of r_ik for t≥ t_ik
(respectively t> t_ik).
Thus analyzing all possible outcomes we conclude
that trajectories of (<ref>)
will leave R in any case whenever a_1+a_2+a_3≤ 1/2:
either through entire curve r_i either through some its part.
In the case a_1+a_2+a_3≤ 1/2
no metric (<ref>) can evolve again into a metric with positive Ricci curvature,
being transformed once into a metric with mixed Ricci curvature,
but some metrics with mixed Ricci curvature can be transformed into metrics with positive Ricci curvature, and then again into metrics with mixed Ricci curvature.
This follows from the proof of Theorem <ref> according to which
no trajectory of (<ref>)
returns back to R leaving R once,
but some trajectories starting in the exterior of R enter R and then leave it again.
§.§ The case a_1+a_2+a_3> 1/2
Recall the specific parameters θ_i:=a_i-1/2+(1/2)√((1-2a_i)/(1+2a_i))
exposed in the text of Theorem <ref>.
Clearly θ_i∈ (0,1) for all i=1,2,3 and all a_i∈ (0,1/2).
It suffices to consider the curve r_i only.
In formulas (<ref>) for c_0, c_1 and c_2
replace the sum a_j+a_k by 1/2-a_i+θ:
[ c_0=4a_i^2 θ(1-2a_i+θ),; c_1:=a_i(1-2a_i)^3-a_i(4a_i^2+1)(1-2a_i+2θ)^2,; c_2:=(1/4)(16a_i^4+16a_i^2+1)(1-2a_i+2θ)^2+2a_i(1-a_i)(1-2a_i)^2. ]
Since each coefficient c_0,c_1 and c_2 depends on the parameters a_i and θ only
so does the discriminant of the quadratic equation
p(y)=0 in (<ref>).
By this reason denote them by D_i:
D_i:=c_1^2-4c_0(c_2-2c_0)=c_1^2+8c_0^2-4c_0c_2
=-4a_i^2(1+2a_i)^2(1-2a_i)^3
{(1+2a_i)θ^2+(1-4a_i^2)θ-(1-2a_i)a_i^2}.
Since c_0>0 for a_i+a_j+a_k>1/2 and c_2>0 by Lemma <ref>,
the polynomial D_i can accept values of any sign.
Firstly, we find roots of D_i=0.
Observe that the value θ=θ_i
is exactly the unique root of the quadratic equation
(1+2a_i)θ^2+(1-4a_i^2)θ-(1-2a_i)a_i^2= 0.
Its other root is negative.
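(A tiny numerical check, with our own ad hoc code, confirms this observation: substituting θ_i into the quadratic above returns a value of the order of rounding errors.)

```python
from math import sqrt
for ai in (0.1, 0.25, 0.4):
    th = ai - 0.5 + 0.5 * sqrt((1 - 2 * ai) / (1 + 2 * ai))       # theta_i
    print((1 + 2 * ai) * th ** 2 + (1 - 4 * ai ** 2) * th - (1 - 2 * ai) * ai ** 2)
```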
(1)
The case D_i<0. The inequality D_i<0 is equivalent to
(1+2a_i)θ^2+(1-4a_i^2)θ-(1-2a_i)a_i^2>0
which has solutions θ>θ_i.
Therefore the equation p(y)=0, and hence the equation h(t)=0
has no real roots if θ∈(θ_i,1).
Then h(t)>0 for all t∈ (0,m_i)∪ (M_i,+∞) due to c_0>0,
in other words, by Lemma <ref> the sign of
( V, ∇λ_i) is positive along the curve r_i,
that means there is no arc or even a point on the curve r_i,
across which trajectories of the system (<ref>) could leave the domain R,
because of the normal ∇λ_i of the
cone Λ_i is directed inside R at every point of r_i
independently on values of a_1, a_2 and a_3 due to Lemma <ref>.
The case D_i= 0. Since D_i=0 happens at θ=θ_i only,
for the sum a_j+a_k we have
a_j+a_k=1/2-a_i+θ_i=(1/2)√((1-2a_i)/(1+2a_i)).
The coefficients (<ref>) take the following forms at θ=θ_i:
[ c_0:=-a_i^2((1-2a_i)^2-4(a_j+a_k)^2)=4a_i^4 ·(1-2a_i)/(1+2a_i),; c_1:=a_i(1-2a_i)^3-a_i(4a_i^2+1)·(1-2a_i)/(1+2a_i),; c_2:=(1/4)(16a_i^4+16a_i^2+1)·(1-2a_i)/(1+2a_i)+2a_i(1-a_i)(1-2a_i)^2. ]
In what follows we use the estimate
y_0-a_i^-1=-c_1/(2c_0)-a_i^-1=(1-2a_i)^2(1+2a_i)^2/(4a_i^2(1-4a_i^2))=(1-4a_i^2)/(4a_i^2)>0
for the unique root y_0=-c_1/(2c_0) of the equation
p(y)=c_0y^2+c_1y+c_2-2c_0=0.
Since y_0∈ (a_i^-1, +∞) is the single root of p(y) of multiplicity 2
the polynomial h(t) of degree 4 has two zeros t_1 and t_2 each of multiplicity 2
such that t_1∈ (0,m_i) and t_2∈ (M_i,+∞).
Since c_0>0 then h(t)>0 anywhere on (0,m_i)∪ (M_i,+∞) excepting
the points t_1 and t_2 (the shape of the graph of h(t) is similar to the letter W“ touching the t-axis at the points t_1 and t_2).
Then by Lemma <ref> no trajectory of (<ref>)
can leave R through its boundary curve r_i.
Note that in the case D_i=0 we are able to find the mentioned multiple roots of h(t) explicitly:
t_{1,2}=(1/(8a_i^2))[-4a_i^2+4a_i+1∓(1+2a_i)√((1+6a_i)(1-2a_i))],
since h(t)=(1-2a_i)(1+2a_i)^-1(a_i^2t^2+(a_i^2-a_i-0.25)t+a_i^2)^2.
Thus uniting the cases θ>θ_i (D_i<0) and θ=θ_i (D_i=0)
we conclude that no trajectory of (<ref>) can leave R through r_i.
The same will hold on the curves r_j and r_k, if
θ≥max{θ_i, θ_j, θ_k}
(all metrics maintain the positivity of the Ricci curvature).
This finishes the proof of the first assertion in Theorem <ref>.
See also Example <ref>.
(2)
To prove the second assertion in Theorem <ref>
assume that θ< θ_i for some i.
Such an assumption equivalent to D_i> 0.
We claim that both roots of p(y)=0
belong to (a_i^-1, +∞) in this case.
We are in a situation where a quadratic function with
a positive discriminant takes positive values at the ends of the interval:
p(a_i^-1)>0, p(+∞):=lim_y → +∞ p(y)=+∞
due to c_0>0.
Therefore, the analysis used in the proof of Theorem <ref>
is not applicable here: such an interval can contain either both roots of the equation
or none.
No root can belong to (-∞, 0).
Suppose by contrary p(y)=0 has roots y_1,y_2∈ (-∞, 0).
Then -c_1/(2c_0)=(y_1+y_2)/2<0, which contradicts the facts
c_0>0 and c_1<0 established
in Lemma <ref> in the case a_1+a_2+a_3>1/2.
No root can belong to (0, a_i^-1).
There is a similar situation at the ends of the interval:
p(0)=c_2-2c_0>0 by Lemma <ref>
and p(a_i^-1)>0 by (<ref>).
Introduce a function
U(μ):=-c_1/(2c_0)-a_i^-1,
where the sum a_j+a_k is replaced by μ=a_j+a_k in c_0 and c_1.
Then U(μ)>0 is equivalent to the inequality
4μ^2-(1-2a_i)<0
admitting solutions μ∈(0,μ̅),
where μ̅:=√(1-2a_i)/2 is the positive root of U(μ)=0.
Since μ=1/2-a_i+θ, the following value of θ corresponds to μ̅:
θ̅_i=(2a_i-1+√(1-2a_i))/2.
It follows then that U>0 for θ∈(0,θ̅_i)
and U<0 for θ∈(θ̅_i, 1).
It is not difficult to see that
θ̅_i=(4a_i^2-1+√((1-4a_i^2)(1+2a_i)))/(2(1+2a_i))>(4a_i^2-1+√(1-4a_i^2))/(2(1+2a_i))=θ_i.
Therefore, U=-c_1/(2c_0)-a_i^-1>0 for the values
θ∈(0, θ_i)⊂(0,θ̅_i),
which correspond to the case D_i>0.
It follows then p(y)=0 has no roots in (0, a_i^-1).
Supposing the contrary we would conclude -c_1/(2c_0)=(y_1+y_2)/2<a_i^-1 from
0<y_1<a_i^-1 and 0<y_2<a_i^-1 despite U>0.
Thus we showed that both roots of p(y)=0
belong to (a_i^-1, +∞) in the case θ<θ_i.
Denote them by y_1 and y_2. Without loss of generality assume that y_1<y_2.
By Lemma <ref>
two different roots t_1∈ (0, m_i) and t_2=t_1^-1∈ (M_i, +∞) of h(t)=0
correspond to y_1.
Analogously there are roots t_3∈ (0, m_i) and t_4=t_3^-1∈ (M_i, +∞) of
h(t)=0 corresponding to y_2.
It is easy to see that
t_3<t_1<m_i<M_i<t_2<t_4.
Thus each of the intervals (0,m_i) and (M_i,+∞) contains
exactly two roots of the polynomial h(t) which has a W“-shaped graph.
It follows then exactly
two points are expected on each component of the curve r_i at which
the vector field V=(f_1,f_2,f_3) changes its direction
in relation to the domain R (into R“ or from R“).
Consider the component r_ik only with t∈ (M_i, +∞).
Then M_i<t_2<t_4 implies that
trajectories of (<ref>) come into R through r_ik
and never cross or touch r_ik again for all t>t_4.
For the component r_ij with t∈ (0,m_i)
a similar analysis can be carried out
by mirroring of M_i<t_2<t_4.
Thus in the case θ<θ_i
trajectories of (<ref>) can intersect the border curve r_i twice according to the scenario
"towards R – from R – towards R again" (see also Remark <ref>).
A similar scenario will take place on the border curves r_j or (and) r_k,
if θ<θ_j or (and) θ< θ_k
returning trajectories to R even they ever left R once.
Clearly if θ≥θ_j or θ≥θ_k at θ<θ_i
then the curve r_j or r_k is locked for trajectories to leave R as proved before.
After such an analysis we conclude that at least some metrics with positive Ricci curvature can preserve the positivity of the Ricci curvature. The second assertion in Theorem <ref> is also proved.
Theorem <ref> is proved.
Let us comment the behavior of trajectories in more detail
in the case θ<θ_i in Theorem <ref>. Consider the example of r_ik.
As it was noted in Remark <ref>
we are actually interested in the useful“ part of the component r_ik
parameterized by t∈ [t_ik, +∞)⊂ (M_i,+∞),
where t_ik>M_i by Lemma <ref>.
The dynamics of trajectories of (<ref>) depends on which link of the chain
M_i<t_2< t_4<+∞ the value t_ik will be situated.
In the case θ<θ_i the following
situations may happen on the component r_ik of r_i:
Case 1. t_ik∈ [t_4, +∞). Trajectories move towards R through
a part of r_ik which corresponds to t≥ t_ik
(respectively t> t_ik if t_ik=t_4).
No trajectory can leave R.
Case 2. t_ik∈ (t_2, t_4).
Trajectories leave R for t∈[t_ik, t_4) and
return back to R for t>t_4.
Case 3. t_ik∈ (M_i, t_2].
Trajectories come into R
for t∈[t_ik,t_2),
leave R for t∈(t_2, t_4) and return back to R
for t>t_4.
The analysis of all possible outcomes shows that
even if the strong Case 1 never can occur at θ<θ_i,
some trajectories of (<ref>) must return back to R and remain there
for all t>t_4 in any case.
In the case a_1+a_2+a_3> 1/2
some metrics with mixed Ricci curvature can be transformed into metrics with Ric>0, after then lose Ric>0 and restore
Ric>0 again.
This follows from the proof of Theorem <ref> according to which
trajectories of (<ref>) can enter R twice and then remain in R forever.
Clearly θ=a_1+a_2+a_3-1/2=1/(k+l+m-2)>0 for GWS 1.
(1)
Case A. The condition max{k,l,m}≤ 11 holds.
Without loss of generality consider the difference θ-θ_1 only.
Observe that a_1=kθ/2 and hence
θ-θ_1=θ-a_1+1/2-(1/2)√((1-2a_1)/(1+2a_1))=θ-kθ/2+1/2-(1/2)√((1-kθ)/(1+kθ)).
It follows then 1-kθ≥ 0 necessarily to be well defined.
Therefore permissible values of θ∈ (0,1)
must belong to a narrower interval (0, 1/k]⊆ (0,1) for each natural number k.
Under such restrictions the inequality θ-θ_1≥ 0 is equivalent to
the following system of inequalities
T(θ):=k(k-2)^2θ^2-(k^2-4)θ+4≥ 0
0<θ≤ 1/k.
Obviously T(θ)=4>0 is trivial at k=2.
Assume that k 2.
The discriminant
D(k):=(k^2-12k+4)(k-2)^2=(k-(6-4√(2)))
(k-2)^2(k-(6+4√(2)))
of the polynomial T(θ), where 6-4√(2)≈ 0.3431 and 6+4√(2)≈ 11.6569,
is negative for all k∈{1, 3, 4, …, 11},
and hence for k∈{1,2, …, 11}
we have T(θ)>0 that means θ>θ_1.
Then by Theorem <ref>
no point on the curve r_1 can emit a trajectory towards the exterior of R.
The same will hold for the curves r_2 and r_3 if max{k,l,m}≤ 11
that imply θ≥max{θ_1, θ_2, θ_3}.
Example <ref> illustrates the case max{k,l,m}≤ 11.
Case B. The condition max{k,l,m}≤ 11 fails.
Recall our assumption k≥max{l,m} due to symmetry.
Suppose that k≥ 12. For such values of k the discriminant D(k) is positive.
However it does not mean that T(θ)≥ 0.
It may be T(θ)< 0.
Firstly we show that all roots of T belong to (0, 1/k]
for such values of k.
Since there is no double roots, we can apply Sturm's method.
For this aim we construct on [0,1/k] the Sturm's polynomial system of T(θ):
f_0(θ) := T(θ),
f_1(θ) := dT/dθ=2k(k-2)^2θ-(k^2-4),
f_2(θ) := -rem(f_0,f_1)=D(k)/4lc(T)=k^2-12k+4/4k(k-2),
where the symbol rem(f_0,f_1) means the remainder of dividing f_0 by f_1 and lc means the leading coefficient of a polynomial.
At the point θ=0
the number of sign changes in the Sturm system equals 2 since k≥ 12:
f_0(0)=4>0, f_1(0)=4-k^2<0, f_2(0)>0.
At θ=1/k the corresponding number is 0:
f_0(1/k)=8/k>0, f_1(1/k)=(k-2)(k-6)>0, f_2(1/k)>0.
Therefore both roots of the polynomial
T(θ) belong to the interval (0,1/k) in cases k≥ 12.
Denote them Θ^(1)=Θ^(1)(k) and
Θ^(2)=Θ^(2)(k).
Actually we are able to find them explicitly:
Θ^(1)(k)=(k+2-√(k^2-12k+4))/(2k(k-2))
and
Θ^(2)(k)=(k+2+√(k^2-12k+4))/(2k(k-2)).
Note that
0<Θ^(1)(k)<Θ^(2)(k)<1/k.
Now the sign of θ-θ_1 will depend on the fact
what link in the chain of these inequalities
will contain θ.
Subcase B1. θ≤Θ^(1) or Θ^(2)≤θ<1/k.
Then θ≥θ_1.
According to Theorem <ref>
no trajectory of (<ref>) can leave R via the curve r_1.
Examples <ref> and <ref> are relevant to Subcase B1.
Subcase B2. Θ^(1)<θ<Θ^(2) is equivalent to θ<θ_1.
By Theorem <ref> there exists a trajectory
of (<ref>) that can leave R but return back to R.
Example <ref> is relevant to Subcase B2.
For further analysis the inequalities should be rewritten in more convenient form in
Subcases B1 and B2
in terms of the parameters k,l and m.
For instance, Subcase B1 can be described by the union of solutions
of the inequalities 2<l+m≤ X(k) and l+m≥ Y(k),
where
X(k):=2k(k-2)/(k+2+√(k^2-12k+4))-k+2, Y(k):=2k(k-2)/(k+2-√(k^2-12k+4))-k+2
are functions appeared in the text of Theorem <ref> and Table 2.
Imagine X(k) and Y(k) as functions of real independent variable k to apply methods of calculus (see Fig. <ref> for their graphs).
It is easy to establish X(k)>2 for all k≥ 12
(observe that 2 is a horizontal asymptote since lim_k →∞ X(k)/k=0).
Moreover, X(k) decreases and satisfies 2<X(k)≤ 5 for k≥ 12
since X(12)=5 and
X'(k)=-16(k^2-5k+2-(k-1)√(k^2-12k+4))/((k+2+√(k^2-12k+4))^2√(k^2-12k+4))<0
due to (k^2-5k+2)^2-(k-1)^2(k^2-12k+4)=4k^3>0.
As for Y(k), it increases since
Y'(k)=16(k^2-5k+2+(k-1)√(k^2-12k+4))/((k+2-√(k^2-12k+4))^2√(k^2-12k+4))>0
for all k≥ 12.
In addition, Y(k)≥ 10 for all k≥ 12 with Y(12)=10.
We claim that only the values k∈{12, …, 16} can
satisfy θ≥max{θ_1, θ_2, θ_3} in Subcase B1.
Indeed since X(k) decreases and X(17)≈ 2.93 with the critical case l+m≤ 2
it looses any sense to consider 2<l+m≤ X(k) for k≥ 17: we would obtain l=m=1 for all k≥ 17 contradicting l+m>2.
Analogously due to increasing of Y(k) the numbers l and m in the sum l+m≥ Y(k) will be out
of the range max{l,m}≤ 11 in a short time.
Then the inequalities
Y̅(k)≤ l+m+k-2
are required to be true for every permutation of the triple (k,l,m)
in order
to keep the condition θ≥max{θ_1, θ_2, θ_3}
for k,l,m≥ 12,
where Y̅(k):=2k(k-2)/(k+2-√(k^2-12k+4)).
It follows than permissible values of l,m,k can not be large than 16.
Indeed k≥max{l,m} by assumption.
Since the function Y̅(k)=Y(k)+(k-2) increases for all k≥ 12,
we obtain the following inequality:
max{Y̅(k),Y̅(l),Y̅(m)}= Y̅(k)≤ 3k-2.
Introduce the function
Z(k):=Y̅(k)- (3k-2)=((3k-2)√(k^2-12k+4)-(k^2+8k-4))/(k+2-√(k^2-12k+4)).
Its derivative Z'(k) has the numerator
k^3-6k^2-40k+16-(k^2-8k+8)√(k^2-12k+4)>0
due to
(k^3-6k^2-40k+16)^2-(k^2-8k+8)^2(k^2-12k+4)=16k^2(k^3-20k^2+104k-32)>0
for all k≥ 12 (the unique real root of k^3-20k^2+104k-32 is less than 1).
Therefore Z(k) increases for k≥ 12.
Note also that Z(16)≈ -0.0691<0 and Z(17)≈ 4.3137>0
implying that the inequality Y(k)≤ 3k-2 can not be satisfied if k≥ 17.
Collecting all values of (k,l,m) which provide the condition θ≥max{θ_1, θ_2, θ_3} of Theorem <ref> in both Cases A and B
we conclude that on GWS 1 all metrics with Ric>0 maintain Ric>0
under NRF (<ref>)
if (k,l,m) satisfies either max{k,l,m}≤ 11 or one of the inequalities
2<l+m≤ X(k) or l+m≥ Y(k) at 12≤ k≤ 16.
(2) The system of inequalities
X(k)<l+m<Y(k) and k≥ 12 corresponding to Subcase B2
is equivalent to the negation of θ≥max{θ_1, θ_2, θ_3}
and guaranties the existence of some metrics preserving Ric>0
by the same Theorem <ref>.
(3)
Let us estimate the number of GWS 1 on which
all metrics with Ric>0 maintain Ric>0.
It is obvious that there exists a finite number of GWS 1 which satisfy
max{k,l,m}≤ 11.
As we saw above the inequality 2<l+m≤ X(k) corresponding to Subcase B1 can provide only a finite set of solutions in (k,l,m) due to decreasing of X(k).
For instance, X(12)=5 implies l+m≤ 5 etc.
Analogously, another inequality Y(k)≤ l+m corresponding to Subcase B1 admits a finite number of solutions (k,l,m) as well
according to assumed max{l,m}≤ k and established 12≤ k≤ 16.
Therefore the number of GWS 1
on which any metric with Ric>0 remains Ric>0
must be finite.
Obviously, there is no restriction to k
in the inequalities X(k)<l+m<Y(k) besides supposed k≥ 12 in Case B.
Indeed they admit infinitely many solutions in (k,l,m)
since the range (X(k),Y(k)) expands as k grows
providing an increasing number of pairs (l,m) which can satisfy X(k)<l+m<Y(k)
for every fixed k≥ 12.
Therefore there are infinitely (countably) many GWS 1 on which at least some metrics can keep Ric>0.
Theorem <ref> is proved.
All GWS 1
on which any metric with Ric>0 remains Ric>0
can easily be found.
Using values of X(k) and Y(k) given in Table 3
all triples (k, l, m) are found in Tables 4 and 5 which
satisfy 2<l+m≤ X(k) or l+m≥ Y(k)
at fixed k∈{12,… 16} and k≥max{l,m},
where N=[Y(k)] if Y(k) is integer, else N=[Y(k)]+1
(symbol [x] means the integral part of x),
analogously ν(l):=10-l if 10-l>0 else ν(l):=1.
See also Fig. <ref> and <ref>.
The single pair (l,m)=(16,16) corresponds to k=16.
No permissible pair (l,m) at k≥ 17.
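The content of Tables 4 and 5 can be regenerated by a straightforward enumeration; a possible sketch (our own code, not part of the paper) is the following.

```python
from math import sqrt

def X(k):
    d = sqrt(k * k - 12 * k + 4)
    return 2 * k * (k - 2) / (k + 2 + d) - k + 2

def Y(k):
    d = sqrt(k * k - 12 * k + 4)
    return 2 * k * (k - 2) / (k + 2 - d) - k + 2

for k in range(12, 17):
    # triples (k, l, m) with k >= l >= m > 1 and 2 < l+m <= X(k) or l+m >= Y(k)
    good = [(l, m) for l in range(2, k + 1) for m in range(2, l + 1)
            if 2 < l + m <= X(k) or l + m >= Y(k)]
    print(k, len(good), good[:4], "...")
```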
Recall that we are using the list of generalized Wallach spaces
given in Table 1 in accordance with <cit.>.
(1) GWS 2, 5, 7.
The proof follows from Theorem <ref>
since a_1+a_2+a_3=1/2 for spaces 2, 5, 7.
An illustration is given in Example <ref>.
(2) GWS 3, 15.
Clearly θ=a_1+a_2+a_3-12=-12(k+l+m+1)<0 for
GWS 3.
Analogously θ=-1/6<0 for GWS 15.
The proof follows from Theorem <ref>.
(3) GWS 4, 6, 8, 9, 10, 11, 12, 13, 14.
For all them θ>0. Desired conclusions follow by checking the conditions
max{θ_1, θ_2, θ_3}<θ<1 directly.
We demonstrate the proof only for GWS 4 which depend
on parameters.
It is easy to show that θ=a_1+a_2+a_3-12=14 is greater than
each of
θ_1=(2l√((3l+1)(l-1))-3l^2+2l+1)/(4l(3l+1)),
θ_2=(2l√((l+1)(3l-1))-3l^2-2l+1)/(4l(3l-1)) and
θ_3=√(3)/6-1/4
for all l=2,3,….
Corollary <ref> is proved.
§ PLANAR ILLUSTRATIONS OF RESULTS
§.§ The case a_1+a_2+a_3≤ 1/2
Ric>0 is not preserved on every GWS 3.
Take for instance (k,l,m)=(5, 4, 3).
Then a_1=5/26≈ 0.1923, a_2=2/13≈ 0.1538 and a_3=3/26≈ 0.1154 with a_1+a_2+a_3<1/2.
Results are depicted in Fig. <ref> and Fig. <ref>.
In Example <ref> we demonstrate all evaluations in detail.
In sequel we will restrict ourselves to concise comments.
The curve r_1=Σ∩Λ_1 defined by the system
of equations 5(x_1^2-x_2^2-x_3^2)+26x_2x_3=0 and
x_1^1/5 x_2^1/4 x_3^1/3=1
has the following parametric representation according to Lemma <ref>:
x_1=ϕ_1(t)=ψ^(35/94)/t^(15/47),
x_2=ϕ_2(t)=t^(32/47)/ψ^(6/47) and
x_3=ϕ_3(t)=t^(-15/47)/ψ^(6/47),
where ψ:=t^2-26t/5+1, t∈ (0,m_1)∪ (M_1,+∞), m_1=1/5, M_1=5.
Choose the component r_13 corresponding to t∈ (5,+∞).
Coordinates of the point P_13=r_13∩ r_31 can be found using formulas (<ref>) and (<ref>):
x_1=γ q^0≈ 0.3916, x_2=q^0≈ 3.4857, x_3=δ q^0≈ 0.6880,
where
γ=Φ(a_1,a_3)=(-27+9√(10))/13≈ 0.1123,
δ=Ψ(a_1,a_3)=(50-15√(10))/13≈ 0.1974,
q^0=γ^(-ω a_1^-1)δ^(-ω a_3^-1)=γ^(-12/47)δ^(-20/47)≈ 3.4857.
Due to formula (<ref>) the following value
of the parameter t corresponds to the point P_13:
t=t_13=δ^-1=13/(50-15√(10))≈ 5.0666.
The polynomial h(t) in (<ref>) takes the form
h(t)= -63375t^4-370500t^3+4529574t^2-370500t-63375.
Respectively
p(y) = -63375y^2-370500y+4656324
has a unique positive root
y_0=-38/13+(192/325)√(235)≈ 6.1332>a_1^-1=5.2.
The corresponding roots of h(t) are the following:
t_1≈ 0.1676<m_1=0.2 and t_2≈ 5.9656>M_1=5.
The vector field V changes its direction at the unique
point K_2 with coordinates
x_1=ϕ_1(t_2)≈ 1.0717,
x_2=ϕ_2(t_2)≈ 2.7097,
x_3=ϕ_3(t_2)≈ 0.4542.
For P_13 we have M_1 <t_13<t_2.
Therefore h(t)>0 on r_13 if t∈ [t_13,t_2).
This means that trajectories originated in the exterior of R can reach R
across the arc of the curve r_13 with endpoints at P_13 and K_2.
In sequel the mentioned incoming trajectories
and trajectories originated in R,
will leave R at once through an unbounded part of the curve r_13,
which extends from K_2 to infinity
as the image of the interval (t_2, +∞).
Such a conclusion is confirmed by the values of the limits in (<ref>) as well:
lim_t→ M_1+ x_1=0,
lim_t→ M_1+ x_2= lim_t→ M_1+ x_3 = +∞,
lim_t→ +∞ x_1=lim_t→ +∞ x_2 = +∞,
lim_t→ +∞ x_3 =0.
Using coordinates of the normal found in Lemma <ref>,
we can analyze values of the angle
α(t):=(180/π)· arccos(( V, ∇λ_1)/(‖ V‖ ‖∇λ_1‖))
between V=(f_1,f_2,f_3) and the normal ∇λ_1 at every point of the curve r_13, in degrees.
As calculations show 73.16^∘<α(t)< 90^∘ for the incoming flow at t∈ (t_13, t_2)
and 90^∘<α(t)<93.36^∘ for the outgoing flow at t>t_2.
Therefore, trajectories leave R under small angles not exceeding 4^∘.
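All numbers in this example can be reproduced with a few lines of code; the sketch below is our own (the coefficients c_0, c_1, c_2 are those of the polynomial h(t) for the curve r_1, and the parametrization of r_1 given above is used).

```python
from math import sqrt

a1, a2, a3 = 5 / 26, 2 / 13, 3 / 26

def psi(t):
    return t * t - 26 * t / 5 + 1

def x1(t): return psi(t) ** (35 / 94) / t ** (15 / 47)
def x3(t): return t ** (-15 / 47) * psi(t) ** (-6 / 47)
def x2(t): return t * x3(t)

mu = a2 + a3
c0 = a1 ** 2 * (1 - 2 * a1 + 2 * mu) * (2 * a1 + 2 * mu - 1)
c1 = a1 * (1 - 2 * a1) ** 3 - 4 * a1 * (4 * a1 ** 2 + 1) * mu ** 2
c2 = (16 * a1 ** 4 + 16 * a1 ** 2 + 1) * mu ** 2 \
     + 2 * a1 * (1 - a1) * (1 - 2 * a1) ** 2
D = c1 ** 2 - 4 * c0 * (c2 - 2 * c0)
roots_y = [(-c1 - sqrt(D)) / (2 * c0), (-c1 + sqrt(D)) / (2 * c0)]
y0 = max(y for y in roots_y if y > 1 / a1)      # ~6.1332
t2 = (y0 + sqrt(y0 * y0 - 4)) / 2               # ~5.9656
t13 = 13 / (50 - 15 * sqrt(10))                 # ~5.0666
print(y0, t2, t13)
print(x1(t2), x2(t2), x3(t2))                   # K_2 ~ (1.0717, 2.7097, 0.4542)
```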
Ric>0 is not preserved on every GWS 2.
Take for instance k=5, l=4 and m=3.
Then a_1=5/24≈ 0.2083, a_2=4/24≈ 0.1667 and a_3=3/24=0.125 with a_1+a_2+a_3=1/2.
All necessary evaluations can be repeated in the same order as in Example <ref>.
Observe that 71.62<α(t)≤ 90 for incoming trajectories
at
4.6533≈ 64/(5(45-√(595)√(3)))=t_13≤ t≤ t_2=(3713+17√(42721))/1200≈ 6.0223
and 90<α(t)<91.43 for outgoing trajectories at t>t_2.
Trajectories leave R under angles not greater than 1.5^∘,
that means they are almost parallel to tangent vectors to r_13.
§.§ The case a_1+a_2+a_3>1/2
The values a_1=5/14 ≈ 0.3571, a_2=3/7≈ 0.4286 and a_3=1/14+√(6)/12≈ 0.2756
satisfy a_1+a_2+a_3>1/2 and θ≥max{θ_1, θ_2, θ_3}.
Therefore trajectories of (<ref>) are expected only incoming into R by Theorem <ref> (see also Fig. <ref>).
Indeed θ=5/14+√(6)/12≈ 0.5613,
θ_1=-1/7+√(6)/12≈ 0.0613,
θ_2=-1/14+√(13)/26≈ 0.0672,
θ_3=1/14-239+14√(6)+7√(1434-84√(6))/48+7√(6)≈ 0.0445.
It should be noted that in general such artificial values of the parameters a_1,a_2,a_3
need not correspond to certain generalized Wallach space as noted in Introduction.
On GWS 1 with k=l=m=2 (a_1=a_2=a_3=a=1/4)
all metrics with Ric>0 preserve Ric>0.
Results are depicted in Fig. <ref> and Fig. <ref>.
Since max{k,l,m}≤ 11 the sufficient condition
θ≥max{θ_1, θ_2, θ_3}
is satisfied.
Indeed θ=0.25 and
θ_1=θ_2=θ_3≈ 0.0387.
Therefore we have only trajectories incoming into R.
Note that this example confirms also Theorem <ref> (Theorem 3 in <cit.>),
because a=1/4>1/6.
On GWS 1 with k=12, l=3, m=2 (a_1=2/5,
a_2=1/10, a_3=1/15≈ 0.0667)
all metrics with Ric>0 preserve Ric>0.
The point (3,2) lies on the straight line l+m=X(12)=5.
Therefore θ=θ_1 meaning that trajectories can only touch
each component of r_1 at some unique point of that component
remaining inside of R (see Fig. <ref>).
Also trajectories of (<ref>) can not leave R via r_2 or r_3 since
θ>max{θ_2,θ_3} due to max{l,m}<11.
Therefore they remain in R.
In detail, the polynomial p(y)=(16y-49)^2 corresponds to r_1
that admits the single root y_0=49/16=3.0625 of multiplicity 2.
Then the corresponding polynomial
h(t)=(16t^2-49t+16)^2
admits two roots t_1=(49-9√(17))/32≈ 0.3716 and
t_2=(49+9√(17))/32≈ 2.6909, each of multiplicity 2.
Actual values of the parameters are
θ=θ_1=1/15≈ 0.0667,
θ_2=-2/5+√(6)/6≈ 0.0082,
θ_3=-13/30+√(221)/34≈ 0.0039.
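As a small numerical cross-check (illustrative only; the use of Python/NumPy is an incidental choice, not part of the original computations), the double roots of h(t) quoted above can be recovered directly from its quadratic factor:

```python
import numpy as np

# h(t) = (16 t^2 - 49 t + 16)^2, so its roots are those of the quadratic factor,
# each with multiplicity 2 (cf. p(y) = (16 y - 49)^2 with y_0 = 49/16).
t1, t2 = sorted(np.roots([16.0, -49.0, 16.0]))
print(t1, t2)                                    # ~0.3716 and ~2.6909
print((49 - 9 * np.sqrt(17)) / 32,               # closed forms quoted in the text
      (49 + 9 * np.sqrt(17)) / 32)
```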
On GWS 1 with k=15, l=14, m=13
(a_1=3/16=0.1875, a_2=7/40=0.175, a_3=13/80=0.1625)
all metrics with Ric>0 preserve Ric>0.
The triple (15, 14,13) satisfies l+m>Y(15)=26.
Therefore θ>θ_1.
Analogously the inequalities k+m>Y(14) and l+k>Y(13) imply
θ>θ_2 and θ>θ_3,
where Y(14)≈ 20.49 and Y(13)≈ 15.29
are known from Table <ref>.
Therefore no trajectory of (<ref>) can leave R.
In addition, for
each of r_1, r_2 and r_3 the corresponding polynomial of the kind h(t) admits no real root
and hence preserves a positive sign.
Actual values of the parameters are
θ=1/40= 0.025, θ_1=-5/16+√(55)/22≈ 0.0246,
θ_2=-13/40+√(39)/18≈ 0.0219,
θ_3=-27/80+3√(159)/106≈ 0.0194.
On GWS 1 with k=14, l=7, m=4
(a_1=7/23≈ 0.3043, a_2=7/46≈ 0.1522, a_3=2/23≈ 0.0869)
some metrics with Ric>0 preserve Ric>0.
Since l, m<11 and (k,l,m)=(14,7,4) satisfies X(k)≈ 3.51 <l+m<Y(k)≈ 20.49,
we obtain θ>θ_2, θ>θ_3 and θ<θ_1
without direct computations
(actually
θ=1/23≈ 0.0435, θ_1=-9/46+3√(37)/74≈ 0.0509,
θ_2=-8/23+√(30)/15≈ 0.0173,
θ_3=-19/46+√(57)/18≈ 0.0064).
Therefore no trajectory can leave R via r_2 or r_3,
but exactly two points are expected on each component of r_1
at which trajectories of (<ref>) can change direction relative to R, in accordance
with the scenario "towards R - from R - towards R again"
(see Fig. <ref>).
In addition, the polynomial h(t) corresponding to r_1 admits four roots
t_1≈ 0.2532, t_2≈ 3.9488,
t_3≈ 0.15, t_4≈ 6.6663 such that
t_3<t_1<t_12=(320-8√(1110))/161≈ 0.3321
and
t_13=230/( 7(57-√(2109)) )≈ 2.9665<t_2<t_4.
§ FINAL REMARKS
1) Let us return to generalized Wallach spaces with a_1=a_2=a_3=a,
in particular, let us consider the cases
a=1/4, a=1/6, a=1/8 and a=1/9 studied in <cit.>.
As mentioned in Introduction the Wallach spaces W_6, W_12 and W_24
are examples of GWS which correspond to a=1/6, a=1/8 and a=1/9 respectively.
The value a=1/4 is very special at which four different singular points of (<ref>) merge to its unique linearly zero saddle, see <cit.>. Moreover, on the
algebraic surface of degree 12 mentioned in Introduction,
the point (1/4,1/4,1/4) is a singular point of elliptic umbilic type in the sense of Darboux (see <cit.>).
Theorem <ref> generalizes Theorem <ref>.
Indeed generalized Wallach spaces with a=1/4
can be obtained in the special case of GWS 1 with k=l=m=2.
The condition max{k,l,m}≤ 11 is satisfied.
Therefore by Theorem <ref> all initial metrics
with Ric>0
maintain Ric>0.
Although Theorem <ref> covers Theorem <ref>,
we have the stronger statement in Theorem <ref> that all initial metrics with Ric>0
lose Ric>0 on the narrow class of GWS with a∈ (0,1/6),
in particular on the spaces W_12 and W_24,
whereas Theorem <ref> states such a property only for some initial metrics with Ric>0, but for a wider class of GWS with a_1+a_2+a_3≤ 1/2,
in particular, for the mentioned a=1/8 and a= 1/9 (a_1+a_2+a_3<1/2),
which correspond to GWS 3 with k=l=m=1 and GWS 15 respectively.
Analogously, for a∈ (1/6,1/4)∪ (1/4,1/2) all metrics eventually acquire
Ric>0 according to Theorem <ref>,
whereas by Theorem <ref> (even by Theorem <ref>)
a similar property can be guaranteed
for some metrics only, but again on a wider class of GWS,
namely those satisfying a_1+a_2+a_3>1/2.
Theorem <ref> is consistent with Theorem <ref> and extends it.
According to Theorem <ref>, not every metric loses Ric>0 in the case a=1/6 (the exceptions are exactly the metrics contained in the domain bounded by the parameters of Kähler metrics x_k=x_i+x_j, {i,j,k}={1,2,3}, see <cit.>).
This fact does not contradict the conclusions of Theorem <ref>,
where spaces with a=1/6 are contained as a special class of GWS 2 with k=l=m (a_1+a_2+a_3=1/2).
Moreover, Theorem <ref> implies the departure of some metrics
from the domain with Ric>0 at a=1/6, which was not previously stated in Theorem <ref>.
2) In general there are infinitely (indeed uncountably) many triples
(a_1,a_2,a_3)∈ (0,1/2)^3 satisfying a_1+a_2+a_3>1/2
such that all trajectories of (<ref>) originating in R remain in R.
Indeed each function
θ_i=θ_i(a_i)=a_i-1/2+1/2√(1-2a_i/1+2a_i), i=1,2,3,
attains its largest value θ^∗_i=θ_i(a^∗)≈ 0.067442248
at the point
a^∗=μ^2-2μ+4/6μ≈ 0.4196433778
where μ=∛(19+3√(33))≈ 3.30905648 (see Fig. <ref>).
Then we can always ensure θ >θ_i for each i
by choosing (in uncountably many ways) each a_i from the interval [a^∗, 1/2),
because
θ=a_1+a_2+a_3-0.5≥ 3a^∗-0.5>1.2-0.5=0.7>θ_i.
Therefore the number of suitable generalized Wallach spaces, on which all metrics with Ric>0 maintain Ric>0,
might be unbounded too, provided such spaces exist for these (a_1,a_2,a_3)
(compare with the third assertion of Theorem <ref>).
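As a quick numerical illustration of the argument above (a sketch only; Python/NumPy and the grid resolution are incidental choices, not part of the original computations), one can evaluate θ_i(a_i) from the closed-form expression and θ=a_1+a_2+a_3-1/2 directly:

```python
import numpy as np

def theta_i(a):
    # Closed-form expression for theta_i(a_i) quoted above.
    return a - 0.5 + 0.5 * np.sqrt((1.0 - 2.0 * a) / (1.0 + 2.0 * a))

def theta(a1, a2, a3):
    # theta = a_1 + a_2 + a_3 - 1/2, as used in the estimate above.
    return a1 + a2 + a3 - 0.5

# Maximum of theta_i over (0, 1/2): expected near a* ~ 0.4196434, theta* ~ 0.067442248.
grid = np.linspace(1e-7, 0.5 - 1e-7, 500_001)
vals = theta_i(grid)
k = int(np.argmax(vals))
print(f"a* ~ {grid[k]:.7f}, theta* ~ {vals[k]:.9f}")

# Example triple from this section: GWS 1 with (k, l, m) = (15, 14, 13), a = (3/16, 7/40, 13/80).
a = (3 / 16, 7 / 40, 13 / 80)
print("theta   =", theta(*a))                          # 0.025
print("theta_i =", [round(theta_i(x), 4) for x in a])  # ~0.0246, 0.0219, 0.0194
```

In particular, choosing every a_i in [a^∗, 1/2) makes θ exceed 0.7 while each θ_i stays below θ^∗≈ 0.0674, which is exactly the inequality used above.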
The author expresses his gratitude to Professor Yuriĭ Nikonorov for helpful discussions on the topic of this paper.
amsunsrt
[99]
Ab1
Abiev, N. A.: Two-parametric bifurcations of singular points of
the normalized Ricci flow on generalized Wallach spaces.
AIP Conf. Proc. 1676 (020053), 1–6 (2015)
https://doi.org/10.1063/1.4930479
Ab2
Abiev, N. A.: On topological structure of some sets related to
the normalized Ricci flow on generalized Wallach spaces.
Vladikavkaz. Math. J. 17(3), 5–13 (2015)
https://doi.org/10.23671/VNC.2017.3.7257
Ab7
Abiev, N. A.:
On the evolution of invariant Riemannian metrics on one class
of generalized Wallach spaces under the influence of the normalized Ricci flow.
Mat. Tr. 20(1), 3–20 (2017) (Russian).
[English translation. In: Sib. Adv. Math. 27(4), 227–238 (2017)]
https://doi.org/10.3103/S1055134417040010
Ab_RM
Abiev, N.: On the dynamics of a three-dimensional differential system related to the normalized Ricci flow on generalized Wallach spaces. Results Math. 79, 198 (2024). https://doi.org/10.1007/s00025-024-02229-w
AANS1
Abiev, N. A., Arvanitoyeorgos, A., Nikonorov, Yu. G., Siasos, P.:
The dynamics of the Ricci flow on generalized Wallach spaces.
Differ. Geom. Appl. 35(Suppl.), 26–43 (2014)
https://doi.org/10.1016/j.difgeo.2014.02.002
AANS2
Abiev, N. A., Arvanitoyeorgos, A., Nikonorov, Yu. G., Siasos, P.:
The Ricci flow on some generalized Wallach spaces. In: V. Rovenski, P. Walczak (eds.)
Geometry and its Applications. Springer Proceedings in Mathematics & Statistics. 72, pp. 3–37. Springer, Switzerland (2014)
https://doi.org/10.1007/978-3-319-04675-4
AN
Abiev, N. A., Nikonorov, Yu. G.:
The evolution of positively curved invariant Riemannian metrics on the Wallach spaces under the Ricci flow. Ann. Glob. Anal. Geom. 50(1), 65–84. (2016)
https://doi.org/10.1007/s10455-016-9502-8
AW
Aloff, S., Wallach, N. R.:
An infinite family of 7–manifolds admitting positively curved Riemannian structures.
Bull. Am. Math. Soc. 81, 93–97 (1975)
https://doi.org/10.1090/S0002-9904-1975-13649-4
Bat2
Batkhin, A. B.: A real variety with boundary
and its global parameterization. Program. Comput. Softw. 43(2), 75–83 (2017)
https://doi.org/10.1134/S0361768817020037
Bat
Batkhin, A. B., Bruno, A. D.: Investigation of a real
algebraic surface. Program. Comput. Softw. 41(2), 74–83. (2015)
https://doi.org/10.1134/S0361768815020036
BB
Bérard Bergery, L.:
Les variétés riemanniennes homogènes simplement connexes de dimension impaire
à courbure strictement positive. J. Math. Pures Appl. 55(1), 47–67 (1976)
Berest
Berestovskiĭ V. N.:
Homogeneous Riemannian manifolds of positive Ricci curvature.
Mat. Zametki. 58(3), 334–340, 478 (1995) (Russian).
[English translation. In: Math. Notes 58(3-4), 905–909 (1995)]
https://doi.org/10.1007/BF02304766
BerNik2020
Berestovskiĭ, V. N., Nikonorov, Yu. G.: Riemannian Manifolds and Homogeneous Geodesics. Springer Monographs in Mathematics. Springer, Cham (2020), XXII+482 pp.
Be
Berger, M.: Les variétés riemanniennes homogènes normales simplement connexes à courbure strictement positive.
Ann. Scuola Norm. Sup. Pisa. 15(3), 179–246 (1961)
https://eudml.org/doc/83265
Bettiol
Bettiol, R. G., Krishnan, A. M.:
Ricci flow does not preserve positive sectional curvature in dimension four.
Calc. Var. Partial Diff. Eq. 62(1), 13, 21p. (2023)
https://doi.org/10.1007/s00526-022-02335-z
Bo
Böhm, C., Wilking, B.:
Nonnegatively curved manifolds with finite fundamental groups admit metrics with positive Ricci curvature. GAFA Geom. Func. Anal. 17, 665–681 (2007)
https://doi.org/10.1007/s00039-007-0617-8
Bruno
Bruno, A. D., Azimov, A. A.:
Parametric expansions of an algebraic variety near
its singularities. Axioms. 12(5), 469 (2023) https://doi.org/10.3390/axioms12050469
Bruno2
Bruno, A. D., Azimov, A. A.:
Parametric expansions of an algebraic variety near
its singularities II. Axioms. 13(2), 106 (2024) https://doi.org/10.3390/axioms13020106
Caven
Cavenaghi, L. F., Grama, L., Martins, R. M.:
On the dynamics of positively curved metrics on SU(3)/T^2 under the homogeneous Ricci flow.
Mat. Contemp. 60, 3–30 (2024)
http://doi.org/10.21711/231766362024/rmc602
CKL
Chen, Z., Kang, Y., Liang, K.:
Invariant Einstein metrics on three-locally-symmetric spaces.
Commun. Anal. Geom. 24(4), 769–792 (2016)
https://doi.org/10.4310/CAG.2016.v24.n4.a4
ChWal
Cheung, M.-W., Wallach, N. R.:
Ricci flow and curvature on the variety of flags on the two dimensional projective space over the complexes, quaternions and octonions.
Proc. Am. Math. Soc. 143(1), 369–378 (2015)
https://doi.org/10.1090/S0002-9939-2014-12241-6
Gonzales González-Álvaro, D., Zarei, M.:
Positive intermediate curvatures and Ricci flow.
Proc. Am. Math. Soc. 152(6), 2637–2645 (2024)
https://doi.org/10.1090/proc/16752
Ham
Hamilton, R. S.: Three-manifolds with positive Ricci curvature.
J. Diff. Geom. 17, 255–306 (1982)
https://doi.org/10.4310/jdg/1214436922
Lomshakov2
Lomshakov, A. M., Nikonorov, Yu. G., Firsov, E. V.:
Invariant Einstein metrics on three-locally-symmetric spaces.
Matem. Tr., 6(2), 80–101 (2003) (Russian).
[English translation. In: Sib. Adv. Math., 14(3), 43–62 (2004)]
Nikonorov2 Nikonorov, Yu. G.:
On a class of homogeneous compact
Einstein manifolds. Sibirsk. Mat. Zh. 41(1), 200–205 (2000)(Russian).
[English translation. In: Sib. Math. J. 41(1), 168–172 (2000)]
https://doi.org/10.1007/BF02674006
Nikonorov4
Nikonorov, Yu. G.: Classification of generalized Wallach spaces.
Geom. Dedicata. 181(1), 193–212 (2016); correction: Geom. Dedicata.
214(1), 849-851 (2021)
https://doi.org/10.1007/s10711-015-0119-z,
https://doi.org/10.1007/s10711-021-00604-3
Nikonorov1
Nikonorov, Yu. G., Rodionov, E. D., Slavskii, V. V.:
Geometry of homogeneous Riemannian manifolds. J. Math. Sci. (New York)
146(7), 6313–6390 (2007)
https://doi.org/10.1007/s10958-007-0472-z
Stat Statha, M.: Ricci flow on certain homogeneous spaces.
Ann. Glob. Anal. Geom. 62(1), 93–127 (2022)
https://doi.org/10.1007/s10455-022-09843-3
Wal
Wallach, N. R.: Compact homogeneous Riemannian manifolds with
strictly positive curvature. Ann. Math. Second Ser. 96(2), 277–295 (1972)
https://doi.org/10.2307/1970789
Wil Wilking, B.:
The normal homogeneous space SU(3)× SO(3)/U(2) has positive sectional curvature.
Proc. Am. Math. Soc. 127(4), 1191–1194 (1999)
https://doi.org/10.1090/S0002-9939-99-04613-4
WiZi
Wilking, B., Ziller, W.:
Revisiting homogeneous spaces with positive curvature.
J. Reine Angew. Math. 738, 313–328 (2018)
https://doi.org/10.1515/crelle-2015-0053
XuWolf
Xu, M., Wolf, J. A.:
Sp(2)/U(1) and a positive curvature problem.
Diff. Geom. Appl. 42, 115–124 (2015)
https://doi.org/10.1016/j.difgeo.2015.08.002
|
http://arxiv.org/abs/2409.03241v1 | 20240905044224 | Demonstration of Scalability and Accuracy of Variational Quantum Linear Solver for Computational Fluid Dynamics | [
"Ferdin Sagai Don Bosco",
"Dhamotharan S",
"Rut Lineswala",
"Abhishek Chopra"
] | physics.flu-dyn | [
"physics.flu-dyn",
"quant-ph"
] |
Demonstration of Scalability and Accuracy of Variational Quantum Linear Solver for Computational Fluid Dynamics
Ferdin Sagai Don Bosco, Dhamotharan S, Rut Lineswala, Abhishek Chopra
September 5, 2024
===============================================================================================================
§ ABSTRACT
The solution of complex, non-linear Partial Differential Equations (PDEs) is achieved through numerical approximations, which yield a linear system of equations. This approach is prevalent in Computational Fluid Dynamics (CFD), but it restricts the mesh size, since the solution of the linear system becomes computationally intractable when the mesh resolution increases. Relying on the ability of High-Performance Computers (HPC) to scale up and meet these requirements is myopic; such very high-fidelity simulations require a paradigm shift in computing. This paper presents an exploration of quantum methodologies aimed at achieving high accuracy in solving such large systems of equations.
Leveraging recent works in Quantum Linear Solver Algorithms (QLSA) and variational algorithms suitable for Quantum Simulation in HPC, we aspire to push the boundaries of CFD-relevant problems that can be solved on a hybrid quantum-classical framework. To this end, we consider the 2D, transient, incompressible, viscous, non-linear coupled Burgers equation as a test problem and investigate the accuracy of our approach by comparing results with a classical linear solver, namely the Generalized Minimal RESidual method (GMRES). Through rigorous testing, our findings demonstrate that our quantum methods yield results comparable in accuracy to traditional approaches. Additionally, we demonstrate the accuracy, scalability, and consistency of our quantum method. Lastly, we present an insightful estimation of the resources our quantum algorithm needs to solve systems with over a billion mesh points.
Quantum Computing, Variational Quantum Algorithm, Computational Fluid Dynamics, Partial Differential Equations
§ INTRODUCTION
Partial Differential Equations (PDEs) are a powerful mathematical tool for describing the complex phenomena observed in all aspects of the universe. Their relevance is undeniable in science, as evidenced by the Maxwell equations <cit.> and the Schrodinger equation <cit.>; in engineering, as evident from Fourier heat transfer <cit.> and the Navier-Stokes equations<cit.>; and in finance, as embodied by the Black-Scholes equation<cit.>. Active research is constantly underway to discover new PDEs, even with the aid of data-driven techniques, such as Physics-Informed Neural Networks (PINNs)<cit.>.
However, describing a physical phenomenon is only the first challenge; solving it is even more daunting. This is often insurmountable, especially when the PDE features non-linearity (Burgers equation<cit.>), incomplete closure (Navier-Stokes equations), or is very complex, such as the integro-differential nature of the Boltzmann equation <cit.>.
Non-linear PDEs are specifically prominent in transport mechanics, with applications including plasma fields <cit.>, nanofluid flow <cit.>, and shallow water waves <cit.>. Fluid dynamics and the associated field of Computational Fluid Dynamics (CFD) possess a rich ensemble of such PDEs, which capture complex physical phenomena, such as convection-diffusion, heat transfer, shock wave formation and propagation, etc. However, an analytical solution is only feasible for a grossly simplified version of the original phenomena, which is of little real-world importance.
This has engendered an appetite for numerical recipes that simplify the PDE into a system of algebraic equations. In CFD, for instance, the process of solving the system of PDEs is depicted in Fig. <ref>. It begins by identifying a set of PDEs that describe the observed flow behaviour as the governing equations. Initial conditions and boundary conditions are then specified as dictated by the physics of the flow. Since these equations follow the laws of continuum mechanics, the conversion of the governing PDEs to the algebraic system of equations is popularly achieved through grid-based methods, such as the Finite Difference Method (FDM)<cit.>, the Finite Volume Method (FVM)<cit.> or the Finite Element Method (FEM)<cit.>. The chosen method determines the manner in which the flow domain is viewed: in FDM it is a set of collocation points, in FVM it is interconnected volumes, whereas FEM views the domain as interlinked elements. In transient flow, the strategy employed for time integration often addresses the non-linearity. An explicit method updates each discretized unknown directly from values at the previous time step, while an implicit method involves solving the entire system of equations simultaneously, with both the linear and nonlinear terms treated implicitly. A semi-implicit method combines aspects of both explicit and implicit methods, typically treating the linear terms implicitly and the nonlinear terms explicitly <cit.>. Implicit and semi-implicit methods are generally preferred, as they solve the entire domain simultaneously. With these methods, irrespective of the means, the end result is a linear system of equations, which can be expressed as Ax = b (Fig. <ref>).
It is crucial to note that nearly all PDE systems can be simplified to the above linear system. Consequently, there have been multi-disciplinary efforts to solve the system, which has led to a plethora of methods, direct (Gaussian Elimination, Thomas Algorithm), iterative (Gauss-Seidel, Gauss Jacobi), and Krylov subspace (Conjugate Gradient, Generalized Minimal Residual) methods <cit.>, being developed and utilized over the past several decades. Recent times have also witnessed the rise of code libraries, such as NumPy <cit.>, Intel's MKL<cit.>, etc., with significant efforts going into accelerating the solution to Ax = b. These methods and implementations have also efficiently handled very large systems.
However, in certain CFD studies, such as a Direct Numerical Simulation (DNS) of turbulence <cit.>, the desired mesh sizes are very small (on the order of the sub-Kolmogorov scale<cit.>). This leads to massive Ax = b systems that are computationally intractable even with the formidable throughput offered by modern Exascale supercomputers.
While the previous generations had the option to wait till computational power increased to meet their need, we do not have that luxury. Modern technology is quickly approaching the limit of Moore's law, physically <cit.> and economically <cit.>.
In general, the ultimate aim of computational methods is to provide real-time predictions of the complex physical phenomena being modelled. Such a capability elevates such computational tools from merely a design tool kit to a step towards true virtualization. However, such a vision requires immense computational power, which must be a result of a paradigm shift in computing.
Significant faith is placed in Quantum Computers (QC), which are, theoretically, capable of solving large systems with enormous acceleration <cit.>. Numerous algorithms have been devised for the solution of the Navier-Stokes system of equations, many of which have been discussed in detailed reviews by Gaitan <cit.>. A few of these approaches are focussed on quantum acceleration of traditional Navier-Stokes solution strategies, such as Finite Difference Methods (FDM), Finite Volume Methods (FVM)<cit.>, and Finite Element Methods (FEM) <cit.>. A larger volume of research has been dedicated to quantum approaches to the Lattice Boltzmann Methods (LBM)<cit.>.
Any claim of quantum supremacy can only be established once quantum computers of meaningful capacity have been constructed<cit.>. Meanwhile, our understanding garnered from existing quantum machines can be gainfully utilized to create hybrid algorithms <cit.> capable of running on such hardware. The advent of quantum simulators has given an impetus to the quantum algorithm and code development efforts. It is now feasible to test out the efficacy of a computational algorithm on large emulations of quantum hardware <cit.>, essentially preparing for a time when, inevitably, quantum computers become a reality. Such attempts are historically poetic and reminiscent of Ada Lovelace's efforts to develop code for digital silicon-based computers that didn't exist yet <cit.>.
There have been attempts in the quantum field to develop algorithms for the solution of a linear system (Ax = b) on quantum hardware. One of the original approaches was the Harrow–Hassidim–Lloyd (HHL) algorithm<cit.> which achieved an exponential acceleration over contemporary classical algorithms. Further improvement on the algorithm's precision was obtained by utilizing Quantum Singular Value Transformations (QSVT)<cit.>, such as the the method proposed by Childs, et. al<cit.>. However, it is critical to recognize that these speed-ups do not consider the state preparation and state readout requirements, which are themselves, quite challenging. Moreover, these algorithms are meant to be used for fault-tolerant hardware, which is still a few years away. Even those designed for hybrid architectures<cit.> exhibit discouraging performance on quantum simulators in terms of number of qubits (and circuit depth)<cit.>.
Another promising approach, specifically in the Noisy Intermediate-Scale Quantum (NISQ) era, is the Variational Quantum Linear Solver (VQLS) <cit.>, which has found remarkable success and has also been utilized to solve heat conduction equation<cit.> and Poisson equation <cit.>. However, the VQLS algorithm, being a variational algorithm, loses coherence during the computation due to the measurement of the cost function. This loss of coherence limits the potential quantum advantages that can be harnessed.
This drawback is largely mitigated by the Coherent Variational Quantum Linear Solver (CVQLS), which combines the strengths of Variational Quantum Algorithms (VQA) and classical linear algebra. The CVQLS distinguishes itself from the VQLS in three distinct aspects. Firstly, the VQLS utilizes a more general framework for solving optimization problems using quantum computers, while CVQLS leverages quantum coherence to enhance the accuracy and efficiency of the solution. Secondly, CVQLS encodes the linear system into a quantum state using a coherent process, while VQLS utilizes different embedding techniques. Lastly, the cost function in CVQLS is typically designed for linear systems, while VQLS can use more general cost functions.
Hybrid quantum-classical algorithms are aptly suited to the present NISQ-era and similar methods have been used to solve equations relevant to CFD, such as the Poisson equation <cit.>.
This research work utilizes such a hybrid quantum-classical method, in which CVQLS is used as a quantum linear solver algorithm (QLSA) that works well for current quantum computers and simulators. The algorithm, Hybrid Quantum-Classical Finite Methods (HQCFM), can solve Ax = b while maintaining high accuracy, consistency and scalability. The HQCFM algorithm gives a more coherent and precise probability distribution for the solution vector, thereby potentially offering greater quantum speedups.
These claims are put to the test by solving the 2D, transient, incompressible, viscous, non-linear, coupled Burgers equation described in Section <ref>. Section <ref> briefly describes the methodology utilised in this research. The accuracy, scalability, and consistency of HQCFM is established in Section <ref>. Section <ref> utilizes the knowledge gained in the simulation of the Burgers equation to develop predictive models that provide insight into the cost associated with extending the HQCFM to complex, real-world problems.
Fig. <ref> clarifies this paper's focus and the exact field where HQCFM will make an impact.
§ PROBLEM DEFINITION
§.§ Significance of the problem
One of the most celebrated nonlinear partial differential equations in the field of fluid mechanics is the Burgers equation<cit.>. It is a model equation that is capable of capturing the interaction between advection and diffusion. It arises in gas dynamics, traffic flow, chromatography and flood waves in rivers.
Due to its similarity with the Navier-Stokes equation, it is considered an important model for testing numerical algorithms for the Navier-Stokes equation. The numerical solution of this equation is an indispensable tool for studying incompressible fluid flow problems.
§.§ Formulation
The Burgers equation is a highly regarded model equation in the field of CFD. It has been shown to model turbulence <cit.> and shock wave propagation in viscous fluids <cit.>.
The 2D governing equations are,
∂ u/∂ t + u∂ u/∂ x+ v∂ u/∂ y = 1/Re[∂^2 u/∂ x^2 + ∂^2 u/∂ y^2]
∂ v/∂ t + u∂ v/∂ x+ v∂ v/∂ y = 1/Re[∂^2 v/∂ x^2 + ∂^2 v/∂ y^2]
For this article, the initial conditions are taken as,
u(x,y,0) = sin(π x) +cos(π y)
v(x,y,0) = x+y
and the boundary conditions as,
u(0,y,t) = cos(π y) u(L_x,y,t) = 1 + cos(π y)
v(0,y,t) = y v(L_x,y,t) = 0.5 + y
u(x,0,t) = 1 + sin(π x) u(x,L_y,t) = sin(π x)
v(x,0,t) = x v(x,L_y,t) = 0.5 + x
The square domain with L_x = L_y =0.5 is chosen for the study. The numerical computations are performed on a uniform grid whose cell size depends on the chosen grid resolution. The Reynolds number, Re, is chosen to be 50. The solution time is chosen to be 0.625, and time steps are chosen appropriately. The Cartesian discretization and boundary conditions schematic is illustrated in Fig. <ref>.
The spatial discretization and time integration scheme used by Srivastava and Tamsir <cit.> are chosen here. Studies <cit.> demonstrate through stability analysis and numerical experiments that semi-implicit schemes exhibit superior stability properties compared to explicit schemes. While implicit methods solve the entire system simultaneously, semi-implicit methods strike a balance between explicit and implicit treatments of different terms within the equations. Semi-implicit methods are often favored for improved stability, especially when dealing with complex systems involving linear and nonlinear components. Thus, the semi-implicit scheme is used for this study.
The discretization results in the creation of two linear systems corresponding to u and v velocities, which must be solved at each time step. Thus,
[A]u^t+Δ t =u^t
[A]v^t+Δ t =v^t
The above linear system is solved once for each velocity component. Thus, in a single time step, 2 linear system solutions are obtained.
The entire process is outlined in Fig. <ref>.
§.§ Setup
The size of the square matrix [A] is referred to as the system size. It is related to the number of grid points in the X and Y directions as n_x = n_y = √(N), where N is the size of the [A] matrix. Thus, a grid of 8 × 8 corresponds to a system size of 64. A requirement of HQCFM, shared by other quantum algorithms, is that the system size must be a power of 2. If this is not the case, then padding is utilized to ensure the system meets this criterion. This padding is a purely numerical fix and does not affect or alter the solution in any way. The sizes of the matrices studied in this research are presented in Tab. <ref>.
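To illustrate the padding just mentioned, the sketch below embeds an N × N system into the next power of two by placing ones on the padded diagonal and zeros in the padded right-hand side, so the extra unknowns decouple and remain zero. The exact padding used in HQCFM is not spelled out in the text, so this is only one simple possibility consistent with the statement that the padding does not alter the solution.

```python
import numpy as np

def pad_to_power_of_two(A, b):
    """Embed A x = b into an M x M system with M = 2^ceil(log2(N)).
    Ones on the padded diagonal keep the extra unknowns decoupled (and zero)."""
    N = len(b)
    M = 1 << (N - 1).bit_length()        # next power of two >= N
    A_pad = np.eye(M)
    A_pad[:N, :N] = A
    b_pad = np.zeros(M)
    b_pad[:N] = b
    return A_pad, b_pad

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 0.0])
A_pad, b_pad = pad_to_power_of_two(A, b)          # 3 x 3 -> 4 x 4
x_pad = np.linalg.solve(A_pad, b_pad)
print(np.allclose(x_pad[:3], np.linalg.solve(A, b)), x_pad[3])   # True 0.0
```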
§ METHODOLOGY
§.§ Iterative Classical PDE Solver
The classical solver is used to provide a basis of comparison for the HQCFM. The discretization process and the assembly of the Ax = b system remain identical for both methods. Both the classical solver and HQCFM perform all the steps shown in the blue boxes of Fig. <ref>. However, instead of solving the linear system through HQCFM (green box in Fig. <ref>), the classical solver utilizes an established classical method, namely the Generalized Minimal RESidual method (GMRES), further details of which can be found in a recent review by Zou<cit.>.
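Schematically, the classical baseline amounts to the time loop sketched below: at every step the coefficient matrix is rebuilt from the previous velocity fields and the two systems [A]u^t+Δt = u^t and [A]v^t+Δt = v^t are solved with SciPy's GMRES. The assembly shown here is a diffusion-only stand-in (the actual semi-implicit stencil of Srivastava and Tamsir, including the lagged convective coefficients and the boundary-condition rows, is not reproduced), and the grid size, time step and default GMRES tolerances are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

def assemble_A(u, v, nx, ny, dx, dt, Re):
    """Stand-in semi-implicit operator with implicit diffusion only.
    The scheme used in the paper also folds the lagged convective
    coefficients u, v and the boundary conditions into A."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
    lap = sp.kron(sp.identity(ny), T) + sp.kron(T, sp.identity(nx))   # 5-point Laplacian
    return (sp.identity(nx * ny) + (dt / Re) * lap).tocsr()

nx = ny = 22                                   # grid used for the validation tables
dx = 0.5 / (nx - 1)
dt, Re, t_end = 1e-3, 50.0, 0.625

X, Y = np.meshgrid(np.linspace(0, 0.5, nx), np.linspace(0, 0.5, ny))
u = (np.sin(np.pi * X) + np.cos(np.pi * Y)).ravel()   # initial condition for u
v = (X + Y).ravel()                                   # initial condition for v

t = 0.0
while t < t_end:
    A = assemble_A(u, v, nx, ny, dx, dt, Re)   # rebuilt every step (non-linearity)
    u, _ = gmres(A, u)                         # [A] u^{t+dt} = u^t
    v, _ = gmres(A, v)                         # [A] v^{t+dt} = v^t
    t += dt
```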
§.§ Iterative Quantum PDE Solver
This work leverages a Quantum Linear System Algorithm (QLSA) well-suited for the NISQ era. We modified an existing QLSA to solve the system of linear equations arising at each time step of the semi-implicit discretization of the 2D, transient, incompressible, viscous, non-linear, coupled Burgers equation. As depicted in Fig. <ref>, the solution of any partial differential equation (PDE) involves solving a system of the form Ax = b. We exploit the fact that the A matrix in this system can be decomposed into a linear combination of unitary matrices. This decomposition allows us to embed these matrices efficiently onto a quantum circuit within a Variational Quantum Algorithm (VQA) framework to reach the target state |ψ⟩. The process of our HQCFM can be summarized as follows (a schematic sketch is given after the list),
* We decompose our A matrix using the spectral theorem, as in "Matrix Analysis" by Roger A. Horn and Charles R. Johnson.<cit.>
* We calculate the Euclidean norm of the difference between the quantum |b⟩ and the classical b, and use this cost for the optimization process.
* After finding the best θ (variational) parameters for the circuit, we prepare our x vector.
* We have utilized the IBM Qiskit AerSimulator (GPU-enabled).
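The sketch below illustrates the variational step on a 16 × 16 toy system. It assumes a hardware-efficient RY/CNOT ansatz simulated through Qiskit's Statevector and a classical COBYLA optimizer; the ansatz depth, the optimizer, the random symmetric test matrix, and the purely classical evaluation of A|x(θ)⟩ are illustrative simplifications rather than the production settings of HQCFM (in particular, the gate-level embedding of the unitary decomposition of A and the recovery of the overall norm of x are not shown).

```python
import numpy as np
from scipy.optimize import minimize
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 4                                        # 2^4 = 16 unknowns (16 x 16 toy system)
rng = np.random.default_rng(0)
A = rng.standard_normal((2**n, 2**n))
A = A + A.T                                  # symmetric, so the spectral theorem applies
b = rng.standard_normal(2**n)
b_norm = b / np.linalg.norm(b)

def ansatz(theta):
    """Hardware-efficient ansatz: layers of RY rotations followed by a CNOT ladder."""
    qc = QuantumCircuit(n)
    for layer in np.reshape(theta, (-1, n)):
        for q in range(n):
            qc.ry(layer[q], q)
        for q in range(n - 1):
            qc.cx(q, q + 1)
    return qc

def cost(theta):
    """Euclidean distance between the normalized A|x(theta)> and the classical b."""
    x = Statevector(ansatz(theta)).data.real   # RY/CNOT circuits give real amplitudes
    Ax = A @ x
    return np.linalg.norm(Ax / np.linalg.norm(Ax) - b_norm)

theta0 = rng.uniform(0.0, 2.0 * np.pi, size=3 * n)           # 3 ansatz layers
res = minimize(cost, theta0, method="COBYLA", options={"maxiter": 2000})
x_opt = Statevector(ansatz(res.x)).data.real                 # solution direction; its
print("final cost:", res.fun)                                # scale is recovered classically
```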
§ RESULTS AND DISCUSSION
In this section, the accuracy of the HQCFM is established by comparing it with classical solver results. The scalability is then tested by using finer mesh sizes. Finally, comments are made regarding the consistency of the HQCFM by studying the difference between its solution and classical solution at every time step.
§.§ Accuracy
The velocity contours for the transient solution of the 2D, transient, incompressible, viscous, non-linear, coupled Burgers equation are depicted in Fig. <ref>. These results were obtained using the HQCFM introduced in this paper to solve the linear system at every time step. Such a demonstration of using a quantum linear equation solver coupled with a transient problem is unprecedented. Moreover, the results obtained also match the expected flow field for the Burgers’ equation with the initial and boundary conditions specified in the problem statement (refer section <ref>)<cit.>.
In order to better appreciate the accuracy of HQCFM, we ran identical cases using the classical solver introduced in Section <ref>. The resulting contours from the HQCFM and the classical solver were subtracted from one another to produce a difference plot, as shown in Fig. <ref>. Here, Fig. <ref> plots the difference in the computation of the X velocity component between the classical solver and the HQCFM, while Fig. <ref> depicts the difference in the Y velocity component.
It can be observed that the maximum disagreement between the two occurs in the top right region of the domain. Although a discrepancy is noted, the difference is very small, on the order of 10^-5. Tables <ref> and <ref> present the physics-based validation of the results. In these tables, the values of the u and v velocity at specific points are compared with those reported in the literature. Minor disagreements are observed, but these can be adequately explained. The discrepancy with respect to Srivastava<cit.> arises because of different grid sizes: the results presented are for a 22 × 22 grid, whereas those in the reference are for a 20 × 20 grid. On the other hand, the present approach is semi-implicit, whereas that reported in Bahadir<cit.> is fully implicit. The mismatch is, consequently, a result of the difference in the treatment of the non-linearity, and hence these differences can be attributed to the numerics.
§.§ Scalability
Although the accuracy of the quantum linear system solver with respect to the classical solver has been established for a 512 × 512 system, this is not the upper limit. In fact, a number of other simulations with varying system sizes have also been performed to gauge the scalability properties of the HQCFM. The accuracy of these simulations is presented in terms of the L2 norm between the HQCFM and classical results in Fig. <ref>. It is observed that the L2 norm is in the range of 10^-5 to 10^-11, approaching machine zero, even for systems as large as 2048. This provides enormous confidence in the ability of the developed quantum solver to scale up to larger problems.
Accuracy alone is not the only factor affecting scalability. Quantum algorithms require a number of qubits to be held in a superposition state, which is a physically arduous process. The number of qubits increases with the system size. In the present research, the number of qubits used by the algorithm for each of the systems is measured and plotted in Fig. <ref>. A more robust breakdown of the number of qubits and quantum gates is provided in Tab. <ref>.
§.§ Consistency
It is worth reiterating that this article demonstrates the developed capability of HQCFM, achieving not only high accuracy and scalability but also exhibiting remarkable consistency in these features. Notably, the quantum solver is integrated into the time loop, requiring evaluation at every time step for the Burgers' equation. This equation's non-linear nature necessitates re-evaluation of the coefficient matrix, A, based on the previous step's results at each iteration.
This emphasizes the critical need for the HQCFM to maintain consistent accuracy throughout the simulation, as any errors can propagate and significantly alter the trajectory toward the steady-state solution. However, as Fig. <ref> illustrates, the solutions obtained using the HQCFM and the classical solver exhibit near-identical convergence behavior towards the steady state. This consistency highlights the effectiveness of the HQCFM in maintaining accuracy even under the iterative nature of the time-stepping algorithm.
§ RESOURCE ESTIMATION
The resources utilized by the HQCFM are listed in Tab. <ref> for the simulations performed in this research. These are crucial for trying to predict resources required for larger simulation systems.
Consider a complex problem like internal flow in turbomachinery, which is characterized by non-linearity and turbulence. Such a flow can only be resolved with very fine meshes and significant computational power. For a mesh of size 2^30, or approximately 1.07 billion cells, a traditional high-performance computer (HPC) system would require about 19.2 million cores <cit.>.
When the HQCFM presented in this research is run on an ideal quantum computer, this same problem could be tackled using only 30 qubits. However, it is also paramount that we have an estimate of the Quantum resources the HQCFM would utilize to run such a simulation. Such a resource estimation exercise is performed here using an arithmetic progression model based on the data gathered in Tab. <ref>.
An arithmetic progression model can estimate the number of single qubit gates, as depicted in Eq. <ref>.
a_n = a_1 + (n-1) d
Here, a_n is the number of single-qubit gates required for simulating the target system size, and a_1 is the starting point, which is taken to be the 16 × 16 system. Thus, by consulting the table, a_1 = 76. Additionally, n is the number of qubits, and d is the common difference, obtained by taking the difference between the single-qubit gate requirements of consecutive systems. From Tab. <ref>, d = 19.
Thus, for n=30, the number of single-qubit gates utilized by HQCFM would be 76 + (29) × 19 = 627.
The number of multi-qubit gates can also be estimated by an arithmetic progression model, as depicted in Eq. <ref>.
d_n = d_1 + (n-1) d
a_n = a_1 + ∑_k=1^n-1 d_k
After rewriting
a_n = a_1 + (n-1) (d_1+d_n)/2
From Tab. <ref>, the values are a_1 = 42, d_1 = 24, and the common difference of the differences is d = 6. Here, d_n is computed first using Eq. <ref> as d_n = 24 + (29) × 6 = 198. Thus, for n=30, the number of multi-qubit gates utilized by HQCFM is obtained from Eq. <ref> as a_n = 42 + (29) × (24 + 198)/2 = 3261.
Thus, it would take 627 single-qubit gates and 3261 multi-qubit gates, totalling 3888 gates, for the HQCFM to simulate the internal, turbulent flow inside a turbomachinery device using a grid with approximately 1.07 billion cells. The HQCFM thus offers a reduction in hardware requirements compared to traditional HPC while also enabling highly complex simulations.
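The arithmetic-progression estimates above can be reproduced in a few lines; the starting values and common differences are exactly those read off Tab. <ref> in the text.

```python
n = 30                                        # target register size (2^30 mesh points)

# Single-qubit gates: first-order arithmetic progression.
a1_single, d_single = 76, 19
single = a1_single + (n - 1) * d_single                   # 627

# Multi-qubit gates: progression whose differences themselves grow by 6.
a1_multi, d1_multi, dd_multi = 42, 24, 6
dn_multi = d1_multi + (n - 1) * dd_multi                  # 198
multi = a1_multi + (n - 1) * (d1_multi + dn_multi) // 2   # 3261

print(single, multi, single + multi)                      # 627 3261 3888
```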
§ CONCLUSION
The rising demand for true virtualization, either through digital twins or real-time solutions, combined with the surging size and complexity of the associated Partial Differential Equations (PDEs), has challenged and outpaced existing High-Performance Computers. With the onset of the Moore limit, the hope that silicon-based computers can address these needs is fading. Next-generation technologies are needed to satisfy current demand and sustain the course of scientific enquiry into the future. Quantum computing is one of the few options that shows tremendous promise.
In this article, the Hybrid Quantum-Classical Finite Method (HQCFM) is developed based on an existing Quantum Linear Solver Algorithm (QLSA). The focus of the HQCFM was to solve the Ax=b linear system, which is utilized in the present research to solve 2D, transient, incompressible, viscous, non-linear, coupled Burgers equation. Such a demonstration of using a quantum linear equation solver coupled with a transient problem is unprecedented. The performance of HQCFM is measured using 3 metrics, viz., Accuracy, Scalability, and Consistency,
Present results indicate a very high accuracy when compared to classical solvers. The results are also verified with those reported in the literature. Various system sizes are used to judge the scalability of HQCFM. The results indicate a good scale-up behavior. Quantitative information regarding the scale-up is also provided in terms of qubits and quantum gates utilized. The HQCFM also distinguishes itself by running inside a time loop in a transient problem, without propagating any error to the next time step. Obtaining such a high accuracy consistently in a system of linked linear systems is a major breakthrough in the field of quantum computing.
Apart from these scientific findings, this paper also explores the economical aspect of quantum computing. A robust resource estimation, based on information gathered in this research, is used to predict the resources needed by HQCFM to perform complex real-world simulations. These predictions indicate that a 30-qubit quantum computer will be able to outperform an Exascale computer.
§ ACKNOWLEDGMENT
The authors would like to thank Mr. Abhishek Singh (HPC Engineer at BosonQ Psi) and Mr. Aman Mittal (HPC Engineer at BosonQ Psi) for their support with computational resources; Mr. Akshith Chowdary (Quantum Software Developer at BosonQ Psi) and Mr Aakif Akthar (Quantum Algorithm Developer at BosonQ Psi) for their keen insights and oversight during the writing process; Dr. Kandula Eswara Sai (Senior Optimization Scientist at BosonQ Psi) for guidance and support.
unsrt
|
http://arxiv.org/abs/2409.02538v1 | 20240904085721 | A Partition Function Estimator | [
"Ying-Chih Chiang",
"Frank Otto",
"Jonathan W. Essex"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"physics.chem-ph",
"physics.comp-ph"
] |
AIP
Kobilka Institute of Innovative Drug Discovery, School of Medicine, The Chinese University of Hong Kong, Shenzhen, 2001 Longxiang Boulevard, 518172, Shenzhen, China
[email protected]
Advanced Research Computing Centre, University College London, WC1H 9BT, UK
[email protected]
Department of Chemistry, University of Southampton, SO17 1BJ, UK
§ ABSTRACT
We propose a simple estimator that allows one to calculate
the absolute value of a system's partition function
from a finite sampling of its canonical ensemble.
The estimator utilizes a volume correction term to compensate
for the fact that a finite sampling cannot cover the whole
configuration space.
As a proof of concept, the estimator is applied to calculate
the partition function for several model systems, and
the results are compared with the numerically exact solutions.
Excellent agreement is found, demonstrating that
a solution for an efficient calculation of partition functions
is possible.
A Partition Function Estimator
Ying-Chih Chiang, Frank Otto, Jonathan W. Essex
September 9, 2024
==============================
§ INTRODUCTION
How to calculate the partition function of a system has long been
an important question in physics because many quantities, such as
the free energy or the entropy, can then be determined afterwards.
In the past two decades, the question has been answered with
various methods, such as the Wang-Landau algorithm <cit.> and nested sampling <cit.>.
Both methods compute the (cumulative) density of states of a system
and allow a direct integration over the Boltzmann factor to yield
the partition function as well as other quantities of interest.
In particular, the nested sampling algorithm has demonstrated
its power in studying thermodynamic properties in material science <cit.>.
The tested systems include Lennard-Jones clusters <cit.>,
metallic systems <cit.>, alloys <cit.>,
and small water clusters <cit.>.
In these examples, the partition function is rarely the sole goal
of the study, but rather a basic property that one can calculate
apart from the heat capacity or a phase-diagram.
However, calculating the partition function itself is still meaningful,
since the ratio between two partition functions indicates
the free energy difference between two systems (or two states).
For instance, comparing the partition function of a system with
a protein and a free ligand to the partition function of the protein-ligand complex
yields the binding free energy between the protein and the ligand.
This information can guide the design of drug molecules during lead optimization.
Yet, directly calculating the partition function is computationally expensive.
Since the desired property is not a single partition function but the ratio between two,
calculating this ratio using theories tailored to save computational resources seems more reasonable.
For instance, using importance sampling, the ratio between
two partition functions can be obtained by a single sampling on one
of the states, a method known as Zwanzig's equation <cit.>
or free energy perturbation (FEP) <cit.>.
Together with post-processing methods such as Bennett's acceptance ratio (BAR) <cit.>
or multistate Bennett acceptance ratio (MBAR) <cit.> to combine data
from sampling over different states, multistep free energy
perturbation (mFEP) <cit.> has become a standard approach for
binding free energy calculation.
Other commonly used free energy methods include Kirkwood's thermodynamic integration <cit.>,
the Jarzynski equality <cit.>, etc.
A comprehensive review can be found in the literature <cit.>.
These free energy methods enable e.g. the calculation of the binding
free energy between a protein and a ligand <cit.>,
between two proteins <cit.>, or for evaluating the permeability of ligands
through a lipid bilayer <cit.>.
Today, free energy calculations have become an indispensable tool in computer-aided drug design <cit.>.
While the aforementioned achievements make one question the need for
calculating a single partition function directly, one may still wonder
whether it is possible to calculate a single partition function more effectively,
perhaps reaching a level where the performance of the calculation is comparable
to these advanced methods, at least in some simple systems.
To answer this question, we propose a Partition Function Estimator
(PFE) that can compute the value of a single partition function from a
finite sampling. As a proof of concept, we apply this theory
to several model examples, including the one-dimensional harmonic
oscillator potential, the one-dimensional double-well potential,
the two-dimensional Müller-Brown potential, and up to 30
Lennard-Jones particles in a three-dimensional box.
These examples are chosen because reference values can be easily obtained:
For model potentials, exact numerical solutions are available via
brute-force integration, while the partition functions of
Lennard-Jones particles can be obtained using nested sampling
or standard methods such as mFEP-MBAR.
§ THEORY AND METHODS
§.§ The Partition Function Estimator
Consider a system composed of N identical particles of mass m.
The Hamiltonian reads,
H(𝐩,𝐪) = 𝐩^2/2m + U(𝐪)
where 𝐩 and 𝐪 denote the momenta and the spatial
coordinates of the particles, respectively. U(𝐪) denotes
the potential of the system.
The canonical partition function Z of the system is defined as,
Z = 1/N! h^3N∫ e^-β H(𝐩,𝐪) d𝐩 d𝐪 ,
where h is Planck's constant, and β denotes the
inverse temperature, β=1/(k_B T), with k_B the Boltzmann
constant and T the temperature.
Z is a function of β, but this dependency is
dropped for ease of notation.
The integration over phase space in Eq. <ref> is separable,
and the integration over the momenta can be performed analytically.
The partition function of the system hence can be rewritten as <cit.>,
Z = 1/N!( 2π m/β h^2)^3N/2∫ e^-β U(𝐪) d𝐪 .
With the integration over momentum space carried out, we can focus on
the integration over coordinate space, and thus define
Q = ∫ e^-β U(𝐪) d𝐪 ,
to denote the spatial contribution to the partition function Z.
In the same vein, the expectation value for any function f that
depends solely on the coordinates 𝐪, can be expressed as
⟨ f ⟩ = ∫ f(𝐪) e^-β U(𝐪)/Qd𝐪 .
Our discussion below will focus on how Q can be calculated from
a finite sampling of the system's canonical ensemble.
When sampling a system at finite temperature, the probability for
finding a microstate i with energy E_i is proportional to
exp(-β E_i), and consequently the probability for finding
any microstate with energy E is proportional to
g(E) exp(-β E) where g(E) denotes the density of states.
Due to the exponential nature of the Boltzmann factor, the energy
distribution of the system eventually decreases with increasing
energy – see Fig. <ref> for an illustration –
such that above a certain energy level E^*, the sampling becomes
insufficient, i.e. it is impractical to obtain enough samples with
energy E > E^* to reproduce the actual probability distribution
in this energy regime.
In practice, this is unproblematic if one is interested in the
sample average of quantities that don't grow exponentially with
E, such as the system energy itself and most other quantities
of physical interest. For these, the sample average is insensitive
to the high-energy tail of the probability distribution, and it is
of no concern if it was captured insufficiently during sampling.
Yet, in the extreme case one could be interested in evaluating
the sample average of the inverse Boltzmann factor b(E) = exp(+β E). Its
theoretical expectation value is given by
⟨ b(E) ⟩
= ∫ e^+β U(𝐪)e^-β U(𝐪)/Qd𝐪
= L^3N/Q ,
where L^3 is the volume of a cubic box that the particles are confined to.
If we could obtain an estimate for ⟨ b (E) ⟩ via sampling, then
Eq. <ref> could be readily used to calculate Q. However,
in a sample of size n with energies E_i distributed according to
the distribution p(E) ∝exp(-β E),
the sample average of b is given by
b̅ = 1/n∑_i=1^n e^+β E_i ,
which is extremely sensitive to the high-energy samples.
In practice, the insufficient sampling of the high-energy tail leads to a very
large fluctuation of b̅, making it useless as an estimate for
⟨ b ⟩ and thus for determining Q.
Note that here as well as below, we clearly distinguish between the
expectation value (⟨ b ⟩) and the sample average (b̅);
while the former is an exact (theoretical) value, the latter is empirically
obtained from a finite sample and will fluctuate if the sampling is repeated,
perhaps even failing to converge in case of insufficient sampling.
Let us now consider another function f(E;E^*) that reads,
f(E;E^*) = e^+β E θ(E^*-E) ,
where θ(E^*-E) denotes the Heaviside step function,
which is 1 for E ≤ E^* and 0 for E > E^*.
In short, f(E;E^*) is the inverse Boltzmann factor,
but truncated to zero for energies E larger than some
chosen parameter E^*.
With E set to the potential energy, i.e. E=U(𝐪),
the expectation value of f is then given by,
⟨ f(E;E^*) ⟩
= ∫ e^+β E θ(E^*-E) e^-β U(𝐪)/Q d𝐪
= V(E^*)/Q ,
with V(E^*) defined as,
V(E^*) = ∫θ(E^* - U(𝐪)) d𝐪 ,
which is the volume of the coordinate space where the potential
energy is smaller than E^*.
Consequently, Q can be obtained via,
ln Q = ln V(E^*) - ln⟨ f(E;E^*) ⟩ .
Eq. <ref> is the proposed estimator.
We stress that this equation itself is exact, in that no approximations have
been made, and holds for any value of E^*.
Obviously, to make use of this equation, we still need to determine values
for both ⟨ f(E;E^*) ⟩ and V(E^*), along with
finding a “good” value of E^*.
§.§ Finding E^*
Let us first concentrate on the expectation value ⟨ f(E;E^*) ⟩.
We wish to use the sample average of f, i.e.
f̅
= 1/n∑_i=1^n e^+β E_iθ(E^* - E_i) ,
as an estimate for ⟨ f(E;E^*) ⟩. In contrast to Eq. <ref>,
this sample average is not sensitive to the high-energy tail, provided that
E^* is chosen such that energies below E^* are all sufficiently sampled,
cf. Fig. <ref>.
Indeed, if E^* were chosen to be very large, then Eq. <ref>
would suffer from strong fluctuation due to insufficient sampling.
On the other hand, if E^* were chosen to be very small, then the
number of samples that contribute effectively to Eq. <ref>
would be very small, which again increases its error.
This suggests that there is an optimal choice for E^* that
can minimize the relative error in f̅.
According to Eq. <ref>, ln Q is given by the difference between
ln V(E^*) and ln⟨ f(E;E^*) ⟩. We are now using
lnf̅ as an estimate for the second term. Since this is the term
that comes from the sample average (we will discuss ln V(E^*) in the next section),
it makes sense to choose an E^* that can minimize the standard deviation of
lnf̅. This is given by,
σ_M = √( 1/n · ( ⟨ f(E;E^*)^2 ⟩ - ⟨ f(E;E^*) ⟩^2 ) / ⟨ f(E;E^*) ⟩^2 ) ,
which is just the relative standard error of the sample mean of f(E;E^*), with n being the number of samples.
To minimize this error, we find E^* such that
∂σ_M^2/∂ E^* =
∂/∂ E^*[ 1/n( ⟨ f(E;E^*)^2 ⟩/⟨ f(E;E^*) ⟩^2 -1 ) ]
= 0 .
Expressing the expectation values in the above equation as integrals over energy space
by utilizing the density of states g(E), we have
⟨ f(E;E^*) ⟩ = ∫ e^+β Eθ(E^*-E) g(E) e^-β E/Q d E = 1/Q∫θ(E^*-E) g(E) d E ,
and
⟨ f(E;E^*)^2 ⟩ = ∫ e^+2β Eθ(E^*-E)^2 g(E) e^-β E/Q d E = 1/Q∫ e^+β Eθ(E^*-E) g(E) d E .
Note that in the above equation, we have used the property of the Heaviside
function that θ^2 = θ. Further, the derivative of the Heaviside
function gives the Dirac delta function,
∂/∂ E^*θ(E^*-E) = δ(E^*-E) ,
which is only non-zero when E = E^*.
Using Eqs. <ref>-<ref>, Eq. <ref> results
in the condition
0 = 1/⟨ f(E;E^*) ⟩^2∂/∂ E^*⟨ f(E;E^*)^2 ⟩ - 2 ⟨ f(E;E^*)^2 ⟩/⟨ f(E;E^*) ⟩^3∂/∂ E^*⟨ f(E;E^*) ⟩
= 1/⟨ f(E;E^*) ⟩^2[
g(E^*)/Q e^+β E^* -
2 ⟨ f(E;E^*)^2 ⟩/⟨ f(E;E^*) ⟩g(E^*)/Q] ,
or after simplification,
e^+ β E^* = 2 ⟨ f(E;E^*)^2 ⟩/⟨ f(E;E^*) ⟩ .
According to Eq. <ref>, E^* depends on the expectation values of
f(E;E^*) and f(E;E^*)^2. Both can be estimated from
their respective sample averages. However, as both also depend on
E^*, this equation must be solved iteratively:
First, one collects n samples e.g. via Monte Carlo sampling,
and saves the trajectory and the associated energies E_i.
Then, one calculates f̅ via Eq. <ref>
and similarly f^2, using an initial guess for E^*,
e.g. the maximum energy encountered during sampling.
Afterwards, E^* is updated via
E^* = 1/β( ln 2 + lnf^2 - lnf̅) ,
and this process is repeated until E^* converges.
Notably, each iteration can employ the same trajectory; one
merely has to update E^* and recalculate the sample averages,
which is computationally cheap.
Finally, one can use the optimized E^* to estimate
⟨ f(E;E^*)⟩ and σ_M from
f̅ and f^2.
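The iterative determination of E^* described above can be written compactly as follows; the routine takes the stored energies of an existing trajectory and returns the optimized E^*, the estimate of ln⟨f(E;E^*)⟩ and the predicted relative error σ_M. The convergence tolerance and the iteration cap are arbitrary choices, and for very large energies a constant shift can be subtracted before exponentiating to avoid overflow.

```python
import numpy as np

def optimize_Estar(energies, beta, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for E* (condition e^{beta E*} = 2 <f^2>/<f>),
    evaluated with sample averages over the stored trajectory energies."""
    E = np.asarray(energies, dtype=float)
    E_star = E.max()                        # initial guess: largest sampled energy
    for _ in range(max_iter):
        mask = E <= E_star
        f_bar = np.mean(np.exp(beta * E) * mask)          # sample average of f
        f2_bar = np.mean(np.exp(2.0 * beta * E) * mask)   # sample average of f^2
        E_new = (np.log(2.0) + np.log(f2_bar) - np.log(f_bar)) / beta
        if abs(E_new - E_star) < tol:
            E_star = E_new
            break
        E_star = E_new
    mask = E <= E_star                      # final averages at the converged E*
    f_bar = np.mean(np.exp(beta * E) * mask)
    f2_bar = np.mean(np.exp(2.0 * beta * E) * mask)
    sigma_M = np.sqrt((f2_bar / f_bar**2 - 1.0) / E.size)
    return E_star, np.log(f_bar), sigma_M
```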
§.§ Calculating V(E^*)
With E^* determined, we can turn to the calculation of the volume term
V(E^*). According to Eq. <ref>, V(E^*) is the
volume of the coordinate space with E < E^*.
For simple low-dimensional models, such as a harmonic oscillator or
Müller-Brown potential, this volume can be calculated directly
via “binning”, i.e. dividing the space into small bins,
assigning each sample to its corresponding bin, and simply
counting the number of occupied bins.
However, as the dimensionality increases, two problems arise.
First, no trajectory can cover the entire admissible coordinate space via binning,
so calculating V(E^*) by this or other naïve integration
approaches is not possible.
Second, V(E^*) soon becomes very small in comparison with the total volume
of the coordinate space, so a simple Monte Carlo integration would also be
insufficient to address this issue.
Fortunately, the nested sampling algorithm <cit.> was designed
to tackle such a problem.
Here, we employ a slightly modified version of nested sampling, making it more
suitable for the purpose of Eq. <ref>.
Briefly, the implemented algorithm is as follows:
0. Randomly generate N “walkers” with energies smaller than a starting energy ceiling E_0. Each walker is an independent copy of the system that will be propagated in the course of the algorithm. The probability distribution for the initial walkers must be uniform over the coordinate space. The starting energy ceiling in principle should be the highest potential energy possible, but for practical purposes it is normally set to a very large value, such as 10^12 k_BT. The initial volume V_0 is then given by the volume of the entire coordinate space, V_0 = L^3N.
1. Select a fraction p (0 < p < 1). This determines the new energy ceiling for the current (i-th) iteration, E_i = p · E_i-1. Use a simple Monte Carlo integration to calculate the volume V(E_i), i.e. the volume of coordinate space with energy smaller than E_i. Assuming that n_i walkers have energies below E_i, we have V(E_i) = (n_i/N) · V(E_i-1).
2. Relax the walkers whose energies are larger than E_i, and propagate them freely below this energy, until they represent a sample with uniform probability over the coordinate space with energy E < E_i. Rejection sampling is employed for this step.
3. Repeat Steps 1 and 2 until E_i < E^* is reached. Since E_i is unlikely to hit E^* exactly, interpolation may be required to determine V(E^*) from V(E_i) and V(E_i-1). A schematic implementation is sketched below.
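The following sketch implements Steps 0-3 for a generic potential U acting on d-dimensional configurations confined to [0, L]^d. Two simplifications are made for brevity and are not part of the algorithm as described: walkers above the new ceiling are discarded rather than relaxed back below it, and the last ceiling is clamped to E^* instead of interpolating between V(E_i) and V(E_i-1); the walker count, fraction p, step size and the assumption E^*>0 are likewise illustrative.

```python
import numpy as np

def estimate_log_volume(U, d, L, E_star, n_walkers=200, p=0.5, n_moves=50,
                        E0=1e12, step=0.1, seed=None):
    """Iterative Monte Carlo estimate of ln V(E*), cf. Steps 0-3 above."""
    rng = np.random.default_rng(seed)
    # Step 0: walkers uniform over the box, below the starting ceiling E0; V_0 = L^d.
    walkers = rng.uniform(0.0, L, size=(n_walkers, d))
    energies = np.array([U(w) for w in walkers])
    walkers, energies = walkers[energies < E0], energies[energies < E0]
    log_V, E_prev = d * np.log(L), E0
    while E_prev > E_star:
        E_i = max(p * E_prev, E_star)                     # Step 1: lower the ceiling
        inside = energies < E_i
        log_V += np.log(inside.sum() / energies.size)     # V(E_i) = (n_i/N) V(E_{i-1})
        walkers, energies = walkers[inside], energies[inside]
        for _ in range(n_moves):                          # Step 2: flat sampling below E_i
            trial = np.clip(walkers + rng.normal(0.0, step, walkers.shape), 0.0, L)
            E_trial = np.array([U(w) for w in trial])
            accept = E_trial < E_i                        # rejection sampling
            walkers[accept], energies[accept] = trial[accept], E_trial[accept]
        E_prev = E_i
    return log_V                                          # ln V(E*)
```

Combined with the routine for E^* above, ln Q then follows directly from ln Q = ln V(E^*) - ln⟨f(E;E^*)⟩.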
Compared to the original nested sampling, our implementation is more akin
to an iterative Monte Carlo integration. Our approach differs
from nested sampling in three aspects. First and most importantly,
nested sampling
throws away a fixed number of walkers in each iteration,
leading to a fixed ratio of volume truncation.
For instance, if one throws away the walker with the highest
energy in each iteration, then at the i-th iteration, nested sampling
yields a volume V(E_i) = (1/N)^i · V_0, where the energy E_i is
determined by the energy of the walker that is currently thrown away.
In contrast, our algorithm would yield a volume V(E_i) = ∏ (n_i/N) · V_0.
Thus, any sampling fluctuation reflects on E_i in nested sampling,
whereas it reflects on V(E_i) in our implementation.
Second, nested sampling does not "relax" the walkers as we did in Step 2.
Rather, it duplicates a walker with energy smaller than E_i randomly
and propagates the cloned walker until it becomes uncorrelated from the original one.
This operation seems to be more efficient than ours, but as our
calculation is truncated at E^* rather than proceeding to the lowest potential
energy, the number of iterations required is much smaller.
Third, when applying nested sampling to integrate over the Boltzmann factor,
more walkers are needed in order to better resolve the density of states in energy.
Since PFE does not require any such integration, we can use far fewer walkers or an even
more aggressive energy truncation to calculate V(E^*).
§ RESULTS AND DISCUSSION
§.§ Harmonic Oscillator
Our first example is the one-dimensional harmonic oscillator potential,
U(x)=kx^2/2. Here we set the force constant as k=300 and k_BT=0.59616.
The potential energy curve and the corresponding distributions are depicted in Fig. <ref>(a).
In this system, we employ a simple Metropolis Monte Carlo algorithm for sampling, utilizing a total
of 10^6 steps and a step size of 0.1.
The trajectory and sampled potential energy are recorded every 10 steps.
One hundred independent samplings are conducted and processed using PFE (Eq. <ref>)
to derive ln Q. The corresponding average and standard deviation (fluctuation) are then depicted
in Fig. <ref>(b).
Here, different E^* are selected to demonstrate their impact on PFE. The term a% indicates
the criterion for selecting E^*: it is chosen such that for the top a% of samples in energy,
the corresponding value of f(E;E^*) is zero.
The outcomes obtained from the optimal E^* (calculated using Eq. <ref>) are labeled as
"opt" (red curve). As expected, applying PFE (Eq. <ref>) with the best choice of E^*
(Eq. <ref>) leads to a converged result after approximately 10^4 steps, with the sampling
fluctuation diminishing rapidly with the step count.
Conversely, the ln Q calculated using the maximum energy sampled as E^* (black curve, 0%)
exhibits substantial fluctuation and fails to converge. However, upon applying the cutoff
through the Heaviside function, PFE results converge towards the exact solution (dashed line),
albeit with slightly inferior performance compared to using the optimal E^*.
This behavior aligns with the notion that high-energy microstates are often undersampled,
leading to convergence challenges.
The standard error of ln⟨ f(E;E^*) ⟩ can also be calculated as shown in
Eq. <ref> and compared with the fluctuation determined from the standard deviation of
ln Q across the 100 sampling repetitions.
These results are displayed in Fig. <ref>(c).
With the exception of the 0% cutoff, the predicted standard error closely matches the fluctuation,
and increasing the sample size results in diminished error.
This intriguing finding indicates that the fluctuation primarily arises from
inaccuracies in computing ln⟨ f(E;E^*) ⟩, as the ln V(E^*) can be fairly
accurately computed from the trajectory histogram using 100 bins.
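To make the workflow explicit, the short Python sketch below repeats this example on a reduced scale: a plain Metropolis walk samples the Boltzmann distribution of U(x)=kx^2/2, the trajectory average of the truncated inverse Boltzmann factor f(E;E^*)=e^{βE}Θ(E^*-E) is formed, and V(E^*) is read off a 100-bin trajectory histogram, so that ln Q = ln V(E^*) - ln⟨f(E;E^*)⟩. Here E^* is set from a simple energy percentile rather than from the optimal choice of Eq. (<ref>), and the run is shorter than the 10^6 steps used above, so the snippet illustrates the procedure rather than reproducing the reported curves.
import numpy as np

k, kBT = 300.0, 0.59616
beta = 1.0 / kBT
U = lambda x: 0.5 * k * x**2

rng = np.random.default_rng(0)
n_steps, step = 200_000, 0.1                 # shorter than the 10^6 steps of the text
x, samples = 0.0, []
for i in range(n_steps):
    xp = x + rng.uniform(-step, step)
    if rng.random() < np.exp(-beta * (U(xp) - U(x))):   # Metropolis acceptance
        x = xp
    if i % 10 == 0:                          # record every 10 steps, as in the text
        samples.append(x)
samples = np.array(samples)

E = U(samples)
e_star = np.quantile(E, 0.95)                # placeholder cutoff, not the optimal E*
f = np.exp(beta * E) * (E < e_star)          # inverse Boltzmann factor with Heaviside cutoff
counts, edges = np.histogram(samples[E < e_star], bins=100)
V_star = (edges[1] - edges[0]) * np.count_nonzero(counts)   # populated-bin length
print("PFE   ln Q =", np.log(V_star) - np.log(f.mean()))
print("exact ln Q =", 0.5 * np.log(2.0 * np.pi * kBT / k))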
§.§ Double Well Potential
Next we consider a double well potential defined as follows,
U(x) = 16h/x_0^4 x^2 (x-x_0)^2 ,
where h is the barrier height and x_0 is the position of the second minimum.
In this example, k_BT is maintained at 0.59616, while h and x_0 are configured to 10k_BT
and 3, respectively, to introduce some sampling challenges. For this system, a total of 10^6
steps are sampled using both Monte Carlo and replica-exchange (RE) <cit.> techniques.
The latter is a typical enhanced sampling method that simulates multiple replicas with
various temperatures simultaneously. The coordinates between different replicas can be
swapped to enhance the sampling efficiency.
In the current calculation, 10 replicas are employed in total. The temperatures are linearly
scaled, with the lowest and highest k_BT values set to 0.59616 and 1.9872, respectively.
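The replica-exchange ingredient used here is the standard one: neighbouring replicas at inverse temperatures β_m and β_{m+1} periodically attempt to swap configurations with the Metropolis criterion min{1, exp[(β_m - β_{m+1})(E_m - E_{m+1})]}. The sketch below shows this acceptance rule together with the linear temperature ladder quoted above; the swap attempt frequency is left unspecified and should be regarded as an implementation detail.
import numpy as np

kBTs = np.linspace(0.59616, 1.9872, 10)      # 10 replicas, linear temperature ladder
betas = 1.0 / kBTs

def attempt_neighbour_swaps(xs, energies, betas, rng):
    # One sweep of neighbour swap attempts for replica exchange (parallel tempering).
    for m in range(len(betas) - 1):
        delta = (betas[m] - betas[m + 1]) * (energies[m] - energies[m + 1])
        if delta >= 0.0 or rng.random() < np.exp(delta):     # min{1, exp(delta)}
            xs[m], xs[m + 1] = xs[m + 1], xs[m]
            energies[m], energies[m + 1] = energies[m + 1], energies[m]
    return xs, energies
Between swap attempts each replica is propagated by ordinary Metropolis moves at its own temperature.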
The potential curve and the corresponding distributions are depicted in Fig. <ref>(a).
Due to the substantial barrier, Monte Carlo sampling encounters difficulties in traversing the barrier,
as evident in the blue distribution. In contrast, RE sampling facilitates frequent barrier crossings,
resulting in a symmetric spatial distribution even for the replica with the lowest temperature,
as observed in the orange distribution. The energy histograms exhibit similarities between both
sampling methods, except in the high-energy region.
Shown in Fig. <ref>(b) are ln Q calculated using PFE.
With the exception of the curve labeled “RE", all results are obtained from an extensive
Monte Carlo simulation. Once again, the optimal E^* (opt) provides the most accurate estimation
of ln Q, although the result is not yet converged after 10^6 sampling steps.
Owing to the sampling difficulty posed by the high barrier, the ln Q plot displays a plateau
around 10^5 steps (ln Q=-0.9), as depicted by the red curve.
This plateau arises because the simulation struggles to surmount the barrier (Boltzmann factor
e^-10= 4.54*10^-5) in shorter sampling durations.
However, with longer sampling durations, simulations eventually overcome the barrier,
exploring the second well. Consequently, the fluctuation increases, and ln Q approaches
the exact solution.
In contrast, ln Q calculated with RE sampling and the optimal E^* shows an early
convergence well before 10^5 steps (orange curve).
This observation underscores how sampling difficulties can influence the convergence rate
of calculations. However, as enhanced sampling methods are extensively documented
in literature <cit.> and out of scope of this study, our discussion centers
on the behavior of PFE.
Interestingly, when the sampling remains confined to one well
(reflected in the plateau-like ln Q values), the deviation from the exact solution is around 0.7.
This magnitude is about the same as ln(2), implying that the primary source of error stems
from halving the size of V(E^*) when the simulation is trapped in one well.
Introducing enhanced sampling techniques here primarily aids in the accurate computation
of V(E^*), rather than that of ⟨ f(E;E^*) ⟩.
In Fig. <ref>(c), a comparison is made between the standard error of
ln⟨ f(E;E^*) ⟩ and the fluctuation (standard deviation) of ln Q.
Unlike the previous example of the harmonic oscillator, the fluctuation surpasses the standard error,
suggesting that calculating ln V(E^*) can introduce a significant error, even when directly assessed
from the trajectory.
Nonetheless, increasing the number of sampling steps can reduce the fluctuation, as
demonstrated by the orange curve in panel (b), where the fluctuation eventually dwindles to insignificance.
§.§ Müller-Brown Potential
Another often used model potential is the Müller-Brown potential, defined as follows:
U(x,y) = ∑^4_k=1 A_k exp[ a_k(x-x^0_k)^2+b_k(x-x^0_k)(y-y^0_k)+c_k(y-y^0_k)^2 ] + U_0 ,
with A=(-200,-100,-170,15), a=(-1,-1,-6.5,0.7), b=(0,0,11,0.6), c=(-10,-10,-6.5,0.7).
In the original Müller-Brown potential, U_0=0. Here we set U_0=147.70 to ensure
the potential minimum is at zero. The potential energy surface is shown in Fig. <ref>(a).
The potential features three local minima, with the highest barrier approximately at 107.
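For completeness, a direct transcription of the potential into Python is given below; the centre coordinates x^0 and y^0 are not listed above, so the values of the standard Müller-Brown parametrisation are used here as an assumption.
import numpy as np

A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])        # standard Müller-Brown centres (assumed)
y0 = np.array([0.0, 0.5, 1.5, 1.0])
U0 = 147.70                                  # shift placing the global minimum near zero

def muller_brown(x, y):
    # Shifted Müller-Brown potential U(x, y)
    dx, dy = x - x0, y - y0
    return float(np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2))) + U0

print(muller_brown(-0.558, 1.442))           # value near the deepest basin of the standard potential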
To evaluate the performance of PFE, three different temperatures are employed:
k_BT=100, k_BT=10, and k_BT=2, which correspond to a well-sampled trajectory,
a poorly sampled trajectory, and a trapped trajectory, respectively.
The trajectory histograms (proportional to the population) are depicted in Fig. <ref>(b-d).
It is important to note that in panels (b)-(d), the number of bins employed in each direction is 100,
but the bin width is automatically adjusted, resulting in varying count scales across the panels.
The Monte Carlo simulation is once more employed to compute ln Q (conducted over 10^7 steps with
data saved every 10 steps). The volume V(E^*) is determined based on the 2-dimensional
trajectory histogram, where the area of the populated bins is summed to give V(E^*).
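The histogram-based volume estimate amounts to a few lines of code; a possible sketch is the following, where the (x, y) frames with energy below E^* are binned on a 100×100 grid and V(E^*) is the summed area of the visited bins. The restriction to frames below E^* and the automatic bin ranges are our reading of the procedure, stated here as assumptions.
import numpy as np

def populated_bin_area(xy, energies, e_star, bins=100):
    # V(E*) as the summed area of 2D histogram bins visited with E < E*.
    # xy: (n_frames, 2) sampled coordinates; energies: (n_frames,) potential energies.
    below = energies < e_star
    h, xedges, yedges = np.histogram2d(xy[below, 0], xy[below, 1], bins=bins)
    bin_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
    return bin_area * np.count_nonzero(h)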
Results are illustrated in Fig. <ref>. Like before, 100 independent trajectories are
processed to derive the mean and standard deviation (fluctuation). Various E^* values are
employed to showcase the effect of incorporating the Heaviside function to truncate the inverse
Boltzmann factor: PFE with the optimal E^* (opt) consistently yields the best results, while PFE
with the highest sampled energy (0%) performs the poorest.
That said, PFE with the optimal E^* steadily converges to the exact solution, exhibiting
smaller fluctuation compared to the other choices of E^*.
This underscores the significance of selecting E^* to minimize error (as per Eq. <ref>).
Noteworthy observations arise when contrasting the standard error of ln⟨ f(E;E^*) ⟩
with the fluctuation of ln Q: At low temperature (2k_BT, panel f), the sampling is notably
localized in space, enabling an accurate calculation of ln V(E^*).
Consequently, calculating ln V(E^*) scarcely
contributes to the error in ln Q, with data predominantly aligning along the plot's diagonal.
As the temperature rises (10k_BT, panel e), the sampling ventures beyond barriers to explore other
local minima. However, the sampling proves insufficient, as indicated by the histogram in Fig. <ref>(c).
This results in a larger fluctuation, since calculating V(E^*) based on the
trajectory can be less precise. Nonetheless, once the convergence is achieved, the fluctuation
diminishes significantly.
Finally, at sufficiently elevated temperatures (k_BT=100, panel d), the fluctuation and standard
error once again align, although the alignment is no longer diagonal.
This finding demonstrates that calculating V(E^*) entails inherent errors, although they are
diminishable with an increasing number of steps, as exemplified by the red curve in panel (a).
§.§ Lennard-Jones Particles
The last example to be discussed is a more realistic system: Lennard-Jones particles in a 3-dimensional
box with periodic boundary conditions. In this example, the box length is fixed at 25 Å and
Lennard-Jones particles are sequentially inserted into the box.
The parameters are taken from the OpenMM example (Ar atom) <cit.>:
ϵ = 0.238 kcal/mol, σ = 3.4 Å, with the potential shifted to zero at 3σ
for cutoff. The temperature is set to 120 K. The dispersion correction is disabled for simplicity.
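The pair interaction used in this example is thus the Lennard-Jones potential truncated and shifted to zero at 3σ; a minimal sketch is given below (energies in kcal/mol, distances in Å), where the minimum-image treatment of the periodic box is an assumption made for illustration.
import numpy as np

EPS, SIG, BOX = 0.238, 3.4, 25.0             # kcal/mol, Angstrom, Angstrom
RCUT = 3.0 * SIG

def lj_shifted(r):
    # Lennard-Jones pair energy, shifted so that it vanishes at r = 3*sigma.
    if r >= RCUT:
        return 0.0
    pair = lambda d: 4.0 * EPS * ((SIG / d) ** 12 - (SIG / d) ** 6)
    return pair(r) - pair(RCUT)

def total_energy(coords):
    # Sum of shifted pair energies for an (n, 3) array of positions, minimum-image convention.
    n, u = len(coords), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[i] - coords[j]
            d -= BOX * np.round(d / BOX)
            u += lj_shifted(float(np.linalg.norm(d)))
    return u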
Once again we calculate ln Q using PFE with the optimal E^*. A basic Monte Carlo simulation
is employed for data collection. The simulation consists of a 50000-step equilibration
with a step size of 1.0 and a 10^6-step production run, with the data saved every 1000 steps.
The volume V(E^*) is determined using the modified nested sampling algorithm (an iterative
Monte Carlo integration method). This approach involves 200 walkers, 2000 equilibration steps,
and an energy fraction 0.99 at each iteration. The relaxation steps and the number of walkers
to be relaxed are dynamically adjusted during execution.
Moreover, the modified nested sampling algorithm is utilized for the direct integration of ln Q,
using identical parameters as those for V(E^*) calculations.
Additionally, the standard mFEP-MBAR method <cit.> is applied for ln Q calculation
as a reference.
Briefly, mFEP-MBAR computes the free energy difference between two states, e.g. ln (Q_N/Q_{N-1}),
where N stands for the number of particles. Since Q_1 is simply the box volume, subsequent values
ln Q_2, ln Q_3, ..., ln Q_N are derived by accumulating the mFEP-MBAR outcomes.
The mFEP-MBAR calculation is conducted using molecular dynamics and alchemical functions implemented
in OpenMMTools <cit.>.
Simulation parameters are meticulously selected to ensure a comparable level of precision to PFE.
This leads to a maximum of 21210000 steps in total (21 windows, each comprising 10000 equilibration steps
and a 1000000-step production run).
Results of ln Q are depicted in Fig. <ref>(a). This time, 10 independent calculations are
carried out for each method to determine the mean and the corresponding standard deviation (fluctuation).
All three methods yield identical ln Q, which is a straight line that increases with the number of
particles. This outcome is anticipated since the slope provides the chemical potential of the system.
It is noteworthy that the standard deviation is significantly smaller compared to ln Q, implying that
a single calculation is adequate to obtain a precise ln Q.
The multiple independent calculations are primarily conducted to assess the magnitude of ln Q
fluctuations, which are slightly larger than the anticipated standard error, as depicted in
Fig. <ref>(b).
Considering performance metrics, a comparison is made based on the number of steps required to
complete the calculations. Given the use of distinct sampling techniques, viz. Monte Carlo for
PFE and molecular dynamics for mFEP-MBAR, the average number of sampling steps over 10 computations
is reported.
Fig. <ref>(c) presents the step count necessary for ln Q calculation.
While a direct numerical integration over the Boltzmann factor (labeled NS, blue) is expected
to be computationally expensive, it may seem surprising that mFEP-MBAR demands even more steps.
This discrepancy arises because mFEP-MBAR is designed to efficiently compute the free energy difference
between two states, such as ln (Q_N/Q_N-1). To get ln Q,
the free energy differences are accumulated
over N, resulting in a higher total step count.
The more striking comparison is the number of steps needed to compute
ln (Q_N/Q_{N-1}), i.e., the free energy difference between two states.
In this instance, PFE's performance slightly surpasses that of mFEP-MBAR, as evident from
the red and black curves in Fig. <ref>(d).
This intriguing outcome highlights that calculating the absolute ln Q at a given temperature can be achieved
with essentially the same computational effort as a standard free energy difference calculation.
Last but not least, we discuss the stability of PFE as an estimator.
As shown in Fig. <ref>(a), ln Q is calculated using PFE with varying
fractions of energy for evaluating V(E^*) in each iteration. All other parameters are kept
as previously described. Despite the averages of ln Q being nearly identical across different
parameters in 10 calculations, the fluctuation of ln Q actually increases, see panel (b).
Additionally, Fig. <ref>(c) indicates that adjusting the parameter can marginally
enhance the performance of PFE, although the reduction in effort is modest.
§ CONCLUSION
In this article we proposed a partition function estimator that is composed of an
inverse Boltzmann factor and a Heaviside step function, with a parameter E^*
that can be determined from minimizing the square of the sampling error.
This results in a working equation that evaluates the expectation value of the proposed
function, along with a volume term V(E^*) to account for the energy cutoff
imposed by the Heaviside function. We demonstrate that the volume term can be directly
evaluated from the trajectory for 1- and 2-dimensional examples, while
a modified nested sampling integration method is proposed for examples with higher dimensions.
As a proof of concept, the performance of the estimator is tested using model examples,
including the harmonic oscillator potential, the double-well potential, the Müller-Brown potential,
and Lennard-Jones particles. Good agreement with the numerically exact solution is found across
all the test cases. While this result is anticipated, in the case of Lennard-Jones particles,
a performance comparable to, or even better than, the standard FEP method is noted.
Currently, PFE is not yet ready to handle large biological systems like the standard mFEP-MBAR
can, nor can it calculate the partition function at multiple temperatures in one single run
as nested sampling can. Yet, PFE remains intriguing and warrants further development as it
offers a fresh perspective on addressing a long-standing challenging task.
This work is supported by the Kobilka Institute of Innovative Drug Discovery (KIIDD), School of Medicine, Chinese University of Hong Kong (Shenzhen), and the Shenzhen Science, Technology, and Innovation Commission (SZSTI), grant number JCYJ20230807114206014.
YCC acknowledges the Royal Society for their support of the precursor to this work (NF171278). Special thanks to Prof. Guanglian Li and Prof. Ye Mei for their valuable discussions, to Prof. Livia B. Pártay for a fruitful discussion on nested sampling, as well as to Dr. Lantian Yao, Prof. Tzong-Yi Lee, and staff at KIIDD for providing a supportive working environment that nurtured this work.
|
http://arxiv.org/abs/2409.02658v1 | 20240904123241 | Pinning control of chimera states in systems with higher-order interactions | [
"Riccardo Muolo",
"Lucia Valentina Gambuzza",
"Hiroya Nakao",
"Mattia Frasca"
] | nlin.PS | [
"nlin.PS",
"math.OC",
"nlin.AO",
"nlin.CD",
"physics.comp-ph"
] |
corresponding author: [email protected]
Department of Systems and Control Engineering, Tokyo Institute of Technology, Tokyo 152-8552, Japan
Department of Electrical, Electronics and Computer Science Engineering, University of Catania, 95125 Catania, Italy
Department of Systems and Control Engineering, Tokyo Institute of Technology, Tokyo 152-8552, Japan
International Research Frontiers Initiative, Tokyo Institute of Technology, Kanagawa 226-8501, Japan
Department of Electrical, Electronics and Computer Science Engineering, University of Catania, 95125 Catania, Italy
Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti”, IASI-CNR, 00185 Roma, Italy
§ ABSTRACT
Understanding and controlling the mechanisms behind synchronization phenomena is of paramount importance in nonlinear science. In particular, chimera states, patterns in which order and disorder coexist simultaneously, continue to puzzle scholars, due to their elusive nature. Recently, it has been shown that higher-order interactions greatly promote the onset of chimera states, which are easier to find and more resilient when the system units interact in groups. In this work, we show that the higher-order framework is fertile not only for the emergence of chimera states, but also for their control. Via pinning control, a technique consisting in applying a forcing to a subset of the nodes, we are able to trigger the emergence of chimera states with only a small fraction of controlled nodes, in striking contrast with the case without higher-order interactions. We show that our setting is robust for different higher-order topologies and types of pinning control and, finally, we give a heuristic interpretation of the results via phase reduction theory. Our numerical and theoretical results provide further understanding of how higher-order interactions shape nonlinear dynamics.
Keywords Chimera states, pinning control, higher-order interactions, synchronization, hypergraphs
Pinning control of chimera states in systems with higher-order interactions
Mattia Frasca
September 9, 2024
===========================================================================
§ INTRODUCTION
Understanding the mechanisms underlying self-organization phenomena on networks is a paramount task in the study of complex systems, which is complemented by the development of efficient methods to control such dynamics <cit.>. This is particularly relevant in the framework of synchronization dynamics, where, depending on the applications, it is fundamental to achieve a synchronized state, e.g., in power grids <cit.>, or to break it into an asynchronous one, e.g., in neuroscience <cit.>, where synchronization is often associated with pathological states. The network framework remains relevant in the modeling of complex systems; nonetheless, over the past years scholars have started considering more complex structures such as hypergraphs and simplicial complexes <cit.>. This is because networks do not capture interactions beyond the pairwise setting, i.e., two-by-two, while many systems have shown evidence of higher-order, i.e., group, interactions <cit.>. Examples come from, but are not limited to, neuroscience <cit.>, ecology <cit.> and social behaviors <cit.>. Higher-order interactions have been proven to greatly affect the collective behavior, for instance, in random walks <cit.>, synchronization dynamics <cit.>, contagion <cit.> and pattern formation <cit.>, to name just a few. Given the ubiquity of group interactions <cit.>, it is important to understand how to control the dynamics in such systems. While significant progress has been made in the control of networks <cit.>, the investigation into the control of systems with higher-order interactions has only recently begun <cit.>.
The focus of this work is an intriguing type of synchronization pattern called chimera state, which consists of the coexistence of coherent and incoherent domains of oscillations. Coexistence of coherence and incoherence was first observed by Kaneko for globally coupled chaotic maps <cit.> and was then found in several numerical settings with global <cit.> and nonlocal <cit.> coupling schemes. Despite all the previous research on the subject, the work that historically is considered to be the first to characterize the emergence of chimera states is the well-known paper by Kuramoto and Battogtokh <cit.>, made popular by a subsequent work of Abrams and Strogatz, who, with a creative intuition, compared the coexistence of different dynamical states to the chimera, a mythological creature in which parts of different animals coexisted <cit.>. Besides the purely theoretical relevance of such an astonishing phenomenon, a great part of the interest has been generated by the existence of analogous patterns in real systems: for instance, in Josephson junctions <cit.> and electronic circuits <cit.>, laser <cit.>, mechanical <cit.> and nano-electromechanical systems <cit.>, to name a few. Particular attention has been devoted to neuroscience <cit.>, specifically to unihemispheric sleeping patterns in animals <cit.>. Except for some particular configurations in which robust chimera patterns are induced by the network structure <cit.>, in both numerical and experimental settings chimera states are often elusive and characterized by a rather short lifetime. Hence, there is a vast literature on networked systems devoted to identifying settings (e.g., parameter ranges, network topologies, coupling configurations, etc.) that make such patterns easier to find and longer lived. Moreover, after the first definition by Kuramoto and Battogtokh <cit.>, several kinds of chimera states have been defined, e.g., amplitude chimeras <cit.> or phase chimeras <cit.>. We will not thoroughly discuss such studies, inviting the interested reader to consult a book <cit.> and a review <cit.> on the subject. In the context of higher-order interactions, chimera states have been proven to be enhanced in some pioneering works considering both pairwise and higher-order interactions <cit.>. This claim was further corroborated in <cit.> for systems with pure higher-order interactions, where the emergence of chimera states on higher-order topologies is compared with the absence of such patterns when the interactions are pairwise.
In this work, we consider the setting studied in <cit.> and implement a control to further trigger the emergence of chimera states. Our control approach will rely on the so-called pinning control, a technique used to drive networks onto a desired dynamical state by using a control input applied to a small subset of nodes <cit.>. Such a technique has been successfully used in the framework of opinion dynamics <cit.>, epidemics <cit.>, pattern formation <cit.> and synchronization dynamics <cit.>, to name a few. The latter includes the control of chimera states with pairwise interactions, which we hereby extend to the higher-order framework. Indeed, in <cit.> it was shown that it is possible to trigger the emergence of chimera states via pinning. Nonetheless, to achieve such a task, at least half of the network nodes need to be controlled. In what follows, after introducing the setting in Sec. <ref>, we show that higher-order interactions considerably facilitate the work of the controllers and chimera states can be obtained by controlling only a small fraction of the nodes. Such results are shown in Sec. <ref> for two different kinds of pinning approaches, which we named additive pinning and parametric pinning. Moreover, we show that, rather than the number of nodes, what matters is the size of the hyperedges, i.e., the group of nodes interacting with each other. Then, before the discussion of some potential future directions, we give a heuristic interpretation of the results based on the phase reduction theory <cit.> in Sec. <ref>.
§ THE MODEL AND THE SETTING
In this Section, we introduce the system exhibiting chimera states, which is analogous to that studied in <cit.>. We consider coupled Stuart-Landau oscillators, a paradigmatic model for the study of synchronization dynamics, given that it is the normal form of the supercritical Hopf-Andronov bifurcation <cit.>. The coupling takes place through pure higher-order interactions, namely, by means of a higher-order topology called nonlocal hyperring, which is a generalization of the nonlocal pairwise coupling <cit.>. The type of chimera state that we will hereby consider is that of phase chimeras, states first observed by Zajdela and Abrams <cit.>, which consist of oscillation patterns where the amplitude and the frequency of each oscillator are the same, but the phases exhibit a chimera behavior, i.e., a part of the oscillators have the same phase, while the other phases are distributed along the unit circle. The peculiarity of such a pattern is that, once obtained, it does not vary, because the frequency is the same for all the oscillators. Hence, we would observe the same exact pattern after each period. For this reason, we find the description given by Zajdela and Abrams, "frozen patterns of disorder", perfectly fitting. The reader can find a thorough characterization of these patterns in Refs. <cit.>. As a side note, let us mention that multitailed phase chimeras have only been found in the pairwise setting <cit.>, but not yet in the higher-order one. In what follows, every chimera state discussed and shown in the figures will be a phase chimera. For the sake of simplicity, we will refrain from using the word "phase" and will call them simply "chimeras".
§.§ Stuart-Landau oscillators coupled via nonlocal hyperrings
We consider a system made of N interacting Stuart-Landau units. In the absence of any interaction, each unit i of the system is described by the following equations
ẋ_i = α x_i - ω y_i -(x_i^2 + y_i^2 )x_i=f(x_i,y_i),
ẏ_i = ω x_i + α y_i - (x_i^2 + y_i^2 )y_i=g(x_i,y_i) ,
where α is a bifurcation parameter and ω is the frequency of the oscillators. Let us stress that the units are homogeneous, meaning that the parameters α and ω are the same for each and every system. Each isolated system exhibits a stable limit cycle for α>0, which is the case we will consider throughout this study.
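For reference, a minimal Python sketch of a single, uncoupled Stuart-Landau unit is reported below; the parameter values and the explicit Euler step are illustrative and are not those used in the simulations presented later.
import numpy as np

alpha, omega = 1.0, 2.0          # illustrative values; any alpha > 0 yields a stable limit cycle
dt, n_steps = 1e-3, 50_000

def stuart_landau(x, y):
    r2 = x ** 2 + y ** 2
    return alpha * x - omega * y - r2 * x, omega * x + alpha * y - r2 * y   # (f, g)

x, y = 1.0, 0.0
for _ in range(n_steps):
    fx, gy = stuart_landau(x, y)
    x, y = x + dt * fx, y + dt * gy

print("asymptotic amplitude:", np.hypot(x, y))   # relaxes to sqrt(alpha)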
To model the higher-order interactions we use a generalization of the links (or edges) called hyperedges, whose structure can be encoded by using adjacency tensors, a generalization of the adjacency matrices for networks <cit.>. From the literature dealing with simplicial complexes <cit.>, we adopt the convention that a hyperedge of (d+1) nodes (i.e., encoding a (d+1)-body interaction) is called a d-hyperedge. As an example, let us consider the third-order[Let us note that in tensor algebra the order (or rank) of a tensor is given by its indices, e.g., a scalar is a 0-rank tensor, a vector is a 1-rank tensor, a matrix is a 2-rank tensor, etc. Hence, A^(3)={A_ijkl^(3)} would be a 4-rank tensor and the adjacency matrix, given that it is a matrix, a 2-rank tensor. However, here we follow the notation that is most commonly used in the literature on higher-order interactions.] adjacency tensor (i.e., encoding 4-body interactions) A^(3)={A_ijkl^(3)}. We have that A_ijkl^(3)=1 if nodes i,j,k,l are part of the same hyperedge and 0 otherwise. This is the analogue of the adjacency matrix for networks, which is, indeed, a first-order adjacency tensor. The chosen higher-order topology is that of nonlocal d-hyperrings, a pure higher-order structure introduced in <cit.> and consisting of hyperedges of size d+1 (d is the order of the interaction) attached through one node and set in a circular structure. In Fig. <ref>, we depict a 3-hyperring, in panel a), together with its pairwise counterpart, namely, the clique-projected network (cpn) obtained by projecting each hyperedge into a clique having the same size, in panel b). Observe that a hyperring is a uniform hypergraph, the hyperedges all having the same size.
Following the analyses carried out in previous works <cit.>, we assume the coupling to involve only the first state variable of each oscillator, i.e., x. Let us stress that other coupling configurations can be considered, as discussed in Appendix <ref>. With the above assumption, the equations for systems (<ref>) coupled via a generic d-hyperring read
ẋ_i =f(x_i,y_i) + ε∑_j_1,...,j_d=1^N A_i,j_1,...,j_d^(d)(h^(d)(x_j_1,...,x_j_d) - h^(d)(x_i,...,x_i) ),
ẏ_i =g(x_i,y_i)+ ε∑_j_1,...,j_d=1^N A_i,j_1,...,j_d^(d)(h^(d)(x_j_1,...,x_j_d)- h^(d)(x_i,...,x_i) ),
where ε>0 is the coupling strength[Note that such a configuration involves only interactions of order d, i.e., (d+1)-body, hence it is not necessary to denote the coupling strength with ε_d.]. Because we consider identical oscillators, together with the fact that the coupling is diffusive-like, i.e., it vanishes when all oscillators are in the same state, system (<ref>) admits a fully synchronous solution. Such a coupling is a special type of non-invasive interaction <cit.>. Moreover, we will consider coupling functions such that the higher-order interaction cannot be decomposed into pairwise ones[It was shown by Neuhäuser et al. that the higher-order coupling functions need to be nonlinear, otherwise the many-body interaction can be decomposed into pairwise ones <cit.>. Subsequently, it was further shown that such an assumption may not be enough and, if the nonlinear functions h are the sum of nonlinear terms which separately account for each unit, e.g., h^(d)(x_j_1,...,x_j_d)=h(x_j_1)+...+h(x_j_d), they can still be decomposed into pairwise ones <cit.>.]. Eq. (<ref>) can be rewritten in compact form for each unit i as
Ẋ⃗̇_i=F⃗(X⃗_i)+𝔻∑_j_1,...,j_d=1^N A_i,j_1,...,j_d^(d)(H⃗^(d)(X⃗_j_1,...,X⃗_j_d)-H⃗^(d)(X⃗_i,...,X⃗_i)),
where X⃗_i=(x_i,y_i)^⊤, F⃗=(f,g)^⊤, H⃗^(d)=(h^(d),h^(d))^⊤ and 𝔻=ε D=ε[ 1 0; 1 0 ]. As stated previously, in Appendix <ref> we report additional results for different coupling matrices D.
To highlight the effects of higher-order interactions, we will perform a comparison between the dynamics on the hyperring and on its respective clique-projected network (cpn), as in <cit.>. The equations for the dynamics with pairwise interactions are
ẋ_i =f(x_i,y_i) + ε∑_j=1^N A_i,j^(1)(h^cpn(x_j) - h^cpn(x_i) ),
ẏ_i =g(x_i,y_i)+ ε∑_j=1^N A_i,j^(1)(h^cpn(x_j)- h^cpn(x_i) ),
where the coupling functions for the dynamics on the clique-projected network h^cpn is determined from its corresponding h^(d) (see below).
We will perform pinning control on hyperrings of 4 different orders, namely, 3-, 4-, 5- and 6-hyperrings, involving 4-, 5-, 6- and 7-body interactions, respectively. In line with other previous works, e.g., <cit.>, we will consider odd-degree polynomials as coupling functions. The coupling functions for each hyperring and its clique-projected network are the following
h^(3)(x_j_1,x_j_2,x_j_3)=x_j_1x_j_2x_j_3 , h^cpn(x_j)=x_j^3 ,
h^(4)(x_j_1,x_j_2,x_j_3,x_j_4)=x_j_1^2x_j_2x_j_3x_j_4 , h^cpn(x_j)=x_j^5 ,
h^(5)(x_j_1,x_j_2,x_j_3,x_j_4,x_j_5)=x_j_1x_j_2x_j_3x_j_4x_j_5 , h^cpn(x_j)=x_j^5 ,
h^(6)(x_j_1,x_j_2,x_j_3,x_j_4,x_j_5,x_j_6)=x_j_1^2x_j_2x_j_3x_j_4x_j_5x_j_6 , h^cpn(x_j)=x_j^7 ,
where the adjacency tensor accounts for all the permutations of the indexes. As an example, let us explicit the equations for the 4-body case (3-hyperring) and its corresponding clique-projected network. The equations for a 3-hyperring are the following
ẋ_i =α x_i - ω y_i -(x_i^2 + y_i^2 )x_i + ε∑_j_1,j_2,j_3^N A_i,j_1,j_2,j_3^(3)(x_j_1x_j_2x_j_3 - x_i^3),
ẏ_i =ω x_i + α y_i - (x_i^2 + y_i^2 )y_i+ ε∑_j_1,j_2,j_3^N A_i,j_1,j_2,j_3^(3)(x_j_1x_j_2x_j_3 - x_i^3 ),
while those for the clique-projected network are
ẋ_i =α x_i - ω y_i -(x_i^2 + y_i^2 )x_i + ε∑_j^N A_i,j(x_j^3 - x_i^3 ),
ẏ_i =ω x_i + α y_i - (x_i^2 + y_i^2 )y_i+ ε∑_j^N A_i,j(x_j^3 - x_i^3 ).
The equations for interactions of different orders can be constructed analogously by means of the coupling functions (<ref>) (see also the SM of <cit.>).
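As an illustration of how Eqs. (<ref>) are assembled in practice, the sketch below builds a 3-hyperring as a list of 4-node hyperedges sharing one junction node and evaluates the corresponding right-hand side. Since the coupling function is symmetric in its arguments, the sum over the adjacency tensor reduces, for each node and each of its hyperedges, to 3! identical contributions x_{j_1}x_{j_2}x_{j_3} - x_i^3, a combinatorial factor that could equivalently be absorbed into ε; the ring size and parameter values are placeholders.
import numpy as np
from math import factorial

def hyperring_3(n_edges):
    # Nonlocal 3-hyperring: n_edges hyperedges of 4 nodes, adjacent hyperedges sharing a junction node.
    edges = [[3 * k, 3 * k + 1, 3 * k + 2, 3 * k + 3] for k in range(n_edges)]
    edges[-1][-1] = 0                       # close the ring on node 0
    return 3 * n_edges, edges               # number of nodes, list of hyperedges

def rhs(t, z, alpha, omega, eps, edges, n):
    # Right-hand side of the Stuart-Landau system coupled on a 3-hyperring.
    x, y = z[:n], z[n:]
    r2 = x ** 2 + y ** 2
    coupling = np.zeros(n)
    for e in edges:
        for i in e:
            j1, j2, j3 = (k for k in e if k != i)
            coupling[i] += factorial(3) * (x[j1] * x[j2] * x[j3] - x[i] ** 3)
    dx = alpha * x - omega * y - r2 * x + eps * coupling
    dy = omega * x + alpha * y - r2 * y + eps * coupling
    return np.concatenate([dx, dy])
The function can be passed directly to a standard integrator such as scipy.integrate.solve_ivp.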
§.§ Hyperedge-based local order parameter
To quantify the synchronization of an ensemble of oscillators it is common to use the Kuramoto order parameter <cit.>, which gives a global measure of how much the oscillators are synchronized. However, chimera states involve the coexistence of coherent (i.e., synchronized) and incoherent (i.e., desynchronized) oscillators and the respective regions are localized. Hence, a global measure of the synchronization does not provide useful information on the chimera state. For this reason, scholars have proposed a local Kuramoto order parameter, which measures the synchronization in a given region of the network, by quantifying the differences between neighboring oscillators, as was done, for instance, in <cit.>. In the case of hyperrings, a natural extension of the local order parameter would need to account for the synchronization inside each hyperedge rather than some arbitrary neighborhood. Partially inspired by a work by Shanahan <cit.>, where the order parameter is defined with respect to the communities of a network, we hereby define a hyperedge-based local order parameter as follows:
ℛ_i^ℰ(t)=|1/ℰ_i∑_j∈ℰ_i e^𝕚ϑ_j(t)| ,
where 𝕚 is the imaginary unit, ϑ_j is the (time evolving) phase of the j-th oscillator and ℰ_i is the hyperedge(s) node i is part of. By taking as an example a 3-hyperring, Fig. <ref>a), we can see that, if node i is a junction node, then it is part of 2 different hyperedges and will have 6 neighboring nodes, while non-junction nodes will have only 3 neighbors. From that, we can observe that non-junction nodes will have the same hyperedge-based local order parameter ℛ_i^ℰ. The number of nodes with the same ℛ_i^ℰ in each hyperedge increases with the order of the hyperring: for instance, in 3-hyperrings they will be 2, while in 6-hyperrings they will be 5.
In analogy with the hyperedge-based local order parameter, for the clique-projected network we define a clique-based local order parameter as follows
ℛ_i^𝒞(t)=|1/𝒞_i∑_j∈𝒞_i e^𝕚ϑ_j(t)| ,
where 𝒞_i is the clique(s) node i is part of.
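A compact implementation of both local order parameters is sketched below; the neighbourhood ℰ_i is taken as the set of all nodes belonging to the hyperedge(s) containing i, node i included, which is how we read the normalisation in Eq. (<ref>).
import numpy as np

def local_order_parameter(theta, groups, i):
    # Hyperedge-based (or clique-based) local Kuramoto order parameter for node i.
    # theta: array of phases; groups: list of hyperedges (or cliques) as lists of node indices.
    members = sorted({j for g in groups if i in g for j in g})
    return np.abs(np.mean(np.exp(1j * theta[members])))
Passing the list of hyperedges gives ℛ_i^ℰ, while passing the cliques of the projected network gives ℛ_i^𝒞.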
§.§ Pinning control
Chimera states are often elusive patterns, emerging only for limited ranges of the parameters and specific initial conditions <cit.>. Higher-order interactions greatly enhance the possibility of observing such a behavior <cit.>. Nonetheless, ad hoc initial conditions remain a fundamental prerequisite for the chimera to emerge. Our goal is, hence, to induce chimera states in settings where they would not spontaneously appear. For this, we put to use a popular technique in control theory, called pinning control, which consists in externally acting on a subset of the nodes to drive the dynamics of the whole ensemble of nodes towards a desired state <cit.>, and has been successfully applied in the context of chimera states on networks <cit.>. In our setting, the pinning will act as a perturbation on a subset of the nodes, with the goal of developing a region of incoherence, while leaving the unperturbed nodes in their synchronous state.
In order to implement the control, we need to determine which nodes to pin and for how long. Hence, let us define the pinning time t_p, i.e., the time during which the control will be active, and the number of pinned nodes, N_p< N. We will denote all the nodes with the index i=1,...,N and the subset of pinned nodes with i_p=1,...,N_p. Let us observe that, in the context of pairwise interactions, a chimera state is obtained when about half of the nodes are controlled <cit.>. In Sec. <ref>, where we show the numerical results, the reader will appreciate the great advantage offered by the presence of higher-order interactions. The pinning control setting is schematically depicted in Fig. <ref>.
§.§.§ Control protocol I: additive pinning control
Let us now proceed with setting up the control protocol. The first type of pinning control is called additive, and it was successfully implemented in <cit.> to control chimera states with pairwise interactions. In such a setting, the pinned nodes receive an input in an additive fashion. The equations read
Ẋ⃗̇_i=F⃗(X⃗_i)+𝔻∑_j_1,...,j_d=1^N A_i,j_1,...,j_d^(d)(H⃗(X⃗_j_1,...,X⃗_j_d)-H⃗^(d)(X⃗_i,...,X⃗_i)) +U⃗_i,
where U⃗_i=0⃗ for non controlled nodes and, for i=i_p, U⃗_i_p=(u_i_p(t),u_i_p(t))^⊤, with
u_i_p(t)= λ_i_p[Θ(t)-Θ(t-t_p)],
where λ_i_p are parameters drawn from a uniform distribution over a given interval and Θ is the Heaviside step function, whose value is 1 when the argument is positive and 0 when it is null or negative. This way, the control is active as long as t≤ t_p.
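Operationally, the additive protocol only adds a constant random offset to the pinned nodes while the control is on; a minimal sketch of the input of Eq. (<ref>) is the following, where the choice of pinned nodes and the interval for λ are placeholders.
import numpy as np

def additive_pinning(t, n_nodes, pinned, lambdas, t_p):
    # Additive pinning input u_i(t): lambda_i on the pinned nodes for 0 < t <= t_p, zero otherwise.
    u = np.zeros(n_nodes)
    if 0.0 < t <= t_p:
        u[pinned] = lambdas
    return u

rng = np.random.default_rng(1)
pinned = np.array([0, 6, 12])                        # illustrative subset of pinned nodes
lambdas = rng.uniform(0.0, 2.0, size=pinned.size)    # placeholder interval for the inputs
The same scalar u_i(t) is added to both the ẋ_i and the ẏ_i equations, since U⃗_i_p=(u_i_p,u_i_p)^⊤.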
§.§.§ Control protocol II: parametric pinning control
The second type of pinning control we are going to implement consists in acting on the controlled nodes by modifying the parameters of the dynamical system. This protocol, which we will call parametric pinning control, is given by the following equations
Ẋ⃗̇_i=F⃗_i(X⃗_i)+𝔻∑_j_1,...,j_d=1^N A_i,j_1,...,j_d^(d)(H⃗(X⃗_j_1,...,X⃗_j_d)-H⃗^(d)(X⃗_i,...,X⃗_i)),
where F⃗_i= F⃗ for the nodes that are not pinned, while, for i≡ i_p=1,...,N_p, it is given by F⃗_i= (f_i_p,g_i_p)^⊤, where
f_i_p( x_i_p, y_i_p) = α x_i_p(t) - Ω_i_p(t) y_i_p(t) -(x_i_p^2(t) + y_i_p^2(t) )x_i_p(t),
g_i_p( x_i_p, y_i_p) = Ω_i_p(t) x_i_p(t) + α y_i_p(t) - (x_i_p^2(t) + y_i_p^2(t) )y_i_p(t) .
The frequency of the controlled nodes, Ω_i_p(t), is given by
Ω_i_p(t)=ωΘ(t-t_p)+ω_i_pΘ(t_p-t),
where ω_i_p is new frequency induced by the pinning and Θ is the Heaviside step function, whose value is 1 when the argument is positive and 0 when it is null or negative. This ensures that Ω_i_p=ω_i_p for t≤ t_p and it switches to ω as soon as the control is switched off. In our numerical implementation, the new frequencies ω_i_p will be drawn from a uniform distribution of a given positive interval. As a last remark, let us note that the control acts only on the frequency and not on the amplitude, because we have numerically verified that acting on the amplitude has no effect whatsoever.
§ NUMERICAL RESULTS
In this section we show the numerical results of our pinning approaches. We start by comparing the case of a 3-hyperring, where chimera states occur with both pinning approaches, with its corresponding clique-projected network, where chimera states do not emerge. Then, we will exploit the hyperring structure and develop a pinning strategy that maximizes the width of the incoherence region while the number of controlled nodes remains low. For the latter, we will show the results only for additive pinning, leaving the discussion of the analogous results obtained through parametric pinning in Appendix <ref>.
§.§ Comparison between higher-order and pairwise interactions
Let us proceed to test our pinning approaches to control the emergence of chimera states on a 3-hyperring and compare it with the clique-projected network case. In Figs. <ref> and <ref>, we show the results for the additive pinning on a hyperring and a clique-projected network, respectively, while, in Figs. <ref> and <ref>, we show the results for the parametric pinning on a hyperring and a clique-projected network, respectively. For both pinning approaches, we see that the control induces a chimera state when the topology is higher-order (Figs. <ref> and <ref>). Moreover, such a state persists for long integration times[In the figures, we show the temporal evolution until 1000 time steps, while the chimera persists until about 4000 time steps. After such integration time, the chimera state turns into an incoherent state where the variation between adjacent phases is smooth, but ℛ_i^ℰ is low. We have found this state to persist with constant ℛ_i^ℰ until 20000 time steps (the maximum integration time tested).]. These features can be visualized through the hyperedge-based local order parameter ℛ_i^ℰ, which shows that the nodes which have been controlled are not oscillating coherently with respect to their neighbors sharing the same hyperedge(s) and that such incoherence persists. On the other hand, the pairwise case does not yield chimeras (Figs. <ref> and <ref>). In fact, the initial decoherence induced by the control is quickly reabsorbed by the system and, although a clear trace of the pinning remains, the difference between the phases of adjacent oscillators is small and the variation smooth. This can be visualized through the clique-based order parameter ℛ_i^𝒞. Note that in <cit.> the latter state was distinguished from a chimera one through the total phase variation. Our approach based on a local order parameter is complementary.
Additionally, let us stress that all results hereby shown, regardless of the order of the hyperring, coupling and pinning approach, are due to the higher-order topology and no chimera states are found when performing the simulations with the same setting but on the corresponding clique-projected network, exactly as in the figures shown in this section. Note, though, that there is one particular exception discussed in Appendix <ref>, where the observed pattern is not due to the higher-order topology but due to the coupling configuration, and, in fact, it is found also in the corresponding pairwise system. Such results provide further evidence that higher-order interactions promote the presence of chimera states and are consistent with the existing literature <cit.>.
Let us conclude by pointing out that in previous works chimera states were obtained for specific values of the initial conditions, while random or uniform initial conditions did not yield the same result. In our numerical study, the initial conditions do not matter as long as the system starts on the synchronous solution, i.e., on the limit cycle of the Stuart-Landau oscillator (<ref>). Hence, we will choose the initial conditions to be uniform and without noise, i.e., (x_j^0=1,y_j^0=0) for every oscillator.
§.§ Scaling of the pinned subset with respect to the hyperring size
It is already remarkable that the higher-order framework favors the onset of chimera states with respect to the pairwise one, but here we also show that the percentage of nodes required to obtain a chimera state, by exploiting the interactions of the hyperring, is small. Hence, we set up a pinning protocol in which we try to maximize the number of nodes affected by one single controller. Due to the hyperring structure, one way is to pin every other junction node, so that each controlled node can, in principle, affect two hyperedges, as schematically shown in Fig. <ref>. To observe how the number of pinned nodes scales with the size of the hyperring, we keep constant the number of hyperedges, so that the total number of nodes increases, but the backbone of the structure remains unchanged. Given that each pinned node is part of 2 hyperedges, in principle, we are able to control all the nodes in these hyperedges. E.g., in a 3-hyperring, with each pinned node we would control 7 nodes, in a 4-hyperring 9 nodes, in a 5-hyperring 11 nodes and in a 6-hyperring 13 nodes. For brevity, we present here the results for the additive pinning. The results for the parametric pinning are similar and can be found in Appendix <ref>.
In Fig. <ref>, we show the results obtained with such a control scheme on d-hyperrings with d=3,4,5,6, i.e., 4-, 5-, 6- and 7-body interactions, where we have fixed the number of hyperedges. Indeed, we can observe that we obtain a chimera state by inducing a large region of incoherence (more than half of the nodes) with a control that involves only a small fraction of the nodes. Moreover, the pinned nodes N_p are kept constant for every structure, meaning that N_p does not scale with the number of nodes, but rather with the number of hyperedges, which allows us to control large structures with only a handful of nodes. The pinned nodes are ≃ 8.8% of the total nodes for the 3-hyperring (panels a) and b)), ≃ 6.6% of the total nodes for the 4-hyperring (panels c) and d)), ≃ 5.3% of the total nodes in the 5-hyperring (panels e) and f)) and ≃ 4.4% of the total nodes in the 6-hyperring (panels g) and h)). In the latter case, our pinning scheme does not work as well as for the other structures and the coupling strength needs to be significantly increased in order to observe a chimera state. However, we can observe from Fig. <ref>h) that the chimera is no longer stable and the front of incoherence enlarges. Let us stress that stable chimera states can be easily obtained also in 6-hyperrings by reducing the distance between the controlled nodes, as in the previous section, instead of the pinning scheme of Fig. <ref>.
Let us note that the parameters λ_i_p need to be all positive or all negative in order for this pinning scheme to yield persistent chimera states. When such random inputs are drawn in a symmetric interval with respect to 0, e.g., [-2,2] as in Fig. <ref>, a chimera state forms but vanishes at about 400 time units. On the other hand, when the control inputs have the same sign, e.g., [0,2] as in Fig. <ref>, the chimera is persistent until about 4000 time units, i.e., 10 times longer.
In Appendix <ref>, we show analogous results for the parametric pinning, except for the case of the 6-hyperring, where chimera states do not emerge by pinning every other junction node, even with stronger couplings. The fact that our results are robust with respect to different control approaches is a good indication of the pervasiveness of the phenomenon.
Let us conclude the Results Section by pointing out that the chimera behavior can be further enhanced by increasing the number of pinned nodes and/or reducing the distance between them. Moreover, by increasing the magnitude of the parameters λ_i_p, we can also obtain chimera states through controlling even fewer nodes than in the simulations hereby shown. However, we have presented a setting in which the parameters λ_i_p have a magnitude comparable with that of the other parameters involved, in order to make it suitable for applications.
§ HEURISTIC INTERPRETATION THROUGH PHASE REDUCTION THEORY
In this section we will give a heuristic interpretation of the results based on the phase reduction approach <cit.>, which consists of reducing a given oscillatory system to a phase, i.e., Kuramoto-type, model <cit.>. In a nutshell, starting from a system of N high-dimensional units in a limit cycle regime, e.g.,
Ẋ⃗̇_i=F⃗_i(X⃗_i)+ε∑_j=1^N A_ijG⃗_ij(X⃗_j,X⃗_i),
under given assumptions (see <cit.> for details), we can reduce to a system of N interacting phase oscillators of the form
ϑ̇_i=ω_i+ε∑_j=1^N A_i,jZ⃗(ϑ_i)·G⃗_ij(ϑ_j,ϑ_i),
where ω_i is the frequency of the i-th oscillator and Z⃗ is the phase sensitivity function. The key in the reduction process is to find an expression for Z⃗, which has an analytical expression only for some specific cases, among which the Stuart-Landau model <cit.> and weakly nonlinear oscillators <cit.>, while it needs to be obtained numerically for general oscillators.
The phase reduction theory has been applied also to systems with higher-order interactions <cit.>, obtaining higher-order versions of the Kuramoto model, which exhibit much richer behaviors than the pairwise one. Indeed, the first evidence of higher-order-induced exotic behaviors, which triggered the interest of the community towards this new framework, has come from higher-order phase models (although not derived through phase reduction) <cit.>. In what follows, we will apply the phase reduction to our model on a 3-hyperring and on the corresponding clique-projected network, Eqs. (<ref>) and (<ref>) respectively, to give an intuition of why it is easier to induce chimera behavior via pinning when the topology is higher-order.
For the Stuart-Landau model, the phase sensitivity function is Z⃗(ϑ)=(-sin(ϑ),cos(ϑ))^⊤ <cit.>. Let us consider system (<ref>) in polar coordinates, i.e., X⃗_i=(cos(ϑ_i),sin(ϑ_i)), and proceed with the reduction by computing the following
Z⃗(ϑ_i)·Ẋ⃗̇_i = sin^2(ϑ_i)ϑ̇_i+cos^2(ϑ_i)ϑ̇_i=
-αcos(ϑ_i)sin(ϑ_i)+ωsin^2(ϑ_i)+cos(ϑ_i)sin(ϑ_i)+ωcos^2(ϑ_i)+αcos(ϑ_i)sin(ϑ_i) -cos(ϑ_i)sin(ϑ_i)+
ε∑_j_1,j_2,j_3^N A_i,j_1,j_2,j_3^(3)( cos^3(ϑ_i)sin(ϑ_i)-cos(ϑ_j_1)cos(ϑ_j_2)cos(ϑ_j_3)sin(ϑ_i)+cos(ϑ_j_1)cos(ϑ_j_2)cos(ϑ_j_3)cos(ϑ_i)-cos^4(ϑ_i) ),
which gives
ϑ̇_i=ω+ε∑_j_1,j_2,j_3^N A_i,j_1,j_2,j_3^(3)Φ( ϑ_i, ϑ_j_1, ϑ_j_2, ϑ_j_3 ),
where
Φ( ϑ_i, ϑ_j_1, ϑ_j_2, ϑ_j_3 )= cos^3(ϑ_i)sin(ϑ_i)-cos(ϑ_j_1)cos(ϑ_j_2)cos(ϑ_j_3)sin(ϑ_i)+cos(ϑ_j_1)cos(ϑ_j_2)cos(ϑ_j_3)cos(ϑ_i)-cos^4(ϑ_i).
Observe that ω is the same for all the oscillators because we started from identical Stuart-Landau systems. If we apply the same procedure to the system on the clique-projected network, i.e., Eq. (<ref>), we obtain
ϑ̇_i=ω+ε∑_j^N A_i,jΨ( ϑ_i, ϑ_j ),
where
Ψ( ϑ_i, ϑ_j ) =(cos(ϑ_i)-sin(ϑ_i))(cos^3(ϑ_j)-cos^3(ϑ_i)).
Through the averaging method <cit.>, the 4-body coupling (<ref>) can be approximated as
Φ( ϑ_i, ϑ_j_1, ϑ_j_2, ϑ_j_3 ) ≃3/8( sin(ϑ_j_1+ϑ_j_2+ϑ_j_3-3ϑ_i) + cos(ϑ_j_1+ϑ_j_2+ϑ_j_3-3ϑ_i) -1),
while the pairwise coupling (<ref>) as
Ψ( ϑ_i, ϑ_j ) ≃3/8( sin(ϑ_j-ϑ_i)+cos(ϑ_j-ϑ_i)-1 )= √(2)3/8sin(ϑ_j-ϑ_i+π/4)-3/8.
The coupling given by Eq. (<ref>) steers the system towards synchronization, because Ψ( ϑ_i, ϑ_j ) has the form of the well-known Kuramoto-Sakaguchi coupling, i.e., sin(ϑ_j-ϑ_i+α), which is known to be attractive for |α|<π/2 <cit.>. On the other hand, Eq. (<ref>) allows for a much richer dynamics, given that there are many more combinations of the phases and the coefficients such that the coupling term vanishes, as is the case for the higher-order Kuramoto model <cit.>. This fact gives an intuition not only of the much richer dynamics observed when higher-order interactions are present <cit.>, but also of why higher-order systems favor the presence of chimera states and it is much easier to induce such behavior via pinning, compared to the pairwise case. Moreover, given the form of the higher-order coupling terms, once such a state is achieved, it is more difficult for the higher-order system to steer towards synchronization, which provides a qualitative explanation of why the chimera states are also persistent. Let us remark that the first intuition of this behavior was given by Zhang et al. in <cit.>, where it was shown, for the higher-order Kuramoto model, that, when the system leaves the attraction basin of the synchronous state, it is more difficult to synchronize again because higher-order interactions cause a shrinking of such attraction basin. In our case, the control pushes the system away from the synchronous solution, creating a chimera state, and the higher-order interactions favor the persistence of this state.
As a further corroboration of the results shown in this work and of the correctness of our phase reduction approach, let us conclude this section by applying our pinning protocols to the reduced phase models and showing that the outcome confirms our claims. We will show the results of additive pinning (<ref>), leaving those for the parametric approach (<ref>) to Appendix <ref>.
In Figs. <ref> and <ref>, we show the result of the additive pinning procedure on a 3-hyperring and its corresponding clique-projected network, respectively. The setting is analogous to that of Figs. <ref> and <ref>, and so are the results: while the control induces a chimera state in the higher-order phase reduced model, no chimera emerges in the pairwise setting. The same results are obtained also when performing parametric pinning control (see Appendix <ref>).
§ DISCUSSION
In this work we have shown how pinning control can be applied to higher-order systems to trigger the emergence of chimera states and how higher-order interactions are a key feature for the chimera state to develop and persist. It was already known from previous works that higher-order interactions enhance chimera states; however, the set of parameters, initial conditions and couplings allowing for such behavior remained limited. Thanks to our pinning schemes, which we called additive pinning and parametric pinning, we were able to overcome such a limitation and observe chimera patterns for a wide range of settings. Moreover, and this is the most remarkable result, the higher-order framework makes it possible to control the presence of chimeras by only acting on a small fraction of the nodes, in striking contrast with the network case, where half of the nodes are to be controlled to achieve this objective. Lastly, our heuristic interpretation of the results goes beyond this work and provides a possible explanation of other previous results regarding synchronization patterns observed in higher-order systems, in particular the claim made by Zhang et al. that higher-order interactions shrink the attraction basin of the synchronized state <cit.>.
Our results clearly show that it is easier and more efficient to trigger the emergence of chimera states when higher-order interactions are present. A further study could be to determine how much energy is needed to control the chimera state in comparison with the pairwise setting, by relying on energy aware controllability measures <cit.>. Another efficient control strategy could be to apply an intermittent pinning, analogously to the occasional coupling setting developed in the framework of amplitude death <cit.>, where techniques from piecewise-smooth systems could be used <cit.>. Another interesting direction would be to apply pinning control to directed higher-order structures, such as directed <cit.> and m-directed hypergraphs <cit.>, which have been proven to greatly affect nonlinear dynamics in the context of synchronization <cit.> and Turing pattern formation <cit.>, respectively. Pinning approaches in systems with directed higher-order interactions have been developed in some pioneering works <cit.>, but not yet in the context of chimera states. Lastly, the recent implementation of higher-order interactions in electric circuits <cit.> opens the way to further applications in this direction.
In conclusion, this work is one of the first in which tools from control theory are applied to systems with higher-order interactions and it shows the numerous possibilities offered by this novel framework. We believe that there is plenty of exciting research to be done in this direction and that the ground we built with this work is the basis for further studies shedding more light on the interplay between dynamics with higher-order interactions and its control.
§.§.§ Acknowledgements
R.M. and H.N. acknowledge JSPS, Japan KAKENHI JP22K11919, JP22H00516, and JST, Japan CREST JP-MJCR1913 for financial support. The contribution of L.V.G. and M.F. to this work is framed into the activities of the project "CoCoS: Control of Complex Systems", funded by the University of Catania, under the PIA.CE.RI. 2024-26 initiative. We are grateful to Timoteo Carletti for useful discussions, including pointing us to Ref. <cit.>. R.M. is particularly grateful to Giacomo Baggio and Francesco Lo Iudice for the discussions on pinning control and to Iván León and Yuzuru Kato for the discussions on the phase reduction.
R.M. D’Souza, M. di Bernardo, and Y.Y. Liu, Controlling complex networks with complex nodes, Nature Reviews Physics 5, 250–262 (2023).
A. Sajadi, R.W. Kenyon, and B.M. Hodge, Synchronization in electric power networks with inherent heterogeneity up to 100% inverter-based renewable generation, Nature Communications 13, 2490 (2022).
M. Asllani, P. Expert, and T. Carletti, A minimally invasive neurostimulation method for controlling abnormal synchronisation in the neuronal activity, PLoS Comput. Biol. 14, e1006296 (2018).
F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, Networks beyond pairwise interactions: structure and dynamics, Phys. Rep. (2020).
F. Battiston, E. Amico, A. Barrat, G. Bianconi, G. Ferraz de Arruda, B. Franceschiello, I. Iacopini, S. Kéfi, V. Latora, Y. Moreno, M.M. Murray, T.P. Peixoto, F. Vaccarino, and G. Petri, The physics of higher-order interactions in complex systems, Nat. Phys. 17, 1093–1098 (2021).
G. Bianconi, Higher-Order Networks: An introduction to simplicial complexes (Cambridge University Press, 2021).
S. Boccaletti, P. De Lellis, C.I. Del Genio, K. Alfaro-Bittner, R. Criado, S. Jalan, and M. Romance, The structure and dynamics of networks with higher order interactions, Physics Reports 1018, 1–64 (2023).
C. Bick, E. Gross, H.A. Harrington, and M.T. Schaub, What are higher-order networks?, SIAM Review 65, 686–731 (2023).
R.D. Andrew, M. Fagan, B.A. Ballyk, and A.S. Rosen, Seizure susceptibility and the osmotic state, Brain Res. 498, 175–180 (1989).
M. Wang, D. Arteaga, and B.J. He, Brain mechanisms for simple perception and bistable perception, Proc. Natl. Acad. Sci. U.S.A. 110, E3350–E3359 (2013).
G. Petri, P. Expert, F. Turkheimer, R. Carhart-Harris, D. Nutt, P.J. Hellyer, and F. Vaccarino, Homological scaffolds of brain functional networks, J. Royal Soc. Interface 11, 20140873 (2014).
A.E. Sizemore, C. Giusti, A. Kahn, J.M. Vettel, R.F. Betzel, and D.S. Bassett, Cliques and cavities in the human connectome, J. Comp. Neurosci. 44, 115–145 (2018).
J. Grilli, G. Barabás, M.J. Michalska-Smith, and S. Allesina, Higher-order interactions stabilize dynamics in competitive network models, Nature 548, 210–213 (2017).
I. Iacopini, J.R. Foote, N.H. Fefferman, E.P. Derryberry, and M.J. Silk, Not your private tête-à-tête: leveraging the power of higher-order networks to study animal communication, Phil. Trans. R. Soc. Lond. B 379, 20230190 (2024).
D. Centola, J. Becker, D. Brackbill, and A. Baronchelli, Experimental evidence for tipping points in social convention, Science 360, 1116–1119 (2018).
T. Carletti, F. Battiston, G. Cencetti, and D. Fanelli, Random walks on hypergraphs, Phys. Rev. E 101, 022308 (2020).
M.T. Schaub, A.R. Benson, P. Horn, G. Lippner, and A. Jadbabaie, Random walks on simplicial complexes and the normalized Hodge 1-Laplacian, SIAM Rev. 62, 353–391 (2020).
T. Tanaka and T. Aoyagi, Multistable attractors in a network of phase oscillators with three-body interactions, Phys. Rev. Lett. 106, 224101 (2011).
P.S. Skardal and A. Arenas, Higher-order interactions in complex networks of phase oscillators promote abrupt synchronization switching, Comm. Phys. 3 (2020).
A.P. Millán, J.J. Torres, and G. Bianconi, Explosive higher-order Kuramoto dynamics on simplicial complexes, Phys. Rev. Lett. 124, 218301 (2020).
I. Iacopini, G. Petri, A. Barrat, and V. Latora, Simplicial models of social contagion, Nat. Comm. 10, 2485 (2019).
[Carletti et al.(2020b)Carletti, Fanelli, and Nicoletti]carletti2020dynamical
author author T. Carletti, author D. Fanelli, and author S. Nicoletti, title title Dynamical systems on hypergraphs, @noop journal journal J. phys. Complex. volume 1, pages 035006 (year 2020b)NoStop
[Muolo et al.(2023a)Muolo, Gallo, Latora, Frasca, and Carletti]muolo2023turing
author author R. Muolo, author L. Gallo, author V. Latora, author M. Frasca, and author T. Carletti, title title Turing patterns in systems with high-order interaction, @noop journal journal Chaos Solit. Fractals volume 166, pages 112912 (year 2023a)NoStop
[Liu et al.(2011)Liu, Slotine, and Barabási]liu2011controllability
author author Y.Y. Liu, author J.J. Slotine, and author A.L. Barabási, title title Controllability of complex networks, @noop journal journal nature volume 473, pages 167–173 (year 2011)NoStop
[Liu and Barabási(2016)]liu2016control
author author Y.Y. Liu and author A.L. Barabási, title title Control principles of complex systems, @noop journal journal Reviews of Modern Physics volume 88, pages 035006 (year 2016)NoStop
[Chen et al.(2021)Chen, Surana, Bloch, and Rajapakse]chen2021controllability
author author Can Chen, author Amit Surana, author Anthony M Bloch, and author Indika Rajapakse, title title Controllability of hypergraphs, @noop journal journal IEEE Transactions on Network Science and Engineering volume 8, pages 1646–1657 (year 2021)NoStop
[De Lellis et al.(2022)De Lellis, Della Rossa, Lo Iudice, and Liuzza]de2022pinning
author author P. De Lellis, author F. Della Rossa, author F. Lo Iudice, and author D. Liuzza, title title Pinning control of hypergraphs, @noop journal journal IEEE Control Systems Letters volume 7, pages 691–696 (year 2022)NoStop
[De Lellis et al.(2023)De Lellis, Della Rossa, Iudice, and Liuzza]de2023pinning
author author P. De Lellis, author F. Della Rossa, author F. Lo Iudice, and author D. Liuzza, title title Pinning control of linear systems on hypergraphs, @noop journal journal European Journal of Control volume 74, pages 100836 (year 2023)NoStop
[Xia and Xiang(2024)]xia2024pinning
author author R. Xia and author L. Xiang, title title Pinning control of simplicial complexes, @noop journal journal European Journal of Control volume 77, pages 100994 (year 2024)NoStop
[Kaneko(1990)]kaneko
author author K. Kaneko, title title Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements, @noop journal journal Physica D volume 41, pages 137–172 (year 1990)NoStop
[Hakim and Rappel(1992)]hakim1992dynamics
author author V. Hakim and author W.J. Rappel, title title Dynamics of the globally coupled complex ginzburg-landau equation, @noop journal journal Physical Review A volume 46, pages R7347 (year 1992)NoStop
[Nakagawa and Kuramoto(1993)]nakagawa1993collective
author author N. Nakagawa and author Y. Kuramoto, title title Collective chaos in a population of globally coupled oscillators, @noop journal journal Progress of Theoretical Physics volume 89, pages 313–323 (year 1993)NoStop
[Chabanol et al.(1997)Chabanol, Hakim, and Rappel]chabanol1997collective
author author M.L. Chabanol, author V. Hakim, and author W.J. Rappel, title title Collective chaos and noise in the globally coupled complex ginzburg-landau equation, @noop journal journal Physica D: Nonlinear Phenomena volume 103, pages 273–293 (year 1997)NoStop
[Kuramoto(1995)]kuramoto1995scaling
author author Y. Kuramoto, title title Scaling behavior of turbulent oscillators with non-local interaction, @noop journal journal Prog. Theor. Phy. volume 94 (year 1995)NoStop
[Kuramoto and Nakao(1996)]kuramoto1996origin
author author Y. Kuramoto and author H. Nakao, title title Origin of power-law spatial correlations in distributed oscillators and maps with nonlocal coupling, @noop journal journal Phys. Rev. Lett. volume 76 (year 1996)NoStop
[Kuramoto and Nakao(1997)]kuramoto1997power
author author Y. Kuramoto and author H. Nakao, title title Power-law spatial correlations and the onset of individual motions in self-oscillatory media with non-local coupling, @noop journal journal Physica D volume 103 (year 1997)NoStop
[Kuramoto et al.(1998)Kuramoto, Battogtokh, and Nakao]kuramoto1998multiaffine
author author Y. Kuramoto, author D. Battogtokh, and author H. Nakao, title title Multiaffine chemical turbulence, @noop journal journal Phys. Rev. Lett. volume 81 (year 1998)NoStop
[Kuramoto et al.(2000)Kuramoto, Nakao, and Battogtokh]kuramoto2000multi
author author Y. Kuramoto, author H. Nakao, and author D. Battogtokh, title title Multi-scaled turbulence in large populations of oscillators in a diffusive medium, @noop journal journal Physica A volume 288 (year 2000)NoStop
[Kuramoto and Battogtokh(2002)]kuramoto2002coexistence
author author Y. Kuramoto and author D. Battogtokh, title title Coexistence of coherence and incoherence in nonlocally coupled phase oscillators, @noop journal journal Nonlinear Phenom. Complex Syst. volume 5 (year 2002)NoStop
[Abrams and Strogatz(2004)]abrams2004chimera
author author D.M. Abrams and author S.H. Strogatz, title title Chimera states for coupled oscillators, @noop journal journal Phys. Rev. Lett. volume 93, pages 174102 (year 2004)NoStop
[Domínguez and Cerdeira(1993)]cerdeira_prl
author author D. Domínguez and author H.A. Cerdeira, title title Order and turbulence in rf-driven josephson junction series arrays, @noop journal journal Phys. Rev. Lett. volume 71 (year 1993)NoStop
[Gambuzza et al.(2014)Gambuzza, Buscarino, Chessari, Fortuna, Meucci, and Frasca]vale_chimeras
author author L.V. Gambuzza, author A. Buscarino, author S. Chessari, author L. Fortuna, author R. Meucci, and author M. Frasca, title title Experimental investigation of chimera states with quiescent and synchronous domains in coupled electronic oscillators, @noop journal journal Phys. Rev. E volume 9, pages 032905 (year 2014)NoStop
[Gambuzza et al.(2020)Gambuzza, Minati, and Frasca]gambuzza2020experimental
author author L.V. Gambuzza, author L. Minati, and author M. Frasca, title title Experimental observations of chimera states in locally and non-locally coupled stuart-landau oscillator circuits, @noop journal journal Chaos, Solitons and Fractals volume 138, pages 109907 (year 2020)NoStop
[Hagerstrom et al.(2012)Hagerstrom, Murphy, Roy, Hövel, Omelchenko, and Schöll]hagerstrom2012experimental
author author A. M Hagerstrom, author T.E. Murphy, author R. Roy, author P. Hövel, author I. Omelchenko, and author E. Schöll, title title Experimental observation of chimeras in coupled-map lattices, @noop journal journal Nature Physics volume 8, pages 658–661 (year 2012)NoStop
[Martens et al.(2013)Martens, Thutupalli, Fourriere, and Hallatschek]martens2013chimera
author author E.A. Martens, author S. Thutupalli, author A. Fourriere, and author O. Hallatschek, title title Chimera states in mechanical oscillator networks, @noop journal journal Proceedings of the National Academy of Sciences volume 110, pages 10563–10567 (year 2013)NoStop
[Matheny et al.(2019)Matheny, Emenheiser, Fon, Chapman, Salova, Rohden, Li, Hudoba de Badyn, Pósfai, Duenas-Osorio, Mesbahi, Crutchfield, Cross, D’Souza, and Roukes]matheny2019exotic
author author M.H. Matheny, author J. Emenheiser, author W. Fon, author A. Chapman, author A. Salova, author M. Rohden, author J. Li, author M. Hudoba de Badyn, author M. Pósfai, author L. Duenas-Osorio, author M. Mesbahi, author J.P. Crutchfield, author M.C. Cross, author R.M. D’Souza, and author M.L. Roukes, title
title Exotic states in a simple network of nanoelectromechanical oscillators, @noop journal journal Science volume 363, pages eaav7932 (year 2019)NoStop
[Chouzouris et al.(2018)Chouzouris, Omelchenko, Zakharova, Hlinka, Jiruska, and Schöll]chimera_neuro
author author T. Chouzouris, author I. Omelchenko, author A. Zakharova, author J. Hlinka, author P. Jiruska, and author E. Schöll, title title Chimera states in brain networks: Empirical neural vs. modular fractal connectivity, @noop journal journal Chaos volume 28, pages 045112 (year 2018)NoStop
[Majhi et al.(2019)Majhi, Bera, Ghosh, and Perc]majhi2019chimera
author author S. Majhi, author B.K. Bera, author D. Ghosh, and author M. Perc, title title Chimera states in neuronal networks: A review, @noop journal journal Physics of life reviews volume 28, pages 100–121 (year 2019)NoStop
[Rattenborg et al.(2000)Rattenborg, Amlaner, and Lima]rattenborg2000behavioral
author author N.C. Rattenborg, author C.J. Amlaner, and author S.L. Lima, title title Behavioral, neurophysiological and evolutionary perspectives on unihemispheric sleep, @noop journal journal Neuroscience & Biobehavioral Reviews volume 24, pages 817–842 (year 2000)NoStop
[Asllani et al.(2022)Asllani, Siebert, Arenas, and Gleeson]bram_malb_chim
author author M. Asllani, author B.A. Siebert, author A. Arenas, and author J.P. Gleeson, title title Symmetry-breaking mechanism for the formation of cluster chimera patterns, @noop journal journal Chaos volume 32, pages 013107 (year 2022)NoStop
[Muolo et al.(2023b)Muolo, O'Brien, Carletti, and Asllani]muolo_2023_chimera
author author R. Muolo, author J.D. O'Brien, author T. Carletti, and author M. Asllani, title title Persistence of chimera states and the challenge for synchronization in real-world networks, @noop journal journal to appear in The European Journal of Physics B (year 2023b)NoStop
[Zakharova et al.(2016)Zakharova, Kapeller, and Schöll]zakharova2016amplitude
author author A. Zakharova, author M. Kapeller, and author E. Schöll, title title Amplitude chimeras and chimera death in dynamical networks, in @noop booktitle Journal of Physics: Conference Series, Vol. volume 727 (organization IOP Publishing, year 2016) p. pages 012018NoStop
[Zajdela and Abrams(2023)]zajdela_abrams
author author E.R. Zajdela and author D.M. Abrams, title title Phase chimera states: frozen patterns of disorder, @noop journal journal arxiv preprint https://arxiv.org/abs/2308.06190 (year 2023)NoStop
[Zakharova(2020)]zakharova2020chimera
author author A. Zakharova, @noop title Chimera Patterns in Networks. Interplay between Dynamics, Structure, Noise, and Delay (publisher Springer, year 2020)NoStop
[Parastesh et al.(2021)Parastesh, Jafari, Azarnoush, Shahriari, Wang, Boccaletti, and Perc]parastesh2021chimeras
author author F. Parastesh, author S. Jafari, author H. Azarnoush, author Z. Shahriari, author Z. Wang, author S. Boccaletti, and author M. Perc, title title Chimeras, @noop journal journal Physics Reports volume 898, pages 1–114 (year 2021)NoStop
[Kundu and Ghosh(2022a)]kundu2022higher
author author S. Kundu and author D. Ghosh, title title Higher-order interactions promote chimera states, @noop journal journal Physical Review E volume 105, pages L042202 (year 2022a)NoStop
[Li et al.(2023)Li, Ghosh, and Lei]ghosh_chimera2
author author X. Li, author D. Ghosh, and author Y. Lei, title title Chimera states in coupled pendulum with higher-order interactions, @noop journal journal Chaos, Solitons and Fractals volume 170, pages 113325 (year 2023)NoStop
[Bick et al.(2023b)Bick, Böhle, and Kuehn]bick_nonlocal1
author author C. Bick, author T. Böhle, and author C. Kuehn, title title Phase oscillator networks with nonlocal higher-order interactions: Twisted states, stability, and bifurcations, @noop journal journal SIAM J. Appl. Dyn. Syst. volume 22 (year 2023b)NoStop
[Muolo et al.(2024)Muolo, Njougouo, Gambuzza, Carletti, and Frasca]muolo2024phase
author author R. Muolo, author T. Njougouo, author L.V. Gambuzza, author T. Carletti, and author M. Frasca, title title Phase chimera states on nonlocal hyperrings, @noop journal journal Physical Review E volume 109, pages L022201 (year 2024)NoStop
[Grigoriev et al.(1997)Grigoriev, Cross, and Schuster]grigoriev1997pinning
author author R.O. Grigoriev, author M.C. Cross, and author H.G. Schuster, title title Pinning control of spatiotemporal chaos, @noop journal journal Physical Review Letters volume 79, pages 2795 (year 1997)NoStop
[Sorrentino et al.(2007)Sorrentino, di Bernardo, Garofalo, and Chen]sorrentino2007controllability
author author F. Sorrentino, author M. di Bernardo, author F. Garofalo, and author G. Chen, title title Controllability of complex networks via pinning, @noop journal journal Physical Review E volume 75, pages 046103 (year 2007)NoStop
[Lo Iudice et al.(2022)Lo Iudice, Garofalo, and De Lellis]iudice2022bounded
author author F. Lo Iudice, author F. Garofalo, and author P. De Lellis, title title Bounded partial pinning control of network dynamical systems, @noop journal journal IEEE Transactions on Control of Network Systems volume 10, pages 238–248 (year 2022)NoStop
[Ancona et al.(2023)Ancona, De Lellis, and Lo Iudice]ancona2023influencing
author author C. Ancona, author P. De Lellis, and author F. Lo Iudice, title title Influencing opinions in a nonlinear pinning control model, @noop journal journal IEEE Control Systems Letters volume 7, pages 1945–1950 (year 2023)NoStop
[Du Toit and Craig(2015)]du2015selective
author author E.F. Du Toit and author I.K. Craig, title title Selective pinning control of the average disease transmissibility in an hiv contact network, @noop journal journal Physical Review E volume 92, pages 012810 (year 2015)NoStop
[Yang et al.(2019)Yang, Xu, Feng, and Fu]yang2019feedback
author author P. Yang, author Z. Xu, author J. Feng, and author X. Fu, title title Feedback pinning control of collective behaviors aroused by epidemic spread on complex networks, @noop journal journal Chaos volume 29 (year 2019)NoStop
[Buscarino et al.(2019)Buscarino, Corradino, Fortuna, and Frasca]buscarino2019turing
author author A. Buscarino, author C. Corradino, author L. Fortuna, and author M. Frasca, title title Turing patterns via pinning control in the simplest memristive cellular nonlinear networks, @noop journal journal Chaos volume 29 (year 2019)NoStop
[Porfiri and di Bernardo(2008)]porfiri2008criteria
author author M. Porfiri and author M. di Bernardo, title title Criteria for global pinning-controllability of complex networks, @noop journal journal Automatica volume 44, pages 3100–3106 (year 2008)NoStop
[Yu et al.(2013)Yu, Chen, Lu, and Kurths]yu2013synchronization
author author W. Yu, author G. Chen, author J. Lu, and author J. Kurths, title title Synchronization via pinning control on general complex networks, @noop journal journal SIAM Journal on Control and Optimization volume 51, pages 1395–1416 (year 2013)NoStop
[Gambuzza and Frasca(2016)]gambuzza2016pinning
author author L.V. Gambuzza and author M. Frasca, title title Pinning control of chimera states, @noop journal journal Physical Review E volume 94, pages 022306 (year 2016)NoStop
[Nakao(2016)]nakao2016phase
author author H. Nakao, title title Phase reduction approach to synchronisation of nonlinear oscillators, @noop journal journal Contemporary Physics volume 57, pages 188–214 (year 2016)NoStop
[Nakao(2014)]nakao2014complex
author author H. Nakao, title title Complex ginzburg-landau equation on networks and its non-uniform dynamics, @noop journal journal Eur. Phys. J. Spec. Top. volume 223, pages 2411–2421 (year 2014)NoStop
[Gambuzza et al.(2021)Gambuzza, Di Patti, Gallo, Lepri, Romance, Criado, Frasca, Latora, and Boccaletti]gambuzza2021stability
author author L.V Gambuzza, author F. Di Patti, author L. Gallo, author S. Lepri, author M. Romance, author R. Criado, author M. Frasca, author V. Latora, and author S. Boccaletti, title title Stability of synchronization in simplicial complexes, @noop journal journal Nat. Comm. volume 12, pages 1–13 (year 2021)NoStop
[Neuhäuser et al.(2020)Neuhäuser, Mellor, and Lambiotte]neuhauser2020multibody
author author L. Neuhäuser, author A.W. Mellor, and author R. Lambiotte, title title Multibody interactions and nonlinear consensus dynamics on networked systems, @noop journal journal Phys. Rev. E volume 101, pages 032310 (year 2020)NoStop
[Kuramoto(1975)]kuramoto1975
author author Y. Kuramoto, title title Self-entrainment of a population of coupled non-linear oscillators, in @noop booktitle International Symposium on Mathematical Problems in Theoretical Physics, editor edited by editor Huzihiro Araki (publisher Springer Berlin Heidelberg, address Berlin, Heidelberg, year 1975) pp. pages 420–422NoStop
[Shanahan(2010)]shanahan2010metastable
author author M. Shanahan, title title Metastable chimera states in community-structured oscillator networks, @noop journal journal Chaos: An Interdisciplinary Journal of Nonlinear Science volume 20 (year 2010)NoStop
[Kundu and Ghosh(2022b)]ghosh_chimera_high-order
author author S. Kundu and author D. Ghosh, title title High-order interactions promote chimera states, @noop journal journal Physical Review E volume 105, pages L042202 (year 2022b)NoStop
[Kuramoto and Nakao(2019)]kuramoto2019concept
author author Y. Kuramoto and author H. Nakao, title title On the concept of dynamical reduction: the case of coupled oscillators, @noop journal journal Philosophical Transactions of the Royal Society A volume 377, pages 20190041 (year 2019)NoStop
[León and Nakao(2023)]leon2023analytical
author author I. León and author H. Nakao, title title Analytical phase reduction for weakly nonlinear oscillators, @noop journal journal Chaos, Solitons & Fractals volume 176, pages 114117 (year 2023)NoStop
[Ashwin and Rodrigues(2016)]ashwin2016hopf
author author P. Ashwin and author A. Rodrigues, title title Hopf normal form with sn symmetry and reduction to systems of nonlinearly coupled phase oscillators, @noop journal journal Physica D: Nonlinear Phenomena volume 325, pages 14–24 (year 2016)NoStop
[León and Pazó(2019)]leon2019phase
author author I. León and author D. Pazó, title title Phase reduction beyond the first order: The case of the mean-field complex ginzburg-landau equation, @noop journal journal Physical Review E volume 100, pages 012211 (year 2019)NoStop
[León et al.(2024)León, Muolo, Hata, and Nakao]leon2024higher
author author I. León, author R. Muolo, author S. Hata, and author H. Nakao, title title Higher-order interactions induce anomalous transitions to synchrony, @noop journal journal Chaos: An Interdisciplinary Journal of Nonlinear Science volume 34 (year 2024)NoStop
[Bick et al.(2016)Bick, Ashwin, and Rodrigues]bick2016chaos
author author C. Bick, author P. Ashwin, and author A. Rodrigues, title title Chaos in generically coupled phase oscillator networks with nonpairwise interactions, @noop journal journal Chaos: An Interdisciplinary Journal of Nonlinear Science volume 26 (year 2016)NoStop
[Skardal and Arenas(2019)]skardal2019abrupt
author author P.S. Skardal and author A. Arenas, title title Abrupt desynchronization and extensive multistability in globally coupled oscillator simplexes, @noop journal journal Phys. Rev. Lett. volume 122, pages 248301 (year 2019)NoStop
[Kuramoto(1984)]Kuramoto
author author Y. Kuramoto, @noop title Chemical oscillations, waves, and turbulence (publisher Springer-Verlag, New York, year 1984)NoStop
[Zhang et al.(2023)Zhang, Skardal, Battiston, Petri, and Lucas]zhang2023deeper
author author Y. Zhang, author P.S. Skardal, author F. Battiston, author G. Petri, and author M. Lucas, title title Deeper but smaller: Higher-order interactions increase linear stability but shrink basins, @noop journal journal arXiv preprint arXiv:2309.16581 (year 2023)NoStop
[Lindmark and Altafini(2018)]lindmark2018minimum
author author G. Lindmark and author C. Altafini, title title Minimum energy control for complex networks, @noop journal journal Scientific reports volume 8, pages 3188 (year 2018)NoStop
[Baggio et al.(2022)Baggio, Pasqualetti, and Zampieri]baggio2022energy
author author G. Baggio, author F. Pasqualetti, and author S. Zampieri, title title Energy-aware controllability of complex networks, @noop journal journal Annual Review of Control, Robotics, and Autonomous Systems volume 5, pages 465–489 (year 2022)NoStop
[Ghosh et al.(2022)Ghosh, Mondal, and Sujith]ghosh2022occasional
author author A. Ghosh, author S. Mondal, and author R.I. Sujith, title title Occasional coupling enhances amplitude death in delay-coupled oscillators, @noop journal journal Chaos: An Interdisciplinary Journal of Nonlinear Science volume 32 (year 2022)NoStop
[Coraggio et al.(2021)Coraggio, De Lellis, and di Bernardo]coraggio2021convergence
author author M. Coraggio, author P. De Lellis, and author M. di Bernardo, title title Convergence and synchronization in networks of piecewise-smooth systems via distributed discontinuous coupling, @noop journal journal Automatica volume 129, pages 109596 (year 2021)NoStop
[Gallo et al.(1993)Gallo, Longo, Pallottino, and Nguyen]gallo1993directed
author author G. Gallo, author G. Longo, author S. Pallottino, and author S. Nguyen, title title Directed hypergraphs and applications, @noop journal journal Discret. Appl. Math. volume 42, pages 177–201 (year 1993)NoStop
[Gallo et al.(2022)Gallo, Muolo, Gambuzza, Latora, Frasca, and Carletti]gallo2022synchronization
author author L. Gallo, author R. Muolo, author L.V. Gambuzza, author V. Latora, author M. Frasca, and author T. Carletti, title title Synchronization induced by directed higher-order interactions, @noop journal journal Comm. Phys. volume 5 (year 2022)NoStop
[Della Rossa et al.(2023)Della Rossa, Liuzza, Lo Iudice, and De Lellis]della2023emergence
author author F. Della Rossa, author D. Liuzza, author F. Lo Iudice, and author P. De Lellis, title title Emergence and control of synchronization in networks with directed many-body interactions, @noop journal journal Physical Review Letters volume 131, pages 207401 (year 2023)NoStop
[Dorchain et al.(2024)Dorchain, Segnou, Muolo, and Carletti]dorchain2024impact
author author M. Dorchain, author W. Segnou, author R. Muolo, and author T. Carletti, title title Impact of directionality on the emergence of turing patterns on m-directed higher-order structures, @noop journal journal arXiv preprint https://arxiv.org/abs/2408.04721 (year 2024)NoStop
[Shi et al.(2023)Shi, Qin, Yang, Ma, and Li]shi2023synchronization
author author T. Shi, author Y. Qin, author Q. Yang, author Z. Ma, and author K. Li, title title Synchronization of directed uniform hypergraphs via adaptive pinning control, @noop journal journal Physica A: Statistical Mechanics and its Applications volume 615, pages 128571 (year 2023)NoStop
[Li et al.(2024)Li, Lin, and Wang]li2024synchronization
author author K. Li, author Y. Lin, and author J. Wang, title title Synchronization of multi-directed hypergraphs via adaptive pinning control, @noop journal journal Chaos, Solitons & Fractals volume 184, pages 115000 (year 2024)NoStop
[Rizzello and De Lellis(2024)]rizzello2024pinning
author author R. Rizzello and author P. De Lellis, title title Pinning control in networks of nonidentical systems with many-body interactions, @noop journal journal IEEE Control Systems Letters (year 2024)NoStop
[Vera-Ávila et al.(2024)Vera-Ávila, Rivera-Durón, Soriano-Garcia, Sevilla-Escoboza, and Buldú]vera2024electronic
author author V.P. Vera-Ávila, author R.R. Rivera-Durón, author M.S. Soriano-Garcia, author R. Sevilla-Escoboza, and author J.M. Buldú, title title Electronic implementation of simplicial complexes, @noop journal journal Chaos, Solitons & Fractals volume 183, pages 114915 (year 2024)NoStop
§ NUMERICAL RESULTS FOR DIFFERENT COUPLING SCHEMES
In the Main Text, we have considered systems of the form of Eq. (<ref>), where the coupling matrix is D=[ 1 0; 1 0 ]. Let us observe that such a setting is the one in which chimera states induced by the initial conditions are easiest to observe. However, through our pinning approach, it is possible to observe chimera states also for different coupling configurations. In what follows, we give a brief survey of which configurations yield chimera states, obtained by performing parametric pinning control on a 3-hyperring of 204 nodes, where the parameters are α=1 and ω=1, the coupling strength is ε=0.01, and where we pin N_p=18 nodes spaced every 3 for t_p=100 time units, with the parameters ω_i_p drawn from a uniform distribution on the interval [0.5,2.5]. We have performed 10 simulations for each otherwise identical setting, changing only the parameters ω_i_p at every iteration.
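To make the pinning protocol concrete, the following Python sketch illustrates how the pinned-node indices and the randomized pinning parameters ω_i_p could be drawn for the repeated simulations; the node count, spacing, parameter range and number of realizations are taken from the text, while the function and variable names are our own assumptions and the integration of the oscillator dynamics itself is omitted.

```python
import numpy as np

def draw_pinning_setups(n_nodes=204, n_pinned=18, spacing=3,
                        omega_range=(0.5, 2.5), n_realizations=10, seed=0):
    """Sketch of the parametric pinning protocol described above.

    Returns, for each realization, the (fixed) indices of the pinned nodes
    and freshly drawn pinning frequencies omega_ip ~ U(omega_range).
    """
    rng = np.random.default_rng(seed)
    pinned = np.arange(0, n_pinned * spacing, spacing)  # 18 nodes spaced every 3
    assert pinned.max() < n_nodes
    return [(pinned, rng.uniform(*omega_range, size=n_pinned))
            for _ in range(n_realizations)]

# Example: pinned indices and pinning frequencies of the first realization
pinned, omega_ip = draw_pinning_setups()[0]
print(pinned[:5], omega_ip[:5])
```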
The following coupling configurations always lead to chimera states for the examined range of parameters, couplings and pinning features: D = [ 0 0; 0 1 ] , [ 0 1; 0 1 ] , [ 0 1; 1 0 ] , [ 1 1; 0 0 ] , [ 0 0; 1 1 ] , [ 1 1; 0 1 ].
The following configurations lead 40%-60% of times to chimeras, depending on the parameters ω_i_p:
D = [ 1 0; 0 0 ] , [ 1 0; 0 1 ] , [ 1 1; 1 0 ] , [ 0 1; 1 1 ]. Despite not being as easy as in the former case, chimera states in the latter can be achieved by increasing the coupling strength, the intensity of the parameters ω_i_p, the pinned nodes and the pervasiveness of the pinning (e.g., pinning every node instead of 1 every 3).
The following coupling configurations never lead to chimera states for the examined range of parameters, couplings and pinning features: D = [ 1 0; 1 1 ] , [ 1 1; 1 1 ].
Particularly interesting are the following coupling configurations: D = [ 0 1; 0 0 ] , [ 0 0; 1 0 ], where a chimera state emerges, but it is unstable and the incoherence region grows until the whole system develops a fully incoherent state. Moreover, such behavior is independent of the number of pinned nodes and also occurs when only one node is perturbed, which makes this setting interesting for applications in which incoherence needs to be achieved. Let us note that, in this case, higher-order interactions do not play a role, but the key feature is the coupling configuration. In fact, we obtain the same result in the pairwise setting.
Lastly, let us point out that different hyperring topologies and the additive pinning configuration also lead to analogous behaviors, and that neither chimera nor incoherent states are observed with either of the two pinning approaches when the higher-order interactions are "flattened" onto the corresponding clique-projected networks. Again, let us stress that all the above does not apply to the coupling configurations leading to full incoherence, namely, [ 0 1; 0 0 ] , [ 0 0; 1 0 ], whose behavior is determined by the coupling and not by the presence of higher-order interactions. In fact, the same behavior is also observed for pairwise interactions.
§ SCALING WITH RESPECT TO THE NUMBER OF NODES FOR THE PARAMETRIC PINNING CONTROL APPROACH
In this Appendix, we complement the Main Text by showing the results obtained for the case of parametric pinning, which are qualitatively analogous to those of Sec. <ref> obtained through the additive pinning approach, i.e., those on the scaling of the fraction of pinned nodes with respect to the hyperring size. In this setting, we are able to keep the number of pinned nodes constant as we increase the size of the structure. Again, let us stress that the total number of nodes in the hyperring increases, but the number of hyperedges is kept constant.
In Fig. <ref>, we show the results obtained with such a control scheme on d-hyperrings with d=3,4,5,6, i.e., 4-, 5-, 6- and 7-body interactions, where we have fixed the number of hyperedges. Indeed, we can observe that we obtain a chimera state by inducing a large region of incoherence (more than half of the nodes) with a control that involves only a small fraction of the nodes. Moreover, the number of pinned nodes N_p is kept constant for every structure, meaning that N_p does not scale with the number of nodes, but rather with the number of hyperedges, which makes it possible to control large structures with only a handful of nodes. The parameters ω_i_p are the same for all the simulations. The pinned nodes are ≃ 8.8% of the total nodes for the 3-hyperring (panels a) and b)), ≃ 6.6% of the total nodes for the 4-hyperring (panels c) and d)), ≃ 5.3% of the total nodes in the 5-hyperring (panels e) and f)) and ≃ 4.4% of the total nodes in the 6-hyperring (panels g) and h)). In the latter case, the pinning scheme consisting in controlling one out of every 2 junction nodes does not yield a chimera state, not even with stronger coupling, as shown in Fig. <ref>g-h), where, moreover, we see that the region of incoherence enlarges. This does not change if we increase the number of pinned nodes, but only if we reduce the gap between them.
§ PARAMETRIC PINNING CONTROL OF THE REDUCED PHASE MODEL
In this Appendix, we complement the numerical results of Sec. <ref> of the Main Text on additive pinning control for the phase reduced model by showing analogous results for the parametric pinning approach. In Figs. <ref> and <ref>, we show the results of the parametric pinning procedure on a 3-hyperring and its corresponding clique-projected network, respectively. The setting is analogous to that of Figs. <ref> and <ref> of the Main Text. We see that, while the control induces a chimera state in the higher-order phase reduced model, only a slight incoherence emerges in the pairwise setting, which is not a chimera, as can be seen by looking at the clique-based local order parameter.
|
http://arxiv.org/abs/2409.03556v1 | 20240905141701 | MaskVal: Simple but Effective Uncertainty Quantification for 6D Pose Estimation | [
"Philipp Quentin",
"Daniel Goehring"
] | cs.RO | [
"cs.RO",
"cs.CV",
"cs.LG"
] |
§ ABSTRACT
For the use of 6D pose estimation in robotic applications, reliable poses are of utmost importance to ensure a safe, reliable and predictable operational performance. Despite these requirements, state-of-the-art 6D pose estimators often do not provide any uncertainty quantification for their pose estimates at all, or if they do, it has been shown that the uncertainty provided is only weakly correlated with the actual true error. To address this issue, we investigate a simple but effective uncertainty quantification, which we call MaskVal, that compares the pose estimates with their corresponding instance segmentations by rendering and does not require any modification of the pose estimator itself. Despite its simplicity, MaskVal significantly outperforms a state-of-the-art ensemble method on both a dataset and a robotic setup. We show that by using MaskVal, the performance of a state-of-the-art 6D pose estimator is significantly improved towards a safe and reliable operation. In addition, we propose a new and specific approach to compare and evaluate uncertainty quantification methods for 6D pose estimation in the context of robotic manipulation.
§ INTRODUCTION
6D pose estimation is an essential component of various fields such as augmented reality, mobile robotics and, especially, robotic manipulation. When such robots are used in safety-critical areas or in areas that require reliable and predictable performance, as in the manufacturing industry, a reliable uncertainty quantification (UQ) that assesses the correctness of an estimated pose is crucial. In industrial robot applications, such quantification is necessary to ensure successful grasping and to avoid collisions with the target object itself or its peripherals, thus avoiding damage and downtime. On top of that, it reduces the number of unsuccessful grasps, thereby reducing overall grasp times, and enables active vision strategies. Without such quantification, the deployment of vision-based manipulation robots in weakly structured environments is difficult, if not impossible.
In this context, the desired property of an uncertainty quantification is a high correlation with the true underlying pose error, as this allows the specification of thresholds that can guarantee a successful grasp. Furthermore, it is desirable that, while ensuring successful grasps, not too many valid poses that would also have led to a successful grasp are discarded, as this would increase the process time unnecessarily. Along with this, such an uncertainty quantification should require little or no additional computational power and process time in order to remain within industrial requirements.
Against this background, current state-of-the-art 6D pose estimators do not provide an uncertainty quantification at all <cit.>, or if they do, the provided uncertainties are prone to be overconfident <cit.>.
In this paper, we investigate a simple but effective uncertainty quantification for 6D pose estimation, which we call MaskVal, that requires no modification of the estimator itself and that compares and validates an estimated pose, by rendering, against its corresponding mask detection. In the case of a common two-stage pose estimator that already relies on an instance segmentation in its first stage, the masks are already given and the additional computational effort is comparatively small. Despite MaskVal's simplicity, we demonstrate through an evaluation on a dataset and on a robotic setup that MaskVal achieves state-of-the-art performance and even outperforms the recently proposed ensemble method <cit.>. As a further contribution of the paper, we propose specific performance metrics that allow a detailed analysis and evaluation of uncertainty quantification for 6D pose estimation and thereby naturally extend the common area-under-the-curve analysis established by <cit.>.
§ RELATED WORK
With the deployment of neural networks in real-world applications, the need for reliable uncertainty quantification (UQ) of their predictions has risen, and different approaches have emerged. These approaches can be categorized into those that are more suited to capture the uncertainty that stems from the noise of the data, the aleatoric uncertainty, or the uncertainty that stems from the data dependent model parametrization itself, the epistemic uncertainty <cit.>.
In the field of 6D pose estimation, leading state-of-the-art models often focus mainly on the core 6D pose task itself and do not provide an uncertainty at all, as in <cit.>. In contrast, <cit.> directly provide an uncertainty for their estimation, which can be categorized as an aleatoric uncertainty, by integrating an uncertainty value directly into their training loss function, rewarding the model for providing high values when the pose error is expected to be high for a given input. However, it has been shown that the uncertainties provided by this approach only weakly correlate with the actual true error <cit.>.
A comparable approach, which can also be seen as modeling aleatoric uncertainty, is not to learn to predict a single target value, but rather the first and second moments of a probability distribution, where the first moment is then taken as the target prediction and the second moment as the uncertainty measure. In <cit.> a Gaussian distribution is learned, and in the context of 6D pose estimation, <cit.> learn to predict a Bingham distribution, which is especially suited to model rotational uncertainties. A disadvantage of focusing on aleatoric uncertainty alone is that, as recently shown in <cit.>, aleatoric uncertainty is unreliable for detecting out-of-distribution data. This is especially problematic in the field of 6D pose estimation, since estimators nowadays are often trained on synthetic data only, which is prone to a domain gap towards the encountered real data.
In contrast to these approaches, where the model parametrization is deterministic, Bayesian deep learning places a probability distribution over the model parameters to capture the data-dependent epistemic uncertainty. As the full computation of the posterior of such neural networks is intractable due to their large parameter space, Bayesian approximations like Monte Carlo sampling, Monte Carlo dropout sampling and Deep Ensembles have emerged <cit.>. Furthermore, <cit.> shows that Deep Ensembles are capable of modeling both aleatoric and epistemic uncertainty and achieve the best uncertainty behavior in their experiments. Against this background, <cit.> trains and deploys an ensemble of differently initialized neural networks with the same architecture as an approximation. A practical disadvantage of this approach is that pre-trained models cannot be used directly, as they need to be retrained. As an alternative, <cit.> applies dropout at inference time to already trained models to obtain a distribution over parameters and capture epistemic uncertainty.
In the specific context of 6D pose estimation, <cit.> uses an ensemble of neural networks to model uncertainty, where the models have different architectures and have also been trained on different datasets. Uncertainty is then quantified by computing the ADD error <cit.> between the different ensemble poses or by a learned metric.
A general drawback of these approaches is that they require the inference of multiple networks, which can be done in parallel but still demands higher computational resources and may conflict with industrial constraints.
Against this background, we investigate a computationally
inexpensive approach that does not require any modification of the pose estimators, by comparing corresponding instance segmentations and rendered pose estimations in the 2D image space. In doing so, it can also be interpreted as an ensemble approach, but on the segmentation level instead of the pose level. Since the majority of state-of-the-art pose estimators already include an instance segmentation in their first stage, the existing instance segmentation can be used and no further computational effort is needed for an ensemble.
Our method can be categorized as a special case of the general render-and-compare approach, which is well established in the evaluation of object pose estimation. <cit.>, for example, compares image features of the rendered object with image features of the observed object via the cosine distance in an autoencoder architecture to evaluate pose candidates. <cit.>, on the other hand, uses a render-and-compare approach to assess the alignment of rendered object contours and image edges for pose validation. Closest to our investigated approach, the works of <cit.> also compare instance segmentations with their corresponding pose estimates by rendering in the 2D image space. However, their focus is on self-supervised retraining and, to the best of our knowledge, no other work explicitly addresses this approach for direct pose uncertainty quantification as we investigate it here.
§ PROBLEM FORMULATION AND EVALUATION METHODOLOGY
In the following, we describe the problem setting and propose a methodology to evaluate the properties of an uncertainty quantification method for 6D pose estimation for industrial robotic applications.
Consider the goal of a 6D pose estimation system to provide poses p̂_(i) for target objects that enable a robot to successfully grasp and place them. To this end, suppose a scene of known objects with an associated set of ground truth poses O = {p̅_(1), …, p̅_(O)} that are to be grasped by the robot. The 6D pose estimator provides a set of pose estimations P = {(p̂_(1), u_(1)), …, (p̂_(N), u_(N)) } with corresponding uncertainties u_(i). The uncertainties shall enable the pose estimator to reject pose estimations that are not accurate enough for a successful grasp (invalid) and to provide a set of filtered poses P_u = {p̂_(1), …, p̂_(M)} based on an uncertainty threshold u_T. In this context, an optimal uncertainty quantification achieves the objective that the filtered set contains only valid poses while, at the same time, no valid poses of the pose estimator are discarded.
To evaluate against this objective, we first compute a pose error with a pose error function e = E(p̅, p̂), which enables the categorization of whether a pose will lead to a successful grasp or not for a specific error threshold e_t. Furthermore, this enables a general assessment of the uncertainty quantification by computing the correlation between the uncertainties and the true error e, as in <cit.>. Further, we define a true positive (TP) for the case when
E(p̂_(i), p̅_(j)) ≤ e_t
a false positive (FP) when
E( p̂_(i), p̅_(j)) > e_t
and a false negative (FN) for each case where there is no or no true positive pose estimation for a ground truth pose. For the sake of simplicity, we assume at this point that the association between i-th estimated pose and the j-th ground truth pose is known. Given this categorization, we can now compute the common performance metrics average precision (AP) as
AP = #_TP(P_u) / ( #_TP(P_u) + #_FP(P_u) )
and the average recall (AR) as
AR = #_TP(P_u) / ( #_TP(P_u) + #_FN(O) )
where #(·) returns the number of TPs, FPs and FNs respectively with regard to a specific pose set. To additionally measure how many actual true positive poses in P were not rejected, we further compute the ratio of true positives, which we call the average recall uncertainty (ARU), in P_u and P as
ARU = #_TP(P_u)/#_TP(P).
With respect to our initially stated objective, an optimal uncertainty quantification for a given pose estimator would enable an AP of 1 and simultaneously an ARU of 1 for a specific error threshold. In that case, the provided set of estimated poses of the pose estimator is exploited as well as possible.
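As an illustration of this formulation, the following Python snippet computes AP, AR and ARU for a single scene from per-pose errors and uncertainties; the function name, the assumption that each TP estimate matches a distinct ground-truth pose, and the example values are our own simplifications and not part of the original formulation.

```python
import numpy as np

def scene_metrics(errors, uncertainties, n_ground_truth, e_t, u_T):
    """AP, AR and ARU for one scene.

    errors[i]        -- pose error E(p_hat_i, p_bar_j) of the i-th estimate
                        w.r.t. its associated ground-truth pose
    uncertainties[i] -- uncertainty u_i reported for the i-th estimate
    n_ground_truth   -- number of ground-truth objects in the scene
    e_t, u_T         -- error threshold and uncertainty threshold
    """
    errors = np.asarray(errors, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)

    keep = uncertainties <= u_T                # filtered pose set P_u
    tp_u = int(np.sum(errors[keep] <= e_t))    # TPs surviving the filtering
    fp_u = int(np.sum(errors[keep] > e_t))     # FPs surviving the filtering
    tp_all = int(np.sum(errors <= e_t))        # TPs of the unfiltered set P
    fn = max(n_ground_truth - tp_u, 0)         # ground truths without a TP

    ap = tp_u / (tp_u + fp_u) if (tp_u + fp_u) else 0.0
    ar = tp_u / (tp_u + fn) if (tp_u + fn) else 0.0
    aru = tp_u / tp_all if tp_all else 0.0
    return ap, ar, aru

# Example with made-up values: three estimates, four objects in the scene
print(scene_metrics([0.004, 0.020, 0.007], [0.1, 0.6, 0.2],
                    n_ground_truth=4, e_t=0.015, u_T=0.5))
```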
§ MASKVAL
To create an uncertainty, or respectively certainty, measure for the pose estimate of an object of interest, we compare the pose with its corresponding instance segmentation by rendering the object's model, transformed by the pose, into the 2D image plane and computing the corresponding intersection over union (IOU). Specifically, given an RGB or RGB-D image I^w × h, with width w and height h ∈ℕ, a pose estimator g provides a set of 6D poses P = {p̂_(1), …, p̂_(N)} and an instance segmentation network s provides a set of instance segmentations S = { is_(1), …, is_(K)}, where is ∈{0, 1}^w × h is a binary mask. Furthermore, we are given a renderer r(p̂_(i), M_(i), K, w, h) = (D_(i), v_(i)), where D ∈ℝ_≥ 0^w × h and v ∈ [0,1], which renders the depth of the target object model M_(i) transformed by the pose p̂_(i) into the 2D image space with camera matrix K and provides a visibility ratio v_(i) that reflects how much of the object is visible in the image given the field of view (FoV) limitations. Then we can compute the pairwise mask IOUs for the N poses and K instance segmentations as
iou_ik = ( ∑_n=1^w∑_m=1^h ( δ_1(D_(i)) ⊙ is_(k) )_nm ) / ( ∑_n=1^w∑_m=1^h ( δ_1(D_(i)) ⊕ is_(k) )_nm )
where we define ⊙ as the element-wise AND operator, ⊕ as the element-wise OR operator and
δ_1(D_(i)) = 1_{D_nm > 0}
is an indicator function that maps an input matrix to a binary output matrix, where an entry is 1 if D_nm is greater than 0 and 0 otherwise.
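As a minimal illustration, a NumPy sketch of this mask IOU between a rendered depth map and a binary instance mask could look as follows; the function and argument names are our own assumptions and not part of the original method description.

```python
import numpy as np

def mask_iou(rendered_depth, instance_mask):
    """IOU between the rendered object silhouette and an instance mask.

    rendered_depth -- (h, w) float array, 0 where no object was rendered
    instance_mask  -- (h, w) binary array from the instance segmentation
    """
    rendered_mask = rendered_depth > 0   # delta_1: silhouette of the rendered pose
    instance_mask = instance_mask.astype(bool)
    intersection = np.logical_and(rendered_mask, instance_mask).sum()
    union = np.logical_or(rendered_mask, instance_mask).sum()
    return intersection / union if union > 0 else 0.0
```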
Finally, using a matching algorithm J that takes as input the queried pose index i and the matrix iou ∈ [0,1]^N × K to establish the most likely pose-to-instance-segmentation association, we choose the pose certainty to be
J(i, iou) = iou_ik^* = c_(i)
with c_(i)∈ [0,1] and where J can be realized by a greedy algorithm, for example. The corresponding uncertainty u_(i) can then simply be retrieved by the relation
u_(i) = 1 - c_(i).
For the common case of a two-stage pose estimator, where the pose estimation builds directly on the instance segmentation, the association is known, the matching algorithm J can be omitted and the certainty simplifies to
iou_(ik; i=k) = c_(i).
To account for the case where the object is barely visible in the image due to FoV limitations, we assume that the mask IOU is less reliable as an uncertainty quantification and therefore we decrease the certainty by the visibility ratio and get the uncertainty as
u_(i) =
1 - c_(i) v_(i),   if v_(i) < α
1 - c_(i),   if v_(i) ≥ α
where α∈ [0,1] sets the sensitivity towards the visibility.
To get a visual impression of MaskVal, we depict an exemplary uncertainty quantification for a data sample of the robotic experiments from section <ref> in Fig. <ref>.
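Building on the mask IOU sketch above, a hedged end-to-end illustration of the MaskVal uncertainty, including a simple greedy realization of J and the visibility-based adjustment, might look as follows; the rendered depth maps and visibility ratios are assumed to come from an external renderer, and all names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def maskval_uncertainties(rendered_depths, visibilities, instance_masks, alpha=0.8):
    """Uncertainty per estimated pose from rendered silhouettes and instance masks.

    rendered_depths -- list of N (h, w) depth maps, one per estimated pose
    visibilities    -- list of N visibility ratios v_i in [0, 1]
    instance_masks  -- list of K (h, w) binary instance segmentations
    alpha           -- visibility threshold below which the certainty is damped
    """
    n, k = len(rendered_depths), len(instance_masks)
    iou = np.zeros((n, k))
    for i in range(n):
        for j in range(k):
            iou[i, j] = mask_iou(rendered_depths[i], instance_masks[j])

    # Greedy one-to-one association: repeatedly take the largest remaining IOU.
    certainty = np.zeros(n)
    remaining = iou.copy()
    for _ in range(min(n, k)):
        i, j = np.unravel_index(np.argmax(remaining), remaining.shape)
        if remaining[i, j] <= 0:
            break
        certainty[i] = remaining[i, j]
        remaining[i, :] = -1.0
        remaining[:, j] = -1.0

    # Visibility adjustment: damp the certainty when the object leaves the FoV.
    v = np.asarray(visibilities, dtype=float)
    adjusted = np.where(v < alpha, certainty * v, certainty)
    return 1.0 - adjusted
```

For a two-stage estimator whose poses stem directly from the instance masks, the greedy association can be dropped and the diagonal IOU values used directly as certainties.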
§ EXPERIMENTS
In the experiment section we evaluate and compare our method MaskVal against the recent state-of-the-art uncertainty quantification method for 6D poses from <cit.> on our objectives from section <ref>. We conduct the evaluation and comparison on a test dataset with known ground truth as well as on a robotic setup for two representative parts of the automotive internal logistics, namely antennas and handles, whose synthetic versions are depicted in Fig. <ref>. In this context, the antennas represent a rather difficult part due to their reflective surface and the absence of well distinctive features, whereas the handles represent a rather common simple part.
§.§ Datasets
We use only synthetic data for the training of all algorithms employed, as this is one of the most interesting scenarios for industry <cit.>.
Furthermore, we also use only synthetic data for the dataset-based evaluation of the uncertainty quantification methods. The reason for this is that it cannot be guaranteed that real annotated pose data are free of errors in the range of values that is essential to our evaluation. This makes a fine-grained evaluation of the uncertainty quantification methods difficult, as errors may stem not from the method itself but from underlying annotation errors. Therefore, we have only used synthetic data to exclude any such unfavorable effects. The experiments on the robotic setup shall compensate for the lack of real data at this point.
For the generation of the synthetic data, we have used the NVISII-based data generation pipeline of <cit.>. Using this pipeline, we have created two training and validation datasets, S1 and S2, as well as an overall test dataset for each part, as specified in Tab. <ref>.
The training and validation dataset S1 and the overall test dataset are created with the photorealistic lightweight synthetic scenes of <cit.>. The training and validation dataset S2 differs from S1 in that the objects are not spawned in a physical room with a physical background, but in front of a 2D image, which corresponds to the common render & paste approach <cit.>. To evaluate the uncertainty quantification methods under various conditions, the test dataset contains heterogeneous objects with clutter and occlusions as well as random textures on the target objects. Exemplary images of the datasets are depicted in Fig. <ref>.
§.§ Implementation Details
§.§.§ 6D Pose Estimation
For the underlying 6D pose estimation we use the state-of-the-art two-stage RGB-based pose estimator GDR-Net <cit.>, which is the winner of the BOP Challenge 2022 <cit.>. For the first stage of GDR-Net, the instance segmentation, we use the basic Mask R-CNN algorithm <cit.>. Both GDR-Net and Mask R-CNN were trained on dataset S1. Note that GDR-Net by itself does not provide an uncertainty quantification for its pose estimations.
§.§.§ Uncertainty Quantification via Pose Ensemble
We benchmark our method against the recent state-of-the-art uncertainty quantification method from <cit.>, which we call Ensemble-ADD, that uses a heterogeneous ensemble of 6D pose estimators to provide an uncertainty via a disagreement metric. Following their approach, we selected their well-performing ADD-based disagreement metric for our comparison and obtained a heterogeneous ensemble by training a second GDR-Net on a different dataset, namely dataset S2. Consistent with our approach, the GDR-Net that was trained on dataset S1 provides the final pose estimations. Since the ADD-based disagreement metric is unbounded, we normalize the obtained ADD disagreement for better comparability to obtain the uncertainty u in an interval of [0,1]. To do this, we choose the normalization minimum as the minimum ADD disagreement obtained on the test set and set the normalization maximum to 5 cm to exclude high outliers.
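A minimal sketch of this normalization step, assuming the ADD disagreements are given in meters, could look as follows; the clipping to [0, 1] and the names are our own assumptions.

```python
import numpy as np

def normalize_add_disagreement(add_disagreements, max_add=0.05):
    """Map unbounded ADD disagreements (in meters) to uncertainties in [0, 1]."""
    add = np.asarray(add_disagreements, dtype=float)
    min_add = add.min()                        # normalization minimum from the test set
    u = (add - min_add) / (max_add - min_add)  # 5 cm as normalization maximum
    return np.clip(u, 0.0, 1.0)
```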
§.§.§ MaskVal
Since GDR-Net is a two-stage pose estimator that already relies on an instance segmentation in its first stage, we can directly compare the poses with their corresponding instance segmentations provided by Mask R-CNN, as described in section <ref>. For the renderer we use the off-screen C++ renderer of the BOP-Toolkit for fast CPU-based rendering. Furthermore, we set the parameter α = 0.8, which means that the visibility ratio is taken into account when it drops below this value.
§.§ Metric Details for Dataset Evaluation
For realizing the general metric description of section <ref> for the dataset evaluation, we follow <cit.> and define E(p̂_(i), p̅_(i)) as the maximum distance error (MDD) as
E_MDD(p̂_(i), p̅_(i), ℳ_(i)) = max_x ∈ℳ_(i) ‖ p̂_(i) x - p̅_(i) x ‖_2
where a pose p is defined as a homogeneous transformation matrix consisting of a rotation matrix R ∈ SO(3) and a translation vector t ∈ℝ^3 and ℳ_i∈ℝ^N × 3 is the target object's model point cloud.
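A hedged NumPy sketch of this maximum distance error, for poses given as 4×4 homogeneous transformation matrices, could look like this; the function name is our own.

```python
import numpy as np

def max_distance_error(pose_est, pose_gt, model_points):
    """Maximum distance error between an estimated and a ground-truth pose.

    pose_est, pose_gt -- (4, 4) homogeneous transformation matrices
    model_points      -- (N, 3) model point cloud of the target object
    """
    pts_h = np.hstack([model_points, np.ones((model_points.shape[0], 1))])
    diff = (pts_h @ pose_est.T)[:, :3] - (pts_h @ pose_gt.T)[:, :3]
    return np.linalg.norm(diff, axis=1).max()
```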
For the realization of the performance metrics we also follow <cit.> and specify
AP = (1/N) ∑_n=1^N TP_u,n / ( TP_u,n + FP_u,n )
AR = (1/N) ∑_n=1^N TP_u,n / ( TP_u,n + FN_n )
ARU = (1/N) ∑_n=1^N TP_u,n / TP_n
where TP_u,n, FP_u,n and FN_n are the numbers of TPs, FPs and FNs in the filtered set of estimated poses for image n of the N images of the dataset, and TP_n is the number of TPs in the unfiltered set of poses. For the specific definition of a TP, FP and FN in image datasets, we follow <cit.> and also set the visibility threshold to θ_v = 0.85. This means that an object is only counted as a false negative if there is no provided pose estimation and, simultaneously, the object's visibility is equal to or greater than 85 %. Thereby the pose estimator is only penalised by FNs for objects for which it should definitely be able to provide a valid pose.
§.§ Results on Dataset
The results of Mask R-CNN on the test set are depicted in Tab. <ref>. Since Mask R-CNN was trained and tested on the same synthetic data domain, the results are, as expected, good.
To get an impression of the overall performance of GDR-Net trained on S1, we show the pose error distribution on the test set for the antenna and handle in a violin plot in Fig. <ref>. Although the majority of pose errors are quite small and will likely represent a successful grasp, there is nevertheless a significant number of poses that will likely lead to an unsuccessful grasp, including collisions with the target object or the environment. For an industrial deployment, such an error distribution is insufficient. Depending on the specific task and gripper combination, which only allows for a certain maximum error, the goal is to reliably discard all poses with an error greater than this, while keeping the poses with a smaller error.
In Fig. <ref> we show how well the uncertainty methods MaskVal and Ensemble-ADD are able to achieve this goal. Specifically, based on their provided uncertainties for the pose estimates, we have set the uncertainty threshold such that the AP of equation (<ref>) is equal to or greater than 0.99 for each error threshold e_t∈ [0, 0.03] m and plotted the corresponding AR and ARU curves. This means that along the entire AR and ARU curve, the AP is 0.99, which means that only 1 % of the filtered poses have a larger error. We chose an AP of 0.99 because it is a realistic assumption for industrial applications, but higher values may be required. In addition, we also plotted the average recall curve AR* of the unfiltered pose set, as commonly done since <cit.>. On this curve, the AP is much lower than 0.99, but it marks the optimum that the plotted AR curve could reach with an optimal uncertainty quantification method. In other words, if the AR curve were to reach the AR* curve, it would mean that no valid poses would be rejected while simultaneously having an AP of 0.99. Therefore, the provided pose set would have been best exploited by the uncertainty quantification method with respect to a given AP target.
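The following Python sketch illustrates, under our own simplifying assumptions, how such curves can be produced: for each error threshold, the loosest uncertainty threshold is sought for which the filtered pose set still reaches an AP of at least 0.99, and the corresponding AR and ARU are recorded. It reuses the hypothetical scene_metrics helper from above, here applied to a single pooled set of poses rather than to per-image averages.

```python
import numpy as np

def ar_aru_at_target_ap(errors, uncertainties, n_ground_truth,
                        e_thresholds, target_ap=0.99):
    """For each error threshold, pick the loosest uncertainty threshold that
    still yields AP >= target_ap and return the resulting AR and ARU."""
    candidate_u = np.unique(uncertainties)
    ar_curve, aru_curve = [], []
    for e_t in e_thresholds:
        best_ar, best_aru = 0.0, 0.0
        for u_T in candidate_u[::-1]:      # loosest thresholds first
            ap, ar, aru = scene_metrics(errors, uncertainties,
                                        n_ground_truth, e_t, u_T)
            if ap >= target_ap:
                best_ar, best_aru = ar, aru
                break
        ar_curve.append(best_ar)
        aru_curve.append(best_aru)
    return np.array(ar_curve), np.array(aru_curve)
```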
To summarize the performance of the uncertainty methods on these curves by a scalar value, we additionally compute the area under the curve (AUC) for AR and tabulate the results in Tab. <ref>. Furthermore, we provide the Spearman's rank correlation between the true error and the uncertainties, as in <cit.>. The Spearman's rank correlation also captures non-linear monotonic relationships, and a perfect correlation or inverse correlation is given by +1 or -1, respectively.
Regarding the results presented, the first thing to note is that it is possible with both MaskVal and Ensemble-ADD to filter the poses in such a way that an AP of 0.99 can be achieved. This means that the uncertainties calculated by the methods correlate sufficiently with the actual true error to reliably filter out insufficient poses. In this context, the plots also show that MaskVal performs significantly better in that its uncertainty values correlate better with the true error, so that fewer valid poses are discarded and the AR curves reach higher values earlier. These results are consistent with the Spearman's rank correlation results in Tab. <ref>.
Furthermore, it can be noted that the AR curve based on MaskVal achieves optimal values within an error range that is sufficient for corresponding object-gripper combinations in many applications. This has direct practical implications, as it means that, depending on the uncertainty quantification method used, the same pose estimator may or may not be usable for industrial applications with specific requirements on accuracy, reliability and cycle times. For example, for the antenna use case, which requires a pose accuracy of approximately 0.015 m, MaskVal allows near-optimal exploitation of the pose set, promising significantly shorter cycle times than Ensemble-ADD.
Note further that optimal AR values are reached before optimal ARU values, because false negatives are not counted for objects in the image with the set visibility threshold θ_v < 85 %.
§.§ Results on Robotic Experiment
To evaluate how the results on the dataset transfer to the real world, we conducted tests on a robotic setup, depicted in Fig. <ref>, which is the same as in <cit.>.
The setup represents a sequencing process of the internal logistics, where the goal of a robot is to pick objects from a conveyor belt and place them in a nearby sequence container. In this context, we count a TP if the robot is successful, a FP if the pose leads to a collision or an unsuccessful placing and a FN for all parts that remain on the conveyor belt after a time limit of 20 seconds per part. For more details on the setup, see <cit.>.
To perform the task, the robot performs a search run over the parts to be grasped. If an object is detected and a pose is estimated, the robot moves to the pose for grasping only if the uncertainty is equal or smaller than the specified pose uncertainty threshold u_t,g. If this uncertainty threshold is not reached, but the uncertainty is smaller or equal to a refinement threshold u_t,r, the robot moves to a refinement position based on the pose and captures an image of the object again. If the uncertainty is above both the grasping and the refinement uncertainty thresholds, the robot continues its original search.
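The grasp/refine/search decision described above reduces to a simple rule per detected object. The following sketch only illustrates that logic; the function and variable names are ours, not those of the deployed system:

```python
def decide_action(uncertainty, u_t_g, u_t_r):
    """Decision rule for one detected object, assuming u_t_g <= u_t_r."""
    if uncertainty <= u_t_g:
        return "grasp"            # pose trusted: move to the grasp pose
    if uncertainty <= u_t_r:
        return "refine"           # re-image the object from a refinement position
    return "continue_search"      # discard the pose and keep searching

# example with the MaskVal antenna thresholds reported below
print(decide_action(0.15, u_t_g=0.2, u_t_r=0.6))   # grasp
print(decide_action(0.45, u_t_g=0.2, u_t_r=0.6))   # refine
print(decide_action(0.80, u_t_g=0.2, u_t_r=0.6))   # continue_search
```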
In total, we performed 5 experiments per object and uncertainty method, where the robot had to sequence 8 objects of the same class per experiment. We deduced the corresponding grasping uncertainty thresholds u_t,g from the results of the synthetic test dataset, so that for an error threshold e_t = 0.015 m an AP of 0.99 is achieved. In this context, we obtained for the antenna a value of u_t,g = 0.2 for MaskVal and u_t,g = 0.1 for Ensemble-ADD and for the handle a value of u_t,g = 0.55 for MaskVal and u_t,g = 0.08 for Ensemble-ADD. Along with this we have set the upper uncertainty refinement threshold u_t,r to 0.6 for all variants. In addition, we ran a subset of the experiments without any uncertainty quantification for comparison. The results of the experiments are shown in Tab. <ref>.
As a main result, it can be stated that both uncertainty methods improved the overall performance compared to the plain GDR-Net. Furthermore, MaskVal achieved significantly better results than Ensemble-ADD and is close to optimal values for the antenna. Note that MaskVal, despite discarding pose estimates, also improved the AR performance while achieving near-optimal and optimal results on AP compared to the plain GDR-Net. We observed that the reason for this is that the robot with plain GDR-Net spent time on many unsuccessful paths in which the gripper reached into the void, which was not the case with MaskVal.
§.§ Interpretation of the Results
The basic idea of an ensemble method to provide an uncertainty quantification for 6D pose estimation is to relate the similarity of the ensemble outputs to the underlying pose error. The higher the similarity, the lower the uncertainty and vice versa. In this context, Ensemble-ADD tries to relate the similarity of two poses to the true underlying error, while MaskVal tries to do the same with the similarity of two instance segmentations. An essential condition for this approach to work well is that the ensemble part used for comparison is as accurate as possible. If the comparison part were an oracle, one could theoretically infer the actual error directly. From this consideration it also follows that Ensemble-ADD can theoretically produce better results than MaskVal, since MaskVal is not able to infer the exact true pose error even with an oracle instance segmentation. However, since instance segmentation can be considered less complex than 6D pose estimation, the results support the assumption that in practice it is easier to obtain a superior instance segmentation, and the above-mentioned disadvantage of information loss is relativized.
§ CONCLUSION
In this work we propose MaskVal, an uncertainty quantification method for 6D pose estimation of known objects that creates an uncertainty for a pose estimate by comparing the pose with its corresponding instance segmentation in an ensemble-like manner. We show that MaskVal outperforms a previous state-of-the-art ensemble-based method on both a dataset and a robotic setup. We further show that it enhances the performance of the state-of-the-art RGB-based 6D pose estimator GDR-Net, trained only on very lightweight synthetic data, such that deployment to an industrial sequencing process becomes feasible. This is an important implication, as the combination of lightweight synthetic data with an RGB-based estimator is very promising in terms of economic scalability for industrial use cases.
Furthermore, our work implies that the direct uncertainty quantification of two-stage pose estimators can be significantly improved, since the mask and the pose are already inherently available in such estimators.
As a conclusion, our results generally suggest a stronger focus on uncertainty quantification in the field of 6D pose estimation, on the one hand on the estimators themselves and on the other hand in established benchmarks.
§ ACKNOWLEDGMENT
The authors would like to thank Dino Knoll for the fruitful discussions and the Logistics Robotics Team at BMW and especially Jan Butz for their support. We used DeepL for light-editing such as for minor translations, spelling and grammar corrections.
IEEEtran
|
http://arxiv.org/abs/2409.02618v1 | 20240904112257 | Neuromorphic Heart Rate Monitors: Neural State Machines for Monotonic Change Detection | [
"Alessio Carpegna",
"Chiara De Luca",
"Federico Emanuele Pozzi",
"Alessandro Savino",
"Stefano Di Carlo",
"Giacomo Indiveri",
"Elisa Donati"
] | cs.ET | [
"cs.ET"
] |
Neuromorphic Heart Rate Monitors: Neural State Machines for Monotonic Change Detection
Alessio Carpegna1,
Chiara De Luca23,
Federico Emanuele Pozzi4,
Alessandro Savino1,
Stefano Di Carlo1,
Giacomo Indiveri2,
Elisa Donati2
1Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
2Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
3Digital Society Initiative, University of Zurich, Zurich, Switzerland
4Neurology Department, Fondazione IRCCS San Gerardo Dei Tintori, Monza, Italy
This paper has received funding from: The NEUROPULS project in the European Union’s Horizon Europe research and innovation programme under grant agreement No. 101070238; Bridge Fellowship founded by the Digital Society Initiative at University of Zurich (grant no.G-95017-01-12). We also thank the University of Zurich for supporting this project.
July 2024
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Detecting monotonic changes in heart rate (HR) is crucial for early identification of cardiac conditions and health management. This is particularly important for dementia patients, where HR trends can signal stress or agitation. Developing wearable technologies that can perform always-on monitoring of HRs is essential to effectively detect slow changes over extended periods of time.
However, designing compact electronic circuits that can monitor and process bio-signals continuously, and that can operate in a low-power regime to ensure long-lasting performance, is still an open challenge.
Neuromorphic technology offers an energy-efficient solution for real-time health monitoring. We propose a neuromorphic implementation of a Neural State Machine (NSM) network to encode different health states and switch between them based on the input stimuli. Our focus is on detecting monotonic state switches in electrocardiogram data to identify progressive HR increases. This innovative approach promises significant advancements in continuous health monitoring and management.
§ INTRODUCTION
Detecting and quantifying HR has emerged as a critical tool for identifying potential pathologies and providing valuable insights into cardiovascular health <cit.>. Among variable behaviors, monotonic HR changes indicate unidirectional trends, either increasing or decreasing, in average HR over time.
In non-clinical settings, such as general well-being and athletic training, tracking monotonic changes in HR is essential for evaluating physical fitness and recovery rates <cit.>.
In clinical settings, monitoring monotonic changes in HR is crucial for medical diagnosis and patient monitoring. A consistent monotonic increase or decrease in heart rate can be an early indicator of cardiac conditions such as arrhythmia, bradycardia, or tachycardia.
Early detection of these trends enables timely medical intervention, preventing more severe complications and improving patient outcomes <cit.>.
Understanding monotonic increases in HR is also particularly important in patients with dementia, as it can help detect physiological stress or discomfort that could precede or accompany agitation states <cit.>. By monitoring trends in dementia care and detecting early signs of agitation, healthcare providers and caregivers can intervene more promptly and effectively. This may potentially reduce the severity and frequency of agitation episodes, thereby improving the quality of care and the quality of life for dementia patients and reducing the burden on caregivers <cit.>.
Due to the importance and, at the same time, the difficulty of detecting these changes for human interpreters, an always-on system capable of continuously monitoring and detecting changes in average HR can be extremely impactful. Always-on devices for health monitoring must meet stringent requirements, including low power consumption and real-time processing. For patients with dementia, these devices must operate for extended periods without frequent recharging, as subjects might not remember to charge the device regularly. Therefore, a low-power device that can be used as a “wear and forget” system is essential to ensure effective health management.
To this end, neuromorphic technology offers a compelling solution, providing energy-efficient and reliable processing capabilities <cit.>. Sensory-processing systems built using mixed-signal neuromorphic circuits are well-suited to the demands of continuous health monitoring <cit.>. Example solutions have already been successfully applied in a wide range of wearable applications, such as ECG anomaly detection <cit.>, HFO detection <cit.>, and EMG decoding <cit.>.
In this paper, we present for the first time a neuromorphic implementation of an on-line signal processing system to specifically detect monotonic changes in average HR over a long period of time. We deploy computational primitives of analog neuron circuits, such as recurrent neural network models of finite state machines (i.e., NSM) <cit.> that can switch between states, each encoding different average HR conditions, independent of the time elapsed between state changes. Here, we focus on a monotonic state switch, tested on ECG, to detect a progressive HR increase. We validate our model's ability to track HR changes during activities (i.e., walking, cycling) by testing it on a dataset with varying patterns. We demonstrate how the model accurately follows monotonic changes (steady increase or decrease) while remaining inactive for non-monotonic signals. We further show how the robust computational properties of NSM allow the mixed-signal neuromorphic processor to produce accurate and reliable results. The low power consumption, 90 μ W, and real-time processing features of these neuromorphic circuits make them ideal candidates for building continuous, long-term health monitoring devices in both clinical and non-clinical settings.
§ MATERIALS AND METHODS
§.§ Neuromorphic Hardware
The neuromorphic processor used in this study is the DYNAP-SE chip <cit.>. It is a custom-designed asynchronous mixed-signal processor that features analog spiking neurons and synapses that mimic the biophysical properties of their biological counterparts in real-time. The chip comprises four cores, each containing 256 AdEx InF neurons. Each synapse can be configured as one of four types: slow/fast and inhibitory/excitatory. Each neuron includes a CAM block with 64 addresses, representing the pre-synaptic neurons to which it is connected. Digital peripheral asynchronous input/output logic circuits receive and transmit spikes via the AER communication protocol <cit.>. In this system, each neuron is assigned a unique address encoded as a digital word, which is transmitted using asynchronous digital circuits as soon as an event is generated. The chip features a fully asynchronous inter-core and inter-chip routing architecture, allowing flexible connectivity with microsecond precision even under heavy system loads.
§.§ Dataset and signal processing
To test our model, we selected an available dataset that includes left wrist PPG and chest ECG recordings taken while participants used an indoor treadmill and exercise bike, accompanied by simultaneous motion data from accelerometers and gyroscopes <cit.>. Participants performed various exercises, such as walking, light jogging/running on a treadmill, and pedaling at low and high resistance, each for up to 10 minutes. The dataset includes records from 8 participants, with most participants spending 4 to 6 minutes per activity. The signals were sampled at 256 Hz, and the ECG records were processed with a 50 Hz notch filter to remove mains interference.
An energy-based approach was devised for signal-to-spike conversion <cit.>. The proposed method comprises two stages: bandpass filtering and LIF neurons <cit.>.
Each input channel was processed through a series of bandpass filters. We used four bands: 60-82 (#0), 82-105 (#1), 105-128 (#2), and 128-150 (#3) bpm, employing fourth-order Butterworth filters. This covers a reasonable range of HR variation, spanning from a relaxed state to a tachycardiac one. Afterward, the signal was full-wave rectified and injected as a time-varying current into a simple LIF neuron.
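The two-stage conversion described above can be summarized with a short sketch. The band edges and the filter order follow the description above; the LIF parameters (gain, threshold, time constant) are illustrative placeholders and not the values used on the chip.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256.0                                          # ECG sampling rate (Hz)
BANDS_BPM = [(60, 82), (82, 105), (105, 128), (128, 150)]

def band_to_spikes(ecg, lo_bpm, hi_bpm, gain=50.0, v_th=1.0, tau=0.05):
    """Bandpass-filter the ECG, full-wave rectify it, and inject the result as a
    time-varying current into a simple LIF neuron; returns spike times in seconds."""
    sos = butter(4, [lo_bpm / 60.0, hi_bpm / 60.0], btype="bandpass", fs=FS, output="sos")
    current = np.abs(sosfiltfilt(sos, ecg))         # full-wave rectification
    dt, v, spikes = 1.0 / FS, 0.0, []
    for i, I in enumerate(current):
        v += dt * (-v / tau + gain * I)             # leaky integrate-and-fire dynamics
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

# spike trains for the four frequency channels (#0 ... #3), given an ECG array `ecg`:
# channels = [band_to_spikes(ecg, lo, hi) for lo, hi in BANDS_BPM]
```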
§.§ Network on chip
Figure <ref> shows the designed NSM. The architecture is based on EI-balanced populations of AdEx InF neurons. These populations can maintain sustained activity for extended periods, much larger than the time constants of the neurons themselves. This allows the network to implement a working memory capable of processing signals that change slowly compared to the time scales used within the chip.
The core of the network is a set of four EI-balanced populations, organized in a WTA architecture. The shared inhibition population (red) encodes the state of the network by sustaining the activity of a specific population while silencing all the others. This guarantees that when receiving the input signal, such as an ECG recording, only the population corresponding to the most active frequency range is activated.
A second set of EI-balanced populations, named Gating EI-balanced populations, is used to control the transition between different states. The goal is to target a progressive increase in HR, making the system robust to short fluctuations or temporary bpm decreases, following only the average trend of the input.
The mechanism of dis-inhibition <cit.> is exploited to ensure the network switches only to increasing states. The gating populations continuously inhibit the inactive elements of the WTA network, making them insensitive to any input stimulus. When a state is activated, it turns off (inhibits) the gating population of the subsequent state. This leads to the dis-inhibition of the next state, which makes it sensitive to the input. At the same time, it activates all the gating populations of previous states and the states following the next. This guarantees that the network follows only monotonic transitions between states and does not require precise tuning of the populations and connection synapses to make the system work. What matters is (i) that populations are sufficiently stable to maintain a sustained activity and (ii) that the inhibition between the gating populations and the WTA states is strong enough to silence any spiking activity completely. Finally, state 0 has no gating populations connected to it. This allows it to be used as a reset state: if the monotonic increase is only partial, and the HR returns to the relaxation range before reaching the alarming threshold, the network restarts, waiting for a new ramp-up in the input.
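At a purely behavioural level, the gating scheme above can be abstracted as a small state machine: only the next state is dis-inhibited, and the relaxation band resets the system. The sketch below captures this logic in software; it deliberately ignores the population dynamics, e.g., the fact that on chip a reset requires sustained relaxation-band activity rather than a single input event.

```python
class MonotonicNSM:
    """Software abstraction of the dis-inhibited, monotonic state switching."""
    def __init__(self, n_states=4):
        self.n_states = n_states
        self.state = 0                    # state 0 = relaxation / reset state

    def step(self, active_input):
        if active_input == 0:
            self.state = 0                # relaxation band resets the network
        elif active_input == self.state + 1 and active_input < self.n_states:
            self.state = active_input     # only the dis-inhibited next state can win
        return self.state                 # all other inputs are gated out

nsm = MonotonicNSM()
for stim in [3, 1, 0, 1, 2, 1, 3]:        # noisy, partially non-monotonic stimulation
    print(stim, "->", nsm.step(stim))     # follows only the monotonic ramp-up
```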
§ EXPERIMENTAL RESULTS
To obtain reliable computation in the NSM while minimizing the overall network size, we used 16 neurons per population, with a total requirement of 176 neurons. Table <ref> shows the average connection probabilities for different populations. The resulting network is compact, fitting well within a single core of the target chip <cit.>. Each population exhibits an average firing rate of approximately 50Hz. The total power consumption can be estimated by integrating the various contributions required for spike generation and communication <cit.>. The obtained average power consumption is around 90 μ W, indicating that our network successfully balances stable, sustained activity with low power consumption.
The behavior of the hardware system was first tested using control stimuli produced as 50Hz Poisson input spike trains. Figure <ref> shows the sustained activity of one EI-balanced population stimulated for one second. As shown, the network can keep a stable activity even when the input is removed.
§.§ WTA network: non-monotonic state transitions
As a second test, we evaluated the dynamics of the WTA network. Figure <ref>(a) shows the response of the network when stimulated with Poisson spike trains through a protocol covering all the possible transitions. In this case, each state is activated when receiving the corresponding input, regardless of the order of arrival. Once stimulated, the population becomes active, it suppresses the activity of all other populations, and stays active until a new input is provided to a different population.
§.§ NSM network: monotonic state transitions
When adding the gating connections, the response of the network becomes the one shown in Fig. <ref>(b). The stimulation protocol is the same as in Section <ref>. In this case, however, the network switches state only (i) towards a higher state, following the expected monotonic behavior, or (ii) towards state 0, which behaves as a reset state for the network, as mentioned in Section <ref>.
§.§ Network dynamics on real ECG signal
Finally, we evaluated the network performance using real ECG signals. Here, we demonstrate its robust and coherent ability to monitor and detect monotonic increases in HR. To achieve this, we fed the network pre-processed ECG recordings from the dataset described in Section <ref>. Specifically, we tested the network under two different conditions: during an intense physical activity, namely cycling, lasting approximately 10 minutes (Fig. <ref>(a)), and during a walking session of around 6 minutes (Fig. <ref>(b)). The first scenario correctly detects a complete ramp-up of the HR, leading to the activation of the alarming state. These results show how the network is robust to spurious transitions: at time zero, the stimulation on input 3, caused by noise in the recording upper band, is ignored since state 3 is completely inhibited by its gating population. The same effect can be observed between 300 and 400 seconds with the network ignoring noisy stimuli from input 1. Note that also the weak stimuli coming from input 0 are ignored both in state 1 and 2, despite the absence of a gating population in this case. This shows that the WTA dynamics by themselves are robust to spurious transitions and a complete relaxation is required to reset the network, thus restarting the monotonic increase. This is even more evident in the walking session: in this case, the ramp-up is only partial, given the lower effort required. The network remains stable in state 1, waiting for a further increase in the HR and ignoring the noise on input 0.
§ CONCLUSION
Our results show that a small and simple network, implemented with mixed-signal analog/digital neuromorphic circuits, can reliably monitor a monotonic HR trends over extended periods, paving the way for always-on health monitoring systems that are both efficient and long-lasting. The low power consumption of the neuromorphic circuits enables continuous operation for extended amounts of time, making it ideal for wearable devices for health monitoring. Future research will focus on replacing ECG with PPG signals measured at the wrist and addressing the challenges of noise and movement artifacts. Additionally, we will transition from healthy subjects to patients affected by neuropathologies, such as dementia, which impact HR over long periods, despite the absence of cardiac pathology.
This technology promises significant improvements in patient care and health management, especially in scenarios requiring constant monitoring and rapid response, thus enhancing overall quality of life.
|
http://arxiv.org/abs/2409.02655v1 | 20240904122959 | UNC-104 transport properties are robust and independent of changes in its cargo binding | [
"Amir Shee",
"Vidur Sabharwal",
"Sandhya P. Koushika",
"Amitabha Nandi",
"Debasish Chaudhuri"
] | physics.bio-ph | [
"physics.bio-ph",
"cond-mat.soft",
"cond-mat.stat-mech"
] |
[Amir Shee and Vidur Sabharwal contributed equally to this work]Northwestern Institute on Complex Systems and ESAM, Northwestern University, Evanston, IL 60208, USAInstitute of Physics, Sachivalaya Marg, Bhubaneswar-751005, Odisha, India
[Amir Shee and Vidur Sabharwal contributed equally to this work]Department of Biological Sciences, Tata Institute of Fundamental Research, Mumbai, IndiaDepartment of Biological Sciences, Tata Institute of Fundamental Research, Mumbai, India
[Author for correspondence: ][email protected] of Physics, IIT Bombay, Powai, Mumbai 400076, India[Author for correspondence: ][email protected] of Physics, Sachivalaya Marg, Bhubaneswar-751005, Odisha, IndiaHomi Bhabha National Institute, Anushakti Nagar, Mumbai 400094, India
§ ABSTRACT
Cargo distribution within eukaryotic cells relies on the active transport mechanisms driven by molecular motors. Despite their critical role, the intricate relationship between motor transport properties and cargo binding — and its impact on motor distribution — remains inadequately understood. Additionally, improper regulation of ubiquitination, a pivotal post-translational modification that affects protein degradation, activation, and localization, is associated with several neurodegenerative diseases.
Recent data showed that ubiquitination can alter motor-cargo binding of the Kinesin-3 motor UNC-104/KIF1A that transports synaptic vesicles.
To investigate how ubiquitin-like modifications affect motor protein function, particularly cargo binding, transport properties, and distribution, we utilize the PLM neuron of C. elegans as a model system.
Using fluorescent microscopy, we assess the distribution of cargo-bound UNC-104 motors along the axon and probe their dynamics using FRAP experiments. We model cargo binding kinetics with a Master equation and motor density dynamics using a Fokker-Planck approach.
Our combined experimental and theoretical analysis reveals that ubiquitin-like knockdowns enhance UNC-104’s cooperative binding to its cargo. However, these modifications do not affect UNC-104's transport properties, such as processivity and diffusivity. Thus, while ubiquitin-like modifications significantly impact the cargo-binding of UNC-104, they do not alter its transport dynamics, keeping the homeostatic distribution of UNC-104 unchanged.
UNC-104 transport properties are robust and independent of changes in its cargo binding
Debasish Chaudhuri
September 9, 2024
=======================================================================================
§ INTRODUCTION
Neurons are specialized cells that constitute the nervous system and are responsible for transmitting electrical and chemical signals across an organism. These cells typically have a cell body and neurites comprising a parallel arrangement of structurally polar microtubules with plus ends directed away from the cell body <cit.> (see Fig. <ref>(a)). These microtubules are used by motor proteins (MPs) like kinesins <cit.> and dyneins <cit.> as tracks to transport cargo along axons.
In axons, kinesins carry cargo anterogradely away from the cell body, whereas dyneins transport cargo retrogradely towards the cell body (e.g., Fig. <ref>(b) ) <cit.>.
The number of MPs on cargo affects its processive run length. Increasing the number of kinesins on cargo extends its duration of processive movement in vitro, but this is not always observed in vivo <cit.>.
Thus, the consequence of having more MPs on the cargo surface in vivo remains unclear. In a neuron, optimal cargo transport away from the cell body may be aided by the processive motion of MPs. However, previous studies have shown that cargo transport is frequently bidirectional <cit.>. Modeling reveals that bidirectional movement may aid in the circulation of cargo between the cell body and distal ends of the neuron <cit.>. Such bidirectional movements can arise from either (i) a tug-of-war between opposing MPs <cit.>, or (ii) coordination between opposing motors, where dynamic switching between opposing MPs can drive bidirectional motion <cit.>. Indeed, both of these hypotheses have been validated in different contexts <cit.>. Thus, while processive MP movement is required for transporting cargo, optimal cargo transport may require precise control over MP binding to cargo and MP activation.
Indeed, MP-cargo binding and MP activation are under the control of several pathways that modify the MPs <cit.>. Optimal cargo distribution by MPs depends on their binding to cargo through the cargo-binding domain and their efficient transport, which relies on the processive motion of the motor domain along the filament <cit.>.
If MPs are not attached to their conjugate filaments, they can either diffuse freely or hitchhike after binding to a cargo <cit.>. Note that MPs must bind to cargo and attach to conjugate filaments simultaneously to mediate active cargo transport. Three different possibilities may arise <cit.>: (i) good cargo binding but weak filament binding, (ii) good filament binding but weak cargo binding, and finally, (iii) processive binding to both cargo and filament. All these possible scenarios are illustrated in Fig. <ref>(c) and may arise in different contexts. As an example, good cargo binding but weak filament binding is a typical feature of dynein, where an individual dynein molecule may not generate sufficient processivity and force for reliable cargo movement, but teams of dynein can work together to move a cargo <cit.>. On the other hand, good filament binding but weak cargo binding has been observed in the slow transport of soluble cytoplasmic proteins – they move by intermittent binding to motor proteins for directed motion, interspersed with dissociation and consequent diffusion <cit.>.
Finally, both synaptic vesicle proteins and autophagosomes bind reliably to motor proteins that move processively along filaments: synaptic vesicle proteins move anterogradely, while autophagosomes move retrogradely <cit.>.
Arguably, the last possibility, where MPs processively bind to both cargo and filament, provides the most reliable transport toward the filament or axon termini but might reduce material availability closer to the cell body. Note that the production of new MPs near the cell body and their subsequent degradation already ensures a possible steady state. However, the actual shape of the homeostatic distribution of MPs and their dynamics also depend on their filament processivity and related transport properties.
Post-translational modifications (PTMs) of motor proteins (MPs) are essential for controlling their amount, activity, and cargo binding <cit.>. In particular, ubiquitination and ubiquitin-like PTMs play a key role in degrading and regulating MPs <cit.>. Disruptions in these PTMs are associated with various neurodegenerative diseases <cit.>. Additionally, changes in molecular motors and cargo transport are implicated in the development and progression of these diseases <cit.>. Therefore, understanding how ubiquitin and ubiquitin-like PTMs impact motor proteins can provide valuable insights into axonal cargo transport and highlight potential targets for therapeutic interventions in neurodegenerative diseases.
Recently, using a combination of live imaging of C. elegans neurons and theoretical analysis, we investigated how different ubiquitin-like modifications affect the cargo binding of Kinesin-3 MP UNC-104, a KIF1A ortholog <cit.>. The results suggested that MPs may bind to cargo cooperatively, and the degree of cooperative binding depends on the ubiquitin-like modifications.
In this paper, we utilize RNAi-mediated knockdowns and live imaging of C. elegans to study the impact of different levels of ubiquitin-like modifications on the kinematics of the UNC-104 motor in the mechanosensory neurons, the posterior lateral microtubule (PLM) cells.
With the help of a data collapse for cargo-bound MP distributions and theory, we characterize cooperative cargo binding and evaluate how the modifications influence this process. We explore the impact on the homeostatic UNC-104 MP distribution along the axons and study the UNC-104 dynamics using localized FRAP. We analyze the experimental findings using a theoretical model.
The experiments and theory allow us to uniquely determine UNC-104's effective diffusivity and drift.
The rest of the manuscript is organized as follows: Sec.<ref> details the experimental system and methods; Sec.<ref> covers the cargo-binding model and data-collapse of motor protein distributions; Sec.<ref> presents the theoretical model for motor protein dynamics along axons and uses experiments to evaluate transport properties; and Sec.<ref> summarizes our findings and provides an outlook.
§ MATERIAL AND METHODS
§.§ Worm maintenance
C. elegans was reared on NGM agar seeded with E. coli OP50 using standard practices <cit.>. For RNAi-mediated knockdown, NGM agar was prepared with 100 μ g μ L^-1 ampicillin and 1 mM IPTG. These plates were seeded with the appropriate dsRNA [against either control empty vector (referred to as wild type), uba-1 or fbxb-65] expressing E. coli HT115 bacteria and incubated for 1 day at 20°C before use. After 1 day, 4 C. elegans young adults were placed and incubated at 20°C. Their progeny at the L4 stage were then used for all experiments.
§.§ Microscopy
To image UNC-104::GFP distribution, we used a Zeiss LSM 880 equipped with a 63x/1.4 N.A. oil objective at a frame size of 1024x1024 pixels leading to a pixel size of 88 nm using a high sensitivity GaAsP detector illuminated with a 488 nm argon LASER and imaged with a spectral filter set to a range of 493 to 555 nm. The entire neuron length was then imaged by tiling across six regions with a ∼10% overlap at 5% LASER power at 488 nm and 2x flyback line-scan averaging. Simultaneously, soluble mScarlet was imaged using a spectral filter from 585 to 688 nm with a 561 nm DPSS LASER at 5% power at the same resolution. The images were automatically stitched using the Zen software.
§.§ Steady state distribution of UNC-104 along the axonal length
A line profile using a 3 pixels-wide spline fit polyline starting from the distal end of the PLM was traced up to the cell body, and the intensity and distance data were exported using FIJI <cit.>.
The exported data consists of the intensity profile along the entire axonal length for (a) UNC-104 tagged with GFP (UNC-104::GFP), and (b) mScarlet, which can diffuse freely and acts as a read-out of the axonal volume. Both (a) and (b) are obtained for the WT and ubiquitin-like modification knockdown animals. The exported data is further processed to obtain the final UNC-104 intensity profile, which is discussed in the next paragraph.
In Fig. <ref>, we show the experimental intensity profiles for the control (wild type) along the axonal length. The normalized average bare intensity profile I_unc of UNC-104::GFP is shown in Fig. <ref>(a), and the corresponding normalized average mScarlet fluorophore profile I_mscar is shown in Fig. <ref>(b). The intensity I_unc is expected to be proportional to the number of UNC-104 per unit length of the axon. This can increase due to an increase in the density of UNC-104 or an increase in the local axonal volume at a fixed UNC-104 density. Note that the fluorophore profile gives a measure of the local volume of the cell along each axon. It varies as the width of the axon changes from the cell body to the terminal. In order to get the correct readout of the UNC-104 localization per unit volume, we take the ratio I_unc/I_mscar of the intensity I_unc of UNC-104::GFP to mScarlet intensity I_mscar, which provides insight into the distribution of UNC-104 MPs along the axonal length by mitigating variability due to axonal volume. The homeostatic profile of I_unc/I_mscar averaged over tens of cells is shown in Fig. <ref>(c).
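The normalisation and averaging described above amounts to a few array operations per cell. The sketch below assumes each cell provides co-registered UNC-104::GFP and mScarlet line profiles along the axon; the variable and field names are illustrative.

```python
import numpy as np

def volume_corrected_profile(i_unc, i_mscar, x, x_grid):
    """Divide the UNC-104::GFP line profile by the mScarlet profile of the same
    cell (a proxy for local axonal volume) and resample onto a common grid."""
    ratio = np.asarray(i_unc, dtype=float) / np.asarray(i_mscar, dtype=float)
    return np.interp(x_grid, np.asarray(x, dtype=float), ratio)

def average_over_cells(cells, n_points=200):
    """Average the volume-corrected profiles of many cells on a normalised axis
    (0 = distal tip, 1 = cell body, following the tracing direction above)."""
    x_grid = np.linspace(0.0, 1.0, n_points)
    stack = [volume_corrected_profile(c["unc"], c["mscar"],
                                      np.asarray(c["x"]) / np.max(c["x"]), x_grid)
             for c in cells]
    return x_grid, np.mean(stack, axis=0)
```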
§.§ Fluorescence Recovery After Photobleaching (FRAP) assay
The imaging was done in an LSM 880 using a 40x/1.4 DIC M27 Oil objective at 1.5x zoom with a pixel size of 110 nm. Images were acquired every 500 ms in the ALM around 100 μm away from the cell body. The photobleaching was done after acquiring six pre-bleach images using a 488 nm diode laser at a region in the frame 50 μm in length using 80% power (3 mW maximum power at objective) with five iterations. The imaging was done for at least 200 frames post-bleach.
Post-acquisition, movies were analyzed using FIJI <cit.> by drawing a 3 pixels-wide spline fit polyline. The intensity profile at each time point was exported along with the ROI of bleaching and the bleach time point using a custom-built Python script.
The analysis of the FRAP data using our model framework to obtain the diffusion coefficients is discussed in Sec. <ref>
§ THEORETICAL MODEL TO STUDY THE STEADY STATE DISTRIBUTION OF CARGO-BOUND KINESIN MOTORS
Recent theoretical analysis of cargo-bound UNC-104 motor proteins indicated a cooperative cargo binding mechanism <cit.>.
We validate this cooperative binding model through a compelling data collapse by analyzing cargo puncta of different sizes, which correspond to varying numbers of bound motor proteins. Additionally, we assess how ubiquitin-like modifications influence the extent of this cooperative binding.
We consider the evolution for the probability P(n,t) of a cargo bound to n MPs at time t <cit.>,
Ṗ(n,t) = u_+(n-1)P(n-1,t) + u_-(n+1)P(n+1,t) - [u_+(n)+u_-(n)]P(n,t).
Here, u_+(n) and u_-(n) denote the rates of MP binding and unbinding, cross-linking the cargo to the microtubule. We use a pairwise detailed balance at the steady state, u_+(n-1)P_s(n-1)=u_-(n) P_s(n) and boundary condition j_+= u_-(1) P_s(1) with j_+ a diffusion-limited rate of MP cross-linking. This leads to the exact steady state given by the recurrence relation
P_s(n) = [j_+/u_-(n)] ∏_m=1^n-1 [u_+(m)/u_-(m)],
with the total number of motors N=∑_n=1^∞ n P_s(n).
Cross-linked MPs can detach with a constant rate β so that u_-(n)=β n. The presence of attached MPs can assist in further MP cross-linking such that u_+(n)=a_++b_+n, within linear approximation. Here, a_+ denotes the basal attachment rate while b_+ quantifies the strength of cooperative binding. Using these expressions and expanding up to linear order in 1/N we obtain the following closed-form expression <cit.>
H_s(n) = A n^ν e^-μ n,
where A = j_+ exp(μ), ν = a_+/β - 1 and μ = 1 - b_+/β.
The normalized distribution is P_s(n) = N_n^-1 H_s(n) with N_n = ∫_0^∞ H_s(n) dn = A μ^-(ν+1) Γ(1+ν). This leads to
P̃_s(μ n) = Γ(1+ν) μ^-1 P_s(n) = (μ n)^ν exp(-μ n).
From the distribution P_s(n), we note that the mode is ν/μ and the mean is n̄ = (1+ν)/μ, while its variance is given by σ_n^2 = (1+ν)/μ^2.
We compare the theoretical distribution with the experimental results, obtaining ν ≈ 1 and μ_WT=7.33× 10^-3, μ_uba1=4.79× 10^-3 and μ_fbxb65=3.05× 10^-3 <cit.>.
Using these values, we plot the full distributions P̃_s(μ n) to obtain a data collapse in Fig. <ref>. The plot of Eq. (<ref>) using a black solid line agrees well with the experimental data as shown in Fig. <ref>. The scaled quantity a_+/β ≈ 2 is independent of the RNAi, and the cooperative binding strength follows (b_+/β)_WT < (b_+/β)_uba1 < (b_+/β)_fbxb65.
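The steady state of this attachment/detachment kinetics can be checked with a short stochastic (Gillespie) simulation of the rates u_+(n) = a_+ + b_+ n and u_-(n) = β n. The parameter values below are illustrative — in particular, μ = 1 - b_+/β is taken much larger than the fitted values so that the run stays short — and are not the experimental estimates.

```python
import numpy as np

def gillespie_binding(a_plus, b_plus, beta, t_end, n0=0, seed=0):
    """Simulate the birth-death process with rates u_+(n)=a_plus+b_plus*n and
    u_-(n)=beta*n; returns event times and the number of bound motors n(t)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        up, down = a_plus + b_plus * n, beta * n
        t += rng.exponential(1.0 / (up + down))
        n += 1 if rng.random() * (up + down) < up else -1
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)

# a_+/β = 2 (i.e. ν = 1) and μ = 1 - b_+/β = 0.05 (illustrative, not the fitted μ)
t, n = gillespie_binding(a_plus=2.0, b_plus=0.95, beta=1.0, t_end=2000.0)
dwell, keep = np.diff(t), t[:-1] > 500.0          # time-weighted average after burn-in
mean_n = np.sum(n[:-1][keep] * dwell[keep]) / np.sum(dwell[keep])
print("simulated mean:", mean_n, " theory (1+ν)/μ:", 2.0 / 0.05)
```

The time-averaged occupancy converges to (1+ν)/μ, consistent with the steady-state result above.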
Moreover, Eq. (<ref>) gives the evolution of the average number of MPs n(t) cross-linking the cargo to the microtubule,
ṅ = a_+ + (b_+ - β) n(t).
Using the initial condition n(0)=n_0, we obtain
n(t)=n̅ - (n̅-n_0)exp(-t/τ).
As before, n̄ = (1+ν)/μ is the steady-state average, and τ = 1/(βμ) is the relaxation time. This relaxation time is controlled by cooperative binding and should show a slower approach for uba-1 and fbxb-65 RNAi that have reduced UNC-104 ubiquitin-like modification.
We note that the typical relaxation time for the control (wild type) is τ_WT=1/βμ_WT≈ 136.43 sec considering the constant detachment rate of MPs β = 1 sec^-1 <cit.>.
The relaxation time also determines the fluctuation correlation at the steady state,
⟨δn(t) δn(0)⟩ = σ_n^2 exp(-t/τ),
where σ_n^2 = (1+ν)/μ^2 ≈ 2/μ^2 is the variance of the steady-state distribution P_s(n). These predictions can be tested in future experiments.
§ THEORETICAL MODEL FOR EVOLUTION OF UNC-104 DENSITY PROFILE ALONG AXONS
The UNC-104 transport along microtubules aligned along axons can be approximated as a one-dimensional (1d) motion.
The MPs can either be bound to microtubules, performing active directed motion by hydrolyzing ATP, or diffuse passively when detached from the microtubule. Let us denote the bound and unbound fractions of MPs by ρ_b(x,t) and ρ_u(x,t), respectively. The bound fraction moves with an active drift velocity v_0. Here, we ignore the stochasticity of motion and the probability of back-stepping by neglecting the diffusion of the bound fraction. In contrast, the unbound MPs can only diffuse, with diffusion constant D_0. Considering the binding and unbinding kinetics in terms of the rates k_on and k_off, we get
∂_t ρ_b + v_0 ∂_x ρ_b = k_on ρ_u - k_off ρ_b,
∂_t ρ_u - D_0 ∂_x^2 ρ_u = -k_on ρ_u + k_off ρ_b.
At this stage, we ignored the synthesis and degradation of MPs, which will be incorporated later.
The above equations can be rewritten in terms of the total MP density ρ = ρ_b + ρ_u and the density difference m = ρ_b - ρ_u. The evolution of ρ follows a conserved continuity equation. In contrast, the evolution of m(x,t) gives
∂_t m + v_0 ∂_x ρ_b + D_0 ∂_x^2 ρ_u = 2 (k_on ρ_u - k_off ρ_b).
In the presence of the source term on the right-hand side of the above equation, one can perform an adiabatic elimination in the hydrodynamic limit to obtain ρ_u/ρ_b = k_off/k_on.
Using the processivity ø = k_on/(k_on + k_off) one can write ρ_b = ø ρ and ρ_u = (1-ø) ρ. This leads to the conserved dynamics ∂_t ρ + v ∂_x ρ - D ∂_x^2 ρ = 0, where the effective drift velocity and diffusivity of the total MP density are v = ø v_0 and D = (1-ø) D_0.
Now, we incorporate the other two source terms, the synthesis and degradation processes of MPs.
We consider the synthesis at the cell body with rate Q and a homogeneous degradation with rate γ. Thus, the evolution of the total concentration can be expressed as (see Fig. <ref>),
∂_t ρ(x,t) = - v ∂_x ρ + D ∂_x^2 ρ + Q δ(x) - γ ρ.
The Dirac-delta function δ(x) ensures that the motor proteins are synthesized near the cell body (x=0). Remarkably, the above equation is an example of a stochastic resetting process that attracted tremendous recent interest in statistical physics <cit.>. However, unlike the typical stochastic resetting examples, in the present context, the number of particles is not exactly conserved. The mean value of the total number of MPs n(t)=∫ dx ρ(x,t) in the steady state, n̄ = Q/γ, is determined by the synthesis and degradation rates Q and γ.
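Equation (<ref>) is straightforward to integrate numerically. The sketch below uses an explicit finite-volume scheme with injection at x=0 and a zero-flux boundary at x=L; the parameter values (D, v, Q, γ, L) are illustrative placeholders rather than the fitted ones.

```python
import numpy as np

def evolve_density(L=500.0, nx=251, D=4.0, v=0.05, Q=0.035, gamma=1e-4, t_end=5e4):
    """Integrate d_t rho = -v d_x rho + D d_x^2 rho + Q delta(x) - gamma rho with a
    reflecting (zero total current) boundary at x = L and a source Q entering at x = 0."""
    dx = L / (nx - 1)
    dt = 0.2 * dx**2 / D                       # diffusion-limited stable time step
    rho = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        # total current J = -D d_x rho + v rho evaluated at the interior cell faces
        J = -D * np.diff(rho) / dx + v * 0.5 * (rho[1:] + rho[:-1])
        J = np.concatenate(([Q], J, [0.0]))    # influx Q at x=0, zero flux at x=L
        rho -= dt * (np.diff(J) / dx + gamma * rho)
    return np.linspace(0.0, L, nx), rho

x, rho = evolve_density()
print("total MPs:", rho.sum() * (x[1] - x[0]), " expected Q/γ =", 0.035 / 1e-4)
```

At long times the total number of motors approaches Q/γ and the profile converges to the steady state derived below.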
§.§ Steady state distribution
We consider a reflective boundary condition at x=L, leading to -D ∂_x ρ(x,t)|_x=L + v ρ(x,t)|_x=L = 0. Thus, Eq. (<ref>) has the following steady-state solution (see Appendix <ref> for a detailed derivation):
ρ_s(x) = [Q λ_v λ e^x/λ_v / (2D sinh(L/λ))] [e^-(L-x)/λ (λ_v-λ) + e^(L-x)/λ (λ_v+λ)].
Here two effective length scales, λ_v = 2l_v and λ = l_γ/[1+(l_γ/λ_v)^2]^1/2, determine the steady-state profile. In these expressions, we used the characteristic length scales l_v = D/v and l_γ = √(D/γ).
A Laplace transform method can be employed to solve the dynamical equation Eq. (<ref>) in Laplace space. While an inverse transform to a closed-form expression could not be obtained, this solution allows us to determine the relaxation times towards the steady state. The slowest mode, of time-scale γ^-1, is determined by the degradation rate γ alone. The relaxation times towards steady state for the other, relatively faster modes are γ^-1 [1 + l_γ^2 (1/λ_v^2 + π^2 n^2/L^2)]^-1 for n=1,2,3,…; see Appendix <ref>.
The steady-state expression given by Eq. (<ref>) may be used to fit the experimental profiles. However, we note that there are four independent parameters (Q, D, v, and γ), which makes this comparison difficult. We note that γ ≈ 10^-4 s^-1, as known from earlier studies for KIF1A <cit.>. We further eliminate Q using the total number of MPs
𝒩 = ∫_0^L ρ_s(x) dx = Q λ^2 λ_v^2 / [D(λ_v^2-λ^2)],
to obtain a normalized steady-state distribution of MPs expressed in terms of the density profile ρ^𝒩_s(x) = ρ_s(x)/𝒩 and get
ρ^𝒩_s(x) = [(λ_v^2-λ^2) e^x/λ_v / (2 λ λ_v sinh(L/λ))] [e^-(L-x)/λ (λ_v-λ) + e^(L-x)/λ (λ_v+λ)].
Eq. (<ref>) has only two unknowns, namely D and v. First, we estimate D independently from the FRAP experiments, which are discussed in the next section. Second, we fit the normalized experimental distribution using Eq. (<ref>) to obtain the single fit parameter v with the known values of D. Finally, we estimate the values of Q by fitting the non-normalized experimental distribution with Eq. (<ref>), using the known values of D and v.
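The single-parameter fit for v can be performed directly with a least-squares routine. The sketch below encodes Eq. (<ref>) and fits v with D fixed; the numerical values of D, γ, L and the synthetic "data" are placeholders used only to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA, L = 1e-4, 500.0                               # 1/s and µm (illustrative)

def rho_s_norm(x, v, D):
    """Normalised steady-state profile of Eq. (<ref>)."""
    lam_v = 2.0 * D / v
    l_g = np.sqrt(D / GAMMA)
    lam = l_g / np.sqrt(1.0 + (l_g / lam_v) ** 2)
    pref = (lam_v**2 - lam**2) * np.exp(x / lam_v) / (2 * lam * lam_v * np.sinh(L / lam))
    return pref * ((lam_v - lam) * np.exp(-(L - x) / lam)
                   + (lam_v + lam) * np.exp((L - x) / lam))

D_frap = 4.0                                          # D fixed from the FRAP analysis
x = np.linspace(0.0, L, 200)
rng = np.random.default_rng(1)
data = rho_s_norm(x, 0.05, D_frap) * (1.0 + 0.05 * rng.normal(size=x.size))
(v_fit,), _ = curve_fit(lambda x, v: rho_s_norm(x, v, D_frap), x, data,
                        p0=[0.02], bounds=(1e-3, 1.0))
print("fitted v =", v_fit, "µm/s  ->  ø = v/v_0 ≈", v_fit / 1.0)
```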
§.§ FRAP Analysis
Although a numerical solution of Eq. (<ref>) for the evolution of the density profile can be obtained, the equation does not allow a simple closed-form solution.
To analyze the experimental results,
we use the following approach. In experiments, we normalize the evolution of the UNC-104 intensity profile by the homeostatic profile before photo-bleaching. The resultant profile is flat to begin with (before photo-bleaching). We observe the FRAP evolution with respect to the homeostatic profile. This evolution can be analyzed by simplifying the theoretical approach presented in Eq. (<ref>).
For this purpose, we consider the scaled evolution with respect to the theoretical steady state, ϕ(x,t) = ρ(x,t)/ρ_s(x), over a small domain corresponding to the FRAP window. This follows,
∂_t ϕ(x,t) = -v ∂_x ϕ(x,t) + D ∂_x^2 ϕ(x,t),
locally, where we began by ignoring the synthesis and degradation terms for simplicity. This leads to a uniform steady state ϕ_s(x)=1 in the finite domain. To model FRAP over a window of size 2a, we analyze the evolution towards steady-state starting from the initial condition
ϕ(x_0,0) = 0 for -a≤ x_0 ≤ a ,
= 1 for |x_0| > a.
The Green's function corresponding to Eq. (<ref>) is
G(x-x_0,t) = [1/√(4π D t)] e^-(x-x_0-v t)^2/(4 D t).
This, along with the initial condition in Eq. (<ref>), leads to the solution
ϕ(x,t) = (1/2) [2 - erf((a+vt-x)/√(4Dt)) - erf((a-vt+x)/√(4Dt))],
where erf(x) = (2/√π) ∫_0^x e^-s^2 ds. Now, bringing back the synthesis and degradation terms, the same analysis leads to a solution
ϕ(x,t) = (1/2) e^-γ t [2 - erf((a+vt-x)/√(4Dt)) - erf((a-vt+x)/√(4Dt))] + (Q Θ(t)/ρ_0)(1 - e^-γ t),
where ρ_0 = ρ_s(x=0). However, since the degradation rate is known to be small, the intensity decay due to it over the short FRAP duration is negligible, and Eq. (<ref>) reduces to Eq. (<ref>) apart from an additive constant Q/ρ_0. The recovery of the profile ϕ(x,t) is mediated by the effective diffusion D, degradation γ, and effective drift v.
We utilize the experimental intensity evolution during FRAP, after dividing by the homeostatic intensity profile, to estimate diffusion. We extract the diffusion coefficient by directly fitting Eq. (<ref>) to the evolution of this intensity profile. Finally, by averaging over many realizations, i.e., different cells, we obtain the diffusion coefficient D (Table <ref>).
While performing such fitting, one can neglect the small directed displacement v t of the local density profile due to drift over the relatively short recovery time t compared to the extent of the FRAP window w=2a, assuming v t ≪ w. However, as it turns out, a strong intensity fluctuation makes it difficult to estimate D reliably through this procedure.
To reduce such systematic error, we use the evolution of an average intensity over a small window near the minimum intensity spot at the beginning of FRAP (Fig. <ref>(a)). Setting x=0, v t ≪ a, and t≈ 0, Eq. (<ref>) gives
ϕ(0,t) = 1 - erf(a/√(4Dt)).
We estimate the diffusion constant D by computing the half recovery time t_1/2 of the center of the FRAP region defined over a small window of size ϵ=4 μm, where ϵ≪ 2a (considering ϵ window size as 10% of FRAP window size 2a) with 2a≈ 40 μm.
Thus, using the half-recovery condition ϕ(0,t_1/2) = 1/2 (see Fig. <ref>(b)), we find the values of D using
D = 0.275 (2a)^2/t_1/2.
In our experiments, the recovery intensity turns out to be less reliable at longer times due to large intensity fluctuations, and even determining t_1/2 can be difficult for some experimental data. For these, we use ϕ(0,t_1/4) = 1/4
(see Fig. <ref>(b)) and the corresponding relation
D = 0.094 (2a)^2/t_1/4.
We have estimated the effective diffusion coefficient D using both t_1/2 and t_1/4 and found that diffusion coefficients were comparable (see Fig. <ref>(c) and Table <ref> in Appendix <ref>).
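In practice, both estimators reduce to locating the first crossing of the recovery curve at the window centre and applying the two relations above. The following sketch illustrates this, together with a self-consistency check on the analytic recovery curve; the window size and D value are arbitrary test numbers.

```python
import numpy as np
from scipy.special import erf

def diffusion_from_recovery(t, phi_center, window=40.0):
    """Estimate D from the normalised recovery phi_center(t) at the centre of a
    FRAP window of total width 2a = window, via t_1/2 and t_1/4."""
    def crossing_time(level):
        idx = int(np.argmax(phi_center >= level))
        return t[idx] if phi_center[idx] >= level else np.nan
    t_half, t_quarter = crossing_time(0.5), crossing_time(0.25)
    return 0.275 * window**2 / t_half, 0.094 * window**2 / t_quarter

# self-consistency check on phi(0,t) = 1 - erf(a/sqrt(4 D t)) with D = 4 µm²/s, a = 20 µm
D_true, a = 4.0, 20.0
t = np.linspace(0.5, 600.0, 2000)
phi = 1.0 - erf(a / np.sqrt(4.0 * D_true * t))
print(diffusion_from_recovery(t, phi))       # both estimates return ≈ 4 µm²/s
```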
Using the estimated D values for the WT (both via t_1/2 and t_1/4), we compute ϕ(x,t) both numerically (solving Eq. (<ref>)) and analytically (using Eq. (<ref>)), neglecting the small advection. A comparison of the evolution at different time points of the FRAP experiments with these analytic estimates is shown in Fig. <ref>(a) (see Supplemental Material Movie-1 <cit.> for a comparison of the complete time evolution from experiment, numerical solutions and analytic expression). We also compare the theoretical time trajectories for ϕ(0,t) with the corresponding FRAP experiment; see Fig. <ref>(b). We note that for this specific case, ϕ(0,t_1/4) agrees better with the FRAP experiment at early times (up to ≲ 60 sec) and deviates at large times when the recovery intensity is unreliable.
For the fbxb-65 knockdown experiments, the D values estimated using t_1/4 are statistically more reliable than the estimate using t_1/2, while for the WT and uba-1 knockdown, the statistics are equivalent (see Appendix. <ref> for details). The corresponding distributions of D values, estimated using t_1/4, across different cells are compared between the WT and the two knockdown experiments in Fig. <ref>(c) (see also the first row of Table <ref>). Comparing the values of D between the WT and the knockdown experiments, we notice that the relative variation in the value of D across experiments is Δ D=(D_WT-D_KD)/D_WT≤ 10%, where WT and KD stand for wild type and RNAi treated ubiquitin-like knockdowns respectively.
The FRAP analysis thus suggests that ubiquitin-like knockdowns of either uba-1 or fbxb-65 do not affect the MP's diffusional transport properties. Interestingly, due to the exponential form of the steady-state profile ρ_s(x), a FRAP performed on top of this profile can lead to an apparent emergent retrograde bias in the recovery profile (see Appendix <ref> and Movie-2 in <cit.>). This is a general physical effect that could be misleading and is, therefore, crucial to remember while analyzing any FRAP data. This is discussed in detail using our FRAP analysis in Appendix <ref>.
§.§ Active transport is crucial in determining the steady-state intensity profiles
With the diffusion coefficients D and the degradation rate known, we can now determine the effective motor speed v from the experimental steady-state distribution of the UNC-104 motors along the axon. To make a comparative study with our 1d theory, we must ensure that the steady-state distribution of MPs is defined purely as a one-dimensional density profile.
This is discussed in Sec. <ref>, where we corrected for the local volume variability of axons by scaling the experimentally obtained homeostatic intensity profiles for the MPs by the corresponding mScarlet fluorophore profiles for a given cell. This density further averaged over n=15 cells (see Fig. <ref>(c)) gives a read-out for the steady-state local density profile _̊s(x). Finally, we normalize the distribution _̊s(x) to obtain ^̊𝒩_s(x) to eliminate the source term Q and is used for comparison with the theoretical model.
Recall the effective motor speed is defined as v=ø v_0. The velocity v_0 of individual MPs obtained from live imaging of UNC-104::GFP-decorated puncta are estimated to be ∼ 1 μm/s <cit.>. Assuming that the individual motor protein properties do not change, the unknown parameter v can, therefore, be treated as a read-out for the value of processivity ø. Thus, to estimate ø (via v), we further normalize the experimental steady-state distributions with respect to the spatially integrated intensity and compare them to Eq. (<ref>). The results are shown in Fig. <ref>(a), <ref>(b), and <ref>(c), which display a good fit in all the cases. Note that here we use only a single parameter fit to v and hence obtain ø. The values of the processivity ø obtained from these fits are reported in Table <ref>.
Several intriguing observations emerge from the fit results. First, it is evident that the transport parameters D and v remain largely unchanged by ubiquitin-like knockdowns, maintaining similar approximate values across all cases, including the wild type.
Secondly, the processivity (ø≈ 0.05) is low, indicating that only 5% of available motor proteins participate in transport.
This also indicates that the intrinsic diffusivity D_0=D/(1-ø) is mainly due to the detached fraction of MPs and is approximately equal to the measured values.
Additionally, we note that the integrated value of the MP density 𝒩 = ∫_0^L ρ_s(x) dx (see Eq. (<ref>)) remains almost unchanged. Since this term provides a read-out for the source term Q, we conclude that Q is invariant to uba-1 and fbxb-65 knockdown cells [Q_WT≈ 0.035, Q_uba-1≈ 0.035, Q_fbxb-65≈ 0.033.].
The processivity ø turns out to be the most important parameter regulating the steady-state distribution. From the expression in Eq. (<ref>), we note that if ø=0, the length scale of the steady-state exponential distribution is set by l_γ, and the profile attains its maximum at the source (if l_γ < L) and its minimum at the posterior end (see Fig. <ref>(a)). Upon slowly increasing ø, the minimum position of ρ_s(x) starts moving from x=L to lower values of x (see Fig. <ref>(b)–(d)) until, at ø=0.05, the ρ^𝒩_s(x) profile is already reversed and the length scale is set by the competition between l_γ and l_v (see Fig. <ref>(c)).
This profile is similar to our experimental observation. It gets even sharper at higher ø, with most of the MPs accumulating near the distal end; see Fig. <ref>(d). Moreover, as the inset of Fig. <ref>(d) shows,
the minimum of the profile shifts back to positive values at larger ø. The inversion of the density profile, with the maximum shifting from the proximal to the distal end with increasing processivity, is further illustrated using a heat-map in Fig. <ref>(e). Fig. <ref>(f) shows the non-monotonic variation of the minimum density and the location of this minimum with increasing ø.
We can thus conclude that despite ubiquitin-like knockdowns of MP having a drastic effect on cooperative binding, the steady-state MP distribution regulated by the effective diffusivity and speed, which in turn are dependent on the effective processivity, remains completely unaffected.
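The processivity sweep underlying Fig. <ref>(e,f) can be reproduced with a short numerical sketch of the normalised profile; the values of D_0, v_0, γ and L below are illustrative placeholders, not the fitted parameters.

```python
import numpy as np

def rho_s_norm(x, v, D, gamma=1e-4, L=500.0):
    """Normalised steady-state profile of Eq. (<ref>)."""
    lam_v = 2.0 * D / v
    l_g = np.sqrt(D / gamma)
    lam = l_g / np.sqrt(1.0 + (l_g / lam_v) ** 2)
    pref = (lam_v**2 - lam**2) * np.exp(x / lam_v) / (2 * lam * lam_v * np.sinh(L / lam))
    return pref * ((lam_v - lam) * np.exp(-(L - x) / lam)
                   + (lam_v + lam) * np.exp((L - x) / lam))

D0, v0, L = 4.0, 1.0, 500.0                     # µm²/s, µm/s, µm (illustrative)
x = np.linspace(0.0, L, 2001)
for phi in (0.01, 0.03, 0.05, 0.10, 0.20):      # ø sweep: v = ø v_0, D = (1-ø) D_0
    rho = rho_s_norm(x, v=phi * v0, D=(1.0 - phi) * D0)
    i = int(np.argmin(rho))
    print(f"ø={phi:.2f}: minimum at x={x[i]:.0f} µm, ρ_min={rho[i]:.2e}")
```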
§ DISCUSSION
Directed axonal transport of synaptic vesicles by MPs is crucial for the proper functioning of an organism, failure of which is associated with neurodegenerative disorders <cit.>.
To ensure a robust and reliable transport of cargo, it is important to understand what controls the steady-state distributions of the MPs and what modifications can affect the motor's intrinsic transport properties.
As was shown earlier and further consolidated here using a convincing data collapse, knockdown of the E1 activating enzyme uba-1 or the E3 ligase fbxb-65 causes increased cooperative binding of UNC-104 to cargo. In this work, we further show that despite such a change in cargo binding, the same ubiquitin-like knockdowns leave the UNC-104 distribution along the axon and their transport properties, like effective drift velocity and diffusivity, unchanged.
We quantified UNC-104 steady-state distribution in wild-type neurons and upon RNAi knockdowns of either uba-1 or fbxb-65. The steady-state profile shows a larger accumulation of MPs in the distal end for all the cases. Normalization of the raw intensity data with the corresponding axonal volume showed a nice collapse of the data from different RNAi knockdowns on each other, indicating that the homeostatic distribution remains unchanged under ubiquitin-like knockdowns.
We proposed a theoretical model which clearly explains the nature of such a homeostatic profile. Synthesis, degradation, and transport, both directed drift and diffusion, control the steady-state density profile along the axon.
Our model allowed us to estimate the drift and diffusion by utilizing the homeostatic distribution of UNC-104 along the axons and the analysis of FRAP results.
We first estimated the diffusion coefficient D from the FRAP experiments. Using the known D values, we fitted the steady-state profiles to obtain estimates for the effective drift velocity v, which is a readout for the processivity ø=v/v_0. Our analysis clearly shows that both the diffusivity and the processivity remain unchanged under ubiquitin-like knockdowns.
Since these kinesin-3 MPs can hitchhike on retrogradely moving cargo carried by dynein MPs, the kinetic properties measured using the overall fluorescent intensity from UNC-104 not only depend on their active motion along microtubules but also include the impact of such retrograde flux.
We find that ubiquitin-like knockdowns increase cooperative cargo binding, thereby potentially increasing the impact of the hitchhiked retrograde motion. Consistent with this, we previously observed a higher occurrence of UNC-104 retrograde movement under ubiquitin-like knockdowns <cit.>. Surprisingly, the effective kinetic properties of UNC-104 remain unchanged under the same ubiquitin-like knockdowns. Such independence suggests a subtle regulation nullifying the effect of potentially increased retrograde flux. Note that the cargo-bound MPs may support each other's microtubule processivity by sheer localization on the cargo.
§ AUTHOR CONTRIBUTIONS
All authors contributed to the design of the research. VS performed experiments under the supervision of SPK. DC and AN designed the theoretical framework and analysis; AS performed numerical calculations; AS, AN, and DC analyzed the data. All authors discussed the results and wrote the manuscript.
§ DATA AVAILABILITY
All data that support the findings of this study are included within the article (and any supplementary files).
§ ACKNOWLEDGEMENTS
AN and DC thank Madan Rao for insightful comments. DC thanks Sanjib Sabhapandit and Fernando Peruani for useful discussions. DC acknowledges research grants from the Department of Atomic Energy (OM no. 1603/2/2020/IoP/R&D-II/15028) and Science and Engineering Research Board (SERB), India (MTR/2019/000750) and thanks the International Centre for Theoretical Sciences (ICTS-TIFR), Bangalore, for an Associateship. AN acknowledges SERB India (MTR/2023/000507) for financial support and thanks the Max-Planck Institute for the Physics of Complex Systems (MPIPKS), Dresden, for hospitality and support during summer visit in 2024. Research in SPK’s lab is supported by grants from DAE (OM no. 1303/2/2019/R&D-II/DAE/2079; Project identification number RTI4003 dated 11.02.2020), PRISM (12-R&D-IMS- 5.02-0202), and a Howard Hughes Medical Institute International Early Career Scientist Grant (55007425). AS acknowledges partial financial support from the John Templeton Foundation, Grant 62213.
§ STEADY-STATE DISTRIBUTIONS
We recast the governing equation (<ref>) in terms of the current J = -(D ∂_x - v) ρ and consider a uniform degradation rate γ(x) = γ = constant,
∂_t ρ = -∂_x J - γ ρ,
with the boundary conditions that the total current at x = 0 is J_x=0 = Q, where Q is the constant source, and the total current at x = L is J_x=L = 0.
The steady-state limit (∂_t ρ = 0) gives
∂_x^2 ρ_s - (1/l_v) ∂_x ρ_s - (1/l_γ^2) ρ_s = 0,
where the constants l_v = D/v and l_γ = √(D/γ). Substituting ρ_s ∝ e^{κ x} into Eq. (<ref>) leads to the quadratic equation κ^2 - κ/l_v - 1/l_γ^2 = 0 with solutions κ = [l_γ ± (4 l_v^2 + l_γ^2)^{1/2}] / (2 l_v l_γ). Thus, the steady-state density is
ρ_s(x) = e^{x/(2 l_v)} [ 𝒜 e^{x √(4 l_v^2 + l_γ^2)/(2 l_v l_γ)} + ℬ e^{-x √(4 l_v^2 + l_γ^2)/(2 l_v l_γ)} ].
We recast again with λ_v = 2 l_v and λ = 2 l_v l_γ / √(4 l_v^2 + l_γ^2) = l_γ (1 + l_γ^2/λ_v^2)^{-1/2}. Thus,
ρ_s(x) = e^{x/λ_v} [ 𝒜 e^{x/λ} + ℬ e^{-x/λ} ].
Now, the total current condition at x = 0, J_x=0 = Q, leads to
(λ - λ_v) 𝒜 + (λ + λ_v) ℬ = Q λ λ_v / D,
and the total current condition at x = L, J_x=L = 0, leads to
(λ - λ_v) e^{L/λ} 𝒜 + (λ + λ_v) e^{-L/λ} ℬ = 0.
Solving Eqs. (<ref>) and (<ref>), we get
𝒜 = Q λ λ_v / [ D (λ_v - λ) (e^{2L/λ} - 1) ],
ℬ = Q λ λ_v e^{2L/λ} / [ D (λ + λ_v) (e^{2L/λ} - 1) ].
Substituting 𝒜 and ℬ back into Eq. (<ref>), we get
ρ_s(x) = [ Q λ_v λ / ( D (e^{2L/λ} - 1) ) ] e^{x/λ_v} [ e^{x/λ}/(λ_v - λ) + e^{(2L-x)/λ}/(λ_v + λ) ],
which can be rewritten as
ρ_s(x) = [ Q λ_v λ e^{x/λ_v} / ( 2 D sinh(L/λ) ) ] [ e^{-(L-x)/λ}/(λ_v - λ) + e^{(L-x)/λ}/(λ_v + λ) ].
This is precisely Eq. (<ref>) of the main text.
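As a quick numerical illustration, the following minimal Python sketch evaluates this closed-form profile and checks that it satisfies the steady-state equation in the bulk. The values of Q, L, and γ follow the WT numbers quoted in the FRAP appendix below; D and v are placeholder values standing in for the fitted transport coefficients.

```python
import numpy as np

# Parameters: Q, L, gamma follow the values quoted in the FRAP appendix below;
# D and v are placeholder values for the fitted transport coefficients.
D, v, gamma = 2.0, 0.05, 1e-4            # um^2/s, um/s, 1/s
Q, L = 0.035, 274.74                     # source (1/s), axon length (um)

l_v, l_g = D / v, np.sqrt(D / gamma)
lam_v = 2 * l_v
lam = 2 * l_v * l_g / np.sqrt(4 * l_v**2 + l_g**2)

def rho_s(x):
    """Closed-form steady-state profile derived above."""
    pref = Q * lam_v * lam * np.exp(x / lam_v) / (2 * D * np.sinh(L / lam))
    return pref * (np.exp(-(L - x) / lam) / (lam_v - lam)
                   + np.exp((L - x) / lam) / (lam_v + lam))

# consistency check: D rho'' - v rho' - gamma rho should vanish in the bulk
x = np.linspace(0.0, L, 4001)
h = x[1] - x[0]
r = rho_s(x)
d1 = (r[2:] - r[:-2]) / (2 * h)
d2 = (r[2:] - 2 * r[1:-1] + r[:-2]) / h**2
residual = D * d2 - v * d1 - gamma * r[1:-1]
print("max |residual| / max |drift term|:", np.abs(residual).max() / np.abs(v * d1).max())
```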
§ LAPLACE TRANSFORM, SINGULARITIES AND SLOW DYNAMICS
The equation of motion for the density can be solved directly using the Laplace transform ρ̃(x,s) = ∫_0^∞ dt' e^{-s t'} ρ(x,t') to get
ρ̃(x,s) = [ Q λ_v λ(s) e^{x/λ_v} / ( s D (e^{2L/λ(s)} - 1) ) ] [ e^{x/λ(s)}/(λ_v - λ(s)) + e^{(2L-x)/λ(s)}/(λ_v + λ(s)) ],
where λ(s) = l_γ (1 + l_γ^2/λ_v^2 + s/γ)^{-1/2}. It is easy to see that the above expression gives the steady-state result ρ_s(x) = lim_{s→0} s ρ̃(x,s).
The time dependence is given by the inverse Laplace transform, ρ(x,t) = (1/(2πi)) ∫_{c-i∞}^{c+i∞} ds e^{st} ρ̃(x,s), with c > 0 so that the function remains analytic on the right half of the complex plane. To perform the contour integration, we first need to analyze the structure of singularities of the integrand.
Due to the presence of λ(s) in the numerator, ρ̃(x,s) has a branch point at s_b = -γ (1 + l_γ^2/λ_v^2). Moreover, it has simple poles where (e^{2L/λ(s)} - 1) = 0; using 1 = e^{i 2nπ}, this gives simple poles at s_n = s_b - γ (l_γ/L)^2 n^2 π^2 with n = 0, 1, 2, …, all lying on the real axis and to the left of s_b.
Further, as can be seen from Eq. (<ref>), ρ̃(x,s) has a simple pole at s = 0, and one at s^* = s_b + γ l_γ^2/λ_v^2 = -γ < 0 (using λ_v = ±λ(s)), which lies between s = 0 and s = s_b.
Thus, the solution is
ρ(x,t) = ρ_s(x) + ρ_1(x) e^{s^* t} + ∑_{n=0}^{∞} ρ_n(x) e^{s_n t} + e^{s_b t} L^{-1}[ ρ̃(x, s + s_b) ],
so that ρ_1(x) = lim_{s→s^*} (s - s^*) ρ̃(x,s), etc.
The last term is the inverse Laplace transform of the frequency-shifted function, arising due to the branch point at s_b < 0.
The slowest mode of the evolution, lim_{t→∞} [ρ(x,t) - ρ_s(x)] → ρ_1(x) e^{-γ t}, is controlled by the degradation rate. The reason for this becomes immediately clear by noting from Eq. (<ref>) that the total quantity of MPs, N = ∫ dx ρ(x), evolves as dN/dt = Q - γ N.
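As a quick illustration of this last point, the short sketch below shows how deviations of the total content from its steady-state value N_s = Q/γ decay on the 1/γ timescale; Q and γ follow the WT values quoted below.

```python
import numpy as np

# Total MP content obeys dN/dt = Q - gamma*N, so deviations from the steady
# state N_s = Q/gamma decay as exp(-gamma*t); values follow the WT case quoted below.
Q, gamma = 0.035, 1e-4                     # source (1/s) and degradation rate (1/s)
t = np.linspace(0.0, 5 / gamma, 1001)      # five relaxation times
N = Q / gamma * (1 - np.exp(-gamma * t))   # solution starting from N(0) = 0
dev = Q / gamma - N                        # deviation from the steady state
print("steady-state content N_s =", Q / gamma)
print("deviation at t = 1/gamma relative to t = 0:", dev[np.searchsorted(t, 1 / gamma)] / dev[0])
```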
§ COMPARISON OF DIFFUSIVITIES FROM FRAP ANALYSIS
In Fig. <ref>, we show the scatter plot of diffusivities estimated using both t_1/2 (Eq. (<ref>)) and t_1/4 (Eq. (<ref>)) for the control (WT) along with the uba-1 and fbxb-65 knockdown cells.
The corresponding comparison for the mean values of effective diffusivities with their standard errors is shown in Table <ref>.
For both control (WT) and uba-1 knockdown, D is estimated by averaging over 22 independent experimental realizations. However, for the case of fbxb-65 knockdown, D measured via t_1/4 is averaged over all 23 realizations, whereas the one via t_1/2 was averaged over 9 realizations. This is because the D estimate via t_1/2 was relatively unreliable in 14 cases due to large intensity fluctuations.
§ FURTHER INSIGHTS FROM THE MODEL: APPARENT RETROGRADE MOTION UNDER FRAP
To characterize the intensity recovery dynamics observed in the FRAP experiments, we numerically integrate Eq. (<ref>)
for an initial condition that is equivalent to the density profile just after FRAP. This is constructed as follows: starting with the steady-state profile defined in [0,L], we choose a small window of size w=2a=Δ L around the center of the profile where the local density is set to zero (Fig. <ref>). The parameters are chosen corresponding to the WT case: we use the transport coefficients D and v from Table <ref> and the degradation rate γ=10^-4sec^-1. We use the typical axon length L=274.74 μm and FRAP window size w=2a=Δ L=39.25 μm. To integrate Eq. (<ref>), we use time step dt=0.0001 sec, set the source term Q=0.035 sec^-1 at x=0 and use a reflecting boundary at x=L. The resultant MP distribution is finally normalized over the axonal length so that ∫_0^L ρ^ N(x) dx = 1.
In Fig. <ref>(a)-(d), the recovery of the density profile after FRAP is shown at four different times and the corresponding local currents J^N(x,t) are shown in Fig. <ref>(e)-(f) respectively. The approach toward the steady-state profile (shown by the dotted lines in Fig. <ref>(a)-(d)) is evident from
these figures, and at long times (see Fig. <ref>(d)), the MP distribution almost overlaps with the steady-state profile. The recovery is further evident from the time evolution of the local current J^𝒩(x,t). Just after the FRAP, J^𝒩(x,t) exhibits large positive and negative peaks corresponding to the two edges of the FRAP region (see Fig. <ref>(e)). As time progresses, due to the recovery of the depleted density, the peaks start receding until they disappear at long times (see Fig. <ref>(f)-(h)), when J^𝒩(x,t) approaches the steady-state value. It is important to note that at steady state, J^𝒩_s(x) is not constant due to the presence of a non-zero source Q at one end. We also focus on the relaxation of the minimum density value ρ^𝒩_min (see Fig. <ref>(i)) and its location x_min (see Fig. <ref>(j)) along the axon. This indicates that the shape of the steady-state density profile is almost recovered at around t∼ 5 min. Moreover, the decrease of x_min before vanishing captures an apparent retrograde motion of the location of the FRAP minimum against the direction of MP flux. Similar behavior is observed in some of our FRAP experiments. This apparent retrograde movement is entirely due to the non-uniform shape of the steady-state distribution to which the perturbation under FRAP relaxes back. In the current example, the relaxation of the bleached profile towards the approximately exponential steady-state profile leads to the apparent retrograde motion of the FRAP center. This is further demonstrated in Movie-2 presented in the Supplemental Material <cit.>, where we numerically show the FRAP region's evolution with the minimum density point labeled by a filled square. Such apparent movement of the FRAP center is expected in any FRAP experiment on non-homogeneous steady-state profiles and, thus, must be taken into consideration while analyzing and interpreting such experimental results.
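For concreteness, the following minimal explicit finite-difference sketch reproduces the recovery dynamics just described. Q, L, w, and γ follow the quoted WT values; D and v are placeholders for the fitted transport coefficients, and the source/boundary handling and grid are simplified assumptions.

```python
import numpy as np

# Explicit finite-difference sketch of the post-bleach recovery described above.
D, v, gamma = 2.0, 0.05, 1e-4             # um^2/s, um/s, 1/s (D, v assumed)
Q, L, w = 0.035, 274.74, 39.25            # source (1/s), axon length (um), FRAP window (um)
dx, dt = 0.5, 0.01                        # grid spacing (um), time step (s)
x = np.arange(0.0, L + dx, dx)

# pre-bleach state: the analytic steady-state profile from the previous appendix
l_v, l_g = D / v, np.sqrt(D / gamma)
lam_v = 2 * l_v
lam = 2 * l_v * l_g / np.sqrt(4 * l_v**2 + l_g**2)
rho = (Q * lam_v * lam * np.exp(x / lam_v) / (2 * D * np.sinh(L / lam))
       * (np.exp(-(L - x) / lam) / (lam_v - lam) + np.exp((L - x) / lam) / (lam_v + lam)))

mask = np.abs(x - L / 2) < w / 2
rho[mask] = 0.0                           # bleach a window of width w around the center

def step(rho):
    lap = np.zeros_like(rho); adv = np.zeros_like(rho)
    lap[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dx**2
    adv[1:-1] = (rho[1:-1] - rho[:-2]) / dx      # upwind derivative for v > 0
    new = rho + dt * (D * lap - v * adv - gamma * rho)
    new[0] += dt * Q / dx                        # constant source at x = 0
    new[-1] = new[-2]                            # reflecting boundary at x = L
    return new

for n in range(int(300 / dt) + 1):               # 5 min of post-bleach evolution
    if n % int(60 / dt) == 0:                    # report once per minute
        i_min = np.argmin(rho[mask])
        print(f"t = {n * dt:5.0f} s   rho_min = {rho[mask].min():.3e}   x_min = {x[mask][i_min]:.1f} um")
    rho = step(rho)
```

Tracking the minimum of the bleached window in this way shows the same qualitative behavior as in the figures: the minimum fills in over a few minutes while its location drifts against the direction of the MP flux.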
[1] P. R. Burton and J. L. Paige, "Polarity of axoplasmic microtubules in the olfactory nerve of the frog," Proc. Natl. Acad. Sci. USA 78, 3269 (1981).
[2] S. R. Heidemann, J. M. Landers, and M. A. Hamborg, "Polarity orientation of axonal microtubules," J. Cell Biol. 91, 661 (1981).
[3] R. D. Vale, T. S. Reese, and M. P. Sheetz, "Identification of a novel force-generating protein, kinesin, involved in microtubule-based motility," Cell 42, 39 (1985).
[4] R. B. Vallee, J. S. Wall, B. M. Paschal, and H. S. Shpetner, "Microtubule-associated protein 1C from brain is a two-headed cytosolic dynein," Nature 332, 561 (1988).
[5] R. D. Vale, "The molecular motor toolbox for intracellular transport," Cell 112, 467 (2003).
[6] A. K. Rai, A. Rai, A. J. Ramaiya, R. Jha, and R. Mallik, "Molecular adaptations allow dynein to generate large collective forces inside cells," Cell 152, 172 (2013).
[7] J. Beeg, S. Klumpp, R. Dimova, R. S. Gracia, E. Unger, and R. Lipowsky, "Transport of beads by several kinesin motors," Biophys. J. 94, 532 (2008).
[8] G. T. Shubeita, S. L. Tran, J. Xu, M. Vershinin, S. Cermelli, S. L. Cotton, M. A. Welte, and S. P. Gross, "Consequences of motor copy number on the intracellular transport of kinesin-1-driven lipid droplets," Cell 135, 1098 (2008).
[9] J. O. Wilson, A. D. Zaragoza, and J. Xu, "Tuning ensemble-averaged cargo run length via fractional change in mean kinesin number," Phys. Biol. 18, 046004 (2021).
[10] M. Martin, S. J. Iyadurai, A. Gassman, J. G. Gindhart Jr., T. S. Hays, and W. M. Saxton, "Cytoplasmic dynein, the dynactin complex, and kinesin are interdependent and essential for fast axonal transport," Mol. Biol. Cell 10, 3717 (1999).
[11] I. A. Kuznetsov and A. V. Kuznetsov, "Bidirectional, unlike unidirectional transport, allows transporting axonal cargos against their concentration gradient," J. Theor. Biol. 546, 111161 (2022).
[12] V. Soppina, A. K. Rai, A. J. Ramaiya, P. Barak, and R. Mallik, "Tug-of-war between dissimilar teams of microtubule motors regulates transport and fission of endosomes," Proc. Natl. Acad. Sci. USA 106, 19381 (2009).
[13] A. G. Hendricks, E. Perlson, J. L. Ross, H. W. Schroeder, M. Tokito, and E. L. Holzbaur, "Motor coordination via a tug-of-war mechanism drives bidirectional vesicle transport," Curr. Biol. 20, 697 (2010).
[14] M. J. I. Müller, S. Klumpp, and R. Lipowsky, "Tug-of-war as a cooperative mechanism for bidirectional cargo transport by molecular motors," Proc. Natl. Acad. Sci. USA 105, 4609 (2008).
[15] P. D. Chowdary, D. L. Che, K. Zhang, and B. Cui, "Retrograde NGF axonal transport: motor coordination in the unidirectional motility regime," Biophys. J. 108, 2691 (2015).
[16] O. Munoz and S. Klumpp, "Tug-of-war and coordination in bidirectional transport by molecular motors," J. Phys. Chem. B 126, 7957 (2022).
[17] L. Guillaud, R. Wong, and N. Hirokawa, "Disruption of KIF17-Mint1 interaction by CaMKII-dependent phosphorylation: a molecular model of kinesin-cargo release," Nat. Cell Biol. 10, 19 (2008).
[18] T. Mathieson, H. Franken, J. Kosinski, et al., "Systematic analysis of protein turnover in primary cells," Nat. Commun. 9, 689 (2018).
[19] J. T. Kevenaar, S. Bianchi, M. van Spronsen, et al., "Kinesin-binding protein controls microtubule dynamics and cargo trafficking by regulating kinesin motor activity," Curr. Biol. 26, 849 (2016).
[20] C. S. Mitchell and R. H. Lee, "Cargo distributions differentiate pathological axonal transport impairments," J. Theor. Biol. 300, 277 (2012).
[21] T. L. Blasius, N. Reed, B. M. Slepchenko, and K. J. Verhey, "Recycling of kinesin-1 motors by diffusion after transport," PLoS ONE 8, e76081 (2013).
[22] D. R. Klopfenstein, M. Tomishige, N. Stuurman, and R. D. Vale, "Role of phosphatidylinositol(4,5)bisphosphate organization in membrane transport by the Unc104 kinesin motor," Cell 109, 347 (2002).
[23] F. S. Rizalar, M. T. Lucht, A. Petzoldt, et al., "Phosphatidylinositol 3,5-bisphosphate facilitates axonal vesicle transport and presynapse assembly," Science 382, 223 (2023).
[24] J. Kumar, B. C. Choudhary, R. Metpally, Q. Zheng, M. L. Nonet, S. Ramanathan, D. R. Klopfenstein, and S. P. Koushika, "The Caenorhabditis elegans kinesin-3 motor UNC-104/KIF1A is degraded upon loss of specific binding to cargo," PLoS Genet. 6, e1001200 (2010).
[25] Y. Tang, D. Scott, U. Das, D. Gitler, A. Ganguly, and S. Roy, "Fast vesicle transport is required for the slow axonal transport of synapsin," J. Neurosci. 33, 15362 (2013).
[26] S. Roy, M. J. Winton, M. M. Black, J. Q. Trojanowski, and V. M.-Y. Lee, "Rapid and intermittent cotransport of slow component-b proteins," J. Neurosci. 27, 3131 (2007).
[27] J. Li, R. Jahn, and A. Dahlström, "Rab3a, a small GTP-binding protein, undergoes fast anterograde transport but not retrograde transport in neurons," Eur. J. Cell Biol. 67, 297 (1995).
[28] S. Maday, K. E. Wallace, and E. L. F. Holzbaur, "Autophagosomes initiate distally and mature during transport toward the cell soma in primary neurons," J. Cell Biol. 196, 407 (2012).
[29] H. Matthies, R. J. Miller, and H. C. Palfrey, "Calmodulin binding to and cAMP-dependent phosphorylation of kinesin light chains modulate kinesin ATPase activity," J. Biol. Chem. 268, 11176 (1993).
[30] D. M. Gordon and D. M. Roof, "Degradation of the kinesin Kip1p at anaphase onset is mediated by the anaphase-promoting complex and Cdc20p," Proc. Natl. Acad. Sci. USA 98, 12515 (2001).
[31] V. Sabharwal, S. P. P. Boyanapalli, A. Shee, M. L. Nonet, A. Nandi, D. Chaudhuri, and S. P. Koushika, "F-box protein FBXB-65 regulates anterograde transport of the kinesin-3 motor UNC-104 through a PTM near its cargo-binding PH domain," J. Cell Sci. 137, jcs261553 (2024).
[32] G. Perry, R. Friedman, G. Shaw, and V. Chau, "Ubiquitin is detected in neurofibrillary tangles and senile plaque neurites of Alzheimer disease brains," Proc. Natl. Acad. Sci. USA 84, 3033 (1987).
[33] S. Nakazawa, D. Oikawa, R. Ishii, et al., "Linear ubiquitination is involved in the pathogenesis of optineurin-associated amyotrophic lateral sclerosis," Nat. Commun. 7, 12547 (2016).
[34] J. S. Steffan, N. Agrawal, J. Pallos, et al., "SUMO modification of huntingtin and Huntington's disease pathology," Science 304, 100 (2004).
[35] S. Gunawardena, L.-S. Her, R. G. Brusch, et al., "Disruption of axonal transport by loss of huntingtin or expression of pathogenic polyQ proteins in Drosophila," Neuron 40, 25 (2003).
[36] B. H. LaMonte, K. E. Wallace, B. A. Holloway, et al., "Disruption of dynein/dynactin inhibits axonal transport in motor neurons causing late-onset progressive degeneration," Neuron 34, 715 (2002).
[37] E. Reid, M. Kloos, A. Ashley-Koch, et al., "A kinesin heavy chain (KIF5A) mutation in hereditary spastic paraplegia (SPG10)," Am. J. Hum. Genet. 71, 1189 (2002).
[38] C. Zhao, J. Takita, Y. Tanaka, et al., "Charcot-Marie-Tooth disease type 2A caused by mutation in a microtubule motor KIF1Bβ," Cell 105, 587 (2001).
[39] S. Brenner, "The genetics of Caenorhabditis elegans," Genetics 77, 71 (1974).
[40] J. Schindelin, I. Arganda-Carreras, E. Frise, et al., "Fiji: an open-source platform for biological-image analysis," Nat. Methods 9, 676 (2012).
[41] C. W. Gardiner, Handbook of Stochastic Methods (Springer, 1985), p. 437.
[42] D. Chaudhuri, P. Borowski, and M. Zapotocky, "Model of fasciculation and sorting in mixed populations of axons," Phys. Rev. E 84, 021908 (2011).
[43] S. M. Block, C. L. Asbury, J. W. Shaevitz, and M. J. Lang, "Probing the kinesin reaction cycle with a 2D optical force clamp," Proc. Natl. Acad. Sci. USA 100, 2351 (2003).
[44] M. R. Evans, S. N. Majumdar, and G. Schehr, "Stochastic resetting and applications," J. Phys. A: Math. Theor. 53, 193001 (2020).
[45] L. D. Cohen, R. Zuchman, O. Sorokina, et al., "Metabolic turnover of synaptic proteins: kinetics, interdependencies and implications for synaptic maintenance," PLoS ONE 8, e63191 (2013).
[46] E. F. Fornasiero, S. Mandad, H. Wildhagen, et al., "Precisely measured protein lifetimes in the mouse brain reveal differences across tissues and subcellular fractions," Nat. Commun. 9, 4230 (2018).
[47] See Supplemental Material at [publisher will insert URL] for a description of the simulation movies.
[48] Q_WT ≈ 0.035, Q_uba-1 ≈ 0.035, Q_fbxb-65 ≈ 0.033.
|
http://arxiv.org/abs/2409.03474v1 | 20240905123630 | Hemispherical Antenna Array Architecture for High-Altitude Platform Stations (HAPS) for Uniform Capacity Provision | [
"Omid Abbasi",
"Halim Yanikomeroglu",
"Georges Kaddoum"
] | cs.IT | [
"cs.IT",
"cs.NI",
"math.IT"
] |
Hemispherical Antenna
Array Architecture for High-Altitude Platform Stations (HAPS) for
Uniform Capacity Provision
Omid Abbasi, Senior Member, IEEE, Halim Yanikomeroglu, Fellow, IEEE, Georges Kaddoum, Senior Member, IEEE
O. Abbasi and H. Yanikomeroglu are with Non-Terrestrial Networks (NTN) Lab, Department of Systems and
Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. E-mail: [email protected]; [email protected]. (Corresponding author: Omid Abbasi.)
G. Kaddoum is with Department of Electrical Engineering, École de Technologie Supérieure (ETS), Université du Québec, Montréal, QC H3C 1K3, Canada. E-mail: [email protected].
A preliminary version of this work was presented at the 2024 IEEE Wireless Communications and Networking Conference (WCNC) <cit.>.
September 9, 2024
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this paper, we present a novel hemispherical antenna array (HAA) designed for high-altitude platform stations (HAPS). A significant limitation of traditional rectangular antenna arrays for HAPS is that their antenna elements are oriented downward, resulting in low gains for distant users. Cylindrical antenna arrays were introduced to mitigate this drawback; however, their antenna elements face the horizon, leading to suboptimal gains for users located beneath the HAPS.
To address these challenges, in this study, we introduce our HAA.
An HAA's antenna elements are strategically distributed across the surface of a hemisphere to ensure that each user is directly aligned with specific antenna elements.
To maximize users’ minimum signal-to-interference-plus-noise ratio (SINR), we formulate an optimization problem. After performing analog beamforming, we introduce an antenna selection algorithm
and show that this method achieves optimality when a substantial number of antenna elements are selected for each user.
Additionally, we employ the bisection method to determine the optimal power allocation for each user. Our simulation results convincingly demonstrate that the proposed HAA outperforms the conventional arrays, and provides uniform rates across the entire coverage area.
With a 20 MHz communication bandwidth, and a 50 dBm total power, the proposed approach reaches sum rates of 14 Gbps.
Hemispherical, antenna array, uniform capacity, HAPS.
§ INTRODUCTION
§.§ Background
The high-altitude platform station (HAPS) has emerged as a prominent contender for integration in the forthcoming 6th generation (6G) and beyond, as has been evidenced by recent research <cit.>. To date, HAPS has showcased its versatility across a broad spectrum of applications, including communication <cit.>, computing <cit.>, localization <cit.>, and sensing <cit.>. HAPS [In this paper, we use HAPS for both singular and plural forms.] are conventionally positioned in the stratosphere at an altitude of approximately 20 km, and maintain a quasi-stationary position relative to Earth <cit.>. This strategic deployment enables them to offer line-of-sight (LoS) communication with an expansive coverage radius spanning 50-500 km. Furthermore, to ensure uninterrupted operation, HAPS installations come equipped with robust computing resources and integrated battery systems <cit.>. Recent research (see, e.g., <cit.>) has explored integrating HAPS-based computing as a promising extension of edge computing. Meanwhile, the authors of <cit.> envisioned HAPS as an instrumental technology for communication, computing, caching, and sensing in next-generation aerial delivery networks.
HAPS used as international mobile telecommunication (IMT) base stations (BSs) are referred to as HIBS <cit.>. In recent years, the use of HAPS for communication has garnered significant attention in both academic <cit.> and industrial <cit.> sectors.
In <cit.>, for instance, the authors envisaged HAPS as super macro BSs to provide connectivity across a wide range of applications. Unlike conventional HAPS systems that are primarily used to provide extensive coverage for remote areas and disaster recovery, the authors of <cit.> envisioned deploying their proposed HAPS in densely populated metropolitan areas.
Additionally, in <cit.>, an HAPS was proposed to support the backhauling of aerial BSs. The HAPS's capabilities as an HIBS are explored in more details in <cit.> and <cit.>, where specifications and performance characteristics are thoroughly investigated.
§.§ State of the Art
In the literature on HAPS, two primary types of antennas to establish ground cells have been proposed. The first and more conventional category is aperture-type antennas, which are positioned beneath a HAPS to generate beams, as discussed in <cit.>. Each antenna is responsible for creating one beam and, consequently, one cell. To establish cells in different locations, each of multiple antennas needs to be mechanically steered toward their respective designated spot.
The second category is array-type antennas, which are arranged in a geometric grid pattern to form multiple beams (see <cit.>). In this latter category, beamforming allows beams to be steered towards various locations. Conducting beamforming on the HAPS level enables generating multiple beams in different directions. When a substantial number of antenna elements is employed, these beams can achieve a high array gain. To effectively manage intercell
interference, both types of antennas can operate with various frequency reuse factors.
Elements in antenna arrays can be arranged in various architectures. In <cit.>, the authors proposed planar antenna arrays for conventional HAPS systems. These planar arrays can be square or rectangular. For instance, in <cit.>, the authors explored cellular communication using a HAPS equipped with a rectangular antenna array (RAA). The results of the aforementioned study demonstrated feasibility of constructing cells with a radius as small as 100 meters using square arrays measuring less than 12 meters× 12 meters and a frequency band of 2 GHz. In <cit.>, the authors presented an analysis of the capacity of both sparse users and hotspot users in a massive multiple-input multiple-output (MIMO) system implemented with an RAA and a HAPS. Furthermore, <cit.> showcased that a HAPS equipped with a planar array can achieve data rates of up to 1 Gbps.
In another relevant study <cit.>, a user-clustered opportunistic beamforming approach was proposed for a HAPS equipped with a uniform linear array. Likewise, the authors of <cit.> and <cit.> incorporated a HAPS with cylindrical antenna array (CAA) in a massive MIMO system to achieve ultra-wide coverage and high capacity. The aforementioned work demonstrated that the proposed system can achieve more than twice the capacity of conventional massive MIMO systems with planar arrays.
In <cit.>, the authors proposed a hexagonal antenna array featuring a downward-facing panel serving the center cell and six outward-facing panels serving six outer cells. The study mentioned above revealed that the proposed system can attain a downlink sector throughput of 26 Mbps with a radius of 100 km and a bandwidth of 20 MHz.
Considering that the number of users to be served frequently far exceeds the number of beams created, user scheduling must be employed in various domains, including time, frequency, code, and power. Additionally, high channel correlation among users may hinder their spatial separation and necessitate user scheduling. In <cit.>, the authors proposed using a beamformer partitioned into two parts: a dominant part directed towards the user cluster and a random part to enhance user fairness. Slowly varying expected channel correlation was chosen as the metric for user clustering. It is interesting to note that in this work, only one user from each cluster was assumed to be served by a given beam in a resource block. The other users in that beam are served in other resource blocks, leaving only inter-beam interference. The authors of <cit.> proposed a two-stage precoding design for the Ricean channel in HAPS massive MIMO systems.
In contrast, in <cit.> and <cit.>, all users in a given cluster were served in the same resource blocks which resulted in both inter-beam and intra-beam interference. In <cit.>, a 3-D massive MIMO system with a HAPS equipped with a 2-D RAA was investigated for air-to-ground transmission. Using leveraged slow-varying parameters, such as channel correlation and the angles of departure of users (UEs), the authors proposed a two-layer location-assisted precoding scheme for downlink transmission, followed by UE clustering. The results demonstrated that location-assisted precoding significantly outperforms matched filter precoding.
The authors of <cit.> explored user grouping and beamforming for HAPS-equipped massive MIMO systems. They introduced user grouping based on the average chordal distance and an outer beamformer that took into account the user's statistical eigenmode. The authors found that their proposed scheme outperformed conventional channel correlation matrix-based schemes.
Another relevant investigation <cit.> introduced a beamspace HAPS-NOMA scheme with a RAA was proposed that enabled the non-orthogonal multiple access (NOMA) scheme to simultaneously serve multiple users in each beam. The authors obtained a zero-forcing digital precoder based on an equivalent channel.
Furthermore, in <cit.>, a cylindrical massive MIMO system was introduced to enhance the capacity and coverage of a HAPS. Each of the beams was allocated to only one user.
The authors of <cit.> and <cit.> developed a beamforming method that considered the movements of the solar panel in HAPS systems. They created cells using a CAA with the users in each beam served in different resource blocks. Finally, in <cit.>, the cell configuration in HAPS systems was optimized considering a frequency reuse factor of one.
§.§ Motivation and Contributions
RAAs are ideal for users located directly beneath a HAPS, as all their antenna elements are oriented downward, so maximum gain is achieved in this direction. However, for users situated at a considerable distance from the HAPS, each antenna element's gain becomes negligible, thereby rendering RAAs suboptimal.
To address this challenge, the authors of <cit.> proposed a CAA. In their innovative design, some antenna elements were placed on the facade of a cylindrical structure, while others were located on a circular surface beneath the cylinder. This arrangement enabled maximizing the utility of antenna elements for users located at greater distances. Nevertheless, it's worth noting that their proposed architecture was not optimal, as many users still did not have a direct view of antenna elements to obtain maximum gains.
In this paper, aiming to mitigate the limitations of RAA and CAA systems, we introduce a hemispherical antenna array (HAA). In our proposed scheme, each user is directly aligned with specific antenna elements to ensure they receive the maximum gain possible from these elements. Simulation results demonstrate that our proposed HAA system outperforms RAA and CAA systems. Furthermore, in contrast to the baseline schemes, the proposed scheme achieves uniform data rates across the entire coverage area.
The main contributions of this paper can be summarized as follows:
* We propose an HAA scheme where antenna elements are strategically placed over the surface of a hemisphere. A subset of these elements is selected to create focused beams on the ground. The number of elements selected determines the size of the resulting beam.
* In the proposed HAA system, we derive the achievable rate for each user when the number of elements selected is relatively high.
* To optimize the system's performance, we formulate an optimization problem that aims to maximize users' minimum SINR while adhering to constraints regarding the total power available at the HAPS and the number of elements selected for each user.
* Our system employs analog beamforming techniques that capitalize on steering vectors of antenna elements selected for each user.
* We introduce an antenna selection algorithm that takes into account the gains of the selected antenna elements and demonstrate that this method is optimal when a substantial number of antenna elements is selected.
* We utilize the bisection method to determine the optimal power allocation for each user and thus further enhance the system's efficiency and performance.
§.§ Outline and Notations
The remainder of this paper is organized as follows. Section II presents the system model and the channel model. Section III presents the proposed transmitter scheme at the HAPS. Section IV formulates the optimization problem, provides its solution, and derives the achievable rate. Section V provides simulation
results to validate the proposed scheme's performance. Finally, Section VI concludes the paper.
Notations:
In this paper, scalar variables are denoted by regular lowercase letters, e.g., x.
Matrices and vectors are represented by boldface uppercase and lowercase letters, respectively (e.g., 𝐗 and 𝐱).
The element at the (i,j)-th position of matrix 𝐗 is denoted as [𝐗]_i,j.
Transpose, conjugate, and Hermitian operators on a matrix 𝐗 are denoted as 𝐗^T, 𝐗^*, and 𝐗^H, respectively.
The Hadamard product is represented by ⊙.
The diagonal matrix of a vector 𝐱 is denoted as diag(𝐱).
The trace of a matrix 𝐗 is given by Tr(𝐗).
The absolute value of a scalar x is denoted as |x|.
The Frobenius norm of a vector 𝐱 is represented by 𝐱.
§ SYSTEM MODEL AND CHANNEL MODEL
§.§ System Model
The proposed HAA is illustrated in Fig. <ref>. We consider a scenario in which a HAPS serves K terrestrial users in the downlink mode, in the sub-6 GHz frequency band. The HAPS is equipped with M antenna elements placed on a hemispherical surface. Subsets of these elements are selected to create focused beams on the ground. The number of elements selected determines the size of the resulting beam. Note that, as depicted in Fig. <ref>, the HAA is positioned beneath the HAPS. This array, being affixed to the underside of the HAPS, constitutes a distinct equipment from the HAPS platform. In this setup, we assume that the terrestrial users are each equipped with a single antenna. In our proposed scheme, each user is directly aligned with specific antenna elements to ensure that they receive the maximum possible gains from these elements. To this end, we assume a large number of elements is positioned on the hemispherical surface so that each user can be aligned with a high number of antenna elements facing towards them. Moreover, analog beamforming is used to ensure that the signals transmitted from the HAPS are aligned with intended users. This beamforming process relies on steering vectors associated with the HAA at the HAPS.
In this paper, the terms cell and beam are used interchangeably. More specifically, a cell represents the footprint of a beam created by the proposed antenna array. Importantly, multiple users can be connected to each beam by being scheduled in orthogonal resource blocks. In the special case where only one user is served by each beam, the array's multi-cell functionality can be simplified to multi-user massive MIMO.
§.§ Channel Model
In our scheme, the channel between user k and antenna element m of the HAPS is denoted as h_km, which includes both large-scale fading
and small-scale multipath fading effects. We assume the existence of a LoS link between the users and the HAPS; hence Ricean distribution is considered for the channel between user k and antenna element m of HAPS as follows:
h_km=10^-𝖯𝖫_𝗄/20(√(P_k^𝖫𝗈𝖲)b_km+√(P_k^𝖭𝖫𝗈𝖲)CN(0,1)),
where CN(0,1) shows a complex normal random variable with a mean of 0 and variance (power) equal to 1. Also,
b_km=exp(j2π(d_km/λ_𝗌𝗎𝖻6))
indicates the phase shift of the LoS link's signal due to distance, with d_km indicating the distance between user k and antenna element m of the HAPS (see Section IV for its derivation), λ_𝗌𝗎𝖻6=c/f_𝗌𝗎𝖻6 representing the wavelength, and c=3×10^8 m/s being the speed of light.
Each user k's coordinates are denoted by (x_k, y_k,0).
θ_k and ϕ_k represent the elevation and azimuth angles of departure, respectively, of the transmitted signal directed towards user k.
It should be noted that b_k=[b_km]_1× M is the steering vector of the HAPS's transmit antenna array for user k.
In addition, we have 𝖯𝖫_𝗄=P_k^𝖫𝗈𝖲𝖯𝖫_k^𝖫𝗈𝖲+P_k^𝖭𝖫𝗈𝖲𝖯𝖫_k^𝖭𝖫𝗈𝖲, where the LoS and non-LoS (NLoS) path losses of the link between the HAPS and user k are equal to 𝖯𝖫_k^𝖫𝗈𝖲=𝖥𝖲𝖯𝖫_k+η_𝖫𝗈𝖲^𝖽𝖡, and 𝖯𝖫_k^𝖭𝖫𝗈𝖲=𝖥𝖲𝖯𝖫_k+η_𝖭𝖫𝗈𝖲^𝖽𝖡, respectively <cit.>. In these equations, 𝖥𝖲𝖯𝖫_k=10log(4π f_𝗌𝗎𝖻6d_k/C)^2 represents the free-space path loss (FSPL), while η_𝖫𝗈𝖲^𝖽𝖡 and η_𝖭𝖫𝗈𝖲^𝖽𝖡 indicate excessive path losses (in dB) affecting the air-to-ground links in the LoS and NLoS cases, respectively <cit.>. Next, P_k^𝖫𝗈𝖲=1/1+Aexp(-B(90-θ_k)-A) denotes the probability of establishing a LoS link between user k and the HAPS, with θ_k (in degrees) referring to the elevation angle between user k and HAPS, and A and B being environment-dependent parameters <cit.>. Clearly, P_k^𝖭𝖫𝗈𝖲=1- P_k^𝖫𝗈𝖲 indicates the probability of establishing a NLoS link between user k and the HAPS.
The large-scale channel power gain of the link between user k and the HAPS is equal to
β_k^2=E{|h_km|^2} =E{h_kmh_km^*}=10^-𝖯𝖫_𝗄/10
=10^-(P_k^𝖫𝗈𝖲𝖯𝖫_k^𝖫𝗈𝖲+P_k^𝖭𝖫𝗈𝖲𝖯𝖫_k^𝖭𝖫𝗈𝖲)/10.
By considering β_0=(4π f_𝗌𝗎𝖻6/c)^-2 to be the channel gain at the reference distance d_0=1 m, the large-scale channel power gain can be rewritten as
β_k^2=η_kβ_0(d_k)^-2,
where η_k=10^-(P_k^𝖫𝗈𝖲η_𝖫𝗈𝖲^𝖽𝖡+P_k^𝖭𝖫𝗈𝖲η_𝖭𝖫𝗈𝖲^𝖽𝖡)/10 represents the excessive path loss.
We consider independent additive white Gaussian noise (AWGN) with the distribution CN(0,σ^2) at each user's receive antenna element.
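As a concrete illustration of this channel model, the short Python sketch below evaluates the LoS probability, path loss, large-scale gain, and one Ricean small-scale realization for a single user-element pair. The urban parameters follow the simulation section; the user geometry, and the parsing of the LoS-probability expression as the standard Al-Hourani-type form, are assumptions made for illustration.

```python
import numpy as np

c, f = 3e8, 2e9                          # speed of light (m/s), carrier frequency (Hz)
A_env, B_env = 9.61, 0.16                # urban environment parameters
eta_los_db, eta_nlos_db = 1.0, 20.0      # excessive path losses (dB)
h_haps = 20e3                            # HAPS altitude (m)

# illustrative user 15 km from the sub-HAPS point; theta_k measured from the nadir,
# so (90 - theta_k) is the user's elevation angle
r_ground = 15e3
d_k = np.hypot(h_haps, r_ground)
theta_k = np.degrees(np.arctan2(r_ground, h_haps))

# LoS probability (Al-Hourani-type model, as assumed in the text)
p_los = 1.0 / (1.0 + A_env * np.exp(-B_env * ((90.0 - theta_k) - A_env)))
fspl_db = 20 * np.log10(4 * np.pi * f * d_k / c)
pl_db = p_los * (fspl_db + eta_los_db) + (1 - p_los) * (fspl_db + eta_nlos_db)
beta2 = 10 ** (-pl_db / 10)              # large-scale channel power gain

# one small-scale realization of the Ricean channel for an element at distance ~ d_k
rng = np.random.default_rng(0)
b_km = np.exp(1j * 2 * np.pi * d_k / (c / f))        # LoS phase term
h_km = 10 ** (-pl_db / 20) * (np.sqrt(p_los) * b_km
        + np.sqrt(1 - p_los) * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2))
print(f"elevation {90 - theta_k:.1f} deg, P_LoS {p_los:.3f}, beta^2 {beta2:.3e}, |h_km| {abs(h_km):.3e}")
```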
§ TRANSMITTER SCHEME PROPOSED FOR THE HAPS
We can see the transmitter scheme proposed for the HAPS in Fig. <ref>. We assume that the HAPS transmits all users' signals in the same time-frequency resources. User k's transmit signal is denoted as √(p_k)s_k, where p_k (for k∈𝒦={1,2,...,K}) represents the transmit power allocated to each user k at the HAPS while s_k (E|s_k|^2=1) is the symbol transmitted for user k. Next, users' digital signals are converted to analog signals and then up-converted to the carrier frequency band utilizing radio frequency (RF) chains. Importantly, we assume that the number of RF chains is equal to the number of users (K). Once all users' signals have passed through the RF chains, we determine which antenna elements have been selected to transmit each user's signal. Then, we employ analog beamforming to direct signals towards users. The signal received at user k is expressed as
y_k =𝐡_k𝐆_k(𝐖⊙𝐀)𝐏𝐬+z_k
=∑_k'=1^K∑_m=1^Mh_kmg_kmw_k'ma_k'm√(p_k')s_k'+z_k,
where 𝐬=[s_1,...,s_K]^T ∈ℂ^K and 𝐏=diag(√(p_1),...,√(p_K)) ∈ℝ^K× K. 𝐀=[𝐚_1,...,𝐚_K]^T ∈𝔹^K× M indicates the antenna selection matrix, where [𝐀]_km=1 if antenna m is selected to transmit user k's signal, and [𝐀]_km=0 otherwise. In this case, k∈𝒦={1,2,...,K}, m∈ℳ={1,2,...,M}, and 𝐚_k∈ℂ^M× 1 is the antenna selection vector for user k. Furthermore, 𝐖=[𝐰_1,...,𝐰_K]^T ∈ℂ^K × M represents the analog beamforming matrix at the HAPS, with [𝐖]_km indicating the phase shifter (PS) of the m-th antenna element for user k and 𝐰_k∈ℂ^M× 1 being the beamforming vector for user k.
It is worth noting that in the proposed HAA scheme, we introduce antenna and power pooling among all users within the coverage region. This implies that each user has access to various options for antenna element selection, which enhances diversity. In addition, the proposed HAA scheme offers great flexibility for power allocation among users. This differs from terrestrial networks with a smaller pool for antenna element selection and power allocation.
The transmit vector from the HAPS is equal to 𝐭=(𝐖⊙𝐀)𝐏𝐬∈ℂ^M× 1, where 𝐬=[s_1,...,s_K]^T ∈ℂ^K is the data vector with E{𝐬𝐬^H}=𝐈_K. The transmit vector must satisfy the total power constraint at the HAPS, which is given by E{𝐭^H𝐭}=E{𝐭^2}≤ P_𝖧𝖠𝖯𝖲, where P_𝖧𝖠𝖯𝖲 is the total power at the HAPS. Moreover, 𝐆_k=diag(√(g_k1),...,√(g_kM)) ∈ℝ^M× M represents the antenna gain matrix at the HAPS for user k, with g_km indicating the m-th antenna element's gain at user k. Furthermore, 𝐡_k=[h_k1,...,h_kM] ∈ℂ^1× M represents the channel vector to user k while z_k ∼CN(0,σ^2) is
complex AWGN noise with variance σ^2 at user k's receiver.
§ OPTIMIZATION PROBLEM
On formulating an optimization problem that maximizes the users' minimum SINR, we find the optimum powers allocated to users at the HAPS, i.e., 𝐏=diag(√(p_1),...,√(p_K)) ∈ℝ^K× K, the analog beamforming matrix at the HAPS, i.e.,𝐖=[𝐰_1,...,𝐰_K]^T ∈ℂ^K × M,
and the antenna selection matrix at the HAPS, i.e., 𝐀=[𝐚_1,...,𝐚_K]^T ∈𝔹^K× M. Note that matrix 𝐀_k, that is the diagonal matrix version of vector 𝐚_k, denoted as 𝐀_k=diag(𝐚_k) ∈𝔹^M× M.
We can write the optimization problem as follows:
(𝒫): max_𝐏,𝐀,𝐖 min_k 𝖲𝖨𝖭𝖱_𝗄
s.t. E{𝐭^2}=
E{∑_k=1^K𝐰_k^H𝐀_k𝐰_kp_k} ≤P_𝖧𝖠𝖯𝖲,
p_k>0, ∀k ∈𝒦,
Tr(𝐀_k)≤M_k, ∀k ∈𝒦,
∑_k=1^K[𝐀]_km≤M_𝖾𝗅𝖾𝗆𝖾𝗇𝗍, ∀m ∈ℳ,
[𝐀]_km∈{0,1}, ∀k ∈𝒦, ∀M ∈ℳ,
|[𝐖]_km|=1/√(M_k), ∀k ∈𝒦, ∀M ∈ℳ,
where constraint (<ref>) enforces the maximum total power available at the HAPS, constraint (<ref>) limits the total number of antennas that can be selected for each user, constraint (<ref>) limits the total number of users that can be assigned to each antenna element, and constraint (<ref>) is the constant modulus condition imposed by the use of PSs. We derive an expression for each user's SINR in Proposition <ref>.
The problem (𝒫) is a mixed integer programming (MIP) problem with a discrete variable 𝐀 and continuous variables P and W, which are known to be NP-hard. Also, the objective function of (𝒫) is not concave with respect to the variables. Moreover, constraint in (<ref>) is not convex due to its coupled variables 𝐀 and W, and the constant modulus constraint in (<ref>) is also non-convex. We solve this problem by splitting it into three sub-problems. In the first sub-problem, we employ users' steering vectors at the HAPS to derive analog beamforming matrix W. Then, we utilize W and the use-and-then-forget bound <cit.> to derive the achievable rates of users for the proposed HAA. In the second sub-problem, we propose a heuristic method to find antenna selection matrix A. Finally, in the third sub-problem, we use the bisection method to calculate optimal values for power allocation matrix P.
§.§ Steering Vector-Based Analog Beamforming
In this paper, we assume that the analog beamforming matrix at the HAPS is derived from steering vectors of the antennas selected for each user, i.e., 𝐰_k=1/√(M_k)𝐛_k^*, where M_k indicates the number of antenna elements selected for user k. To direct the signals transmitted from the HAPS toward users, this beamforming is performed with the aid of PSs. In proposition 1, we utilize the beamforming matrix to derive an achievable rate for users in the proposed system.
The achievable rate of user k in the proposed HAA scheme for HAPS is given by
R_k=𝖡𝖶log_2(1+𝖲𝖨𝖭𝖱_k),
where
𝖲𝖨𝖭𝖱_k = [ (p_k β_k^2 / M_k) Tr(𝐆_k𝐀_k)^2 ] / [ β_k^2 ∑_k'=1^K (p_k'/M_k') Tr(𝐆_k^2𝐀_k') + σ^2 ],
where β_k^2 represents large-scale fading, p_k indicates the power allocated to user k, 𝐆_k ∈ℝ^M× M represents user k's diagonal antenna gain matrix at the HAPS, 𝐀_k ∈ℝ^M× M represents user k's diagonal antenna selection matrix, M_k indicates the total number of antennas selected for user k, and 𝖡𝖶 is the communication bandwidth.
See Appendix <ref>.
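A small numerical sketch of this closed form is given below, assuming illustrative element gains, large-scale coefficients, and an equal power split; all numbers are placeholders, and the antenna selection simply keeps the M_k strongest elements per user.

```python
import numpy as np

# Sketch of the closed-form SINR in Proposition 1 under illustrative parameters.
rng = np.random.default_rng(0)
K, M, Mk = 4, 256, 64                      # users, elements, selected elements per user
sigma2 = 10 ** ((-174 + 10 * np.log10(20e6) - 30) / 10)   # noise power over 20 MHz (W)
p = np.full(K, 100.0 / K)                  # equal split of the 50 dBm total power
beta2 = 10 ** rng.uniform(-13, -12, K)     # illustrative large-scale power gains
G = rng.uniform(0.5, 63.0, size=(K, M))    # illustrative element power gains g_km (linear)

# keep the Mk strongest elements for each user
A = np.zeros((K, M))
for k in range(K):
    A[k, np.argsort(G[k])[-Mk:]] = 1.0

sinr = np.empty(K)
for k in range(K):
    signal = p[k] * beta2[k] / Mk * np.sum(np.sqrt(G[k]) * A[k]) ** 2
    interf = beta2[k] * sum(p[j] / Mk * np.sum(G[k] * A[j]) for j in range(K))
    sinr[k] = signal / (interf + sigma2)
rates = 20e6 * np.log2(1 + sinr)
print("per-user rates (Mbps):", np.round(rates / 1e6, 1))
```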
Now, we can update (𝒫) as follows:
(𝒫): max_𝐏,𝐀 min_k [ (p_k β_k^2 / M_k) Tr(𝐆_k𝐀_k)^2 ] / [ β_k^2 ∑_k'=1^K (p_k'/M_k') Tr(𝐆_k^2𝐀_k') + σ^2 ]
s.t. ∑_k=1^K p_k ≤P_𝖧𝖠𝖯𝖲,
(<ref>), (<ref>), (<ref>), (<ref>).
In the interference-limited regime, assuming allocation of equal power for all users, i.e., p_k=P/K, ∀ k, and assuming equal number of selected antenna elements for all users, i.e., M_1=...=M_K, the achievable rate of user k in the proposed HAA scheme for HAPS is given by
R_k^∞=𝖡𝖶log_2(1+𝖲𝖨𝖭𝖱_k^∞),
where
𝖲𝖨𝖭𝖱_k^∞=Tr(𝐆_k𝐀_k)^2/∑_k'=1^K Tr(𝐆_k^2𝐀_k').
The proof is straightforward; hence, we omit it.
In the interference-limited regime, assuming omnidirectional antennas with gains equaling 1, the sum rate of the proposed HAA scheme for HAPS is given by
∑_k=1^K R_k^∞=𝖡𝖶 M_k log_2(e),
where M_k is the number of selected antenna elements for users.
Assuming omnidirectional gains of 1 for elements, (<ref>) can be written as 𝖲𝖨𝖭𝖱_k^∞=M_k^2/KM_k=M_k/K. Therefore, for the sum rate, we can write
∑_k=1^K R_k^∞=∑_k=1^K 𝖡𝖶log_2(1+M_k/K)=𝖡𝖶log_2(1+M_k/K)^K.
When K tends to infinity, using lim_x→∞ (1+1/x)^x=e so that (1+M_k/K)^K→ e^M_k, we have
∑_k=1^K R_k^∞= 𝖡𝖶 M_klog_2 e.
Thus the proof is completed.
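A quick numerical check of this limit, using the bandwidth and the number of selected elements from the simulation section, is sketched below.

```python
import numpy as np

# With unit element gains, BW * log2((1 + M_k/K)^K) approaches BW * M_k * log2(e) as K grows.
BW, Mk = 20e6, 64
for K in (16, 256, 4096):
    sum_rate = BW * K * np.log2(1 + Mk / K)
    print(f"K = {K:5d}: sum rate = {sum_rate / 1e9:.3f} Gbps")
print("limit:", BW * Mk * np.log2(np.e) / 1e9, "Gbps")
```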
§.§ Antenna Selection Based on the Gains of the Antenna Elements
In this section, we propose a heuristic algorithm to find the antenna selection matrices 𝐀_k, ∀ k ∈𝒦. To this end, we first derive the gain of each of the HAPS's antenna elements at each user's location based on antenna pattern formulas provided in <cit.>. In Fig. <ref>, one can see the polar coordinates for antenna element m expressed as (d_m,θ_m,ϕ_m) and those for user k denoted as (d_k,θ_k,ϕ_k). As can be seen in Fig. <ref>, the distance between antenna element m and user
k, i.e., d_km, is calculated in accordance with the triangle law as follows:
d_km=√(d_k^2+d_m^2-2d_kd_mcosθ_km).
The gain of antenna element m (in dB) at the location of user k in the proposed HAA scheme is given by
g_km=
G_𝖤,𝗆𝖺𝗑+γ_km, if 0<θ_km<90,
0, if 90<θ_km<180,
where G_𝖤,𝗆𝖺𝗑 is the maximum directional gain of an antenna element as
G_𝖤,𝗆𝖺𝗑=32400/θ_3𝖽𝖡^2,
and
γ_km=-min(12(θ_km/θ_3𝖽𝖡)^2,γ_𝗆𝖺𝗑),
where θ_3𝖽𝖡 is the 3 dB beamwidth of each antenna element, while γ_𝗆𝖺𝗑 is the front-to-back ratio for each element. Furthermore, θ_km is the angle between antenna m and user k (in degrees) that is given by
θ_km =arccos(sinθ_ksinθ_mcosϕ_kcosϕ_m
+sinθ_ksinθ_msinϕ_ksinϕ_m+cosθ_kcosθ_m).
See Appendix <ref>.
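A minimal sketch evaluating the element-gain expressions stated in the proposition for a few element/user orientations is given below. The 3 dB beamwidth and front-to-back ratio follow the simulation settings, while the orientations themselves are illustrative assumptions.

```python
import numpy as np

theta_3db, gamma_max = 25.0, 30.0              # 3 dB beamwidth (deg), front-to-back ratio (dB)
G_E_max = 32400.0 / theta_3db**2               # maximum directional element gain, as stated above

def element_gain_db(theta_m, phi_m, theta_k, phi_k):
    """Gain of element m at user k (angles in degrees), per the proposition above."""
    tm, pm, tk, pk = np.radians([theta_m, phi_m, theta_k, phi_k])
    cos_tkm = (np.sin(tk) * np.sin(tm) * np.cos(pk) * np.cos(pm)
               + np.sin(tk) * np.sin(tm) * np.sin(pk) * np.sin(pm)
               + np.cos(tk) * np.cos(tm))
    theta_km = np.degrees(np.arccos(np.clip(cos_tkm, -1.0, 1.0)))
    if theta_km < 90.0:
        return G_E_max - min(12.0 * (theta_km / theta_3db) ** 2, gamma_max)
    return 0.0

# illustrative orientations: element tilted 30 deg off nadir, users at various offsets
print("user on the element axis:", element_gain_db(30, 0, 30, 0))
print("user 10 deg off the axis:", element_gain_db(30, 0, 40, 0))
print("user behind the element: ", element_gain_db(30, 0, 150, 180))
```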
Fig. <ref> shows heatmaps depicting spectral efficiencies of three selected individual antenna elements determined from the formula derived in Proposition <ref> for the antenna gain of each element in the proposed HAA. Each element is allocated a fixed power of 1 Watt. As shown in Fig. <ref>, while elements facing the nadir of the HAPS exhibit a circular pattern, those directed towards other spots in the proposed HAA produce ellipsoidal footprints. In Fig. <ref>, the 3 dB beamwidth was reduced from 25 to 10 for the same selected antenna elements as are shown in Fig. <ref>, which demonstrates it is possible to create narrower beams in this configuration.
Now, we construct the antenna selection matrix based on the gains derived by the HAPS's antenna elements at the location of each user, which are denoted by vector 𝐠_k=[√(g_k1),...,√(g_kM)] ∈ℝ^1× M, ∀ k ∈𝒦. To this end, we first sort, in descending order, the entries in each user's gain vector 𝐠_k to create new vectors 𝐠_k,𝗌𝗈𝗋𝗍𝖾𝖽.
We then generate index vectors 𝐢_k ∈ℤ^1× M with entries containing the corresponding indices of the entries of the sorted vector 𝐠_k,𝗌𝗈𝗋𝗍𝖾𝖽∈ℝ^1× M in the original vector 𝐠_k.
Next, we select the first M_k entries of each user k's index vectors 𝐢_k to transmit signals to user k and place them in vector 𝐢_k,𝗌𝖾𝗅𝖾𝖼𝗍𝖾𝖽∈ℤ^1× M_k.
Finally, the antenna elements selected for user k are given by
a_km=
1, if m ∈𝐢_k,𝗌𝖾𝗅𝖾𝖼𝗍𝖾𝖽,
0, otherwise.
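As a concrete illustration of this selection rule (summarized as Algorithm 1 below), a minimal sketch follows: it sorts each user's element gains in descending order and keeps the indices of the M_k strongest elements. The per-element load constraint in (<ref>) is not enforced in this sketch, and the gain values are illustrative.

```python
import numpy as np

def select_antennas(G, Mk):
    """G: (K, M) element gains at each user; returns the binary selection matrix A."""
    K, M = G.shape
    A = np.zeros((K, M), dtype=int)
    for k in range(K):
        idx_sorted = np.argsort(G[k])[::-1]   # indices sorted by descending gain (vector i_k)
        A[k, idx_sorted[:Mk]] = 1             # keep the Mk strongest elements for user k
    return A

# toy example: 3 users, 8 elements, 3 elements selected per user (illustrative gains)
rng = np.random.default_rng(1)
print(select_antennas(rng.uniform(0.0, 50.0, size=(3, 8)), Mk=3))
```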
Algorithm 1 summarizes the method proposed to find the antenna selection matrix. Now, we first present a lemma that will serve as the foundation for demonstrating the optimality of the antenna selection method outlined in Algorithm 1. This demonstration specifically applies when a substantial number of antenna elements are selected for each user.
The expression derived in (<ref>) for each user's SINR increases with the gain of each antenna element.
The proof is straightforward; hence, we omit it.
The proposed method for antenna selection based on the gains of the antenna elements that is set out in Algorithm <ref> is optimal when a substantial number of antenna elements are selected for each user.
See Appendix <ref>.
§.§ Optimal Power Allocation via the Bisection Method
With beamforming and antenna selection matrices derived in previous sections, we can express the power allocation sub-problem as follows:
(𝒫1): max_𝐏 min_k 𝖲𝖨𝖭𝖱_𝗄
s.t. (<ref>),(<ref>).
This problem is still non-convex with respect to the power allocation coefficients P.
The optimization problem (𝒫1) is a quasi-linear optimization problem.
See Appendix <ref>.
Since problem (𝒫1) is a quasi-linear optimization problem, its optimal solution can be efficiently found using the bisection
method <cit.>. To this end, we rewrite problem (𝒫1) by adding slack variable η as follows:
(𝒫2): max_𝐏,η η
s.t. 𝖲𝖨𝖭𝖱_𝗄≥η, ∀k, ∈𝒦,
(<ref>),(<ref>).
It is easy to prove that (𝒫2) is equivalent to (𝒫1). We do this by noting that the optimal solution for (𝒫2) must satisfy η=min (𝖲𝖨𝖭𝖱_1,...,𝖲𝖨𝖭𝖱_𝖪)=𝖲𝖨𝖭𝖱_1=...=𝖲𝖨𝖭𝖱_𝖪, which is identical to the optimal solution for (𝒫1); hence, the proof is completed.
For any given value of η, (𝒫2) will be a linear feasibility
problem that can be optimally solved using convex optimization techniques, such as the interior-point method <cit.>.
The bisection method is summarized in Algorithm 2. Since each iteration of Algorithm 2 requires
solving only a convex problem, Algorithm 2's overall complexity is polynomial at worst. Algorithm 3 summarizes the methods proposed to solve problem (𝒫) and find variables P,A,W.
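Schematically, the bisection loop of Algorithm 2 can be written as follows, with the linear feasibility step abstracted into a user-supplied callback; the callback is only a placeholder for an interior-point solve and is not specified here:

def bisection_max_min_sinr(is_feasible, eta_min=0.0, eta_max=1500.0, eps=0.01):
    # is_feasible(eta) solves the linear feasibility problem (P2) for fixed eta
    # and returns (feasible, P); it stands in for a convex solver call.
    best_P = None
    while eta_max - eta_min > eps:
        eta = 0.5 * (eta_min + eta_max)
        feasible, P = is_feasible(eta)
        if feasible:
            eta_min, best_P = eta, P           # eta is achievable: search higher
        else:
            eta_max = eta                      # eta is too ambitious: search lower
    return eta_min, best_P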
§ NUMERICAL RESULTS
In this section, we report numerical results to demonstrate that our proposed HAA scheme outperforms baseline array schemes. Simulation parameters are summarized in Table I. In addition, the following default parameters were employed in the simulations, with variations specified as needed.
The carrier frequency was set to f_C=2 GHz, the communication bandwidth to 𝖡𝖶=20 MHz, and the noise power spectral density to N_0=-174 dBm/Hz. We assume that all users are uniformly distributed over a square area measuring 60 km on each side.
We also assume that the HAPS is deployed in the middle of the square area. Default values for the number of users and antenna elements in HAPS's transmit array are K=16 and M=2650, respectively. We assume that all antenna elements are placed on a hemisphere with a radius of d_m=3 meters.
We set the maximum transmit power at the HAPS to P_𝖧𝖠𝖯𝖲=50 dBm=100 W. For an urban area, the excess path loss affecting air-to-ground links in the LoS and NLoS cases is assumed to be η_𝖫𝗈𝖲^𝖽𝖡=1 dB and η_𝖭𝖫𝗈𝖲^𝖽𝖡=20 dB, respectively <cit.>; the corresponding urban-area parameters are A=9.61 and B=0.16.
In Algorithm 2, we use initial values η_𝗆𝗂𝗇=0 and η_𝗆𝖺𝗑=1500 and set the tolerance value to ϵ=0.01. We also consider the HAPS's altitude to be 20 km. To calculate each antenna element's gain, we assume that the 3 dB beamwidth is θ_3𝖽𝖡=25 degrees, while the front-to-back ratio is γ_𝗆𝖺𝗑=30 dB. We also suppose that M_k=64 elements are selected for each user.
In simulation figures, we compare the proposed scheme, referred to as hemispherical antenna array (HAA), with the following three baseline schemes:
* Cylindrical antenna array (CAA): In this configuration, the same number of antenna elements, i.e., M=2650=50×53, are placed on a cylindrical structure. Here, 50 indicates the number of vertical element layers, while 53 represents the number of horizontal elements arranged around the circumference of each circular layer.
* Rectangular antenna array (RAA): Similar to the HAA and the CAA, this arrangement employs M=2650=50×53 antenna elements. However, in this case, elements are situated on a rectangular surface, with 50 representing the number of elements along the length of the rectangle, and 53 representing the number of elements along the width of the rectangle.
* Hybrid rectangular and cylindrical antenna array (HRCAA): In this scheme, M_𝖼𝗒𝗅=2000=50×40 antenna elements are placed on a cylindrical structure, with 50 indicating the number of vertical element layers, 40 representing the number of horizontal elements around the circumference of each circular layer, and an additional M_𝗋𝖾𝖼𝗍=M-M_𝖼𝗒𝗅=650 elements placed on a rectangular surface underneath the cylinder.
Note that the CAA, RAA, and HRCAA baseline schemes were previously proposed in <cit.>, <cit.>, and <cit.>, respectively. For a fair comparison among all schemes, we utilize the same values for the total number of elements M, selected number of elements M_k, and 3 dB beamwidth θ_3dB for the proposed and baseline schemes.
Also, identical parameters, signaling, and channel models are used for the three baseline schemes and the proposed HAA scheme. We also employ the same beamforming, antenna selection, and power allocation techniques for all four schemes. The sole distinction among the four schemes is their elements' angles with respect to terrestrial users, denoted as θ_km. This disparity results in variation in antenna element gains and leads to the arrays with different data rates.
Fig. <ref> shows the cumulative distribution function (CDF) of spectral efficiency for a user that is uniformly distributed across 10,000 different locations in two square urban areas with dimensions 200 km× 200 km and 60 km× 60 km. We assumed the user is allocated a fixed power of 1 Watt and served by 64 antenna elements (M_k=64).
As can be seen in both Fig. <ref> and Fig. <ref>, the HAA scheme outperforms the three other antenna array configurations.
In Fig. <ref>, where the coverage area is relatively large, the CAA scheme outperforms the RAA scheme for most users. This is because in the CAA, the antenna elements oriented towards the horizon ensure greater gains for distant users. However, in Fig. <ref>, the coverage area is smaller, and most users are in a closer proximity to the HAPS. In this case, the RAA scheme, whose antenna elements are oriented towards the nadir of the HAPS, offers users better gains than the CAA scheme.
Importantly, the HAA scheme performs the best since its antennas are directed towards every point on the ground.
Additionally, in both figures, the HRCAA scheme outperforms the RAA and CAA schemes. By capitalizing on the advantages of cylindrical and rectangular surfaces for distant and nearby users, respectively, the HRCAA achieves a CDF upper-bounded by the lower envelope of the CDF of the RAA and CAA schemes.
Fig. <ref> presents heatmaps illustrating the spectral efficiency distribution of the rectangular, cylindrical, hybrid, and hemispherical antenna arrays. For this figure, we assumed a user is uniformly distributed across 10,000 different locations in a square urban area with dimensions 60 km× 60 km. The user is allocated a fixed power of 1 Watt and served by 64 antenna elements (M_k=64).
As can be seen in Fig. <ref>, regions beneath the HAPS exhibit higher performance than distant locations. This outcome is attributable to the orientation of antenna elements in the RAA, which face downward and thus provide maximum gains to the areas directly beneath them.
In Fig. <ref>, the CAA scheme showcases an inverse trend. As the antenna elements are oriented toward the horizon in the CAA, remote areas experience superior spectral efficiencies compared to those of nearby ones due to their higher antenna gains.
Fig. <ref> illustrates the advantages of the HRCAA scheme. This configuration combines the benefits of the RAA and CAA schemes to offer commendable data rates to both nearby and distant users.
Finally, Fig. <ref> highlights the HAA scheme, which stands out by providing favorable spectral efficiencies across all regions. This result can be attributed to the fact that users at every location can align with antenna elements oriented toward them and have a robust spectral efficiency for all regions.
Fig. <ref> displays another set of heatmaps of spectral efficiency distribution of the rectangular, cylindrical, hybrid, and hemispherical antenna arrays. This time, we assumed a user is uniformly distributed across 10,000 different locations in a square urban area with dimensions 200 km× 200 km. The user is allocated a fixed power of 1 Watt and served by 64 antenna elements (M_k=64).
Fig. <ref> and Fig. <ref> show that peak spectral efficiencies of the RAA and CAA schemes are achieved in the regions with small and medium-sized radii, respectively. However, remote users experience much lower spectral efficiencies due to substantial path loss.
As can be seen in Fig. <ref>, the HRCAA scheme can provide peak spectral efficiencies to regions with both small and medium-sized radii. Finally, in Fig. <ref>, the HAA scheme stands out for its capacity to deliver competitive spectral efficiencies across a broader region than the three baseline schemes.
In the HAA scheme, the antenna elements' gains influence the spectral efficiencies of all users uniformly, so path loss is the only factor that differentiates them. As the figure shows, the proposed hemispherical scheme achieves much higher rates than the baseline schemes. Said differently, to provide a given rate, the proposed scheme requires less transmit power, which is essential for a HAPS with its limited power resources.
Fig. <ref> presents a set of heatmaps showing the spectral efficiencies for five beams with different numbers of antenna elements selected, i.e., M_k=16, 32, 64, 128, and 256, for the proposed HAA scheme. Each beam is allocated a fixed power of 1 Watt.
In Fig. <ref>, we visualize the beams with a 3 dB beamwidth of θ_3𝖽𝖡=10. As shown in the figure, in cases where the beams of single antenna elements are narrow, increasing the number of elements selected from M_k=16 to M_k=256 does not enhance the array gain. This finding indicates that when the 3 dB beamwidth of elements is small, selecting a massive number of elements for beam creation may not yield substantial benefits. For instance, when M_k=128, the power of the beam is equally divided among all 128 elements selected. However, most antennas exhibit negligible gain for the beam, which makes it impractical to allocate power to them. Conversely, when M_k=16, the power is distributed among antennas that offer significant gains for the beam.
Fig. <ref>, Fig. <ref>, and Fig. <ref> illustrate the footprints of beams with 3 dB beamwidth of θ_3𝖽𝖡=15, θ_3𝖽𝖡=20, and θ_3𝖽𝖡=25, respectively. It becomes apparent that widening the beams of each element makes it possible to achieve an array gain with a high number of antenna elements selected. For instance, in Fig. <ref> where θ_3𝖽𝖡=25, a narrow beam with a high array gain is created with M=128 antenna elements. It is worth noting that, as one can see in this figure, there is increased interference on neighboring beams when 3 dB beamwidth is a higher value.
Therefore, our results highlight that, when the 3 dB beamwidth value is smaller, array (beamforming) gains cannot be effectively harnessed and it is more advantageous to create beams with a smaller number of antenna elements selected. However, with higher 3 dB beamwidth values, beamforming becomes a more potent tool that supports the creation of very narrow high-gain beams utilizing a larger number of antenna elements.
Fig. <ref> shows a set of heatmaps displaying the patterns of
spectral efficiencies for five beams with different numbers of selected antenna elements — i.e., M_k=16, 32, 64, 128, and 256 — for the proposed HAA scheme. Five red markers represent the centers of the five beams formed by the HAPS on the ground.
In this figure, as indicated by the color bar, the spectral efficiency ranges from 0 bps/Hz (light yellow) to 10 bps/Hz (dark blue). Each beam is allocated a fixed power of 1 Watt.
For a 3 dB beamwidth of θ_3𝖽𝖡=10, Fig. <ref> and Fig. <ref> visualize the beams with a hemisphere radius of 6 m and 9 m, respectively. Furthermore, for a 3 dB beamwidth of θ_3𝖽𝖡=25, Fig. <ref> and Fig. <ref> show the beams with a hemisphere radius of 6 m and 9 m, respectively. We can see that increasing the radius of the hemisphere results in narrower beams with stronger grating lobes <cit.> for both 3 dB beamwidth values. This is because, with an increase in the radius, the distance between the antenna elements increases as well. Note that for the θ_3𝖽𝖡=25 case, the beams are narrower than in the θ_3𝖽𝖡=10 case due to the better array gain. However, owing to the wider beams of each element, the θ_3𝖽𝖡=25 case results in strong grating lobes.
Fig. <ref> shows a set of heatmaps displaying spectral efficiencies for five beams with different numbers of selected antenna elements — i.e., M_k=16, 32, 64, 128, and 256 — for the rectangular and cylindrical baseline schemes. Each beam is allocated a fixed power of 1 Watt.
For the rectangular scheme, Fig. <ref> and Fig. <ref> show the beams with 3 dB beamwidth values of θ_3𝖽𝖡=10 and θ_3𝖽𝖡=25, respectively. We can see that the rectangular scheme, with its antenna elements facing
downwards, has much stronger beams for the locations under the HAPS. For the cylindrical scheme, Fig. <ref> and Fig. <ref> show the beams with 3 dB beamwidth values of θ_3𝖽𝖡=10 and θ_3𝖽𝖡=25, respectively. With its antenna elements facing towards the horizon, the cylindrical scheme produces similar beams for the locations under the HAPS and those far from it. Note that in the θ_3𝖽𝖡=25 case, narrower beams are created due to the better array gain.
Fig. <ref> shows the CDF of spectral efficiency for a user that is uniformly distributed across 10,000 different locations in a square urban area with dimensions 60 km× 60 km. We consider two distinct values for the total number of antenna elements in the array — namely, M=990, and 2650. We assume the user is allocated a fixed power of 1 Watt and served by 64 selected antenna elements (M_k=64).
Fig. <ref> and Fig. <ref> show the CDF for 3 dB beamwidth values of θ_3dB=10 and θ_3dB=25, respectively. For the HAA scheme, increasing M improves the rates, which can be attributed to the fact that, for a bigger M, more elements with a higher gain will be available for each user to be selected. Note that this gain is more pronounced for the θ_3dB=10 case, as the beam of each antenna element is narrower. For the rectangular case, a higher M does not change the direction and gain of antenna elements at users' location as all elements are looking downwards. Therefore, the CDF remains fixed for both values of M. For the CAA, a higher M leads to a higher granularization only in the azimuth angles of the elements. Therefore, we observe a small gain for increasing M in the cylindrical array. Note that for the HAA, a higher M leads to a higher granularization in both azimuth and elevation angles of the elements, while, for the RAA, a higher M does not improve the angles of the elements.
Fig. <ref> illustrates the sum rate of the HAA, CAA, and RAA schemes in relation to the total transmit power at the HAPS. Simulation was conducted with 16 users uniformly distributed in a square urban area measuring 60 km× 60 km using the optimal parameter values proposed in Algorithm 3. We also assumed a communication bandwidth of 20 MHz and a total transmit power of 50 dBm at the HAPS.
The achieved sum rate is presented for two cases: one with M_k=64 elements (see Fig. <ref>) and the other with M_k=256 elements (see Fig. <ref>). The HAA scheme outperforms both the CAA and RAA schemes. Additionally, the CAA scheme performs better than the RAA scheme.
Importantly, the HAA scheme's sum rate is lower with θ_3𝖽𝖡=25 than θ_3𝖽𝖡=10 when M_k=64. However, when M_k=256, the narrower beamwidth performs worse than the wider one. This discrepancy can be attributed to the fact that wider beamwidth values require more antenna elements to achieve an effective array (beamforming) gain.
Fig. <ref> shows the sum rate as a function of the number of users K uniformly distributed in a square urban area measuring 60 km× 60 km. Simulation employs the optimized parameter values proposed in Algorithm 3, a communication bandwidth of 20 MHz, and a total transmit power of 50 dBm at the HAPS.
This figure presents the sum rates for five different numbers of antenna elements selected, i.e., M_k=16, 32, 64, 128, and 256. Increasing the number of users leads to higher sum rates. Fig. <ref>, which shows the sum rate when the elements have a narrower 3 dB beamwidth of θ_3𝖽𝖡=10, reveals that increasing M_k from 16 to 32 improves the sum rate; however, further increasing it from 32 to 256 causes a decrease.
Fig. <ref> shows the sum rate when the elements have a wider 3 dB beamwidth of θ_3𝖽𝖡=25. In this case, increasing M_k from 16 to 256 leads to an increase in the sum rate due to the array gain achieved by beamforming with a larger number of wider elements.
Fig. <ref> illustrates the sum rate as a function of the number of antenna elements selected M_k for different numbers of users K uniformly distributed in a square urban area measuring 60 km× 60 km. Simulation employs the optimal parameter values proposed in Algorithm 3, a communication bandwidth of 20 MHz, and a total transmit power of 50 dBm at the HAPS.
This figure provides insight into the sum rates for different values of K — specifically, K=16, 64, 100, and 196. Fig. <ref> displays the sum rate when the elements have a narrower 3 dB beamwidth of θ_3𝖽𝖡=10. It is evident that the sum rate first improves with an increase in M_k from 16 to 32, but then declines with an increase of M_k from 32 to 256. Furthermore, the optimal number of antenna elements selected to maximize the sum rate is M_k=32.
Fig. <ref>, which shows the sum rate when the elements have a wider 3 dB beamwidth of θ_3𝖽𝖡=25, suggests that, when M_k increases from 16 to 256, the sum rate increases too, thus highlighting the array gain that can be achieved by beamforming with a larger number of wider elements.
Fig. <ref> displays the CDF of the spectral efficiencies of the users, with a total transmit power of 50 dBm at the HAPS and M_k=64 antenna elements selected for each beam. This figure illustrates spectral efficiency distribution for four different quantities of beams — specifically, K=16, 64, 100, and 196.
For this figure, K beams are first generated to cover K designated locations on the ground and create a square grid within an urban area measuring 60 km× 60 km. Then, the optimized parameter values for the K beams are determined using Algorithm 3. Next, 5,000 users are uniformly distributed in the urban area with each user assigned to the nearest beam center on the ground.
Fig. <ref> and Fig. <ref> provide the CDFs for θ_3𝖽𝖡=10 and θ_3𝖽𝖡=25, respectively. It is evident that a greater number of beams (K) leads to more uniform user spectral efficiencies. Additionally, it can be observed that θ_3𝖽𝖡=10 yields higher spectral efficiencies than θ_3𝖽𝖡=25, which is mainly due to the limitation of having only M_k=64 selected antenna elements, which does not provide sufficient beamforming gain for wider beamwidths.
Fig. <ref> presents the CDF of the spectral efficiencies of the users assuming K=196 beams and the total transmit power of 50 dBm. This figure illustrates spectral efficiency distributions for five different quantities of antenna elements selected — namely, M_k=16, 32, 64, 128, and 256.
For this figure, we use the same methodology as the one used for Fig. <ref>.
Fig. <ref> depicts the CDF when the antenna elements have a narrower 3 dB beamwidth of θ_3𝖽𝖡=10. We observe that the spectral efficiency improves by increasing M_k from 16 to 32, but a further increase from 32 to 256 results in a reduction of spectral efficiency.
Fig. <ref> displays the CDF when the antenna elements have a wider 3 dB
beamwidth of θ_3𝖽𝖡=25. The results reveal that increasing M_k from 16 to 256 leads to higher spectral efficiencies due to the array gain achieved by beamforming with a greater number of wider elements. These observations are consistent with the findings shown in Fig. <ref> and Fig. <ref>.
Moreover, with an increase of the number of selected antenna elements in Fig. <ref>, homogeneity of users' achieved spectral efficiencies decreases. This outcome is attributable to the beamforming gain being more advantageous for those located closer to the beam center, while users located further from it experience lower spectral efficiencies.
§ CONCLUSION
In this paper, we introduced a novel hemispherical antenna array (HAA) for high-altitude platform station (HAPS) to address the challenges posed by conventional rectangular and cylindrical antenna arrays. The results of our simulations conclusively demonstrate that our proposed HAA scheme outperforms traditional rectangular and cylindrical baseline arrays. Furthermore, our simulation results underscore the significance of antenna element beamwidth. When beamwidth is narrower, the harnessing of array (beamforming) gain is less effective and it becomes more advantageous to craft beams with fewer antenna elements selected. Conversely, when beamwidth is wider, beamforming emerges as a potent tool for generating highly focused high-gain beams with a larger number of selected antenna elements. In an urban area spanning a length of 60 km, with a communication bandwidth of 20 MHz and a total power at the HAPS set at 50 dBm, the proposed approach can efficiently attain sum data rates of up to 14 Gbps. Finally, in contrast to the baseline schemes, the proposed approach achieves uniform data rates across the entire coverage area.
§ PROOF OF PROPOSITION <REF>
To derive each user's SINR, we rewrite the signal received at user k, i.e., y_k in (<ref>), as
y_k=𝐡_k𝐆_k(𝐖⊙𝐀)𝐏𝐬+z_k
=𝐡_k𝐆_k(𝐰_k⊙𝐚_k)√(p_k)s_k+∑_k'=1,k'≠ k^K𝐡_k𝐆_k(𝐰_k'⊙𝐚_k')√(p_k')s_k'+z_k
=𝐡_k𝐆_k𝐀_k𝐰_k√(p_k)s_k+∑_k'=1,k'≠ k^K𝐡_k𝐆_k𝐀_k'𝐰_k'√(p_k')s_k'+z_k,
where 𝐀_k=diag(𝐚_k) ∈𝔹^M× M. Then, we utilize the use-and-then-forget bound <cit.> to derive an achievable SINR. From the last equation in (<ref>), we write the expectation of the desired signal (DS) for user k as
E{𝖣𝖲_k} =E{𝐡_k𝐆_k𝐀_k𝐰_k√(p_k)}.
In this paper, we apply matched filtering to beamform the signals toward users. Hence, the beamforming vector for each user is calculated based on the conjugate of the user's steering vector, i.e., 𝐰_k=1/√(M_k)𝐛_k^*, where M_k indicates the number of antenna elements selected for user k. Therefore, we have
E{𝖣𝖲_k} =√(p_k/M_k)E{𝐡_k𝐆_k𝐀_k𝐰_k}
=√(p_kβ_k^2/M_k)Tr(𝐆_k𝐀_k).
Next, we derive the variance of the interference term in (<ref>) as
E{I_kI_k^*} =E{(∑_k'=1^K𝐡_k𝐆_k𝐀_k'𝐰_k'√(p_k'))(∑_k”=1^K𝐡_k𝐆_k𝐀_k”𝐰_k”√(p_k”))^*}
=E{(∑_k'=1^K(𝐡_k𝐆_k𝐀_k'𝐰_k'√(p_k'))(𝐡_k^*𝐆_k𝐀_k'𝐰_k'^*√(p_k'))}
=β_k^2∑_k'=1^K p_k'E{(𝐡_k𝐆_k𝐀_k'𝐰_k')(𝐡_k^*𝐆_k𝐀_k'𝐰_k'^*)}
=β_k^2∑_k'=1^K p_k'/M_k'Tr(𝐆_k^2𝐀_k').
Now, we calculate the SINR of user k from the formulas derived for the desired signal and interference as
𝖲𝖨𝖭𝖱_k =E{𝖣𝖲_k}^2/E{I_kI_k^*}+σ^2=p_kβ_k^2/M_kTr(𝐆_k𝐀_k)^2/β_k^2∑_k'=1^K p_k'/M_k'Tr(𝐆_k^2𝐀_k')+σ^2.
Finally, we can write user k's achievable rate as log_2(1+𝖲𝖨𝖭𝖱_k). This completes the proof.
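The closed-form SINR just derived is straightforward to evaluate numerically. As an illustration (a sketch with our own variable names, not part of the paper), user k's SINR and achievable rate can be computed from the element gains, the selection matrix, and the power allocation:

import numpy as np

def sinr_and_rate(k, p, beta2, G, A, M_sel, sigma2):
    # p: (K,) powers, beta2: (K,) values of beta_k^2, G: (K, M) element gains,
    # A: (K, M) 0/1 selection rows a_k, M_sel: (K,) selected-element counts
    signal = p[k] * beta2[k] / M_sel[k] * np.dot(G[k], A[k]) ** 2
    interference = beta2[k] * np.sum(p / M_sel * (A @ (G[k] ** 2)))
    sinr = signal / (interference + sigma2)
    return sinr, np.log2(1.0 + sinr)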
§ PROOF OF PROPOSITION <REF>
First, we derive the angle between user k and antenna m, i.e., θ_km. To this end, we write their unit vectors in Cartesian coordinates as
𝐯_k=[sinθ_kcosϕ_k,sinθ_ksinϕ_k,cosθ_k],
and
𝐯_m=[sinθ_mcosϕ_m,sinθ_msinϕ_m,cosθ_m].
Now, based on the geometric definition of dot product, we can write
𝐯_m·𝐯_k=𝐯_m𝐯_kcosθ_km=cosθ_km.
Therefore, we have
cosθ_km =sinθ_ksinθ_mcosϕ_kcosϕ_m
+sinθ_ksinθ_msinϕ_ksinϕ_m+cosθ_kcosθ_m,
and hence, θ_km can be derived as (<ref>).
It is clear that if θ_km>90, antenna element m will achieve no gain at user k's location. For θ_km<90, we know that an antenna element's maximum directional gain G_𝖤,𝗆𝖺𝗑 must be equal to 1 when the 3 dB beamwidth is equal to 180 degrees. Therefore, we can derive G_𝖤,𝗆𝖺𝗑 as
G_𝖤,𝗆𝖺𝗑=(180)^2/θ_3𝖽𝖡^2=32400/θ_3𝖽𝖡^2.
Finally, we can derive antenna m's loss at user k's location based on <cit.> as (<ref>). This completes the proof.
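For illustration, a per-element gain model consistent with these ingredients can be sketched as follows. Note that the quadratic roll-off used here is the common parabolic single-element pattern and is only an assumption standing in for the exact expression referenced above; the function name and default values are ours:

import numpy as np

def element_gain_dB(theta_km_deg, theta_3dB_deg, gamma_max_dB=30.0):
    # Assumed parabolic roll-off; may differ from the exact expression in the text.
    if theta_km_deg > 90.0:
        return -np.inf                         # no gain behind the element
    g_max_dB = 10.0 * np.log10(32400.0 / theta_3dB_deg ** 2)
    loss_dB = min(12.0 * (theta_km_deg / theta_3dB_deg) ** 2, gamma_max_dB)
    return g_max_dB - loss_dB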
§ PROOF OF PROPOSITION <REF>
To prove the proposed algorithm's optimality by contradiction, we assume that a different element, with less gain than the one proposed in Algorithm 1, is selected for user k. This means that g_km_𝗌𝖾𝗅𝖾𝖼𝗍𝖾𝖽<g_km_𝗉𝗋𝗈𝗉𝗈𝗌𝖾𝖽. Now, we write the SINR expression derived in (<ref>) in scalar form as
𝖲𝖨𝖭𝖱_k=p_kβ_k^2/M_kTr(𝐆_k𝐀_k)^2/β_k^2∑_k'=1^K p_k'/M_k'Tr(𝐆_k^2𝐀_k')+σ^2
=β_k^2p_k/M_k(∑_m=1^M g_kma_km)^2/β_k^2∑_k'=1^Kp_k'/M_k'∑_m=1^Mg_km^2a_k'm+σ^2
=β_k^2p_k/M_k(∑_m=1^M g_kma_km)^2/β_k^2p_k/M_k∑_m=1^Mg_km^2a_km+β_k^2∑_k'=1,k'≠ k^Kp_k'/M_k'∑_m=1^Mg_km^2a_k'm+σ^2.
When a different element is selected for user k, it may have either a larger or smaller gain at other users' locations. When a large number of elements is selected for each user, these effects tend to offset one another. Consequently, the second term in the denominator of (<ref>) remains relatively constant, even with a different element choice than the one proposed in Algorithm 1. However, the numerator and the first term in the denominator of (<ref>) decrease in value. Per Lemma 1, reducing the gain of each selected antenna element leads to a decrease in the SINR. This suggests that 𝖲𝖨𝖭𝖱_km_𝗌𝖾𝗅𝖾𝖼𝗍𝖾𝖽 < 𝖲𝖨𝖭𝖱_km_𝗉𝗋𝗈𝗉𝗈𝗌𝖾𝖽.
As user k's SINR decreases, the minimum SINR among all users also declines. Consequently, the objective function of (𝒫) decreases whenever an antenna element other than the one proposed in Algorithm 1 is chosen. Therefore, whenever there is a large number of selected elements involved, Algorithm 1 is an optimal solution. This completes the proof.
§ PROOF OF PROPOSITION <REF>
To prove the quasi-linearity of (𝒫1), we show that the objective function is quasi-linear and the constraints are linear sets. To prove quasi-linearity of the objective function, we just need to prove that its upper-level set (ULS) is a linear set <cit.>. To this end, we show the objective function with f(𝐏). Then, for any t∈ℝ_+, the ULS of the objective function is given by
𝖴𝖫𝖲(f,t)={𝐏:f(𝐏)>t}
={P:p_kβ_k^2/M_kTr(𝐆_k𝐀_k)^2/β_k^2∑_k'=1^K p_k'/M_k'Tr(𝐆_k^2𝐀_k')+σ^2>t, ∀ k ∈𝒦}
={P:p_kβ_k^2/M_kTr(𝐆_k𝐀_k)^2>t(β_k^2∑_k'=1^K p_k'/M_k'Tr(𝐆_k^2𝐀_k')+σ^2),
∀ k∈𝒦},
which is in the form of an affine function exceeding an affine function of variable 𝐏; hence, it is a linear set. Constraint (<ref>) is also linear. This completes the proof.
|
http://arxiv.org/abs/2409.02533v1 | 20240904084957 | Shedding Light on the Future: Exploring Quantum Neural Networks through Optics | [
"Shang Yu",
"Zhian Jia",
"Aonan Zhang",
"Ewan Mer",
"Zhenghao Li",
"Valerio Crescimanna",
"Kuan-Cheng Chen",
"Raj B. Patel",
"Ian A. Walmsley",
"Dagomir Kaszlikowski"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
[email protected]
Centre for Quantum Technologies, National University of Singapore, Queenstown 117543, Singapore
Department of Physics, National University of Singapore, Queenstown 117543, Singapore
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
National Research Council of Canada, 100 Sussex Drive, Ottawa, Ontario K1N 5A2, Canada
Department of Physics, University of Ottawa, 25 Templeton Street, Ottawa, Ontario K1N 6N5 Canada
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Department of Materials, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
[email protected]
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, Prince Consort Rd, London, SW7 2AZ, United Kingdom
Centre for Quantum Technologies, National University of Singapore, Queenstown 117543, Singapore
Department of Physics, National University of Singapore, Queenstown 117543, Singapore
§ ABSTRACT
At the dynamic nexus of artificial intelligence and quantum technology, quantum neural networks (QNNs) play an important role as an emerging technology in the rapidly developing field of quantum machine learning. This development is set to revolutionize the applications of quantum computing. This article reviews the concept of QNNs and their physical realizations, particularly implementations based on quantum optics. We first examine the integration of quantum principles with classical neural network architectures to create QNNs. Some specific examples, such as the quantum perceptron, quantum convolutional neural networks, and quantum Boltzmann machines, are discussed. Subsequently, we analyze the feasibility of implementing QNNs through photonics. The key challenge here lies in achieving the required non-linear gates, and measurement-induced approaches, among others, seem promising. To unlock the computational potential of QNNs, addressing the challenge of scaling their complexity through quantum optics is crucial. Progress in controlling quantum states of light is continuously advancing the field. Additionally, we have found that different QNN architectures can be unified through non-Gaussian operations. This insight will aid in better understanding and developing more complex QNN circuits.
Shedding Light on the Future: Exploring Quantum Neural Networks through Optics
Dagomir Kaszlikowski
==============================================================================
§ INTRODUCTION
Quantum computing is making significant progress toward machines that can achieve an advantage over classical computers, and new algorithms and applications that can benefit from this rapidly advancing domain are emerging continually. Applications for certain complex problems harness the principles of quantum mechanics to perform calculations with an efficiency unattainable by conventional computers. Technologies being pursued to build scalable quantum computers include superconducting qubits <cit.>, trapped ions <cit.>, photons <cit.> and neutral atoms <cit.>.
A particularly promising platform for quantum computers uses photonics. Photons are less susceptible to decoherence and thermal noise than atomic or solid-state materials, making them ideal for transmitting quantum information over long distances. For example, the recently-developed photonic quantum machines <cit.> are capable of executing some specific quantum algorithms at room temperature, a notable departure from the cryogenic environments required for other qubit types. The platform also benefits from a mature silicon photonics technology, meaning that large-scale quantum devices can be achieved by silicon or silicon nitride integrated photonic chip techniques <cit.>.
Achieving a fully scalable universal quantum computer requires quantum error correction and fault tolerance <cit.>. Error-correction architectures, such as the surface code <cit.>, are being developed to enable scalable processors. In the framework of photonic quantum computing, for example, the Gottesman-Kitaev-Preskill (GKP) state has been prepared <cit.>, just one example of a protected logical qubit suitable for error correction.
These developments highlight the versatility and potential scalability of photonic approaches, potentially accelerating the timeline for practical quantum computing applications.
Recently, the ongoing development of quantum computing technology <cit.> has opened some new approaches for quantum machine learning to tackle computational challenges related to quantum data <cit.>.
Variational quantum algorithms (VQAs) <cit.> and, more specifically, quantum neural networks (QNNs) <cit.>, stand out as some of the most promising applications in this area.
Meanwhile, both of these techniques rely on the concept of parameterized quantum circuit (PQC), a type of quantum circuit architecture equipped with adjustable parameters, such as the angle of rotation gates and phase shifters.
QNNs combine the techniques of quantum mechanics with the structure and function of neural networks, and are able to represent and process data (including quantum data) in ways that classical systems cannot efficiently achieve. By utilizing PQCs, QNNs can be trained and optimized for specific tasks, opening doors for advanced applications in quantum machine learning and potentially accelerating the practical use of quantum computing.
Traditional neural networks <cit.>, which are fundamental to modern artificial intelligence, have driven substantial advancements across numerous fields. For applications in physics, see, for example, Refs. <cit.>. However, these advancements are inevitably constrained by the limitations of classical computational systems. As we continue to generate and grapple with increasingly complex data and problems, the demand for more potent computational solutions is intensifying.
Quantum computing harnesses the laws of quantum mechanics for information processing and has the potential to solve certain problems intractable for classical computing systems <cit.>.
Specialized photonic quantum computers without error-correction capabilities have demonstrated a quantum-classical separation in computing power when compared to even the best classical algorithms <cit.>, which may find practical application in areas such as quantum chemistry, graph theory and drug discovery <cit.>.
The superposition of correlated states yields entanglement between subsystems, and this phenomenon imbues QNNs with unmatched speed <cit.> and capacity <cit.>.
As a cornerstone of quantum machine learning (QML) <cit.>, which leverages quantum systems to significantly accelerate the processing and analysis of large, complex datasets, the potential applications of QNNs span numerous fields, from pattern recognition to cancer prediction <cit.>, promising to deliver disruptive breakthroughs.
The transition of QNN from theoretical concepts to practical applications presents significant challenges, particularly due to existing losses and errors. As a predecessor to quantum optical neural networks (QONN), optical neural networks (ONNs) <cit.> have been extensively utilized in the training and inference of deep neural networks for real-time processing and scenarios requiring rapid data throughput <cit.>, owing to the significant noise resistance of classical light. From this perspective, the development of QONN is not starting from scratch. Building on the foundation of existing optical computing technologies, particularly their advancements in hardware, QONN can evolve within a relatively mature technological environment <cit.>.
In this perspective paper, we review several definitions of QNNs proposed in recent years, along with their applications in practical problems. Subsequently, we analyze the potential implementation of QNNs through quantum optics technology. This article also explores future prospects and challenges in these rapidly evolving fields. Given the anticipated quantum leap in computational power, understanding these topics is crucial for fully harnessing and directing the power of quantum computing.
§ QUANTUM NEURAL NETWORKS
Classical neural networks <cit.> play a crucial role in machine learning applications.
Various classical neural networks have been introduced over the years. These include feedforward networks such as the perceptron and convolutional neural networks, as well as recurrent neural networks such as the Boltzmann machine (also known as the Hopfield network) and long short-term memory networks.
Similarly, QNN can be categorized based on their structure and functionality. These networks integrate classical neural network architectures with principles of quantum computation.
In this section, we provide an overview of the fundamental concepts associated with QNNs and present illustrative examples for clarity.
§.§ Basic concepts of quantum neural network
The foundational component of a neural network is the artificial neuron. This neuron receives several inputs, designated as x_1,⋯,x_n, and generates a single output, y. Consider the feedforward neural network for example. Each input is associated with a specific weight, w_i. When the neuron receives the weighted sum ∑_i w_i x_i, it compares this sum with a bias, b, and applies an activation function, f, to the difference to yield an output value, <cit.>:
y=f(∑_i w_i x_i-b).
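As a point of reference, this neuron is a one-line computation; a minimal Python sketch (with tanh chosen arbitrarily as the activation function) is:

import numpy as np

def neuron(x, w, b, f=np.tanh):
    # y = f(sum_i w_i x_i - b); the choice of f here is illustrative only
    return f(np.dot(w, x) - b)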
A neural network is a complex system composed of numerous interconnected artificial neurons, structured according to a specified network architecture. See Fig. <ref> (a) for an illustration. These connections enable the flow of information and computations that underpin the capabilities of the neural network.
A QNN synergizes classical neural network principles with quantum computational elements. Typically, a QNN is structured into three main components: (i) data encoding, which translates classical data into quantum states; (ii) a quantum circuit equipped with adjustable parameters that facilitate quantum computations; and (iii) quantum measurements, which extract relevant classical information from the quantum states.
The data is encoded in the quantum state, and the manipulation of these data is executed through quantum operations, such as unitary gates, quantum channels, or quantum measurements.
The encoding of classical data into a quantum state is not unique, for which there exist several proposals <cit.>.
The classical data 𝒳⊂{0,1}^n (or 𝒳⊂ℝ^n) can be encoded in quantum states via a bijective map
x⃗↦ψ_x⃗, ∀x⃗∈𝒳,
where ψ_x⃗ is a pure state. This type of encoding is also called pure-state encoding. Similarly, we can introduce mixed-state encoding.
The data encoding is realized via a state preparation
circuit U_x⃗ acting on the initial state |0⟩^⊗ n. For example, we have
* Basis encoding. For n-bit string x∈𝒳, choose a n-qubit Hilbert space ℋ=(ℂ^2)^⊗ n and maps x⃗=(x_1,x_2⋯ ,x_n)∈𝒳 to the basis
x⃗↦ |x⃗⟩=⊗_i=1^n (cos(x_i)|0⟩ + sin(x_i)|1⟩).
* Amplitude encoding. By introducing a feature map f⃗:𝒳→ℝ^N, we can encode classical data in an N-dimensional feature Hilbert space
x⃗↦ |ψ_x⃗⟩=1/‖f⃗(x⃗)‖_2∑_i f_i (x⃗) |i⟩,
where ‖f⃗(x⃗)‖_2=(∑_i f_i(x⃗)^2)^1/2 and i=1,⋯,N.
A frequently used example of feature map f is defined as f_i(x⃗)=x_i, namely taking the i-th component of x⃗. In this case, the encoding is also called wavefunction encoding.
* Angle encoding or product encoding. When x⃗ is an n-dimensional real vector, the encoding given in Eq. (<ref>) is also called angle encoding. A short numerical sketch of these encodings follows below.
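The sketch below writes down the amplitude (wavefunction) and angle encodings as explicit state vectors in plain numpy, with qubits ordered as in the tensor products above; it is purely illustrative:

import numpy as np

def amplitude_encoding(x):
    # amplitudes proportional to the entries of x (wavefunction encoding)
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def angle_encoding(x):
    # tensor product of cos(x_i)|0> + sin(x_i)|1> over the entries of x
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi), np.sin(xi)]))
    return state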
Unlike classical neural networks that utilize non-linear activation functions, as shown in Fig. 1(b), QNNs rely on PQCs as their foundational structure <cit.>.
A parameterized quantum circuit is essentially a quantum circuit that incorporates gates with tunable parameters, often denoted as continuous variables. Specifically, consider two sets of gates: 𝒢_1={V_1,⋯,V_n}, which contains gates without parameters, and 𝒢_2={U_1(θ_1),⋯,U_m(θ_m)}, consisting of gates with adjustable parameters. A quantum circuit constructed using the combined gate set 𝒢=𝒢_1∪𝒢_2 is termed a parameterized quantum circuit.
A QNN can be regarded as a PQC U_ QNN(θ⃗) defined by a specific circuit architecture, where θ⃗ represents the adjustable parameters <cit.>.
We train the QNN using the training data set 𝖳𝗋𝖺𝗂𝗇={(x⃗_i,y_i)}, where the y_i's are the labels of the data points x⃗_i. After training, the parameters of the QNN are fixed, and the goal is that, on the test data set 𝖳𝖾𝗌𝗍={(s⃗_i,t_i)}, the correct label of each data point can be obtained by observing some observable O (or equivalently, measuring the output state in some given bases):
t_i = ⟨ψ_s⃗_i|U_ QNN(θ⃗)^† O U_ QNN(θ⃗)|ψ_s⃗_i⟩.
To simplify the discussion, we will hereinafter assume that the label of every data point is a real number.
Given that the fundamental architecture of a QNN is a PQC, it is important to determine the appropriate choice of ansatz gates.
A commonly used ansatz is of the form (see, e.g., <cit.>)
U(θ⃗)=∏_ke^-iθ_k H_kW_k,
where H_k are Hermitian operators, W_k are unparametrized gate (such as CNOT gate), and θ_k are parameters.
The parameters of the QNN are tuned based on a loss function, which can vary based on the QNN's architecture and the specific tasks it addresses.
Suppose that O is our target observable, we can express the loss function as
L(θ⃗)=1/|𝒳_ train|∑_k: 𝖳𝗋𝖺𝗂𝗇 c_k (Tr(ρ_ out^k(θ⃗) O) -y_k)^2,
where c_k are real coefficients and ρ_ out^k(θ⃗)=U_ QNN(θ⃗)ρ_in^kU_ QNN(θ⃗)^† with ρ^k_ in∈𝒳_ train the training quantum data with labels y_k.
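To make these pieces concrete, the sketch below evaluates such a loss for an ansatz of the form (<ref>), representing states and gates as dense arrays and using scipy's matrix exponential; the matrices H_k, W_k, the observable O, and the encoded states are assumed to be given, all c_k are set to 1, and the implementation is only illustrative:

import numpy as np
from scipy.linalg import expm

def ansatz(theta, H_list, W_list):
    # U(theta) = prod_k exp(-i theta_k H_k) W_k, applied right to left
    U = np.eye(W_list[0].shape[0], dtype=complex)
    for th, H, W in zip(theta, H_list, W_list):
        U = expm(-1j * th * H) @ W @ U
    return U

def qnn_loss(theta, H_list, W_list, O, states, labels):
    # mean of (<psi|U^dag O U|psi> - y)^2 over the training set
    U = ansatz(theta, H_list, W_list)
    residuals = []
    for psi, y in zip(states, labels):
        out = U @ psi
        residuals.append((np.real(np.vdot(out, O @ out)) - y) ** 2)
    return float(np.mean(residuals))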
A crucial problem in studying QNNs is their trainability.
For a substantial number of QNNs, the cost function gradients of an ansatz, when randomly initialized, experience an exponential decrease as the problem size expands. This widely observed occurrence is referred to as the barren plateau phenomenon <cit.>.
We denote the partial derivative of loss function with respect to θ_α as ∂_αL:=∂ L(θ⃗)/∂θ_α.
It was pointed out in Refs. <cit.> that the trainability of a randomly initialized QNN can be analyzed by studying the scaling of the variance
Var[∂_αL]=⟨ (∂_αL)^2⟩-⟨∂_αL⟩^2,
where the expectation value is taken over the parameters. Using the assumption that ⟨∂_αL⟩=0, from Chebyshev's inequality, we obtain that
Pr[|∂_αL|>ε]≤Var[∂_αL]/ε^2,
where ε≥ 0.
If the variance, Var[∂_αL], is exponentially small, it indicates a barren plateau in the loss function. In such scenarios, the gradient of the loss function becomes vanishingly small on average, necessitating an exponentially high precision to traverse this flat region effectively.
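Numerically, this diagnostic can be probed by sampling random parameter vectors and estimating Var[∂_αL] with a central finite difference; a simple sketch (applicable to any loss function of the parameter vector, with our own naming) is:

import numpy as np

def grad_variance(loss, n_params, alpha=0, n_samples=200, delta=1e-3, seed=0):
    # Monte-Carlo estimate of Var[dL/d theta_alpha] over random initializations
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0 * np.pi, size=n_params)
        tp, tm = theta.copy(), theta.copy()
        tp[alpha] += delta
        tm[alpha] -= delta
        grads.append((loss(tp) - loss(tm)) / (2.0 * delta))
    return float(np.var(grads))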
§.§ Examples of quantum neural network
In this section, we delve into specific examples of QNNs. Remarkably, QNNs often lack nonlinear activation functions, a feature that will become apparent in the upcoming examples.
Quantum perceptron. —
Classical perceptrons can be regarded as the fundamental building block of more complex artificial neural networks <cit.>.
The quantum perceptron is the quantum generalization, where data and weights are encoded into quantum states.
The main tool to implement the quantum perceptron is Grover's search algorithm <cit.>.
For the labeled data set 𝒳={(x⃗_i,y_i)} (for simplicity, we assume the label y_i's take values in {0,1}) and a given perceptron, our goal is to find weights that correctly classify the data.
This is characterized by a perceptron classification function: F(x⃗_i,y_i;{w_k})=1 if and only if the perceptron with weights {w_i} misclassifies the data.
In the classical implementation of the online perceptron, training points are sequentially fed into the classification algorithm. Each time a point is misclassified, the weight parameters are updated.
The quantum version of the online perceptron diverges from the traditional sequential data access during an epoch. It employs a method of accessing data points in a superposed quantum state and applies the classification function to this state. This enables all data points to be processed simultaneously, streamlining the search for the misclassified point.
First, we use a quantum circuit to prepare the superposition of the data
U_𝒳:|j⟩⊗ |0⟩↦ |j⟩⊗ |x_j⟩.
Then we have
U_𝒳[(1/√(|𝒳|)∑_j |j⟩) ⊗ |0⟩] =1/√(|𝒳|)∑_j |j⟩⊗ |x_j⟩.
To implement Grover's search, we need to build a quantum circuit that implements the Boolean function F(x⃗_i,y_i;{w_k}).
With access to such a circuit, we can subsequently define a corresponding oracle quantum operator
U_F(x⃗_i,y_i;{w_k}) |x_j⟩ =(-1)^F(x⃗_j,y_j;{w_k}) |x_j⟩.
Notice that U_F(x⃗_i,y_i;{w_k}) depends on the weights {w_k}.
Now we can use U_F(x⃗_i,y_i;{w_k}) as an oracle to implement the Grover search.
The quantum perceptron algorithm works as follows: (i) Apply Grover's search using U_𝒳 and U_F(x⃗_i,y_i;{w_k}) over the state (1/√(|𝒳|)∑_j |j⟩) ⊗ |0⟩; (ii) Measure the first register; if F(x⃗_i,y_i;{w_k})=1, then update the weights {w_k'}←{w_k} and update the operation accordingly, U_F(x⃗_i,y_i;{w'_k})← U_F(x⃗_i,y_i;{w_k}); (iii) Repeat the preceding two steps until the condition Pr[∃ j, F(x_j,y_j;{w_k})=1] ≤ε is satisfied for a given small ε >0.
For alternative approaches to quantum perceptron implementation, refer to the detailed analysis in Ref. <cit.>.
Quantum convolutional neural networks (QCNN) —
In classical computing, convolutional neural networks comprise layers for convolution, pooling, and full connectivity. Their quantum counterpart has been suggested in Ref. <cit.>.
See Fig. <ref> for an illustration.
For the convolution layer, we implement local unitary gates U_1 in a translationally invariant manner for a finite depth. This is inspired by the classical convolution operation, where a weighted convolutional kernel is applied in a translationally invariant manner.
In the pooling layer, a subset of qubits are measured. Subsequently, unitary gates V_1 are applied to the adjacent qubits, conditioned on the received measurement outcomes.
In the pooling layer, due to the reduction in the number of qubits, non-linearity is inherently introduced.
The fully connected layer is realized by applying a global unitary gate W.
For an n-qubit input state ρ_ in, the output state ρ_ out(θ⃗) has a much smaller dimension.
The parameters, represented by θ⃗ in the output state, require optimization. Typically, these parameters originate from the unitary gates U_1 in the convolutional layer and the global unitary gate W in the fully connected layer.
In the study by Ref. <cit.>, it was demonstrated that barren plateaus are absent in QCNNs. This suggests that QCNNs can be effectively trained from random initializations, highlighting their potential in quantum data applications.
Quantum Boltzmann machine. —
The classical Boltzmann machine is constructed using the energy function:
E=-∑_i,jv_i w_i,j h_j -∑_ia_i v_i -∑_j b_j h_j,
where w_i,j are weights between visible neurons v_i and hidden neurons h_j, a_i and b_j are biases.
The probability distribution is given by Boltzmann distribution
Pr({v_i})=1/Z∑_{h_j}e^-E({v_i},{h_j}),
where Z is the partition function.
The quantum Boltzmann machine is a quantum generalization <cit.>. Following the proposal of Ref. <cit.>, both the hidden and visible neurons are replaced with quantum spins, and the energy function is replaced with a Hamiltonian:
H=-∑_i b_i σ_i^z-∑_i,jw_i,jσ^z_iσ_j^z.
The corresponding thermal equilibrium state is:
ρ= e^-H/Z,
where Z=Tr e^-H.
To obtain the Boltzmann probability of quantum visible neurons, we construct the operator
Λ_{v_i} =|{v_i}⟩⟨{v_i}|⊗ I.
The marginal Boltzmann probability is obtained by
Pr({v_i}) =Tr(Λ_{v_i}ρ).
Consequently, we can conduct supervised learning analogously to the classical Boltzmann machine approach.
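Because all terms in this Hamiltonian commute, the thermal state is diagonal in the computational basis, and the marginal probability reduces to a classical Boltzmann sum. The brute-force sketch below assumes the restricted visible–hidden connectivity of the classical energy above and is meant only for small systems:

import numpy as np
from itertools import product

def qbm_marginal(v, b_vis, b_hid, W):
    # Pr(v) for the diagonal (sigma_z only) Hamiltonian; spins take values +/-1
    def boltzmann_weight(vv, hh):
        vv, hh = np.asarray(vv, float), np.asarray(hh, float)
        energy = -np.dot(b_vis, vv) - np.dot(b_hid, hh) - vv @ W @ hh
        return np.exp(-energy)

    n_v, n_h = len(b_vis), len(b_hid)
    Z = sum(boltzmann_weight(vv, hh)
            for vv in product([-1, 1], repeat=n_v)
            for hh in product([-1, 1], repeat=n_h))
    return sum(boltzmann_weight(v, hh) for hh in product([-1, 1], repeat=n_h)) / Z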
The above three examples are not exhaustive, as numerous quantum neural network models have been proposed. However, since this work focuses primarily on constructing quantum neural networks through optical methods, we will not delve further into this topic.
§ REALIZING QUANTUM NEURAL NETWORKS WITH OPTICS
Optical quantum computing possesses significant advantages, especially in the following key areas. First, unlike other quantum systems that require low-temperature environments, it can operate in ambient conditions. Second, photons interact only weakly with their environment, giving these systems a high degree of noise isolation. In addition, optical quantum computing can be readily integrated with existing optical communication networks, offering substantial convenience for practical applications and system expansion.
However, no known or anticipated material possesses an optical nonlinearity strong enough to implement a deterministic photon-photon gate <cit.>, which typically requires a strongly coupled light-matter system <cit.>. As a result, quantum computation with optics also has a notable drawback: the weak interaction between photons makes the implementation of two-qubit quantum gates challenging and probabilistic, thereby limiting its scalability <cit.>.
To realize a QONN under the continuous-variable (CV) architecture (as shown in Fig. 1(c)), linear transformations and non-linear gates are necessary <cit.>. While the former are easy to achieve in a standard optical circuit <cit.>, the non-linear gate (e.g., a Kerr-type gate) is quite challenging to realize, especially in the weak-field case. Moreover, the lack of strong optical nonlinearity discussed above also hinders the implementation of deterministic two-qubit entangling gates in the discrete-variable (DV) architecture <cit.>, which is crucial for constructing a universal photonic quantum computer.
Many operations in optical systems are more straightforward to implement within the CV architecture. For instance, it allows for the preparation of large-scale cluster states with distinct advantages <cit.> and for programmable Gaussian boson sampling circuits <cit.>. Moreover, beyond the linear transformations and squeezing operations realized in the above experiments, non-linear gates have also become more feasible in this architecture than by directly introducing a non-linear crystal in the weak-field regime, and there is hope for a deterministic implementation <cit.>.
Leveraging the convenience of implementing the aforementioned operations, let us first explore how to realize a QONN in optical systems under the CV architecture <cit.>; the corresponding circuit is shown in Fig. 3.
As shown in Fig. 3(a), a linear interferometer needs to be applied at the beginning of the QONN. To accommodate the parameter adjustments required in a QONN, the linear interferometer must be fully programmable. Such a linear interferometer can currently be implemented with integrated quantum photonic chips <cit.> (as shown in Fig. 4(a)) and time-bin loop-based processor structures <cit.> (as shown in Fig. 4(b)). Both of these architectures can implement arbitrary unitary operations and are easily scalable in terms of mode numbers.
The optical quantum chip system involves using integrated photonic circuits (shown in Fig. 4(a)) to perform fully programmable unitary operations with high precision, leveraging the stability and low decoherence of photonic systems. Such an integrated chip system is scalable and can be fabricated using well-established manufacturing techniques, making them a promising option for building large-scale QONNs.
Fully programmable unitary operations can also be realized with time-bin loop setups, as shown in Fig. 4(b). This method uses time-bin encoding, whereby information is stored in each pulse of light, to manipulate qumodes. In this arrangement, electro-optic modulators (EOMs) can effectively implement the unitary gate on the polarization degree of freedom <cit.>, creating a robust and flexible framework for photonic quantum computations. Both methods offer the full programmability and stability needed to overcome the limitations of current quantum hardware.
Squeezing gates (as shown in Fig. 3(b)) are followed by linear interferometers. The effect of the squeezing gate on quadrature basis states |x_i⟩ is
Ŝ(r_i)|x_i⟩=√(c_i)|c_ix_i⟩
where c_i = e^-r_i <cit.>, and r_i denotes the squeezing level. This operation can be implemented as a measurement-induced squeezing gate with an all-optical system <cit.>. In addition to the input state, this operation requires the simultaneous injection of an auxiliary squeezed state. Both states are sent into a polarizing beam splitter (PBS) with an adjustable transmission-reflection ratio. A displacement operation is then performed conditioned on a measurement of the field quadrature amplitude in the auxiliary output arm. The final output state can be regarded as the result of applying a squeezing operation to the input state, with the degree of squeezing determined by the transmission-reflection ratio of the PBS <cit.>. Assuming the transmission ratio of the PBS is T_0, the x quadrature of the output state becomes x_output=√(T_0) x_input+√(1-T_0) e^-r_ancilla x_ancilla <cit.>. This operation is therefore equivalent to a squeezing operation of magnitude r=-ln√(T_0). Moreover, as the squeezing level of the auxiliary state increases, the output state approaches the ideal squeezed state more closely.
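As a toy illustration of this quadrature relation, one can propagate a single x-quadrature sample through the gate; the vacuum-noise convention (variance 1/2) and the function name below are assumptions made only for this sketch:

import numpy as np

def measurement_induced_squeezing(x_in, T0, r_ancilla, seed=None):
    # x_out = sqrt(T0) x_in + sqrt(1 - T0) exp(-r_ancilla) x_vac,
    # i.e. an effective squeezing of magnitude r = -ln(sqrt(T0))
    rng = np.random.default_rng(seed)
    x_vac = rng.normal(0.0, 1.0 / np.sqrt(2.0))   # ancilla vacuum quadrature noise (assumed convention)
    x_out = np.sqrt(T0) * x_in + np.sqrt(1.0 - T0) * np.exp(-r_ancilla) * x_vac
    return x_out, -np.log(np.sqrt(T0))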
The subsequent displacement operation (as shown in Fig. 3(c)) is relatively straightforward. One simply interferes the target state with a local oscillator on a beam splitter (BS) with a 99:1 ratio. A phase modulator can be placed before the BS to adjust the phase of the input local oscillator <cit.>. This effectively implements the D(α) operation, with α=|α|e^-iφ.
Lastly, we introduce the most crucial nonlinear operation, depicted in Fig. 3(d). Typically characterized as a non-Gaussian operation <cit.>, it plays an essential role within the CV framework. Numerous operations fall under this category, such as photon-number-resolved detection (PNRD) <cit.>, the Kerr gate <cit.>, and the cubic phase gate <cit.>. The Kerr-type nonlinear operation, for example, has so far only been realized through a measurement-induced method involving photon addition and subtraction <cit.>. We can therefore observe that, within the CV architecture, all requisite operations can be executed exclusively with optical devices. Building on existing technologies, such as integrated optical chips and free-space time-bin systems, the implementation of QONNs shows substantial scalability. This suggests that, in the future, we could train larger datasets with QONNs and genuinely transition them into practical applications.
Meanwhile, analyzing the nonlinear operation realized by this non-Gaussian process is also beneficial for preparing GKP states <cit.> and advancing the study of fault-tolerant quantum computers <cit.>.
However, quantum hardware is currently still in its nascent stage, often referred to as the noisy intermediate-scale quantum computing (NISQ) era. The limited scale and relatively high error rates pose significant challenges for scaling QNNs to practical sizes. Thus, effective error correction techniques are essential for maintaining the integrity of computations.
In addition, optimization strategies and randomness play crucial roles in the training process. Optimization strategies include quantum natural gradient descent, hybrid classical-quantum optimization, and parameterized quantum circuits, which adjust parameters to minimize the loss function. Randomness in QNNs manifests in random initialization, random sampling, and random perturbations, and can help to avoid local minima and to estimate gradients.
Additionally, we can integrate QONNs under both CV/DV-architectures with Gaussian and non-Gaussian operations <cit.>. In the CV-architecture (like shown in Fig. <ref> (c)), non-Gaussian operations are used to accomplish the highly significant and most challenging non-linear operations. Whereas in the DV-architecture (like shown in Fig. <ref> (b)), such non-Gaussian resources can potentially be used to complete two-qubit gate operations utilizing GKP states <cit.>. Therefore, we can find that both CV and DV architectures require the intervention of non-linear/non-Gaussian operations to effectively implement QONN.
While the realization of QNNs using quantum optics still faces many challenges, particularly in terms of scalability and noise reduction, the theoretical underpinning and preliminary experimental results indicate a promising path forward. Continued advancements in the control and manipulation of quantum states of light are expected to drive progress in the development of practical QNNs.
§ POTENTIAL APPLICATIONS OF QNN
One prominent application of QNNs is in the field of handwritten digit recognition <cit.>. In this case, a QNN combines the advantages of neural modelling and fuzzy theory. Designed to work with both real data and synthetically distorted images, it has been shown to improve both efficiency and accuracy in identifying handwritten numbers <cit.>. With the continuous improvement of the technology, QNNs have potential for broader applications in pattern recognition <cit.>. Moreover, QNNs hold significant promise in medical diagnostics, particularly in predicting breast cancer <cit.>. This application underscores the transformative potential of QNNs in improving healthcare outcomes through advanced predictive analytics.
Furthermore, the potential applications of QNNs are vast and varied, impacting such fields as quantum phase recognition <cit.>, artificial intelligence <cit.>, and weather prediction <cit.>. As research and development in quantum computing continue to advance, the deployment of QNNs could lead to breakthroughs that leverage quantum advantages for solving some complex problems with unprecedented efficiency and accuracy.
§ CHALLENGES AND FUTURE DIRECTIONS
While developing and implementing QNNs through quantum optics shows great promise, it also presents some unique challenges that must be overcome. Scalability is one of the significant technical difficulties that need to be addressed. As the scale of quantum systems increases, the complexity of managing the system also increases. This is particularly relevant to quantum neural networks, where the complexity will grow exponentially with the number of neurons (nodes) and synapses (connections).
As the scale of QNNs expands, quantum systems become prone to errors due to losses and operational defects. Effective quantum error-correction codes and techniques are therefore essential for practical QONN applications, but these technologies are still underdeveloped.
The development of quantum neural networks and quantum optics has immense potential in shaping the future of computing, offering new approaches to current computational challenges. QNNs are predicted to facilitate the development of new quantum algorithms. These algorithms could significantly improve the efficiency of data processing tasks such as pattern recognition and optimization in areas ranging from financial modeling to drug discovery. Additionally, with the aid of QNNs, quantum machine learning has the potential to revolutionize the field of artificial intelligence by providing more powerful models. This could lead to breakthroughs in complex tasks like natural language processing, image recognition, and real-time decision-making. Meanwhile, with advancements in quantum optics and integrated optical circuits, it is possible to develop scalable and practical quantum systems. As the technology matures, we can expect to see QNNs integrated into everyday computing devices, accelerating the advent of the quantum computing era. Furthermore, the integration of quantum neural networks in quantum optics might assist in establishing a quantum internet. This would enable ultra-secure communication, distributed quantum computing, and a new level of coordination between quantum devices.
While these opportunities are promising, they come with significant challenges that necessitate thorough research and development. Nonetheless, persistent effort and investment in quantum neural networks and quantum optics could significantly enhance our computational power.
In summary, with the quantum domain poised for computational leaps, the fusion of quantum neural networks and quantum optics is at the forefront of this evolution. While the journey to fully harness and steer the power of quantum computing remains challenging, the ongoing innovations and research detailed in this article set a hopeful trajectory for the quantum revolution.
§ ACKNOWLEDGEMENTS
We thank Steven Sagona-Stophel for providing useful feedback on the manuscript.
This work was supported by Imperial QuEST seed funding, UK Research and Innovation Guarantee Postdoctoral Fellowship (EP/Y029631/1), Engineering and Physical Sciences Research Council and Quantum Computing and Simulation Hub (T001062), UK Research and Innovation Future Leaders Fellowship (MR/W011794/1), EU Horizon 2020 Marie Sklodowska-Curie Innovation Training Network (no. 956071, `AppQInfo'), National Research Foundation in Singapore, and A*STAR under its CQT Bridging Grant.
|
http://arxiv.org/abs/2409.03731v1 | 20240905174219 | A Deep Generative Learning Approach for Two-stage Adaptive Robust Optimization | [
"Aron Brenner",
"Rahman Khorramfar",
"Jennifer Sun",
"Saurabh Amin"
] | eess.SY | [
"eess.SY",
"cs.LG",
"cs.SY"
] |
1,3]Aron Brenner (Corresponding author. Email: [email protected])
1,2]Rahman Khorramfar
4]Jennifer Sun
1,2,3]Saurabh Amin
[1]Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA
[2]MIT Energy Initiative, Massachusetts Institute of Technology, Cambridge, MA
[3]Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, MA
[4]Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA
A Deep Generative Learning Approach for Two-stage Adaptive Robust Optimization
§ ABSTRACT
Two-stage adaptive robust optimization is a powerful approach for planning under uncertainty that aims to balance costs of “here-and-now” first-stage decisions with those of “wait-and-see” recourse decisions made after uncertainty is realized. To embed robustness against uncertainty, modelers typically assume a simple polyhedral or ellipsoidal set over which contingencies may be realized. However, these simple uncertainty sets tend to yield highly conservative decision-making when uncertainties are high-dimensional. In this work, we introduce AGRO, a column-and-constraint generation algorithm that performs adversarial generation for two-stage adaptive robust optimization using a variational autoencoder. AGRO identifies realistic and cost-maximizing contingencies by optimizing over spherical uncertainty sets in a latent space using a projected gradient ascent approach that differentiates the optimal recourse cost with respect to the latent variable. To demonstrate the cost- and time-efficiency of our approach experimentally, we apply AGRO to an adaptive robust capacity expansion problem for a regional power system and show that AGRO is able to reduce costs by up to 7.8% and runtimes by up to 77% in comparison to the conventional column-and-constraint generation algorithm.
§ INTRODUCTION
Growing interest in data-driven stochastic optimization has been facilitated by the increasing availability of more granular data for a wide range of settings where decision-makers hedge against uncertainty and shape operational risks through effective planning. Adaptive robust optimization (ARO) – also referred to as adjustable or two-stage/multi-stage robust optimization <cit.> – is one class of models that has found applications in optimization of industrial processes <cit.>, transportation systems <cit.>, and energy systems planning <cit.>. In this work, we aim to solve data-driven two-stage ARO problems with high-dimensional uncertainty that take the form
min_x∈𝒳 { c^⊤ x + λmax_ξ∈𝒰min_y∈𝒴(ξ, x) d^⊤ y }.
Here, x represents “here-and-now” first-stage decisions while y represents “wait-and-see” recourse decisions made after the random variable ξ∈ℝ^D is realized.
The set 𝒰⊂ℝ^D represents the uncertainty set, encompassing all possible realizations of ξ that must be accounted for when identifying the optimal first-stage decisions x. In this work, we assume first-stage decisions x belong to a mixed-integer feasible set 𝒳 while recourse decisions y belong to a polyhedral set 𝒴(ξ, x) = {y | By ≥ b(ξ) - A(ξ)x, y≥ 0}, where b(ξ) and A(ξ) are affine functions. Additionally, we adopt a complete recourse assumption, i.e., 𝒴(ξ, x) ≠∅ for any ξ∈𝒰, x∈𝒳.
As an illustrative example, consider the problem of capacity expansion planning for a supply chain network. In this case, x denotes discrete long-term decisions (i.e., constructing warehouses), which require an upfront investment c^⊤ x. As the day unfolds and demands ξ are known, production and transportation decisions y∈𝒴(ξ,x) can be made to meet demand at a recourse cost of d^⊤ y. To ensure robustness against uncertain demand, first-stage decisions balance investment costs with the λ-weighted worst-case daily production and transportation cost incurred over the uncertainty set 𝒰.
The uncertainty set 𝒰 plays a large role in shaping planning outcomes and as such must take into account the available data as well as the planner's risk preferences. To optimize risk measures such as worst-case costs or value-at-risk more precisely, it is advantageous to construct 𝒰 based on the desired risk tolerance and obtain probabilistic guarantees on the recourse cost before optimizing. In the case of nonadaptive robust optimization (RO), various probabilistic guarantees can be obtained for a range of conventional (e.g., ellipsoidal, budget) uncertainty sets depending on the information one has about the distribution of uncertainty p_ξ <cit.>. These results, which give guarantees for certain classes of constraints admitting a closed-form expression, are not easily extended to the case of recourse costs in ARO problems. While an approximate α-probability guarantee can be obtained using an uncertainty set that contains an α-fraction of ξ observations, such an approach can yield increasingly conservative solutions to (<ref>) – i.e., solutions that “over-invest” in anticipation of extreme contingencies – in settings with high-dimensional uncertainty as the size of such uncertainty sets scales exponentially with D <cit.>.
In this work, we propose AGRO, a solution algorithm that embeds a variational autoencoder within a column-and-constraint generation (CCG) scheme to perform adversarial generation of realistic contingencies for adaptive robust optimization. We list our contributions below:
* Extension of deep data-driven uncertainty sets to ARO. We extend recent approaches for learning tighter uncertainty sets in RO <cit.> to the case of ARO, a richer class of optimization models that allows for recourse decisions to be made after uncertainty is realized.
* ML-assisted optimization using exact solutions. Rather than approximate decision-making with a predictive model <cit.>, we train a generative model to approximate the distribution of uncertain parameters. Consequently, AGRO does not require building a large dataset of solved problem instances for training a predictive model and optimizes with respect to exact recourse costs rather than an ML-based approximation.
* Application to planning with observed energy supply/demand data. We apply our solution algorithm to the problem of long-term power system planning under hourly supply/demand uncertainties – a data-driven setting with high-dimensional and nonlinearly correlated uncertainty – and show reductions in both costs and runtimes when compared to a conventional CCG algorithm.
The remainder of the text is organized as follows. In Sec. <ref>, we describe the state of the art for modeling with ARO. First, we describe the CCG solution framework and outline standard approaches for uncertainty set construction, which together provide a basis for discussion of recent developments integrating learning and optimization for RO/ARO. In Sec. <ref>, we describe our solution algorithm in full, which integrates an uncertainty set representation learned by a variational autoencoder (VAE) within the CCG algorithm. Finally, in Sec. <ref>, we apply our CCG algorithm to the case of capacity expansion planning for a regional power system and compare against the conventional CCG approach.
§ PRELIMINARIES
§.§ Uncertainty Sets
A fundamental modeling decision in both robust optimization and ARO is the definition of the uncertainty set 𝒰, representing the range of uncertainty realizations for which the first stage decision x must be robust. The most widely adopted uncertainty sets are variations of the box, budget, and ellipsoidal uncertainty sets <cit.>, which can be obtained in a data-driven setting as
𝒰^box = {ξ∈ℝ^D | ξ̂^min≤ξ_i ≤ξ̂^max, ∀ i=1,…,D }
𝒰^budget = {ξ∈ℝ^D | ∑_i=1^D Σ̂_ii^-1|ξ_i - μ̂_i| ≤Γ_budget}
𝒰^ellipse = {ξ∈ℝ^D | (ξ - μ̂)^⊤Σ̂^-1(ξ - μ̂) ≤Γ_ellipse}.
Here, μ̂, Σ̂, ξ̂^min, and ξ̂^max denote the empirical mean, covariance, minimum value, and maximum value of the empirical distribution of ξ while the Γ parameters denote the uncertainty “budgets.” To obtain an approximate α-probability guarantee of feasibility, one can select Γ such that 𝒰 covers an α-fraction of observed ξ realizations. Alternatively, one can take a distributionally robust
approach to selecting Γ by leveraging concentration inequalities.
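As an illustration of this calibration step, the following Python sketch (our own, not taken from the cited works) estimates the empirical moments and sets the budgets to the α-quantiles of the corresponding deviation statistics, so that each set covers an α-fraction of the observed realizations; all function and variable names are ours.

import numpy as np

def calibrate_uncertainty_sets(xi: np.ndarray, alpha: float = 0.95):
    """xi has shape (n_samples, D); returns parameters of the box, budget, and ellipsoidal sets."""
    mu = xi.mean(axis=0)
    sigma = np.cov(xi, rowvar=False)
    xi_min, xi_max = xi.min(axis=0), xi.max(axis=0)            # box set bounds

    # Budget set: alpha-quantile of the variance-weighted L1 deviation, as written above.
    l1_dev = (np.abs(xi - mu) / np.diag(sigma)).sum(axis=1)
    gamma_budget = np.quantile(l1_dev, alpha)

    # Ellipsoidal set: alpha-quantile of the squared Mahalanobis distance.
    centered = xi - mu
    maha_sq = np.einsum("nd,dk,nk->n", centered, np.linalg.inv(sigma), centered)
    gamma_ellipse = np.quantile(maha_sq, alpha)

    return mu, sigma, (xi_min, xi_max), gamma_budget, gamma_ellipse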
§.§ Column-and-Constraint Generation
Once 𝒰 is defined, one can apply a number of methods to either exactly <cit.> or approximately <cit.> solve the ARO problem. Here, we focus our discussion on the CCG algorithm <cit.> due to its prevalence in the ARO literature (including its use in recent learning-assisted methods <cit.>) and because it provides a nice foundation for introducing AGRO in Sec. <ref>. The CCG algorithm is an exact solution method for ARO that iteratively identifies “worst-case” uncertainty realizations ξ^i by maximizing recourse costs for a given first-stage decision x^*. These uncertainty realizations are added in each iteration to the finite scenario set 𝒮, which is used to instantiate the main problem
min_x,y,γ c^⊤ x + λγ
s.t. x ∈𝒳
A(ξ^i) x + B y^i ≥ b(ξ^i) ξ^i∈𝒮
y^i ≥ 0 i = 1,…,|𝒮|
γ≥ d^⊤ y^i i = 1,…,|𝒮|
where γ denotes the worst-case recourse cost over 𝒮 (see (<ref>)). In each iteration i of the CCG algorithm, additional variables (i.e., columns) y^i and constraints y^i ∈𝒴(ξ^i, x) corresponding to the most recently identified worst-case realization are added to the main problem.
Solving the main problem yields a set of first-stage decisions, x^*, which are fixed as parameters in the adversarial subproblem
max_ξ∈𝒰min_y ≥ 0 {d^⊤ y | B y ≥ b(ξ) - A(ξ) x^* }.
Solving the adversarial subproblem yields a new worst-case realization ξ^i to be added to 𝒮. Such a max-min problem can be solved by constructing the dual of the inner minimization problem and maximizing d^⊤ y subject to Karush–Kuhn–Tucker (KKT) conditions of the inner minimization problem <cit.>. Despite its prominence in the literature, the KKT reformulation of (<ref>) yields a potentially large bilinear problem; specifically, one for which the number of bilinear terms scales linearly with D. This need to repeatedly solve a large-scale bilinear program can cause CCG to be computationally unwieldy in the case of high-dimensional uncertainty. To this end, a number of alternative solution approaches – including some that leverage ML methods – have been proposed that approximate recourse decisions in order to yield more tractable reformulations, which we discuss next.
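Before turning to these learning-based alternatives, the overall CCG iteration can be summarized by the following Python skeleton. This is a structural sketch of our own, not the authors' code: solve_main and solve_adversarial are placeholders for problem-specific solvers (e.g., a MILP solver for the main problem and a KKT-based bilinear reformulation for the subproblem), and the λ-weighting of recourse costs is assumed to be folded into their outputs.

def ccg(solve_main, solve_adversarial, eps=1e-4, max_rounds=50):
    """Column-and-constraint generation skeleton.

    solve_main(scenarios) -> (x_star, gamma, invest_cost): main problem over S.
    solve_adversarial(x_star) -> (xi_new, worst_recourse): worst case over U.
    """
    scenarios = []                                   # the finite scenario set S
    for _ in range(max_rounds):
        x_star, gamma, invest_cost = solve_main(scenarios)
        lower = invest_cost + gamma                  # main-problem estimate
        xi_new, worst_recourse = solve_adversarial(x_star)
        upper = invest_cost + worst_recourse         # subproblem estimate
        if (upper - lower) <= eps * max(abs(lower), 1.0):
            break
        scenarios.append(xi_new)                     # add new columns and constraints
    return x_star, scenarios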
§.§ Learning for Adaptive and Single-stage RO
As an alternative to CCG and other exact methods, linear decision rules (LDRs) are commonly used to approximate recourse decisions as affine functions of uncertainty <cit.>. To better approximate recourse costs using a richer class of functions, <cit.> propose a “deep lifting” procedure with neural networks to learn piecewise LDRs. <cit.> train decision trees to predict near-optimal first-stage decisions, worst-case uncertainty, and recourse decisions, which are deployed as part of a solution algorithm for ARO. Similarly, <cit.> train a neural network to approximate optimal recourse costs conditioned on first-stage decisions, which they embed within a CCG-like algorithm.
While these learning-assisted methods address some shortcomings of conventional methods (such as CCG) in reducing runtimes, several challenges remain. The first of these is looseness of uncertainty representation. These methods, along with the conventional CCG algorithm, fail to address the challenge of “loose” uncertainty sets which arises in the case of high-dimensional uncertainty. While uncertainty sets of the form (<ref>) may be effective in approximating high-density regions in low-dimensional settings, they are prone to yielding costly, overly conservative solutions when ξ is high-dimensional <cit.>. The second such challenge is inexactness; specifically, these methods optimize first-stage decisions with regard to a learned approximation of the recourse cost rather than the actual recourse cost, min_y∈𝒴(ξ,x) d^⊤ y. Accordingly, the quality of the approximation depends on the extensiveness of pre-training and amount of training data matching (ξ, x) pairs to corresponding recourse costs. Obtaining such training data in particular can be computationally onerous as it involves solving a large number of linear programs.
In the case of nonadaptive RO, some recent works have addressed the issue of looseness by learning “deep data-driven” uncertainty sets. <cit.> propose using clustering and dimensionality reduction methods to construct data-driven uncertainty sets as unions or intersections of ellipsoids and polyhedra. More similarly to our work, <cit.> introduce a solution algorithm that first constructs a tight uncertainty set as the image of a Gaussian superlevel set under a neural network-learned transformation and then iteratively identifies worst-case realizations by optimizing “through” the neural network. <cit.> extend their approach to construct uncertainty sets conditioned on the observation of a subset of covariates. In these two studies, the adversarial subproblem corresponds to a tractable mixed-binary linear program with as many binary variables as hidden units in the neural network. While <cit.> and <cit.> show their deep data-driven uncertainty set approach reduces costs in the case of nonadaptive RO, these approaches cannot be easily extended to ARO. This is because the resulting subproblem will correspond to a max-min problem, which does not admit a tractable duality-based reformulation due to the presence of binary variables.
§ AGRO
In this work, we propose AGRO, a solution algorithm that performs adversarial generation for robust optimization. As described in Fig. <ref>, AGRO modifies the conventional CCG algorithm in two key ways: (1) To reduce looseness (and by consequence, costs) we construct an uncertainty set with (approximately) known probability mass by training a VAE and projecting spherical uncertainty sets from the latent space into the space of ξ. (2) Because the recourse cost does not admit a closed-form expression, the adversarial subproblem cannot be formulated as a mixed-integer program as was the case in <cit.>. Consequently, we solve the subproblem by differentiating the recourse cost with respect to ξ and performing gradient ascent to identify worst-case realizations.
Table <ref> compares AGRO to related works for ARO and single-stage RO. Our approach is similar to that of <cit.> in that we optimize “through” a trained neural network to obtain worst-case scenarios within a CCG algorithm. While <cit.> approximate recourse decisions using a predictive model to reduce runtime, however, we exactly solve the recourse problem as a linear program in each iteration while leveraging a trained VAE to approximate the distribution of uncertainty. Doing so also removes the computational effort required for building a dataset of recourse problem solutions – i.e., solving a large number of linear programs to obtain an accurate predictive model for recourse cost given ξ and x. Additionally, our approach extends the work of <cit.> and <cit.> in constructing tighter uncertainty sets using deep learning to the problem setting of ARO.
§.§ Deep Data-Driven Uncertainty Sets
Toward addressing the challenge of looseness, we take a VAE approach to learning tight uncertainty sets as nonlinear and differentiable transformations of convex uncertainty sets lying in the latent space ℝ^L with L < D. We let z ∼ p_z = 𝒩(0,I_L) be an isotropic Gaussian random variable and let h_ϕ: ℝ^D →ℝ^L and g_θ: ℝ^L →ℝ^D respectively denote the encoder and decoder of a variational autoencoder (VAE) model trained to generate samples from p_ξ <cit.>. Specifically, h_ϕ is trained to map samples of uncertainty realizations to samples from the isotropic Gaussian distribution while g_θ is trained to perform the reverse mapping. Here, L denotes the VAE bottleneck dimension, which is a hyperparameter that significantly influences the diversity and quality of generated samples and is generally tuned through a process of trial and error. We note that our approach can be extended to other classes of deep generative models that learn such a mapping from 𝒩(0,I_L) to p_ξ such as generative adversarial networks <cit.>, normalizing flows <cit.>, and diffusion models <cit.>. However, we employ a VAE architecture as such models are known to exhibit relatively high stability in training and low computational effort for sampling <cit.>, making them highly conducive to integration within an optimization framework.
We let 𝒵 be the L-ball ℬ(0,r) with r chosen to be the minimum value such that an α-fraction of projected uncertainty realizations h_ϕ(ξ) are observed to fall within 𝒵. Since 𝒵 is closed and bounded, the image of 𝒵 under g_θ, which is continuous, will also be a closed and bounded (but not necessarily convex) set g_θ(𝒵) ∈ℝ^D. Unless g_θ is also invertible, the probability mass contained in this set under p_ξ will not necessarily equal α, the sample estimate of probability mass in 𝒵 under p_z <cit.>. Nevertheless, we observe that constraining z to 𝒵 (and by extension, ξ to g_θ(𝒵)) discourages generating uncertainty realizations that are unrealistic or lie outside of high-density regions of p_ξ. This is intuitive as realizations lying farther from the origin in the latent space are estimated to have lower likelihood under the VAE Gaussian prior <cit.>. Fig. <ref> shows a comparison of conventional uncertainty sets against a tighter uncertainty set constructed by the proposed VAE-based method for samples from the bivariate “moons” distribution <cit.>.
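In practice, the radius r can be calibrated directly from the encoded training data. The short sketch below is our own illustration: encoder is assumed to return the latent mean h_ϕ(ξ) for each sample, and r is set to the α-quantile of the Euclidean norms of the encodings.

import torch

@torch.no_grad()
def calibrate_latent_radius(encoder, xi_train: torch.Tensor, alpha: float = 0.95) -> float:
    z = encoder(xi_train)             # assumed shape (n_samples, L): latent means
    norms = z.norm(dim=1)             # distance of each encoding from the origin
    return torch.quantile(norms, alpha).item()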
§.§ Adversarial Subproblem
Given a decoder (i.e., generative model) g_θ: ℝ^L →ℝ^D, latent uncertainty set 𝒵 = ℬ(0, r), and first-stage decision x^*, we then formulate the adversarial subproblem as
max_z∈𝒵 min_y {d^⊤ y | B y ≥ b(g_θ(z)) - A(g_θ(z)) x^* },
which substitutes g_θ(z) and 𝒵 for ξ and 𝒰 respectively in (<ref>). We denote the optimal objective value of the recourse problem as a function of ξ by f: ℝ^D→ℝ (see Fig. <ref>). It is then possible to differentiate f with respect to ξ by implicitly differentiating the optimality conditions of the recourse problem at an optimal solution <cit.>. Doing so provides an ascent direction for maximizing f with respect to ξ, and by applying the chain rule, z, which can be leveraged to maximize f using first-order optimization methods. By maximizing f using projected gradient ascent over 𝒵, we discourage selection of worst-case realizations that fall outside high-density regions of p_ξ.
Importantly, projected gradient ascent is not guaranteed to converge to a worst-case uncertainty realization as solving max_z f(g_θ(z), x^*) requires maximizing the composition of f, which is convex <cit.>, with g_θ, which is not necessarily convex nor concave. As such, the adversarial problem requires maximizing a nonconcave function, which does not yield convergence guarantees to global maxima. To encourage discovery of global maxima, we solve the adversarial subproblem multiple times in each iteration with I different, random initializations of z. Specifically, each initialization requires that we first randomly sample a latent variable z from a distribution p_s (e.g., the truncated normal or uniform distribution) supported on 𝒵. We then transform z to obtain the uncertainty realization ξ = g_θ(z) and solve the inner linear minimization problem with respect to ξ to obtain f(ξ, x^*) and ∂ f / ∂ξ. Using automatic differentiation, we obtain the gradient of ξ with respect to z and perform the update
z ←Π_𝒵( z + η (∂ f(ξ, x^*)/∂ξ) (∂ξ/∂ z) ),
where η > 0 denotes the step-size hyperparameter and Π_𝒵 denotes the projection operator given by
Π_𝒵(z) = z if z ∈𝒵, and Π_𝒵(z) = r · z/‖z‖_2 if z ∉𝒵.
This procedure is repeated until a convergence criterion is met. At this point, the worst-case uncertainty realization obtained over all I initializations is added to 𝒮 and the main problem is re-optimized, which completes one iteration of the AGRO solution algorithm. AGRO terminates when the worst-case total costs estimated by the main problem and subproblem have converged, i.e., [(c^⊤ x^* + λ f(ξ^i, x^*)) - (c^⊤ x^* + λγ)] / (c^⊤ x^* + λγ) ≤ϵ for some convergence tolerance ϵ. The full pseudocode for AGRO is now presented in Alg. <ref>.
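A compact PyTorch sketch of this subproblem is given below. It is illustrative rather than a faithful reproduction of Alg. <ref>: decoder stands for the trained g_θ, and recourse_cost stands for a differentiable handle on f(ξ, x^*); in AGRO the latter is the optimal value of the recourse LP differentiated with cvxpylayers, whereas any differentiable surrogate could be plugged in here. The step size, iteration cap, and relative tolerance follow the values reported in the appendix.

import torch

def project_onto_ball(z: torch.Tensor, r: float) -> torch.Tensor:
    norm = z.norm()
    return z if norm <= r else z * (r / norm)

def solve_adversarial_subproblem(decoder, recourse_cost, x_star, r, latent_dim,
                                 eta=0.01, max_iters=100, rel_tol=1e-5):
    # Random initialization inside the latent ball B(0, r).
    z = project_onto_ball(torch.randn(latent_dim), r).requires_grad_(True)
    prev_cost = -float("inf")
    for _ in range(max_iters):
        xi = decoder(z)                               # xi = g_theta(z)
        cost = recourse_cost(xi, x_star)              # f(xi, x*)
        if abs(cost.item() - prev_cost) <= rel_tol * max(abs(prev_cost), 1.0):
            break
        prev_cost = cost.item()
        grad_z, = torch.autograd.grad(cost, z)        # chain rule through g_theta
        with torch.no_grad():
            z = project_onto_ball(z + eta * grad_z, r)
        z.requires_grad_(True)
    return decoder(z).detach(), prev_cost

In AGRO, this routine would be restarted from several random initializations per iteration, keeping the realization with the largest recourse cost.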
§ APPLICATION TO CAPACITY EXPANSION PLANNING
Stochastic capacity expansion models have become a centerpiece for long-term planning of energy generation, transmission, and distribution infrastructure under supply, demand, and cost uncertainties <cit.>. A number of recent works have adopted ARO formulations for future energy systems planning with a particular focus on ensuring system reliability under mass adoption of renewable energy with intermittent and uncertain supply <cit.>. In capacity expansion settings with high spatiotemporal fidelities, supply and demand uncertainties are complex, high-dimensional, and nonunimodal, exhibiting nonlinear correlations such as temporal autocorrelations and spatial dependencies. These patterns cannot be accurately captured by conventional uncertainty sets such as those introduced in (<ref>). As a result, tightening uncertainty representations for robust capacity expansion planning has the potential to significantly lower investment costs over conventional ARO methods while still maintaining low operational costs day-to-day.
In this section, we apply AGRO to a two-stage adaptive robust generation and transmission expansion planning problem for the New England transmission system. We assume a data-driven setting with high-dimensional uncertainty corresponding to hourly demand as well as solar, onshore wind, and offshore wind capacity factors (average power output divided by the plant's maximum production capacity) at each node. Our results demonstrate AGRO's ability to reduce both costs and runtimes when compared to solutions obtained by the CCG algorithm. In what follows, we briefly describe the adaptive robust generation and transmission expansion problem and leave the complete formulation in Appendix. <ref>.
§.§ Formulation
The generation and transmission expansion problem we present most closely follows that of <cit.> and has two stages: an initial investment stage and a subsequent economic dispatch (i.e., recourse) stage. The objective function is a weighted sum of investment costs plus annualized worst-case economic dispatch costs. Investment decisions are made at the node level and include installed capacities for (1) dispatchable, solar, onshore wind, and offshore wind power plants at each node; (2) battery storage at each node; (3) transmission lines. In the first stage, the capacity of new dispatchable and renewable power plants as well as new transmission lines are constrained by their respective upper bounds. The economic dispatch stage occurs after the realization of uncertain demand and supply. This stage determines optimal hourly generation, power flow, and storage decisions to minimize the combined cost of load shedding and variable cost for dispatchable generation; these decisions are subject to ramping, storage, and flow balance constraints. To simulate inter-day dynamics of storage and ramping, we implement circular indexing, which links battery state-of-charge and dispatchable generation rates at the end of the day with those at the beginning <cit.>.
§.§ Experimental Setup
In our experiments, we utilize hourly demand and capacity factor projections for a six-node zonal representation of New England, which yields an uncertain random variable ξ with 349 distinct features after data processing (see <ref>) <cit.>. We segment the dataset to perform a five-fold cross-validation with each fold having a held-out set of 1000 samples and report cross-validated estimates of total costs. Specifically, for a given fold, we train one VAE using an 80/20 training/validation split on the in-sample data, solve the ARO problem using Alg. <ref>, and obtain an out-of-sample cost estimate in each iteration by fixing x^* and computing c^⊤ x^* + λF̂^-1(α), where F̂^-1(α) is the out-of-sample α-quantile estimate of f(ξ, x^*).
As a baseline method, we report results for CCG using an uncertainty set constructed from the intersection of the budget and box uncertainty sets (see (<ref>)), which is commonly used in the power systems literature <cit.>. For our experiments, Γ is selected such that 𝒰 covers α=95% of the training set. For AGRO, we likewise select r such that the uncertainty set 𝒵 = ℬ(0, r) covers 95% of the training set after being projected by h_ϕ into the L-dimensional latent space. Additionally, we report results for a range of latent dimensionalities, L∈{2,4,8,16}. Additional details regarding the VAE architecture and training setup are given in Appendix <ref>
§.§ Results
Cost and runtime. Results for AGRO and CCG costs and runtimes are shown in Table <ref>. Regarding planning costs, we observe that AGRO obtains investment decisions that yield lower costs than CCG as evaluated by out-of-sample estimates. In particular, costs are minimized by AGRO with L=2 (yielding a 7.8% reduction in costs over CCG) and appear to increase with L. AGRO with L=2 also obtains the lowest total runtime, which includes both VAE training and optimization, while CCG obtains the highest total runtime (approximately 4.4 times that of AGRO with L=2). We also find that the CCG subproblem is solved more quickly than the AGRO subproblem, whose runtime we observe to increase with L. We note that the AGRO subproblem includes multiple initializations of z, which causes its runtime to vary as more initializations are required in later iterations to obtain a new worst-case realization.
Approximation of value-at-risk. Besides sample-estimated costs, we also compare the accuracy of all methods in approximating value-at-risk (VaR), a risk measure that quantifies near worst-case costs, defined by min{β∈ℝ|ℙ_ξ (f(ξ, x) ≤β) > α}. First, we compare the final objective value, c^⊤ x^* + λγ with x^* and γ obtained from the final iteration of solving, against the sample-estimated cost. We observe that, for all methods, the final objective value is greater than the out-of-sample recourse cost estimate. We also report the mean absolute percent error of the main problem objective value, i.e., c^⊤ x^* + λγ, compared to the out-of-sample estimate across all iterations. For both quantities, we observe that AGRO is able to obtain a much tighter estimate of VaR than CCG, which yields an average error of 78.6% over the course of solving. The looseness of this approximation explains the high costs obtained by CCG. Specifically, the identification of improbable but cost-maximizing uncertainty realizations results in consistent over-estimation of VaR and subsequent over-investment as CCG aims to reduce costs for these improbable realizations. We dedicate the remainder of this subsection to more precisely understanding how AGRO improves over CCG in tightly approximating VaR and consequently reducing costs.
We first visualize the relative performance in approximating VaR for all methods in Fig. <ref>, which plots the estimated worst-case cost obtained from solving the subproblem f(ξ^i, x^*) against the out-of-sample estimate F̂^-1(α) for all iterations of solving in all five folds of cross-validation. We also show standard metrics of generative model fidelity (i.e., precision, density) and diversity (i.e., recall, coverage) for the VAE models in Table <ref>. Additional details regarding these metrics based on <cit.> are provided in Appendix <ref>. From Fig. <ref>, we first observe that f(ξ^i, x^*) and F̂^-1(α) are highly correlated, which suggests that maximizing the recourse cost serves as a useful proxy for estimating VaR. We also observe that in all cases except AGRO with L=2, f(ξ^i, x^*) is almost always greater than F̂^-1(α). This is intuitive as the uncertainty set is designed to contain an α-fraction of realizations, and consequently, the worst-case recourse cost obtained over this set will provide a “safe” approximation (i.e., over-estimate) of VaR at level α. This over-estimating effect is also reflected in the consistent gap between the final objective value and the out-of-sample cost estimate (see Table <ref>). This is not the case, however, with L=2 as the associated VAE models obtain a much lower coverage than those for L > 2 (see Table <ref>). As such, the uncertainty set does not sufficiently cover the high-density region of p_ξ, which leads to occasional under-estimation of VaR. AGRO with L=2 nevertheless obtains the lowest sample-estimated costs of all methods. Together, these results suggests that, while poor VAE fidelity or diversity can cause AGRO to fail to identify cost-maximizing realizations, the end result is not necessarily a consistent under-estimation of VaR. This is a consequence of the “safe” uncertainty set, which introduces a systemic over-estimating effect with regard to VaR and counteracts the systemic under-estimation resulting from diminished VAE diversity and fidelity. These results highlight the dependence of solution quality on VAE performance while also demonstrating that the ideal choice of L is not necessarily one that maximizes sample fidelity and diversity, and consequently, should be tuned with consideration of downstream optimization outcomes.
Visualizing worst-case uncertainty. Towards understanding how different uncertainty sets impact conservatism of first-stage decision-making, we compare samples of obtained worst-case realizations in Fig. <ref>. Specifically, we plot the solution to the adversarial subproblem for the final iteration of one fold of cross-validation for CCG and AGRO with L=2 and L=16. Fig. <ref> demonstrates the way in which worst-case realizations obtained by CCG using a budget uncertainty set are unrealistically adversarial. In particular, CCG generates an uncertainty realization in which solar and wind capacity factors drop sharply to zero before immediately rising to their nominal values. Additionally, weather-related correlations between variables are not maintained; most notably, the positive correlation between load and solar capacity factors as well as the spatial correlation of load across the system are not observed. Accounting for these supply and demand features during investment planning is essential for effectively leveraging the complementarities between generation, storage, and transmission. On the other hand, AGRO is able to identify uncertainty realizations that are simultaneously cost-maximizing and realistic. This can be observed from the way in which selected realizations demonstrate realistic temporal autocorrelation (i.e., smoothness) and show a less improbable combination of low solar capacity factors with moderate load.
Comparing the two AGRO-generated realizations, we observe that the L=16 case yields a more cost-maximizing realization than the L=2 case. In particular, loads are higher while capacity factors are lower. Additionally, the generated load and capacity factor time series in the L=2 case are smoother than those in the L=16 case, which can result in under-estimation of load shedding costs resulting from highly fluctuating loads and under-investment in dispatchable generation. These observations, which reflect the fidelity and density results shown in Table <ref>, explain the occasional under-estimation of recourse costs by AGRO with L=2 (see Fig. <ref>) as well as the lower overall costs resulting from reduced conservatism (see Tab. <ref>).
§ CONCLUSION
Increasing availability of data for operations of large-scale systems has inspired growing interest in data-driven optimization. In this work, we extend learning-based approaches for more precise uncertainty representation to the setting of ARO, a powerful framework for planning under uncertainty. We propose AGRO, a modification of the conventional column-and-constraint generation algorithm for chance-constrained ARO problems. AGRO takes a gradient ascent approach to maximizing the value of the inner minimization problem using differentiable convex optimization and adds worst-case uncertainty realizations to a master problem. By leveraging a VAE to project spherical uncertainty sets from a latent space to nonconvex uncertainty sets in a higher dimensional uncertainty space, AGRO identifies worst-case realizations that are not only adversarial but also more likely to occur than those identified from conventional uncertainty sets. Finally, we applied AGRO to the case of capacity expansion planning under supply/demand uncertainty to experimentally demonstrate AGRO's efficacy in greatly reducing both costs and runtimes for ARO.
§ CAPACITY EXPANSION MODEL
The capacity expansion model is given by
min_x ∑_i∈𝒩 c^d x_i^d + ∑_i∈𝒩∑_p∈𝒫 c^p x_i^p + ∑_i∈𝒩 c^b x_i^b + ∑_(i,j)∈ℰ c^ℓ x_ij^ℓ + λmax_ξ∈𝒰 f(ξ, x)
s.t. ∑_i∈𝒩 x_i^p ≤Γ^p p ∈𝒫
x_i^d ≤Γ^d i ∈𝒩
x_ij^ℓ≤Γ_ij^ℓ (i,j)∈ℰ
x_i^d ∈ℤ^+, x_i^p, x_i^b, x^ℓ_i',j'∈ℝ^+ p∈𝒫, i ∈𝒩, (i',j') ∈ℰ
Here the first-stage node-level decision variables include installed dispatchable generation capacity (x_i^d ∈ℤ_+), installed renewable plant capacity (x_i^p ∈ℝ_+) for renewable technologies p∈𝒫, and installed battery storage capacity (x_i^b ∈ℝ_+). Additionally, we consider transmission capacity (x_ij^ℓ∈ℝ_+) as a first-stage decision variable at the link level. Both installed dispatchable generation capacities and installed renewable plant capacities obey a system-wide resource availability constraint while installed transmission capacities obey a link-level resource availability constraint. Finally, max_ξ∈𝒰 f(ξ, x) denotes the worst-case recourse cost (i.e., economic dispatch cost given by (<ref>)), which is incurred with unit cost λ as part of the capacity expansion objective function. We “annualize” the worst-case recourse cost by setting λ = 365. For a given investment decision x and uncertainty realization ξ, the recourse cost is given by
f(ξ,x) = min_y ∑_t∈𝒯∑_i∈𝒩 c^f y_it^1 + c^s y_it^6
s.t. y_it^1 + ∑_j∈𝒩 y_jit^2 + y_it^3 + y_it^4 = ξ_it^d - ∑_p∈𝒫ξ_it^p x_i^p i ∈𝒩, t∈𝒯
y_ijt^2 + y_jit^2 = 0 (i, j) ∈ℰ, t∈𝒯
y_ijt^2 ≤ x_ij^ℓ (i, j) ∈ℰ, t∈𝒯
-y_ijt^2 ≤ x_ij^ℓ (i, j) ∈ℰ, t∈𝒯
y_it^1 ≤ν x_i^d i ∈𝒩, t∈𝒯
y_it^1 - y_i,t-1^1 ≤κν x_i^d i ∈𝒩, t∈𝒯
y_i,t-1^1 - y_it^1 ≤ -κν x_i^d i ∈𝒩, t∈𝒯
y_it^5 ≤ x_i^b i ∈𝒩, t∈𝒯
y^5_it - y^5_i,t-1 + y_i,t-1^4 = 0 i ∈𝒩, t∈𝒯
y_it^3 - y_it^6 ≤ 0 i ∈𝒩, t∈𝒯
y^1_it, y^5_it, y^6_it≥ 0 i ∈𝒩, t∈𝒯.
Here, the total dispatchable generation at node i in period t is denoted by y_it^1. y_ijt^2 denotes the inflow of power from node i to node j in hour t. y_it^4 denotes the discharge rate of the battery located at node i in hour t while y_it^5 denotes its total charge. Load shedding is denoted by y_it^6, which is assumed to be the positive part of y_it^3. ν denotes the nameplate capacity of the dispatchable plants while κ and κ respectively denote the maximum ramp-up and ramp-down rates for dispatchable generation. The objective function (<ref>) minimizes the combined cost of fuel and a carbon tax for dispatchable generation (with unit cost jointly denoted by c^f) as well as load shedding (with unit cost c^s). First-stage decisions x_i^d, x_i^p, x_i^b, and x_ij^ℓ parameterize the constraints of (<ref>). Constraints (<ref>) impose flow conservation at each node. Specifically, the constraint balances dispatchable generation, power inflows, nodal load shedding, and battery discharging with net load (i.e., demand minus total generation from renewables). Importantly, y_ijt^2 is negative if node j receives power from node i, which is captured by the equality constraint (<ref>). Constraints (<ref>) and (<ref>) impose line limits on power flows. Constraints (<ref>) impose limits on total dispatchable generation while constraints (<ref>) and (<ref>) limit ramping up and down of dispatchable generation. Constraints (<ref>) limits the total state-of-charge of batteries. Constraints (<ref>) links battery discharging or charging (taken to be the negative of discharging) rates to battery state-of-charge. Constraints (<ref>) ensure that only the positive part of load shedding incurs a cost.
§ EXPERIMENTAL SETUP DETAILS
§.§ Data Processing
To process the data, we remove components corresponding to solar capacity factors for nighttime hours and offshore wind capacity factors for landlocked nodes, both of which correspond to capacity factors of zero in all observations, which gives ξ∈ℝ^427. In implementing CCG, we also remove redundant (i.e., perfectly multicollinear) components so that the dataset of ξ observations has full column rank when defining the uncertainty set. We then re-introduce the redundant components as affine transformations of the retained components in the adversarial subproblem using equality constraints. Consequently, the effective dimensionality of 𝒰 is 349 while the dimensionality of ξ is 427. For AGRO, we only remove components that are constant (i.e., zero) and train the VAE on the original 427-dimensional dataset.
§.§ Computational Details
VAE training. To obtain AGRO results, we train a 4-layer VAE and evaluate a range of latent space dimensionalities, L ∈{2,4,8,16}. Each hidden layer has 64 hidden units and uses ELU activations. All models were trained with the Adam optimizer on a batch size of 2048 for 10000 epochs.
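For concreteness, a minimal PyTorch sketch of such a VAE is shown below. The exact number of hidden layers per network is our assumption (the text specifies 64-unit hidden layers with ELU activations and latent size L), and the loss uses the standard reconstruction-plus-KL objective.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, d_in: int = 427, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, hidden), nn.ELU(),
                                     nn.Linear(hidden, hidden), nn.ELU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ELU(),
                                     nn.Linear(hidden, hidden), nn.ELU(),
                                     nn.Linear(hidden, d_in))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta: float = 1.0):
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()                  # reconstruction error
    kld = (-0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon + beta * kld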
Optimization details. For the CCG subproblem, we introduce trivial valid inequalities in the subproblem to reduce computational runtimes. To solve the adversarial subproblem for AGRO, we assume a step size of η=0.01 and terminate the subproblem when subsequent iterations of gradient ascent yield less than 0.001% change in cost or when 100 iterations have been performed. We perform at least I=3 initializations of the subproblem in each iteration of AGRO but perform additional initializations (up to a maximum of 10 total) if the solution to the adversarial subproblem does not increase costs over the current estimate of worst-case recourse costs (i.e., if f(ξ^i, x^*) < γ).
Computational resources. Optimization results for the AGRO subproblem are obtained using the Cvxpylayers package <cit.> while results for the AGRO and CCG main problems as well as the CCG subproblem are obtained using Gurobi 11.0 <cit.> with a mixed-integer optimality gap of 0.01%. All optimization and model training is performed on the MIT Supercloud system <cit.> using an Intel Xeon Gold 6248 machine with 40 CPUs and two 32GB NVIDIA Volta V100 GPUs.
§.§ VAE Performance Metrics
We quantify sampling performance of the VAE using standard metrics for fidelity (precision and density) and diversity (recall and coverage). These metrics are defined by <cit.> as follows:
precision = 1/M∑_j=1^M 1_ξ̂_j∈manifold(ξ_1,…,ξ_N)
density = 1/kM∑_j=1^M ∑_i=1^N 1_ξ̂_j∈ℬ(ξ_i,NND_k(ξ_i))
recall = 1/N∑_i=1^N 1_ξ_i∈manifold(ξ̂_1,…,ξ̂_M)
coverage = 1/N∑_i=1^N 1_∃ j s.t. ξ̂_j ∈ℬ(ξ_i,NND_k(ξ_i)),
where ξ and ξ̂ respectively denote real (N total) and generated samples (M total), 1 denotes the indicator function, NND_k(ξ_i) denotes the distance from ξ_i to its k-th nearest neighbor (excluding itself) in the dataset {ξ_1,…,ξ_N}, and manifolds are defined as
manifold(ξ_1, …, ξ_N) = ⋃_i=1^N ℬ(ξ_i, NND_k(ξ_i)).
Precision and density quantify the portion of generated samples that are “covered” by the real samples while recall and coverage quantify the portion of real samples that are “covered” by the generated samples. All results in Table <ref> are obtained using k=5 and with M=N=1260.
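These metrics can be computed directly from pairwise distances; the NumPy sketch below (our own) follows the definitions above, with real and fake denoting arrays of real and generated samples of shapes (N, D) and (M, D).

import numpy as np

def _knn_radii(x: np.ndarray, k: int) -> np.ndarray:
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                     # exclude the point itself
    return np.sort(d, axis=1)[:, k - 1]             # NND_k for every sample

def prdc(real: np.ndarray, fake: np.ndarray, k: int = 5) -> dict:
    real_radii = _knn_radii(real, k)
    d_rf = np.linalg.norm(real[:, None, :] - fake[None, :, :], axis=-1)   # (N, M)
    fake_in_real_ball = d_rf <= real_radii[:, None]

    fake_radii = _knn_radii(fake, k)
    real_in_fake_ball = d_rf.T <= fake_radii[:, None]                     # (M, N)

    return {
        "precision": fake_in_real_ball.any(axis=0).mean(),
        "density": fake_in_real_ball.sum() / (k * fake.shape[0]),
        "recall": real_in_fake_ball.any(axis=0).mean(),
        "coverage": fake_in_real_ball.any(axis=1).mean(),
    }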
|
http://arxiv.org/abs/2409.03672v1 | 20240905162530 | Wind turbine condition monitoring based on intra- and inter-farm federated learning | [
"Albin Grataloup",
"Stefan Jonas",
"Angela Meyer"
] | cs.LG | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
[biel]Albin Grataloup[cor1]
[email protected]
[biel]Bern University of Applied Sciences, School of Engineering and Computer Science, Quellgasse 21, 2501 Biel, Switzerland
[biel,lugano]Stefan Jonas
[email protected]
[lugano]Università della Svizzera italiana, Faculty of Informatics, Via la Santa 1, 6962 Lugano-Viganello, Switzerland
[biel,deft]Angela Meyer
[cor1]Corresponding author
[email protected]
[deft]Delft University of Technology, Department of Geoscience and Remote Sensing, Stevinweg 1, 2628 Delft, The Netherlands
Keywords: wind turbines, wind farms, federated learning, condition monitoring, normal behaviour model, fault detection, distributed, collaborative, privacy-preserving, wind energy, wind farm clusters, industrial fleets
§ ABSTRACT
As wind energy adoption is growing, ensuring the efficient operation and maintenance of wind turbines becomes essential for maximizing energy production and minimizing costs and downtime. Many AI applications in wind energy, such as in condition monitoring and power forecasting, may benefit from using operational data not only from individual wind turbines but from multiple turbines and multiple wind farms. Collaborative distributed AI which preserves data privacy holds a strong potential for these applications. Federated learning has emerged as a privacy-preserving distributed machine learning approach in this context. We explore federated learning in wind turbine condition monitoring, specifically for fault detection using normal behaviour models. We investigate various federated learning strategies, including collaboration across different wind farms and turbine models, as well as collaboration restricted to the same wind farm and turbine model. Our case study results indicate that federated learning across multiple wind turbines consistently outperforms models trained on a single turbine, especially when training data is scarce. Moreover, the amount of historical data necessary to train an effective model can be significantly reduced by employing a collaborative federated learning strategy. Finally, our findings show that extending the collaboration to multiple wind farms may result in inferior performance compared to restricting learning within a farm, specifically when faced with statistical heterogeneity and imbalanced datasets.
Wind turbine condition monitoring based on intra- and inter-farm federated learning
§ INTRODUCTION
The deployment of wind turbines for renewable energy generation is witnessing exponential growth globally <cit.>, driven by the transition towards sustainable energy sources. Ensuring the efficient and reliable operation of wind turbines is critical to maximizing energy production and minimizing downtime and maintenance costs. Condition monitoring and anomaly detection play a pivotal role, offering insights into the health and performance of critical components. Deep learning methods, in particular, have emerged as an effective approach to anomaly detection <cit.>. However, the demanding data prerequisites of deep learning models present a major challenge as they necessitate either an abundance of labeled data from faulty operation or, in our scenario, a large amount of fault-free data for training a normal behaviour model (NBM). A NBM operates by predicting target variables like component temperatures or power output that are crucial for assessing system health or performance. Anomalies are identified when the predicted target variable diverges significantly from the measured value of the target variable, such as detecting an abnormal spike in component temperatures compared to the system's normal operational values. Condition monitoring and anomaly detection are extensively studied fields within the area of wind turbine operations. In recent years, deep learning has emerged as a particularly promising approach for condition-monitoring tasks. Among the methodologies employed, NBMs have gained prominence. These models rely on the comparison between critical features measured in wind turbines and their corresponding predicted values, serving as an indicator for assessing wind turbine health <cit.>.
Training an effective NBM requires a substantial amount of data, which can be time-consuming or even impractical to obtain. For wind turbines, the fastest way to amass sufficient data is by collecting training data from multiple turbines. However, this approach raises significant data privacy concerns as manufacturers and operators are hesitant to share operational data due to strategic business interests <cit.>. A single wind turbine would require a significant amount of time to gather enough data to train a representative and accurate NBM. To address this issue, we propose privacy-preserving collaborative learning methods to leverage training data collected from multiple wind turbines simultaneously. Federated Learning (FL) emerged as a promising paradigm to address these challenges <cit.>. FL enables collaborative decentralised model training across multiple wind turbines while preserving their data privacy. By exchanging only FL model parameters and not operational data, the sensitive operation data of each wind turbine remains local and inaccessible to others. This approach allows wind turbines to collaboratively train an effective NBM with less data, without compromising their privacy.
Federated learning has gained traction across various domains <cit.>. It was also adopted in renewable energy sectors <cit.>, notably in wind energy applications, for tasks such as wind power forecasting <cit.>, to obtain significantly more accurate forecasts compared to local models. It also has shown promising results in fault detection applications, exhibiting improved performance over local training methodologies for tasks such as blade icing detection <cit.>, fault detection <cit.> and condition monitoring with an NBM <cit.>. Despite these advancements, the application of FL for training NBMs for wind turbines remains largely unexplored. By investigating collaborative NBM training across multiple wind turbines and wind farms, our study builds upon previous work on the potential of FL for training NBMs <cit.>. We assess the benefits and limits of collaboration particularly by addressing the effect of statistical heterogeneity between different wind turbines and wind farms. We also analyse how FL affects the time required to collect training data for a NBM through collaborative learning of multiple wind turbines.
The objectives of our study are twofold: First, we assess the effectiveness of collaborative federated learning strategies among wind turbines of multiple wind farms, comparing intra- and inter-farm collaborative federated learning (Figure <ref>). Second, we quantify the time savings in collecting training data for a NBM through collaborative FL across multiple wind turbines compared to collecting training data from a single WT (referred to as "local" data).
§ FEDERATED LEARNING FOR CONDITION MONITORING OF WIND TURBINES
FL is a collaborative deep-learning framework that involves distributed participants referred to as clients. In our scenario, each wind turbine acts as an individual client and aims to collaboratively train a machine learning model for condition monitoring. In FL, it is crucial that the clients never share their locally stored data in order to preserve data privacy. In the iterative FL training process, clients train local models, such as an NBM, using only their private local datasets, and then transmit the model parameters to a central server where they are aggregated. This approach ensures the privacy of locally stored client data and can provide a viable solution to overcome data scarcity <cit.>. FL with a central server involves the following iterative steps:
* The central server broadcasts the current global model parameters to all participating clients (wind turbines).
* Each client trains the received model on its private local dataset.
* Each client transmits its updated model parameters, but never its data, back to the server.
* The server aggregates the received parameters into a new global model, which is broadcast in the next round. These steps are repeated until convergence.
The most widely applied FL framework is the Federated Averaging (FedAvg) algorithm <cit.>, in which the aggregation step consists of averaging the received model parameters
ω^t+1 = ∑_i=1^N n_i/nω_i^t+1
where ω^t and ω_i^t denote the global model parameters and the model parameters of client i, respectively, in training round t. n_i denotes the amount of data available to client i while n is the total amount of available data across all clients involved in the aggregation.
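As an illustration, the aggregation step can be written in a few lines of PyTorch; the sketch below is our own, not the authors' implementation, and simply averages the clients' state dictionaries weighted by their local sample counts n_i.

import torch

def fedavg_aggregate(client_state_dicts, client_num_samples):
    """Weighted average of client model parameters (one FedAvg round)."""
    total = float(sum(client_num_samples))
    global_state = {}
    for key in client_state_dicts[0]:
        global_state[key] = sum(
            (n / total) * sd[key].float()
            for sd, n in zip(client_state_dicts, client_num_samples)
        )
    return global_state

# Usage sketch: after each round of local training on the turbines,
# global_model.load_state_dict(
#     fedavg_aggregate([m.state_dict() for m in local_models], local_sample_counts))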
Although FedAvg has demonstrated empirical success and serves as a cornerstone in many FL algorithms, its effectiveness in real-world applications can be hindered by statistical heterogeneity, where the data distributions differ across the clients participating in the learning process. The clients' data may differ in their statistical properties and in size, for example, because of differences in feature distributions or in label distributions. In wind turbines, individual turbines may display differing mechanical characteristics and possibly even differing turbine models and supervisory control and data acquisition (SCADA) systems may be involved. Statistical heterogeneity poses a challenge for FL model training and convergence because the aggregated model must learn to generalize across the diverse datasets. The variations among clients result in differences in the statistical distributions of their local datasets, leading to non-identically distributed (non-iid) data distributions.
Fleets of industrial assets, such as wind turbines, can display significant statistical heterogeneity across clients.
In such settings, global FL models tend to exhibit suboptimal performance <cit.>
compared to locally trained models. The latter may even achieve higher accuracy than their globally trained counterparts <cit.>.
As a result, some clients may lack incentives to participate in training the global FL model (<cit.>). To address this challenge, Personalised FL (PFL) has been proposed to customize global FL models to individual clients. PFL retains the advantages of collaborative learning while tailoring the resulting global FL models to each client's specific local data. Various PFL approaches exist, including client clustering <cit.>, personalised model layers <cit.>, meta-learning <cit.>, and fine-tuning methods <cit.>. We refer to <cit.> and <cit.> for a comprehensive overview of customisation approaches, and to <cit.> for PFL applications in renewable energy contexts.
Training a NBM on data from a single WT requires a significant amount of data representative of the WT's normal operational behaviour, which may not always be available. For example, this is typically the case in newly installed wind farms or after component updates and replacements. The resulting lack of data to train a representative and accurate model is known as the cold start problem in computer science, e.g., <cit.>. We propose to exploit data gathered from multiple wind turbines to reduce the amount of time required for collecting data for training NBMs. We refer to the time savings as the cold start speed up because it is the speed up achieved by training a NBM from scratch through collaborative training rather than training on only local data. Due to privacy considerations, the data from individual turbines are kept confidential, so no data sharing with other wind turbines or servers is allowed. We employ FL for collaborative learning across different wind turbines and different wind farms. We assess the impact of having multiple turbines with different specifications in different wind farms on the efficacy of collaborative learning.
Condition monitoring often relies on NBMs which simulate the normal operation behaviour of the monitored WT components under the current environmental and operation conditions. NBMs are trained on WT data from fault-free operation periods, and allow quantifying the deviations between the measured target variables and their expected values as simulated by the NBM.
We consider identical feature and label spaces of all client wind turbines in this study, so the same SCADA variables are used as features and target variables of the NBMs of all WTs and wind farms considered in this study. Other types of FL, such as vertical FL and federated transfer learning <cit.> are not considered in this study.
§ INTRA- AND INTER-FARM FEDERATED LEARNING OF NORMAL BEHAVIOUR MODELS
§.§ Wind farm data
We investigate FL for wind turbine condition monitoring using SCADA data of WTs from three different wind farms (Table <ref>). The wind farms provide different WT models and site conditions, which can give rise to statistical heterogeneity of the WTs' SCADA variables.
The wind farms Penmanshiel and Kelmarsh exhibit similar configurations, sharing identical SCADA variables, whereas the EDP wind farm features a different SCADA system. We chose 10-minute averages of wind speed, ambient temperature, and wind direction as input features for the NBM, with gear-bearing temperature as the target variable to be predicted. The SCADA data were cleaned by removing curtailment periods and outliers. Wind speed and ambient temperature were normalised, while wind direction was cyclically encoded by a sine-cosine transformation.
The SCADA datasets of the EDP, Kelmarsh and Penmanshiel wind farms contain significant fractions of missing values of 1%, 5%, and 30%, respectively. The FL algorithm applied in our study weighs the WTs' contribution to the training in accordance with the fraction of training data available from them (eq. <ref>), so data imbalance can affect the learning process in intra- and inter-farm FL.
§.§ Model architecture and training
The NBM trained in this study is an LSTM (Long Short-Term Memory) network consisting of layers of LSTM units followed by fully connected layers. The LSTM layers capture temporal dependencies in the data. We utilized a 24-hour trailing window to make sure the model can capture the operational conditions of the past 24 hours. The NBM predicts the expected gear-bearing temperature at the end of this time window.
Hyperparameters of the LSTM network were optimised for a single randomly selected turbine from the Penmanshiel wind farm whose data presented greater complexity than the turbines of other wind farms. The resulting LSTM model comprises two LSTM layers of sizes 16 and 64, respectively, and ReLU activation, followed by two fully connected layers of sizes 64 and 32 with ReLU activation. For this case study, we randomly selected four turbines from each of the three wind farms to reduce data imbalance and computational cost for FL model training. SCADA data from the resulting 12 WTs was used to train a NBM with FL.
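A PyTorch sketch consistent with this description is given below; the placement of the ReLU activations and the final scalar output layer are our reading of the stated architecture, and the input is a 24-hour window of 10-minute averages (144 steps) with four features (wind speed, ambient temperature, and the sine/cosine-encoded wind direction).

import torch
import torch.nn as nn

class GearBearingNBM(nn.Module):
    """LSTM-based normal behaviour model for the gear-bearing temperature."""
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, 16, batch_first=True)
        self.lstm2 = nn.LSTM(16, 64, batch_first=True)
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                  nn.Linear(64, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, x):                          # x: (batch, 144, n_features)
        h, _ = self.lstm1(x)
        h, _ = self.lstm2(torch.relu(h))
        return self.head(torch.relu(h[:, -1, :]))  # prediction at the window end

# Example: y_hat = GearBearingNBM()(torch.randn(8, 144, 4))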
To assess the reduction of the training data accumulation time (cold start speed up), we train our NBM using increasing time intervals of training data. We first select a specific date as the start date. Then, for each start date, we train multiple NBMs, each incorporating progressively more historical data by using different time ranges of training data. The time ranges commence from the selected start date and incrementally increase by one week, reaching a maximum of 12 weeks. This approach allows us to evaluate the impact of FL on reducing the time required to accumulate adequate data for model training.
To account for seasonal variations and avoid bias towards any particular season, we select four different start dates spread evenly throughout the year: December 2016, March 2017, June 2017, and September 2017. For each of these start dates, we train models using all 12 different time ranges, enabling a comprehensive evaluation of the cold start speed up achieved by FL across different seasons. For each start date, a test set is given by the 4-week time window that follows the 12th week of training data.
Each local dataset is split into 80% for training and 20% for validation. We note that in the case of only one or up to two weeks of data, this may result in a validation dataset that is not fully independent from the training dataset due to autocorrelation of environmental condition time series.
For each training set described above (defined by a selected start date and time range), we trained normal behaviour models of gear bearing temperature according to three different learning strategies representing different types of collaboration:
* Local learning: Each wind turbine independently trains its own model using only its local data without any collaboration with other turbines.
* Intra-farm learning: Each wind farm uses only the data of its own turbines to train its own FL model, with no exchange of data or knowledge between different wind farms. In intra-farm learning, FL models are trained on similar WT clients, as turbines within a given wind farm typically exhibit similar and correlated SCADA data patterns.
* Inter-farm learning: Turbines of multiple wind farms participate in a single learning process. The participating wind turbines involve different WT models, SCADA systems, and geographic locations, which results in significant data heterogeneity among the participating WT clients.
The intra- and inter-farm learning strategies are illustrated in Figure <ref>. For each FL strategy, we assessed the performance of the trained normal behaviour models with and without fine-tuning. Fine-tuning involves retraining the global model on each wind turbine's local training data after the FL training process. We provide our implementation on GitHub [Code available at <https://github.com/EnergyWeatherAI/FL-Wind-NBM>].
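For illustration, one communication round of the collaborative training, together with the optional fine-tuning step, can be sketched as follows (the local_train routine and data handling are placeholders; the actual implementation is the one available in the repository linked above):

import copy

def federated_round(global_model, clients, local_train, aggregate):
    # One communication round: each participating turbine trains a copy of the
    # global model on its local data, then the server aggregates the updates.
    # Intra-farm: clients are the turbines of one farm; inter-farm: all farms.
    states, sizes = [], []
    for data in clients:
        local = copy.deepcopy(global_model)
        local_train(local, data)
        states.append(local.state_dict())
        sizes.append(len(data))
    global_model.load_state_dict(aggregate(states, sizes))
    return global_model

def fine_tune(global_model, client_data, local_train):
    # Optional post-FL step: each turbine retrains the global model on its own data.
    personalised = copy.deepcopy(global_model)
    local_train(personalised, client_data)
    return personalised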
§ RESULTS AND DISCUSSION
§.§ FL outperforms local training if training data are limited
We compare the FL strategies by analysing the quality of the NBMs trained for the gear-bearing temperatures of the twelve WTs from the three wind farms. We assess the average of the NBMs for the individual WTs when trained with the respective learning strategy and training data time range.
The accuracies of gear-bearing temperature NBMs trained with different learning strategies were averaged across all time
ranges, start dates, and all twelve wind turbines. As shown in Table <ref>, intra-farm with fine-tuning resulted in NBMs with the lowest mean absolute errors (MAEs), followed by inter-farm learning with fine-tuning. Thus, NBMs trained collaboratively by multiple wind turbines can outperform those trained with only local data, especially when little training data (less than a few months, as in this study) is available. Furthermore, our results show that fine-tuning retains some of the collaborative knowledge, even though non-fine-tuned models exhibit moderate to poor performances, highlighting the importance of fine-tuning in consolidating collaborative learning gains and improving model performance.
Overall, intra-farm FL with fine-tuning demonstrates a significant improvement over local training, reducing the MAE by approximately 40%. While inter-farm FL with fine-tuning also demonstrates an improvement over local training, it falls slightly behind fine-tuned intra-farm learning. This discrepancy may be caused by the significant statistical heterogeneity between the wind farm SCADA variables involved in the NBM training. Intra-farm FL without fine-tuning outperforms local training by approximately 9%. For inter-farm learning in the presence of significant statistical heterogeneity across wind farms, the loss increases almost fivefold without fine-tuning (Table <ref>).
The relative performance between local training and fine-tuned intra- and inter-farm FL remains consistent across all time ranges considered, as shown in Figure <ref>. FL consistently exhibits a strong improvement compared to local training, even with 12 weeks of training data. Our above results were averaged over different start dates. We investigated whether the results depend on the time of year and found they remain largely consistent across different seasons, as shown in Figure <ref> in the appendix. In all seasons, intra- and inter-farm FL enable more accurate NBMs than local training.
We also found our conclusions do not depend on the start date, with the exception of June 2017 (Figures <ref> and <ref> in the appendix), which could possibly be due to a seasonality shift between the training and test set for the Kelmarsh wind farm.
§.§ FL strongly reduces the training data collection time
The cold start speed up refers to the reduction in the time needed to accumulate the historical training data required to achieve performances comparable to local training when employing FL. As shown in Figure <ref>, FL with fine-tuning reduces the training data collection time by approximately eight to nine weeks out of the twelve weeks under consideration, both in intra-farm and inter-farm FL. For example, in Figure <ref>, to achieve the equivalent accuracy of the best-performing gear-bearing temperature NBM trained with local data, intra-farm FL with fine-tuning requires only 18 days of training data compared to the 84 days of training data required using local data only. This speed up allows for the earlier deployment of accurate NBMs, enabling an earlier fault detection and thereby reducing the risk of undetected incipient faults.
§.§ Results by wind farm
The statistical heterogeneity of the datasets of different clients can present a significant challenge to collaborative learning. Our results suggest that increasing the number of WTs involved in the training does not necessarily lead to improved collaborative learning outcomes, even after fine tuning. In particular, if the data distributions vary among the different wind turbines, the performance of collaborative learning across WTs of different wind farms (inter-farm learning) can be worse than that of collaborative learning within a given wind farm (intra-farm learning).
We also assessed the performance of FL for NBM training in the context of the individual wind farms. We averaged the accuracies of the NBMs across the four turbines involved in the FL training from each wind farm, as shown in Table <ref>. FL with fine-tuning consistently outperformed the other learning methods at each wind farm. Intra-farm FL with fine-tuning significantly surpasses the accuracy of inter-farm learning with fine-tuning at the Penmanshiel wind farm, as shown in Table <ref> and Figure <ref>.
The comparatively poor performance of inter-farm FL is likely related to the low fraction of SCADA data from the Penmanshiel wind farm (see Figure <ref> in the appendix), as its small data contribution to the inter-farm FL training results in a small contribution to the inter-farm FL model.
The non-fine-tuned inter-farm MAE is substantially higher (16.65°C, see also Figure <ref> in the appendix), indicating that the NBM primarily learns from the other two wind farms in the case of inter-farm learning.
Conversely, the EDP wind farm exhibits minimal disparity between intra- and inter-farm learning. This is likely because a significant portion of the global model's influence stems from EDP's data, which accounts for roughly 50% of the total data across the various wind farms (as depicted in Figure <ref>). Thus, the global model's performances on EDP's wind turbines are less affected by the heterogeneity introduced by other wind farms' data.
For each wind farm, the implementation of FL strategies results in a significant cold start speed up, with saved time ranging from four to more than ten weeks. This phenomenon is illustrated in Figure <ref>, where the cold start speed up for fine-tuned FL is depicted. The most substantial reduction in the time required to collect training data occurs with intra-farm learning with fine-tuning in the Penmanshiel wind farm.
We considered four different start dates, one in each season, to investigate how seasonality impacts the accuracy of NBMs trained with different FL strategies at each wind farm. This resulted in twelve combinations of start date and wind farm, as shown in Figure <ref> and Table <ref> in the appendix.
Intra-farm learning with fine-tuning emerged as the best-performing strategy in most cases (10 out of 12), and inter-farm learning with fine-tuning in the remaining two cases which pertain to the Kelmarsh wind farm.
We also investigated how the different learning strategies perform for individual wind turbines and found that our above results are confirmed also at WT level. Federated learning enables more accurate normal behaviour models than local training with limited data, and a reduction in the amount of time needed to collect the required model training data. Fine-tuning provided more accurate NBMs in all cases. Moreover, intra-farm FL tended to provide more accurate NBMs than inter-farm FL.
In addition to the baseline FL algorithm, all experiments were repeated with an alternative federated learning algorithm <cit.>. The alternative algorithm follows a similar learning process as the baseline but incorporates a regularization term in the loss function during local training, which measures the discrepancy between the current global model and the updated local model of the clients. Our experiments indicate that it performs comparably to, albeit slightly worse than, the baseline for both inter-farm and intra-farm learning. This may be attributed to the fact that the added regularization slows down local training to retain more knowledge from the global FL model but does not address the models competing against each other due to statistical heterogeneity.
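A minimal sketch of such a proximal regularization term is given below (the loss form and the coefficient mu are illustrative assumptions, not the exact objective of the cited algorithm):

import torch

def proximal_loss(prediction, target, local_model, global_params, mu=0.01):
    # Local training loss with an added proximal term that penalises the distance
    # between the updated local parameters and the current global model.
    mse = torch.nn.functional.mse_loss(prediction, target)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(local_model.parameters(), global_params))
    return mse + 0.5 * mu * prox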
§ CONCLUSIONS
Our study demonstrated the effectiveness of privacy-preserving collaborative learning of wind turbine normal behaviour models for condition monitoring applications. We demonstrated that federated learning enables a reduction of the training data accumulation time, allowing for an earlier detection of developing faults, compared to local training. We also found that having more collaborators is not necessarily better in collaborative learning. In the presence of high statistical heterogeneity, i.e., significant differences between the data distributions of the involved wind turbines, the performance of federated learning across wind turbines of different wind farms (inter-farm learning) is worse than that of federated learning within a given wind farm (intra-farm learning).
We assessed two distinct collaborative learning approaches: inter-farm learning, which involves collaboration across turbines from different wind farms, and intra-farm learning, which restricts collaboration to turbines in the same wind farm. Our analysis shows that high levels of statistical heterogeneity present significant challenges to collaborative learning. The accuracy of NBMs trained in intra-farm learning surpassed that of inter-farm learning in most situations, underscoring the adverse impacts of heterogeneity on collaborative learning. We demonstrated fine-tuning as an approach to address FL model training in view of significant statistical heterogeneity.
We propose several directions of future research. Firstly, our model selection and hyperparameter tuning processes were conducted on a full dataset from a single turbine, which diverges from real-world conditions, where historical data accumulates incrementally, and disregards the contributions of the other turbines. Addressing this challenge entails the development of automated and adaptive model selection methods capable of accommodating evolving data volumes and complexities <cit.>. Moreover, integrating FL techniques for hyperparameter tuning <cit.> could enhance efficiency and scalability in FL settings. Furthermore, we propose to investigate continuous learning strategies <cit.> for their potential to reduce communication costs and training time in FL by enabling incremental model updates instead of periodic full re-training. Incorporating such strategies would not only contribute to the ongoing evolution and refinement of collaborative learning methodologies in renewable energy applications but may also render them more applicable to real industrial settings. Finally, we considered identical feature and label spaces of all client wind turbines but did not consider other types of FL in this study, such as federated transfer learning <cit.>. These may form the subject of future research in wind energy applications.
In conclusion, our study demonstrated the potential and challenges of collaborative learning in wind turbine condition monitoring through FL. By advancing our understanding of effective collaboration strategies and addressing challenges such as statistical heterogeneity and model adaptation, we move closer to realizing the full potential of FL for enhancing the reliability and efficiency of wind farms and other renewable energy systems.
§ CREDIT AUTHOR STATEMENT
Grataloup, A.: Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing.
Jonas, S.: Conceptualization, Methodology, Writing – review & editing.
Meyer, A.: Conceptualization, Writing – review & editing, Funding acquisition, Project administration, Supervision, Resources.
§ ACKNOWLEDGMENT
This research was supported by the Swiss National Science Foundation (Grant No. 206342).
§ APPENDIX
§ APPENDIX A
Figure <ref> shows that the evolution of the mean absolute errors for fine-tuned FL and local training by starting date is also consistent across all seasons, with the exception of the starting date in June 2017. A possible explanation for this behavior is a seasonality-based data distribution shift between the training and test set for the Kelmarsh wind farm, which is discussed further in Appendix B.
§ APPENDIX B
We examine the model prediction compared to the ground truth of the gear-bearing temperature for one selected turbine from each wind farm. We restrict ourselves here to models trained on three weeks of training data in December for Figure <ref> and June for Figure <ref>, using the corresponding four-week test dataset
for evaluation. This analysis provides insights into the performance of various learning methods, enabling a qualitative assessment of FL. Notably, predictions from local training (no collaboration) and fine-tuned FL closely align with the ground truth. However, non-fine-tuned FL exhibits inferior performance, especially in scenarios characterized by significant data heterogeneity among clients, such as inter-farm learning.
The models were trained on three weeks of data in December 2016. Figure <ref> illustrates the performance of the local, intra- and interfarm learning strategies on the respective test sets. A single WT has been picked randomly from each wind farm to this end.
One important observation from Figure <ref> is the widening error of non-fine-tuned inter-farm FL as the data proportion decreases. Specifically, the red curve representing non-fine-tuned inter-farm FL for Penmanshiel (the wind farm with the smallest data proportion) shows a pronounced deviation from the ground truth and is closer to the temperature ranges observed in the Kelmarsh and EDP wind farms. This disparity arises from the observable heterogeneity, with Penmanshiel exhibiting a temperature range of 25°C to 35°C, contrasting with the 40°C to 60°C range observed in the Kelmarsh and EDP wind farms. The FL aggregation scheme attributes a larger weight to clients with more data, resulting in Penmanshiel's turbine contributing significantly less than those of the Kelmarsh and EDP wind farms (see Figure <ref>).
Non-fine-tuned intra-farm learning outperforms non-fine-tuned inter-farm learning in this case study. This indicates that the different wind farms involved in the collaborative FL process end up competing to achieve different learning objectives rather than collaborate. However, this disparity is no longer visible after fine-tuning and opens the question of whether there is a retention of collaborative knowledge.
While these observations hold across most start dates and time ranges, a different behaviour emerges in situations where the local training fails to fit the test set, as shown in Figure <ref>:
In Figure <ref>, the overall observations remain consistent, except for the Kelmarsh wind farm during the testing period between September 11 and September 17. During this period, local training fails to effectively fit the ground truth, likely indicating either potential anomalies in the data, or data points lying outside the range of the model’s training data, leading to poor generalization. Upon examining the data distribution of various features, we find no indications of anomalous behavior during that
period. Furthermore, we observe wind speed and ambient temperature value ranges slightly higher and lower, respectively, in the affected test set compared to the training set. Such variations are expected when comparing weather conditions between June and September. A distribution shift being a possible cause is further supported by the observation that during this period, inter-farm learning, both with and without fine-tuning, outperforms intra-farm learning (see also Figure <ref>). This suggests that inter-farm learning may benefit from insights gained from other farms, enabling it to better adapt to locally unseen data. However, in practical scenarios, our primary concern is not whether a model trained on data from June will perform well on test data in September. Instead, our focus would lie on ensuring that the model performs effectively in the weeks that follow the training set. We could continuously retrain the model with new data as it becomes available, thereby mitigating any seasonality shift and increasing the likelihood of the model performing well on near-future data in a continuous learning setting.
|
http://arxiv.org/abs/2409.03202v1 | 20240905025032 | Generalized Ghost Dark Energy Model in Extended Modified Theory of Symmetric Telleparallel Gravity | ["M. Sharif", "Iqra Ibrar"] | gr-qc | ["gr-qc"] |
Generalized Ghost Dark Energy Model in Extended Modified Theory of Symmetric Teleparallel Gravity
M. Sharif [email protected] and Iqra Ibrar
[email protected]
Department of Mathematics and Statistics, The University of Lahore,
1-KM Defence Road Lahore-54000, Pakistan.
==========================================================================================================================================================================================
§ ABSTRACT
This paper focuses on developing a generalized ghost dark energy
model in f(𝒬,𝒯) gravity through the correspondence scheme for a
non-interacting scenario involving pressureless matter and a
power-law scale factor. We analyze the cosmological implications of
our model using the equation of state parameter as well as the
ω_GGDE-ω'_GGDE and r-s planes. We also examine
the stability of the model by considering the squared speed of sound
parameter. The equation of state parameter indicates the phantom era,
while the squared speed of sound indicates a stable generalized
ghost dark energy model throughout the cosmic evolution. The
ω_GGDE-ω'_GGDE plane depicts a freezing region,
whereas the r-s plane corresponds to the Chaplygin gas model. We
conclude that our results are consistent with the current
observational data.
Keywords: f(𝒬,𝒯) gravity; Generalized ghost
dark energy; Cosmic diagnostic parameters.
PACS: 04.50.kd; 95.36.+x; 64.30.+t.
§ INTRODUCTION
In the early 20th century, Albert Einstein introduced the general
theory of relativity (GR), which revolutionized our comprehension of
the cosmos. This comprehension continues to expand in current cosmology
through a growing body of accurate observations that probe the
concealed realms of the universe. The current concept of cosmic
expansion has been reinforced by several observable findings,
including large-scale structures, the cosmic microwave background
radiation and type Ia supernovae <cit.>. The universe in which we
live is experiencing a unique epoch in which it expands at an
accelerated rate. Cosmologists have examined this expansion through
numerous observations of distant galaxies. It is assumed that this
universe expansion is due to an extraordinary force called dark
energy (DE), which works through immense negative pressure. Several
models have been proposed to explain phenomena related to DE and the
evolution of the universe. There are two main approaches to studying
DE, i.e., dynamical DE models and modified theories of gravity
<cit.>.
In recent years, scholars have proposed various approaches to
address these issues, yet they remain enigmatic to this day. To
characterize DE, researchers have found the equation of state (EoS)
parameter of the model under examination particularly useful. The Veneziano
ghost DE (GDE) model possesses noteworthy physical properties within
the cosmos <cit.>. In flat spacetime, the ghost field does not contribute
to the vacuum energy density, but in curved spacetime it induces a small
vacuum energy density proportional to Λ_QCD^3 H, where
Λ_QCD represents the quantum chromodynamics (QCD) mass
scale and H denotes the Hubble parameter. With Λ_QCD∼
100 MeV and H∼ 10^-33eV, Λ^3_QCDH gives the
order of the observed DE density. This small value efficiently provides
the exotic force driving the accelerating universe,
thereby resolving the fine-tuning issue <cit.>. The energy
density of GDE is given as
ρ_GDE=α H,
where α is an arbitrary constant.
The GDE model solves various issues effectively, but it encounters
stability problems <cit.>. It suggests that the energy density
relies not only on H explicitly but also on higher-order terms of
H, leading to what is known as the generalized GDE (GGDE) model.
In this model, the vacuum energy associated with the ghost field can
be regarded as a dynamic cosmological constant. In reference
<cit.>, the author explained that the Veneziano QCD ghost field's
contribution to vacuum energy does not precisely follow the
H-order. The additional term H^2 is particularly important
during the initial phase of the universe evolution, serving as early
DE <cit.>. Moreover, the subleading term H^2 arises because the
conservation of the energy-momentum tensor holds independently
<cit.>. The GGDE density is given as follows
ρ_GGDE=α H+β H^2,
where β is another arbitrary constant.
Numerous DE models can be explored to study the ongoing accelerated
expansion of the universe. Khodam et al. <cit.> investigated the
reconstruction of f(ℛ,𝒯) gravity (here ℛ and
𝒯 are the Ricci scalar and trace of the energy-momentum
tensor, respectively) within the GGDE model, delving into cosmic
evolution through the analysis of cosmological parameters. Malekjani
<cit.> explored the cosmographic aspects of GGDE and concluded
that this model demonstrates strong consistency with observational
data. Ebrahimi et al. <cit.> studied the interacting GGDE within
a non-flat cosmos, revealing that the EoS parameter aligns with the
phantom era of the universe. Chattopadhyay <cit.> reconstructed
the QCD ghost f(T) model (T represents the torsion scalar) to
examine the cosmic evolution and found both the phantom and
quintessence phases of the cosmos. Sharif and Saba <cit.>
explored the cosmography of GGDE in f(𝒢, 𝒯)
(𝒢 is the Gauss-Bonnet invariant) gravity. Biswas et al.
<cit.> investigated the evolution trajectories of the EoS
parameter, ω_D-ω'_D and the state finder planes in
GGDE DGP model. Recently, Sharif and Ajmal <cit.> studied the
cosmography of GGDE f(𝒬) theory.
Several gravitational theories, including GR, are acquiring
importance because of their ability to explain the accelerating
expansion of the universe. The Levi-Civita connection in GR explains
gravitational interactions in Riemannian space based on the
assumption of geometry free from torsion and non-metricity.
Moreover, it is important to consider that the general affine
connection has a broader formulation <cit.>, and GR can be
derived within alternative spacetime geometries beyond Riemannian.
Teleparallel gravity presents an alternative framework to GR, where
gravitational interaction is characterized by torsion, denoted as
T <cit.>. Teleparallel equivalent of GR utilizes the
Weitzenböck connection, which entails zero curvature and
non-metricity <cit.>. In the study of a cosmological model
within Weyl-Cartan (WC) spacetime, the Weitzenböck
connection is investigated with vanishing of the combined curvature
and scalar torsion <cit.>.
In a torsion-free space, gravity is modified by non-metricity,
defined as
𝒬_λζξ=∇_λg_ζξ. This
theory is called symmetric teleparallel gravity (STG) <cit.>.
Jimenez et al. <cit.> recently introduced f(𝒬) theory,
also known as non-metric gravity, in which the scalar function 𝒬
represents the non-metricity. Lazkoz et al. <cit.> described the
constraints on f(𝒬) gravity by using polynomial
functions of the red-shift. The energy conditions for two distinct
f(𝒬) gravity models have also been discussed <cit.>.
The behavior of cosmic parameters in the same context was examined
by Koussour et al. <cit.>. Chanda and Paul <cit.> studied the
formation of primordial black holes in this theory.
Xu et al. <cit.> extended f(𝒬) gravity to
f(𝒬,𝒯) gravity by adding trace of the energy-momentum
tensor into f(𝒬) theory. The Lagrangian governing the
gravitational field is considered to be a gravitational function of
both 𝒬 and 𝒯. The field equations of the
corresponding theory are derived by varying the gravitational action
with respect to both the metric and the connection. The goal of
introducing this theory is to analyze its theoretical implications,
its consistency with real-world experimental evidence, and its
applicability to cosmological scenarios. Nájera and
Fajardo <cit.> studied cosmic perturbation theory in
f(𝒬,𝒯) gravity and found that non-minimal coupling
between matter and curvature perturbations has a major influence on
the universe evolution. Pati et al. <cit.> investigated the
dynamical features and cosmic evolution in the corresponding
gravity. Mandal et al. <cit.> studied the cosmography of
holographic DE in f(𝒬,𝒯) gravity.
In this paper, we use a correspondence technique to recreate the
GGDE f(𝒬,𝒯) model in a non-interacting scenario. We
investigate cosmic evolution using the EoS parameter and phase
planes. The paper is structured as follows. In section 2,
we introduce f(𝒬,𝒯) gravity and its corresponding field
equations. In section 3, we explore the reconstruction
procedure to formulate the GGDE f(𝒬,𝒯) paradigm. In
section 4, we analyze cosmic behavior using the EoS
parameter and phase planes. We also examine the stability of the
resulting model. Finally, we discuss our findings in the last
section.
§ THE F(𝒬,𝒯) THEORY
This section presents the basic structure of modified
f(𝒬,𝒯) theory. In this theory, the framework of
spacetime is the torsion-free teleparallel geometry, i.e.,
R^ρ_ ηγν=0 and T^ρ_γν=0. The
connection in the WC geometry is expressed as <cit.>
Γ̂^λ_αβ=Γ^λ_αβ
+ℂ^λ_αβ+𝕃^λ_αβ,
where Γ^ζ_ γα is the Levi-Civita
connection, ℂ^λ_αβ is the contortion
tensor and 𝕃^λ_αβ is the deformation
tensor, given by
Γ^ζ_μα =1/2g^ζσ
(g_σα,μ+g_σμ,α-g_μα,σ),
ℂ^λ_αβ=Γ̂^λ_[αβ]
+g^λϕg_ακΓ̂^κ_[βϕ]
+g^λϕg_βκΓ̂^κ_[αϕ],
and
𝕃^λ_αβ=1/2g^λϕ(𝒬_βαϕ
+𝒬_αβϕ-𝒬_ϕαβ),
where
𝒬_λαβ=∇_λg_αβ=-g_αβ,_λ
+g_βϕΓ̂^ϕ_αλ
+g_ϕαΓ̂^ϕ_βλ.
The gravitational action can be written as <cit.>
S=1/2k∫ g^αβ(Γ^ρ_ϕαΓ^ϕ_βρ -Γ^ρ_ϕρΓ^ϕ_αβ)√(-g)d^4x.
Using the anti-symmetric relation
(Γ^μ_ςα=-𝕃^μ_ ςα)
in Eq.(<ref>), the integral action takes the form
S=-1/2k∫ g^αβ(𝕃^ρ_ϕα𝕃^ϕ_βρ - 𝕃^ρ_ϕρ𝕃^ϕ_αβ) √(-g) d^4x.
This action is known as the action of STG which is equivalent to the
Einstein-Hilbert action. There are some significant differences
between two gravitational paradigms. One of them is that the
vanishing of curvature tensor in STG causes the system to appear as
flat structure throughout. Furthermore, the gravitational effects
occur due to variations in the length of vector itself, rather than
rotation of an angle formed by two vectors in parallel transport.
Now, we look at an extension of STG and take an arbitrary function
of 𝒬, the above action takes the form
S=∫[1/2kf(𝒬)+L_m]√(-g)d^4x,
where
𝒬=-g^αβ(𝕃^ρ_μα𝕃^μ_βρ
-𝕃^ρ_μρ𝕃^μ_αβ),
which leads to f(𝒬) theory. If we couple this theory
with the trace of the energy-momentum tensor, we can obtain
f(𝒬,𝒯) theory whose action is given as
𝒮=1/2k∫ f(𝒬,𝒯) √(-g)d^4x+ ∫
L_m√(-g)d^4x,
here L_m represents the matter lagrangian. The traces of
non-metricity tensor are given by
𝒬_ρ= 𝒬_ρα^ α, 𝒬̃_ρ= 𝒬^α_ ρα.
The superpotential in view of 𝒬 is written as
P^ρ_αβ =-1/2𝕃^ρ_αβ
+1/4(𝒬^ρ-𝒬̃^ρ)
g_αβ- 1/4δ ^ρ_ (α𝒬_β).
Furthermore, the expression of 𝒬 obtained using the
superpotential becomes
𝒬=-𝒬_ραβP^ραβ
=-1/4(-𝒬^ρβζ𝒬_ρβζ
+2𝒬^ρβζ𝒬_ζρβ
-2𝒬^ζ𝒬̃_ζ+𝒬^ζ𝒬_ζ).
Taking the variation of S with respect to the metric tensor as
zero yields the field equations
δ S =0=∫1/2δ
[f(𝒬,𝒯)√(-g)]+δ[L_m√(-g)]d^4x
0 =∫1/2( -1/2 f g_αβ√(-g)δ g^αβ + f_𝒬√(-g)δ𝒬 + f_𝒯√(-g)δ𝒯)
-1/2𝒯_αβ√(-g)δ g^αβd^ 4x.
Furthermore, we define
𝒯_αβ = -2/√(-g)δ
(√(-g) L_m)/δ g^αβ, Θ_αβ= g^ρμδ𝒯_ρμ/δ g^αβ,
which implies that δ𝒯=δ
(𝒯_αβg^αβ)=(𝒯_αβ+
Θ_αβ)δ g^αβ. Inserting the
aforementioned factors in Eq.(<ref>), we have
δ S = 0=∫1/2{-1/2f
g_αβ√(-g)δ g^αβ +
f_𝒯(𝒯_αβ+
Θ_αβ)√(-g)δ g^αβ
- f_𝒬√(-g)(P_αρμ𝒬_β ^ρμ- 2𝒬^ρμ _α
P_ρμβ) δ g^αβ+2f_𝒬√(-g)
P_ραβ∇^ρδ g^αβ}- 1/2𝒯_αβ√(-g)δ g^αβ d^4x.
Integration of the term 2 f_𝒬√(-g)
P_ραβ∇^ρδ g^αβ along with
the boundary conditions takes the form -2 ∇^ρ
(f_𝒬√(-g) P_ραβ)δ
g^αβ. The terms f_𝒬 and f_𝒯
represent partial derivatives with respect to 𝒬 and
𝒯, respectively. Finally, we obtain the field equations
as
T_αβ =-2/√(-g)∇_ρ
(f_𝒬√(-g) P^ρ_αβ)- 1/2 f
g_αβ + f_𝒯 (𝒯_αβ +
Θ_αβ) -f_𝒬 (P_αρμ𝒬_β ^ρμ
-2𝒬^ρμ _α P_ρμβ).
Its covariant derivative yields the non-conservation equation as
∇_αT^α_ β =1/f_𝒯-1[∇_α(1/√(-g)∇_ρH^ ρα_β)-∇_α
(f_𝒯Θ^α_ β)
-1/√(-g)∇_ρ∇_αH^ ρα_β
-2∇_αA^α_ β
+1/2f_𝒯∂_β𝒯],
where hyper-momentum tensor density is defined as
H^ ρα_β=√(-g)/2f_𝒯δ𝒯/δΓ̂^β _ρα+δ√(-g)L_m/δΓ̂^β _ρα.
§ RECONSTRUCTION OF GGDE F(𝒬,𝒯) MODEL
In this section, we use a correspondence technique to recreate the
GGDE f(𝒬,𝒯) model. The line element of the isotropic and
spatially homogeneous universe model is given by
ds^2 = -dt^2+ a^2(t)[dx^2+ dy^2+dz^2],
where the scale factor is represented by a(t). The
configuration of isotropic matter with four-velocity fluid
u_μ, pressure (P_M), and normal matter density
(ρ_M) is given as
𝒯̃_μν=(ρ_M+P_M)u_μu_ν+P_Mg_μν.
The modified Friedmann equations in the context of
f(𝒬,𝒯) gravity are expressed as
3H^2 =ρ_eff=ρ_M+ρ_GGDE,
2Ḣ+3H^2 =P_eff=P_M+P_GGDE,
where H =ȧ/a. The dot demonstrates
derivative with respect to cosmic time t. The non-metricity
𝒬 in terms of the Hubble parameter is
𝒬=-1/4[-𝒬_ρξη𝒬^ρξη
+2𝒬_ρξη𝒬^ξρη-2𝒬̃^ρ𝒬_ρ+𝒬^ρ𝒬_ρ].
Simplifying this equation leads to
𝒬=6H^2.
Furthermore, ρ_GGDE and P_GGDE denote the density and
pressure of DE, respectively, given as
ρ_GGDE =1/2f(𝒬,𝒯)-𝒬f_𝒬
-f_𝒯(ρ_M+P_M),
P_GGDE =-1/2f(𝒬,𝒯)+2f_𝒬𝒬H+2f_𝒬Ḣ
+𝒬f_𝒬.
The conservation equation (<ref>) takes the following form for an
ideal fluid
ρ̇_M+3H(ρ_M+P_M)=1/(f_𝒯-1)[2∇_β(P_Mμ^βf_𝒯)
+f_𝒯∇_βμ^β𝒯
+2μ^β𝒯_αβ∇^α
f_𝒯μ^β].
The first field equation (<ref>) leads to
Ω_M+Ω_GGDE=1,
where Ω_M=ρ_M/3H^2 and
Ω_GGDE=ρ_GGDE/3H^2 represent the fractional
energy densities of normal matter and dark source, respectively.
Dynamic DE models with energy density proportional to Hubble
parameter are crucial to explaining the accelerated expansion of the
universe.
Next, we will use a correspondence approach in an ideal fluid
configuration to create the GGDE f(𝒬,𝒯) model with
emphasis on the dust case (P_M=0).
§.§ Non-interacting GGDE f(𝒬,𝒯) Model
Here, we consider the standard f(𝒬,𝒯) function in the
following form <cit.>
f(𝒬,𝒯)=f_1(𝒬)+f_2(𝒯),
where f_1 and f_2 depend upon 𝒬 and
𝒯, respectively. In this scenario, it is evident that
there is a minimal coupling between curvature and matter
constituents. This version of the generic function reveals that the
interaction is purely gravitational and hence easy to handle. This
can effectively explain the ongoing expansion of the universe.
Moreover, the reconstruction methodology demonstrates that such
generated models are physically viable <cit.>. Using dust fluid
and Eq.(<ref>), the field equations (<ref>) and (<ref>)
yield
3H^2=ρ_eff=ρ_M+ρ_GGDE,
2Ḣ+3H^2=P_eff=P_GGDE,
where
ρ_GGDE =1/2f_1(𝒬)+1/2f_2(𝒯)-𝒬
f_1𝒬+ f_2𝒯ρ_M,
P_GGDE =-1/2f_1(𝒬)-1/2f_2(𝒯)
+2f_1𝒬Ḣ+2
f_1𝒬𝒬H+𝒬f_1𝒬.
The associated conservation equation (<ref>) reduces to
ρ̇_M+3Hρ_M=1/f_2𝒯-1[2𝒯
f_2𝒯𝒯+f_2𝒯𝒯̇].
This equation is consistent with the standard continuity equation
when the right-hand side is assumed to be zero, implying
ρ̇_M+3Hρ_M=0 ⟹ ρ_M=ρ_0a(t)^-3,
with the constraint
f_2𝒯+2𝒯f_2𝒯𝒯= 0,
whose solution provides
f_2(𝒯)=a𝒯^1/2+b,
where a and b represent the integration constants.
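As a quick symbolic cross-check (not part of the original derivation), the constraint can be verified with SymPy; the snippet below is illustrative only:

import sympy as sp

T, a, b = sp.symbols("T a b", positive=True)
f2 = a * sp.sqrt(T) + b
# f_2T + 2 T f_2TT should vanish identically for this choice of f_2
constraint = sp.diff(f2, T) + 2 * T * sp.diff(f2, T, 2)
print(sp.simplify(constraint))   # -> 0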
We utilize Eqs.(<ref>) and (<ref>), in conjunction with the
constraint on f_2(𝒯) as given in (<ref>), to
formulate a reconstruction framework employing the correspondence
approach. The differential equation for f_1(𝒬) is
expressed as
f_1(𝒬)/2-𝒬
f_1𝒬+a𝒯^1/2+1/2b=α
H+β H^2.
We use the power-law solution for the scale factor given as follows
a(t)=a_0t^m, m > 0.
In this context, a_0 denotes the current value of the
scale factor. Employing this relation, the expressions for H, its
derivative, and the non-metricity scalar in terms of cosmic time t
are given as follows
H=m/t, Ḣ=-m/t^2, 𝒬=6
m^2/t^2.
Substituting (<ref>) in (<ref>), it follows that
ρ_M=d(a_0t^m)^-3,
where ρ_0=d for the sake of simplicity. Using Eq.(<ref>) in
(<ref>), we can find the function f_1(𝒬) as
f_1(𝒬) =√(𝒬)[-a √(d)
2^3 m/8+1 3^3 m/8(𝒬/m^2)^-3 m/8/(-3
m/8-1/2) √(𝒬)+2
b/√(𝒬)+β m 𝒬/√(6)+22^3/4α𝒬/3 √(3)]
+c_1
√(𝒬),
where c_1 is the integration constant. Consequently, the
reconstructed f(𝒬,𝒯) model is obtained by substituting
Eqs.(<ref>) and (<ref>) into (<ref>) as follows
f(𝒬,𝒯) =1/18[-a √(d) 6^3
m/8+2(𝒬/m^2)^-3
m/8/-3 m/8-1/2+18 √(ad)+54 b+18 c_1
√(𝒬)
+3 √(6)β m 𝒬^3/2+4 6^3/4α𝒬^3/2].
Now we write down this function in terms of redshift parameter z.
The expression for the deceleration parameter is given by
q=-aä/ȧ^2=-1+1/m.
In relation to the deceleration parameter, the evolution of the
cosmic scale factor can be characterized as
a(t)=t^1/(1+q),
Here, we assume a_0 to be equal to unity. It is worth
noting that the power-law model corresponds to an expanding universe
when q > -1. The description of both the expanding phase and the
present cosmic evolution is given by
H=(1+q)^-1t^-1, H_0=(1+q)^-1t^-1_0.
In power-law cosmology, the evolution of the universe expansion is
determined by two fundamental parameters, the Hubble constant
(H_0) and the deceleration parameter (q). By examining the
correlation between the scale factor a and redshift z, we can
elucidate
H=H_0Γ^1+q,
where Γ=1+z. Using Eq.(<ref>) in (<ref>), 𝒬
appears as
𝒬=6H_0^2Γ^2+2q.
The reconstructed model for the GGDE f(𝒬,𝒯) in terms of
the redshift parameter is derived by substituting this value in
Eq.(<ref>), resulting in
f(𝒬,𝒯) =16 a √(d)(H_0^2 Γ^(2q+2)/m^2)^(-3m/8)/(3m+4)+√(ad)+3b
+√(6) c_1 √(H_0^2 Γ^(2q+2))
+6 β m (H_0^2 Γ^(2q+2))^(3/2) +8 √(6)α(H_0^2 Γ^(2q+2))^(3/2).
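For illustration, the background quantities entering this reconstruction can be evaluated numerically as follows (the parameter values are placeholders chosen only to demonstrate the relations H=H_0Γ^(1+q), 𝒬=6H^2 and ρ_GGDE=αH+βH^2):

import numpy as np

def hubble(z, H0, q):
    return H0 * (1.0 + z) ** (1.0 + q)      # H = H0 * Gamma^(1+q), Gamma = 1 + z

def nonmetricity(z, H0, q):
    return 6.0 * hubble(z, H0, q) ** 2      # Q = 6 H^2 = 6 H0^2 Gamma^(2+2q)

def rho_ggde(z, H0, q, alpha, beta):
    H = hubble(z, H0, q)
    return alpha * H + beta * H ** 2        # generalized ghost dark energy density

z = np.linspace(0.0, 2.0, 5)
print(nonmetricity(z, H0=67.0, q=-0.5))
print(rho_ggde(z, H0=67.0, q=-0.5, alpha=1.0, beta=0.5))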
For the graphical analysis, we set a = 1, b = -4, and c_1 =
-15. Figure 1 indicates that the reconstructed GGDE model
remains positive and increases with respect to z. The GGDE model
indicates a rapid expansion, according to this analysis. Figure
2 illustrates the behavior of ρ_GGDE and P_GGDE
with the redshift parameter. The energy density ρ_GGDE is
positive and increasing, whereas P_GGDE is negative and
consistent with the DE behavior. We then investigate the properties
of ρ_GGDE and P_GGDE in the context of the reconstructed
GGDE f(𝒬,𝒯) gravity model. Putting (<ref>) into
(<ref>) and (<ref>), we have
ρ_GGDE =1/18(H_0^2 Γ^2 q+2/m^2)^-3 m/8[36 a √(d)-18 (H_0^2 Γ^2
q+2/m^2)^3 m/8{√(ad) (d-1)
+b (d-2)+2 (H_0^2 Γ^2 q+2)^3/2(4
√(6)α +3 β m)}],
P_GGDE =1/36[1/H_0{(3 H_0-1)
{6 √(6)√(H_0^2 Γ^2 q+2){c_1+3
√(6)β H_0^2 m Γ^2 q+2
+4 6^3/4α H_0^2 Γ^2 q+2}-72 a
√(d) m (H_0^2 Γ^2 q+2/m^2)^-3
m/8/3 m+4}}
+1/12 H_0^3[Γ^-3 (q+1){{18 a
√(d) m(3 m+8)(H_0^2 Γ^2
q+2/m^2)^-3 m/8}(3 m+4)
+√(6)√(H_0^2 Γ^2 q+2){-6 c_1+18 √(6)β H_0^2 m Γ^2 q+2+24 6^3/4α H_0^2 Γ^2
q+2}}]
-288 a √(d)(H_0^2 Γ^2 q+2/m^2)^-3 m/8/3
m+4 -18 √(ad)-54 b-18 √(6) c_1 √(H_0^2
Γ^2 q+2)
-108 β m (H_0^2 Γ^2 q+2)^3/2-144
√(6)α_0a (H_0^2 Γ^2
q+2)^3/2].
§ COSMOLOGICAL ANALYSIS
In this section, we explore the evolutionary stages of the universe
through different phases. To achieve this, we utilize the
reconstructed GGDE f(𝒬,𝒯) model under non-interacting
conditions, as defined in Eq.(<ref>). Furthermore, we depict the
dynamics of various cosmological parameters, including the EoS
parameter, ω_GGDE-ω'_GGDE and state finder planes.
The stability of this model is also examined.
§.§ Equation of State Parameter
The EoS parameter (ω=P/ρ) of DE plays a crucial
role in describing both the cosmic inflationary phase and the
subsequent expansion of the universe. We analyze the criterion for
an accelerating universe, which occurs when the EoS parameter
ω<-1/3. When ω=-1, it corresponds to the
cosmological constant. However, when ω=1/3 and
ω=0, it signifies the radiation-dominated and
matter-dominated universe, respectively. Moreover, the phantom
scenario manifests when we assume ω<-1. The expression for
the EoS parameter is given by
ω_GGDE=
P_eff/ρ_eff=P_GGDE/ρ_GGDE+ρ_M.
Equations (<ref>), (<ref>) and (<ref>) are employed in
the aforementioned expression to compute the respective parameter as
ω_GGDE =-[f^3 Γ^-2 q-2{a √(d)
2^3 m/8+1 3^3 m/8+2(-432 h^4 m Γ^4
q+4 -48 h^2 Γ^2 q+2(12 h^2
×Γ^2 q+2-3 h m Γ^2 q+2) +3 h m (3 m+8)
Γ^q+1) -6^3 mx/8 +1/2 (3 m+4) √(h^2
Γ^2 q+2)
×(h^2 Γ^2 q+2/m^2)^3 m/8{108 √(6)√(ad)(H_0^2 Γ^2
q+2)^3/2+324 √(6) b (H_0^2 Γ^2
q+2)^3/2+216
× c_1 H_0^3 Γ^4 q+4+18 c_1 H_0Γ^q+1-1296
√(6)β H_0^6 m Γ^6 q+6 -1728 6^3/4α
H_0^6 Γ^6 q+6
+648 √(6)β H_0^5 m Γ^6 q+6 +864 6^3/4α
H_0^5 Γ^6 q+6-54 √(6)β H_0^3 m Γ^3
q+3-72 6^3/4
×α H_0^3 Γ^3 q+3}}] [18
H_0^2 (3 m+4) {a √(d) f^3 2^3 m/8 3^3
m/8+2 m +H_0^2 6^3 m/8+1Γ^2 q+2
×(H_0^2 Γ^2 q+2/m^2)^3
m/8(6 f^3 √(H_0^2 Γ^2 q+2)(4 √(6)α +3 β m)-12 d)}]^-1.
Figure 3 illustrates the behavior of the EoS parameter against
z, which shows a phantom epoch for the current as well
as late-time cosmic evolution.
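As an illustrative numerical route (complementary to the closed-form expression above), ω_GGDE and its 𝒬-derivative can be evaluated from tabulated densities and pressures; the callables below are placeholders for the reconstructed quantities:

import numpy as np

def eos_parameter(z, rho_ggde, p_ggde, rho_m):
    # omega_GGDE = P_GGDE / (rho_GGDE + rho_M), evaluated pointwise in redshift.
    z = np.asarray(z, dtype=float)
    return p_ggde(z) / (rho_ggde(z) + rho_m(z))

def eos_evolution(z, rho_ggde, p_ggde, rho_m, H0, q, dz=1e-4):
    # omega'_GGDE = d(omega)/dQ via the chain rule d(omega)/dQ = (d omega/dz)/(dQ/dz),
    # with Q = 6 H0^2 (1+z)^(2+2q).
    w = lambda x: eos_parameter(x, rho_ggde, p_ggde, rho_m)
    dw_dz = (w(z + dz) - w(z - dz)) / (2.0 * dz)
    dQ_dz = 6.0 * H0 ** 2 * (2.0 + 2.0 * q) * (1.0 + z) ** (1.0 + 2.0 * q)
    return dw_dz / dQ_dz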
§.§ ω_GGDE-ω'_GGDE Plane
Here, we utilize the ω_GGDE-ω'_GGDE phase plane,
where ω'_GGDE indicates the evolutionary mode of
ω_GGDE and prime denotes the derivative with respect to
𝒬. This cosmic plane was established by Caldwell and
Linder <cit.> to investigate the quintessence DE paradigm which
can be split into freezing (ω_GGDE <0, ω'_GGDE<0 )
and thawing (ω_GGDE<0, ω'_GGDE>0) regions. In
depicting the prevailing cosmic expansion paradigm, the freezing region
signifies a more accelerated phase than the thawing region. The cosmic
trajectories of ω_GGDE-ω'_GGDE plane for specific
choices of m are shown in Figure 4 which provides the
freezing region of the cosmos. The expression of ω'_GGDE is
given in Appendix A.
§.§ State Finder Analysis
One of the techniques to examine the dynamics of the cosmos using a
cosmological perspective is state finder analysis. It is an
essential approach for understanding numerous DE models. As a
combination of the Hubble parameter and its temporal derivatives,
Sahni et al. <cit.> established two dimensionless parameters
(r,s) given as
r=⃛a/a H^3, s=r-1/3(q-1/2).
The acceleration of cosmic expansion is determined by the parameter
s, while the deviation from pure power-law behavior is precisely
described by the parameter r. This is a geometric diagnostic that
does not favor any particular cosmological paradigm. It is a
model-independent approach that can distinguish
between numerous DE scenarios, i.e., CG (Chaplygin gas), HDE
(Holographic DE), SCDM (standard CDM) and quintessence.
Several DE scenarios for the appropriate choices of r and s
parametric values are given below.
* When r=1, s=0, it indicates the ΛCDM model.
* If r=1, s=1, then it denotes SCDM epoch.
* When r=1, s=2/3, this epoch demonstrates the HDE
model.
* When we have r>1, s<0, we get CG scenario.
* Lastly, r<1, s>0 corresponds to quintessence paradigm.
For our considered setup, the parameters r and s in terms of
required factors are given in Appendix B. The graphical
analysis of r-s phase plane in Figure 5 gives r>1 and
s<0, indicating the CG model.
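For completeness, the definitions of r and s can also be evaluated numerically from a tabulated scale factor (a definition-based sketch only; the closed-form expressions actually used for Figure 5 are those of Appendix B):

import numpy as np

def statefinder(t, a):
    # r = a'''/(a H^3) and s = (r - 1)/(3 (q - 1/2)), computed by finite
    # differences from a tabulated scale factor a(t).
    a1 = np.gradient(a, t)
    a2 = np.gradient(a1, t)
    a3 = np.gradient(a2, t)
    H = a1 / a
    q = -a2 * a / a1 ** 2
    r = a3 / (a * H ** 3)
    s = (r - 1.0) / (3.0 * (q - 0.5))
    return r, s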
§.§ Squared Speed of Sound Parameter
Perturbation theory provides a direct analysis for evaluating the
stability of the DE model through examination of the sign of
(ν_s^2). In this context, we investigate the squared speed
of sound parameter to analyze the stability of the GGDE
f(𝒬,𝒯) model, represented by
ν_s^2=P_GGDE/ρ'_GGDEω'_GGDE
+ω_GGDE.
A positive value signifies a stable configuration, whereas a
negative value indicates unstable behavior for the associated model.
Substituting the necessary expressions on the right-hand side of the
equation above for the reconstructed model, we derive the squared
speed of sound parameter as provided in Appendix C. Figure
6 illustrates that the speed component remains positive for
all assumed values of m, indicating the stability of the
reconstructed GGDE f(𝒬,𝒯) model throughout the cosmic
evolution.
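As an illustrative check, the adiabatic definition ν_s^2=dP_GGDE/dρ_GGDE can also be evaluated numerically from the reconstructed pressure and density (the callables are placeholders; the closed-form expression used for Figure 6 is that of Appendix C):

import numpy as np

def sound_speed_squared(z, rho_ggde, p_ggde, dz=1e-4):
    # nu_s^2 = dP_GGDE/d(rho_GGDE), computed as (dP/dz)/(drho/dz);
    # rho_ggde and p_ggde are user-supplied callables for the reconstructed model.
    z = np.asarray(z, dtype=float)
    dp = (p_ggde(z + dz) - p_ggde(z - dz)) / (2.0 * dz)
    drho = (rho_ggde(z + dz) - rho_ggde(z - dz)) / (2.0 * dz)
    return dp / drho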
§ FINAL REMARKS
In this study, we have explored various features of the GGDE model
within the framework of the recently developed f(𝒬,𝒯)
theory of gravity. To comprehend the influence of 𝒬 and
𝒯 in the GGDE ansatz, we have adopted a specific
f(𝒬,𝒯)=f_1(𝒬)+f_2(𝒯) model. We
have formulated a reconstruction framework involving a power-law
scale factor and a corresponding scenario for the flat FRW universe.
Subsequently, we have analyzed the EoS parameter,
ω_GGDE-ω'_GGDE and state finder planes, and
conducted a squared sound speed analysis for the derived model. We
summarize the main results as follows.
* In the non-interacting case, the reconstructed GGDE f(𝒬,𝒯)
model shows an increasing trend for z, indicating the realistic
nature of the reconstructed model (Figure 1).
* The energy density demonstrates positive behavior, showing an
increase, while the pressure exhibits negative behavior. These
patterns align with the characteristics of DE (Figure 2).
* The EoS parameter characterizes the early and late-time universe
and indicates phantom-like DE. Additionally, we
have noted a trend where this parameter assumes increasingly
negative values below -1 (Figure 3). These observations
align with the current understanding of accelerated cosmic behavior.
* The evolutionary pattern in the ω_GGDE-ω'_GGDE plane
indicates a freezing region for all values of m (Figure
4). This affirms that the non-interacting GGDE
f(𝒬,𝒯) gravity model implies a more accelerated
expanding universe.
* The r-s plane depicts the CG model as both r
and s satisfy the respective model criteria (Figure 5).
* We have found that the squared speed of sound parameter is
positive and hence GGDE f(𝒬,𝒯) gravity model is stable
for all values of m (Figure 6).
Finally, we conclude that our results align with the current
observational data <cit.>, as provided below
ω_GGDE = -1.023^+0.091_-0.096 (Planck
TT+LowP+ext),
ω_GGDE = -1.006^+0.085_-0.091 (Planck
TT+LowP+lensing+ext),
ω_GGDE = -1.019^+0.075_-0.080 (Planck TT, TE,
EE+LowP+ext).
These values have been established using diverse observational
methodologies, ensuring a confidence level of 95%. Moreover, we
have verified that the cosmic diagnostic state finder parameters for
our derived model align with the constraints and limitations on the
kinematics of the universe <cit.>. Sharif and Saba <cit.>
established the correspondence of modified Gauss-Bonnet theory with
the GGDE paradigm and found a phantom phase as well as a stable
reconstructed model in the non-interacting case. Our findings align with
these results. Our results are also consistent with the recent work
in f(𝒬) theory <cit.>.
§ APPENDIX A: CALCULATION OF Ω'_GGDE
The evolutionary parameter ω'_GGDE=dω_GGDE/d𝒬 follows from
differentiating Eq.(<ref>) with respect to 𝒬, using
𝒬=6H_0^2Γ^(2q+2). The resulting closed-form expression is a lengthy
rational function of H_0, Γ=1+z, q, m and the model constants a, b,
c_1, d, α and β; it is evaluated numerically to generate the
ω_GGDE-ω'_GGDE trajectories shown in Figure 4.
§ APPENDIX B: EVALUATION OF R AND S PARAMETERS
Inserting the reconstructed solution and the power-law background into
Eq.(<ref>) yields the state finder parameters. Both r and s take the
form of lengthy rational functions of H_0, Γ=1+z, q, m and the model
constants a, b, c_1, d, α and β; these expressions are evaluated
numerically to produce the r-s trajectories shown in Figure 5.
§ APPENDIX C: DETERMINATION OF Ν_S^2
Substituting the reconstructed ρ_GGDE and P_GGDE of Eqs.(<ref>) and
(<ref>) into Eq.(<ref>) gives the squared speed of sound. The resulting
ν_s^2 is a lengthy rational function of H_0, Γ=1+z, q, m and the
model constants a, b, c_1, d, α and β; it is evaluated numerically to
obtain Figure 6.
Data availability: No new data were generated or analyzed
in support of this research.
|
http://arxiv.org/abs/2409.03087v1 | 20240904212254 | Coupling AI and Citizen Science in Creation of Enhanced Training Dataset for Medical Image Segmentation | [
"Amir Syahmi",
"Xiangrong Lu",
"Yinxuan Li",
"Haoxuan Yao",
"Hanjun Jiang",
"Ishita Acharya",
"Shiyi Wang",
"Yang Nan",
"Xiaodan Xing",
"Guang Yang"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
Coupling AI and Citizen Science in Creation of Enhanced Training Dataset for Medical Image Segmentation
Amir Syahmi*, Xiangrong L. Lu*, Yinxuan Li*, Haoxuan Yao*, Hanjun Jiang,
Ishita Acharya, Shiyi Wang, Yang Nan, Xiaodan Xing, Guang Yang†
Department of Bioengineering and Imperial-X, Imperial College London
London, W12 7SL, United Kingdom
September 9, 2024
==========================================================================================================================================================================================================================================================
§ ABSTRACT
Recent advancements in medical imaging and artificial intelligence (AI) have greatly enhanced diagnostic capabilities, but the development of effective deep learning (DL) models is still constrained by the lack of high-quality annotated datasets.
The traditional manual annotation process by medical experts is time- and resource-intensive, limiting the scalability of these datasets.
In this work, we introduce a robust and versatile framework that combines AI and crowdsourcing to improve both the quality and quantity of medical image datasets across different modalities.
Our approach utilises a user-friendly online platform that enables a diverse group of crowd annotators to label medical images efficiently.
By integrating the MedSAM segmentation AI with this platform, we accelerate the annotation process while maintaining expert-level quality through an algorithm that merges crowd-labelled images.
Additionally, we employ pix2pixGAN, a generative AI model, to expand the training dataset with synthetic images that capture realistic morphological features.
These methods are combined into a cohesive framework designed to produce an enhanced dataset, which can serve as a universal pre-processing pipeline to boost the training of any medical deep learning segmentation model.
Our results demonstrate that this framework significantly improves model performance, especially when training data is limited.
§ INTRODUCTION
*Contributed equally
†Corresponding Author
The code for this proposed framework is publicly available on GitHub: <https://github.com/Am1rSy/SGC-Enhanced-Dataset>.
The rapid advancements in medical imaging technologies, such as CT and MRI, have revolutionised diagnostic capabilities in healthcare by enabling non-invasive visualisation of internal anatomical structures.
Alongside this, the recent progress in artificial intelligence (AI) has led to growing interest and significant improvements in developing medical image analysis algorithms, which subsequently improve automated diagnostics.
These advancements enable healthcare professionals to make more precise diagnoses and develop more effective treatment plans<cit.><cit.><cit.>.
Image segmentation, one of the most commonly used analysis techniques, involves the delineation of anatomical structures and forms the building block of any AI-assisted image labelling algorithm.
However, a major limiting factor to the development of a robust and accurate AI-based medical image analysis model is the lack of high-volume and high-quality annotated training datasets<cit.>.
High-quality labelled datasets are crucial as they directly influence the accuracy and reliability of AI models, ensuring these technologies can make precise diagnostic predictions<cit.>.
Generating these labelled datasets is both time-consuming and resource-intensive, making scalability a challenge<cit.>.
Furthermore, hiring medical experts to manually label a large quantity of medical images is costly, and the process is often tedious due to its repetitive nature<cit.>.
Thus, crowdsourcing has been seen as an attractive method to help improve the annotation rate for medical images.
It operates by allowing untrained individuals to help annotate new, unlabelled datasets<cit.>.
Researchers have explored the potential of crowdsourcing to cost-effectively expand the annotated datasets for medical images<cit.>.
Studies have shown that with clear instructions and examples, non-experts can achieve performance that matches that of trained experts, particularly for certain imaging modalities<cit.>.
However, the complexity of the medical images still needs to be investigated to understand the limitations of crowd-sourcing fully.
The challenge of acquiring a substantial volume of medical images for training AI solutions is another significant bottleneck in the field.
This is due to the high cost and logistical complexity involved in producing such datasets, which require specialised equipment and trained personnel.
Additionally, privacy concerns limit data acquisition, because handling sensitive personal information usually requires extra data-masking procedures.
The diversity of medical imaging modalities (e.g. CT, MRI) further complicates the collection process, as acquiring comprehensive data across all these modalities is a daunting task<cit.>.
Although the public release of datasets by various research groups in recent years (MMWHS<cit.>, FLARE22<cit.>, Cancer Imaging Archive<cit.>) has somewhat mitigated the issue, given the data-intensive nature of AI model training, particularly with Deep Learning (DL) approaches, the demand for extensive datasets remains unabated<cit.>.
Consequently, exploring the potential of generative AI to augment real datasets by creating synthetic but close-to-realistic images presents a promising area of research<cit.>.
This approach can help overcome the inherent limitations of insufficient variety and volume of real datasets, by generating diverse and extensive training data.
Such synthetic data can improve the training of AI models, enabling them to help achieve higher accuracy and generalisability in medical image analysis<cit.>.
In this work, we present a versatile framework that enhances medical image datasets in both quality and quantity by coupling crowdsourcing and generative AI.
This method ultimately increases the segmentation accuracy of DL models when the available training data are limited.
§ RELATED WORKS
Crowdsourcing in image analysis, seen in Google's reCAPTCHA<cit.> and Duolingo<cit.>, also applies to biomedical imaging.
Studies show that crowds can accurately label complex medical images, demonstrating the potential for medical image analysis.
Platforms like Amazon Mechanical Turk (MTurk)<cit.> and Appen Figure Eight<cit.> streamline crowdsourcing by providing a diverse pool of participants and reducing the need for custom setups.
Alternatively, custom platforms like Label Studio<cit.>, though more time-intensive to develop, offer precise control over the crowdsourcing tasks, improving the engagement and specificity of the work.
In summary, crowdsourcing in image analysis extends its utility to biomedical fields, demonstrating significant potential in medical diagnostics.
By utilising diverse participant pools and flexible setup options, these methods elevate data accuracy, streamline the labelling process, and prove essential for advancing accurate DL approaches to medical image segmentation.
Moreover, there has been a lot of research in using generative AI to improve data augmentations<cit.><cit.>.
Specifically, Generative Adversarial Network (GAN)-based models<cit.><cit.> are widely used for synthesising different medical images, successfully expanding the size of biomedical imaging datasets<cit.>.
Deep Convolutional GAN (DCGAN), an unconditional GAN-based model, has been used to generate high-quality liver lesion region of interests (ROIs)<cit.>.
These synthetic images, combined with real images, are used to train Convolutional Neural Networks (CNNs) and greatly improve the performance of CNN classification models, thereby enhancing diagnosis.
Additionally, synthetic MRI images generated by a conditional GAN model have successfully boosted the training of segmentation networks and improved the performance of brain tumour MR image segmentation<cit.>.
§ OUR APPROACH
§.§ Main Contributions
By coupling AI and citizen science, we aim to improve the data gathering rate and create an extensive and labelled dataset for future medical image DL research.
This enhancement can be achieved by: deploying a flexible segmentation model to facilitate the labelling process, thereby reducing both time and tedium for crowd annotators; enlisting the aid of the public via crowdsourcing, with adequate guidance, to accelerate the annotation rate; implementing generative AI to expand the medical image datasets with synthetic images.
By achieving the above aims, we highlight 4 key contributions of this work:
* We proposed a versatile framework to efficiently and effectively resolve the scarcity and costliness of medical image datasets for DL model training.
* We implemented a state-of-the-art segmentation AI MedSAM to simplify crowdsourced segmentation which attains labelling at an expert quality.
* We incorporated a novel generative AI pix2pixGAN to expand the quantity of existing dataset in different modalities.
* We verified that, using the framework we proposed, the accuracy and performance of DL models for medical images would be significantly improved with small training datasets.
In short, we established a versatile framework that can expand a limited medical image dataset in quantity while achieving label quality similar to that of domain experts.
Such a dataset will be called the enhanced dataset (quality + quantity) for the rest of this work.
An overview of our framework can be seen in Figure <ref>.
§.§ Crowd Labelling Platform Deployment
Image labelling tasks for crowds, especially with medical images, are often complex, which reduces both accuracy and willingness to participate.
We therefore first implemented an online platform to ease communication and the labelling workflow between researchers and a wide audience working on various types of devices.
Label Studio was chosen as the main data labelling platform as it is open source and contains a simple, friendly user interface (UI) (See Figure <ref>)<cit.>.
An easily navigable UI is key in this study as the labelling process needs to be straightforward to account for various computer literacy in the public.
We designed the platform to provide a small set of tools, which includes a labelling brush, an eraser, zooming, etc.
Furthermore, we provided all users with an instructional PDF file containing ground truth label exemplars and a short video to guide them on using the platform (see Supplementary Section 3 and 4).
Label Studio was also chosen because it is easily deployed on online servers<cit.>, a feature well supported by its extensive developer community.
Our current implementation of Label Studio is hosted on the Hugging Face Spaces platform<cit.>.
This hosting platform was selected for its capability to support application deployment using Docker containers.
§.§ Segmentation AI Integration
Segmentation AI assistance has been proven to be an effective approach to further resolve the complexity of labelling<cit.>.
However, the use of segmentation AI aiding in the labelling process needs to be versatile for a wide range of tasks regarding medical images, and intuitive for the users to operate.
Label Studio<cit.> also supports AI integration during the labelling process.
It operates by using another server as a Machine Learning (ML) backend, where the AI model is hosted.
We chose MedSAM<cit.> for our AI integration (see Section <ref>).
MedSAM is a zero-shot segmentation model based on Meta's Segment Anything model (SAM)<cit.> that has been trained on 1 million medical images.
For integration with the labelling platform, we only allow the bounding box functionality to appear when a toggle switch is activated.
A user would select the rectangular label from the available selection and draw a box on the image (see Supplementary Section 3).
Then, Label Studio will send the necessary information (bounding box coordinates and current image) to MedSAM.
MedSAM will consider the spatial relationship and contextual information inherent in medical images when processing the information for segmentation<cit.>.
Finally, it will send its predicted labels of the specified region back to Label Studio.
This would allow for faster and more accurate labelling created by the users.
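To illustrate the kind of call the ML backend makes, the following Python sketch shows bounding-box-prompted inference through a SAM-style predictor loaded with a MedSAM checkpoint. The checkpoint path, registry key, and helper name are placeholders and assumptions, not the exact integration code used on our platform.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor
# Assumed: a SAM ViT-B architecture loaded with a MedSAM checkpoint (path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")
predictor = SamPredictor(sam)
def segment_from_bbox(image_rgb, bbox_xyxy):
    # image_rgb : HxWx3 uint8 array (the slice shown to the annotator)
    # bbox_xyxy : [x_min, y_min, x_max, y_max] drawn by the user, in pixel coordinates
    predictor.set_image(image_rgb)            # embed the image once
    masks, scores, _ = predictor.predict(
        box=np.array(bbox_xyxy),              # single box prompt
        multimask_output=False,               # return the single best mask
    )
    return masks[0].astype(np.uint8)          # HxW binary mask sent back to the platform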
§.§ Generative AI Integration
As crowd-labelling methods are limited by the availability of raw medical images, generative AI, particularly Generative Adversarial Networks (GANs)<cit.><cit.>, is used for data augmentation purposes.
Using the GAN model of our choice, synthetic medical images are generated using labels provided by the crowd and from the input dataset.
To achieve this, the GAN model can be extended to a conditional GAN (cGAN) model<cit.>.
A condition will be given to both the generator and the discriminator during the training process to partially control the generated images.
We used user-generated labels from crowdsourcing as input into the cGAN.
As the labels are in image format, a variant of cGAN named pix2pixGAN<cit.> was adapted for our project.
This is due to pix2pixGAN being specially designed to solve image-to-image translation problems, where the input consists of image-label pairs.
These synthetic images generated by pix2pixGAN are then integrated with the annotated image dataset to meet the needs of training future medical Image models.
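As a concrete illustration of the pix2pixGAN objective, the following Python (PyTorch) sketch shows a single generator update in which the label map conditions both the generator and the discriminator. G and D stand in for the usual U-Net generator and PatchGAN discriminator, and the λ=100 weighting follows the original pix2pix formulation rather than any setting specific to this work.
import torch
import torch.nn as nn
adv_loss = nn.BCEWithLogitsLoss()   # adversarial term (discriminator outputs raw logits)
l1_loss = nn.L1Loss()               # reconstruction term
lambda_l1 = 100.0                   # L1 weighting from the original pix2pix formulation
def generator_step(G, D, label_map, real_image, opt_G):
    # One pix2pix generator update: fool D on (label, fake) pairs while
    # keeping the synthetic image close to the paired real image.
    fake_image = G(label_map)                                  # label map -> synthetic image
    pred_fake = D(torch.cat([label_map, fake_image], dim=1))   # conditional discriminator
    loss_G = adv_loss(pred_fake, torch.ones_like(pred_fake)) + lambda_l1 * l1_loss(fake_image, real_image)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item()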
§.§ Test Trials
The framework was tested on 3 different datasets which are MMWHS-MRI, MMWHS-CT and FLARE22<cit.><cit.>.
We recruited 12 annotators to investigate the effectiveness of the general public in labelling medical images, and their results were further used to verify the potential of improving DL model training with crowdsourcing.
We assigned each annotator 6 tasks, containing 5 images each. The objective and criteria of each task are listed as follows:
* Task 1: Labelling of the specified heart regions (MMWHS-CT) without any AI assistance or ground truth exemplars.
* Task 2: Labelling of the specified heart regions (MMWHS-CT) with AI assistance but no ground truth exemplars are provided.
* Task 3: Labelling of the specified heart regions (MMWHS-CT) with AI assistance and ground truth exemplars.
* Task 4: Labelling of the specified heart regions of the artificial GAN-generated dataset with AI assistance.
* Task 5: Labelling of the specified abdominal organs (FLARE22) with AI assistance and ground truth exemplars.
* Task 6: Labelling of the specified heart regions (MMWHS-MRI) with AI assistance and ground truth exemplars.
Tasks 1, 2, and 3 serve to evaluate the necessity of AI assistance and instructions with ground truth exemplars in crowdsourcing platforms.
Task 4 serves to assess the crowdsourced labelling performance on the GAN-generated dataset.
Tasks 3, 5, and 6 serve to understand the versatility of the platform in various datasets.
(Detailed results see Supplementary Section 6)
§.§ Merged Crowd Labels
To combine the ensemble of crowd labels of a single image into a merged label, a pixel-wise majority voting approach is taken<cit.>.
In this approach, the labels of each pixel are summed to create a frequency map.
Subsequently, a threshold is applied to this map to generate the merged label.
This threshold represents the minimum number of crowd annotators required to agree that a specific pixel belongs to the object of interest.
In this work, a minimum threshold of 4 was chosen based on the consideration of the number of unique crowd annotators.
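A minimal NumPy sketch of this merging step is shown below; it assumes the individual crowd annotations are already rasterised as binary masks of equal size.
import numpy as np
def merge_crowd_labels(masks, threshold=4):
    # masks: N x H x W stack of binary crowd annotations for one image.
    # A pixel is kept if at least `threshold` annotators marked it as the object,
    # matching the minimum agreement of 4 used in this work.
    masks = np.asarray(masks, dtype=np.uint8)
    frequency_map = masks.sum(axis=0)          # per-pixel vote counts
    return (frequency_map >= threshold).astype(np.uint8)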
§.§ Evaluation Metrics - Comparison with Ground Truth
To evaluate the quality of the segmentations results, the Sørensen-Dice index (DSC)<cit.> and the Jaccard Index (IoU)<cit.> are commonly used.
These metrics, defined in Equation <ref> and <ref> respectively, are selected due to their widespread usage and ease of comparison with other publications and methods.
D(X,Y) = 2|X ∩ Y| / (|X| + |Y|)
J(X,Y) = |X ∩ Y| / |X ∪ Y|
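For reference, both metrics can be computed directly from binary masks, for example with the following NumPy sketch (which assumes neither mask is empty):
import numpy as np
def dice_score(pred, truth):
    # Sørensen–Dice index between two binary masks (D above).
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
def iou_score(pred, truth):
    # Jaccard index (IoU) between two binary masks (J above).
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union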
§ EXPERIMENT
The experiment section is comprised of 3 sub-sections.
Section <ref> contains our initial findings on the framework and evaluates the performance of crowd labellers in achieving expert-level labelling.
Section <ref> evaluates the performance of an enlarged dataset using synthetic images generated by pix2pixGAN.
Lastly, Section <ref> evaluates the effectiveness of training DL segmentation models by combining crowd-labelled images and synthetic images into one enhanced dataset, which is the overall outcome of the framework.
§.§ Quality – Achieving Image Crowd Labelling at a Professional Level
§.§.§ Segmentation AI Comparison for Crowd Use
The primary objective of incorporating an AI assistance labelling tool on our platform is to improve the efficiency and ease of the segmentation task for crowd annotators.
As a preliminary study, we investigated the most suitable segmentation AI model that is capable of assisting users in labelling tasks[Abbreviation of the ROIs: LA - Left Atrium; RA - Right Atrium; LV - Left Ventricle; RV - Right Ventricle].
A comparative analysis is conducted on prediction masks generated by 4 segmentation models: UNet_1, UNet_19, SAM, and MedSAM.
Both UNet_1 and UNet_19 are based on a simple UNet structure; UNet_1 is trained on 10 training slices from 1 sub-dataset of MMWHS-CT, whereas UNet_19 is trained on 76 training slices from 19 sub-datasets of MMWHS-CT.
Figure <ref> shows example prediction masks generated by UNet_1 and UNet_19 models on MMWHS-CT slice 110.
Notably, both predicted masks are characterized by un-smooth and irregular contours with small, scattered regions due to overlapping labels.
These graphical observations are confirmed by the metrics in Table <ref>, where UNet_1 outperforms UNet_19 with a higher overall metric score.
Both models achieve relatively high metric scores above 0.65 for ventricle labels and relatively low scores below 0.53 for atrium labels.
Results from Figure <ref> and Table <ref> suggest that the UNet models are unsuitable for platform AI assistance due to their poor versatility across different datasets.
Each labelling task in the platform would require a new UNet model specifically trained for the corresponding dataset, and ideally even for the corresponding sub-dataset, despite the same modality and similar morphological structure.
This is validated by the superior performance of UNet_1 over UNet_19, which was achieved even with less training data and reduced training time.
SAM and MedSAM are tested as large-scale models without specific training on any MMWHS datasets.
Figure <ref> also illustrates the prediction masks of the same testing slice generated by SAM and MedSAM models, characterized by smooth contours and significantly fewer overlapping regions.
These observations are corroborated by metric scores detailed in Table <ref>.
In contrast to the UNet models, which demonstrated higher performance of ventricle labels over atrium labels, both SAM and MedSAM exhibit consistent performance across all labels, which demonstrate their high versatility.
Specifically, SAM achieves an average DSC of 0.7344 (IoU of 0.6183), while MedSAM reaches an average DSC of 0.8123 (IoU of 0.7197).
MedSAM outperforms SAM and the UNet models in both the graphical comparison and the metric evaluations, and is hence the segmentation AI chosen for crowd labelling.
§.§.§ Necessity of AI Assistance and Instruction
To investigate if AI assistance can improve crowd segmentation results, the comparison and analysis between the results from Task 1 and Task 2 (Hand-drawn vs. AI assistance) are as follows, using the DSC and IoU metrics.
From Figure <ref>, it is demonstrated that the distribution of metrics scores does not vary significantly between Task 1 and Task 2.
A noticeable number of data points clustered around 0 is observed.
Quantitatively in Table <ref>, it is statistically evident that, for all compartments of the heart, crowd segmentation accuracy from Task 1 and Task 2 are not significantly different (p>0.05).
This indicates that with only segmentation AI assistance provided, the accuracy of the crowd result would not be improved.
It is hypothesised that most of the volunteers have no prior knowledge of heart anatomy, leading to almost random annotations that do not fit with the ground truth.
Furthermore, some users also reported being confused with the orientation of the images.
It should be noted that once participants became accustomed to the AI tool, the majority of users reported an easier labelling process and a reduction in labelling time, achieved by simply making quick refinements to the AI-generated regions.
This result highlights the success of the MedSAM-facilitated tool in making the segmentation process easier and less tedious.
To seek an actual improvement in accuracy, we hypothesised that instructions, in addition to AI assistance, would provide users with fundamental knowledge that would in turn increase labelling accuracy.
We therefore investigated whether the crowd segmentation results improve when crowd annotators receive AI assistance along with detailed instructions that include ground truth label exemplars.
The results from Task 2 and Task 3 (AI assistance vs. AI assistance with instructions) are compared below using the DSC and IoU metrics.
It is evident in Table <ref> that the crowd segmentation results from Task 3 are statistically more accurate than those from Task 2 with 95% CI (p<0.05), demonstrating that the instructions with ground truth exemplars are crucial to the accuracy of crowd labelling outcomes.
To account for the variance between annotators, we merged the crowd labels using the pixel-wise majority voting approach with a threshold of 4 (as discussed in Section <ref>).
Table <ref> shows the metrics after merging.
Notably, LA has a relatively low DSC of 0.3839 (IoU of 0.3627), indicating the difficulty in labelling this ROI.
Hence, it is demonstrated that crowd annotators tend to perform better with simple anatomical structures that have less variance between slices.
This observation suggests that crowdsourcing should be limited to datasets with relatively simple structures.
Nonetheless, this indicates the importance of providing clear guidance and ground truth exemplars to the crowd annotators by the researchers when setting up segmentation tasks.
§.§.§ Crowd Segmentation in Different Modalities
It has been demonstrated that AI assistance in combination with instructions can dramatically improve the accuracy of labelling tasks.
To further investigate the versatility of our platform, we conducted tests involving AI-assisted labelling on MRI images from the MMWHS dataset, which differ significantly from the CT images from the same dataset, and CT images of the abdomen from the FLARE22 dataset.
It is notable that in Figure <ref>, visually the merged crowd segmentation is close to the ground truth, with the edges of the organ identified with high definition.
Quantitatively, Table <ref> shows that the labelling accuracy is very high in liver and kidney segmentation.
Specifically, the DSC is approximately 0.96 for both the liver and aorta (IoU of about 0.93) and about 0.75 for the kidney (IoU of about 0.65).
According to Table <ref>, despite the complexity of MRI images, the DSC and IoU metrics are acceptable, yielding a DSC of about 0.7 (IoU of about 0.6) for all.
These results illustrate that our platform is versatile to ensure the accuracy of labelling tasks across different modalities of images.
This endorses the customizability of the crowdsourcing platform by ensuring that different datasets can all be segmented efficiently by the merged crowd labels.
§.§ Quantity – Enlarging Dataset with Synthetic Data
§.§.§ Synthetic Images in Different Modalities
To evaluate the versatility of the pix2pixGAN model, we trained it on datasets comprising diverse segmentation pairs spanning different medical imaging modalities and organs.
The results demonstrated the model's ability to generate synthetic images across different modalities, including MMWHS-CT, MMWHS-MRI, and FLARE22.
Furthermore, the model's versatility was evidenced by its capacity to generate clearly identifiable organs and compartment tissues, such as the heart in MMWHS, and the liver and kidneys in FLARE22, as we can see in Figure <ref>.
The generated synthetic images exhibited distinct edges and good contrast, particularly when multiple organs are present within a single image, with characterisable and identifiable morphology.
These findings demonstrated pix2pixGAN's high versatility in generating synthetic images across various modalities and anatomical structures in medical imaging.
However, it is noted that the organs in the synthetic images are often found at the wrong vertebral levels, which indicates a lack of realism.
A potential improvement is suggested where landmarks apart from ROIs could be included during synthesis to refine anatomical accuracy.
§.§.§ Efficiency of Enlarged Dataset
To further evaluate the feasibility of using an enlarged dataset to improve model training in scenarios with limited data, we conducted a segmentation task comparison between the original and enlarged datasets.
For each of the MMWHS-CT, MMWHS-MRI, and FLARE22 datasets, 20 slices were randomly selected, with 10 used for training and 10 for testing.
Additionally, 10 synthetic images were generated from the ground truth labels of the training slices using pix2pixGAN.
For the control dataset, a UNet model was trained on 10 real images with their corresponding ground truth labels.
For the enlarged dataset, a UNet model was trained on 20 images, comprising 10 real and 10 synthetic images, along with their ground truth labels.
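A minimal Python (PyTorch) sketch of how such an enlarged training set can be assembled is given below; real_pairs and synthetic_pairs are assumed Dataset objects yielding (image, ground-truth mask) tuples with identical shapes and preprocessing, and the batch size is illustrative.
from torch.utils.data import ConcatDataset, DataLoader
def build_enlarged_loader(real_pairs, synthetic_pairs, batch_size=4):
    # Concatenate real and pix2pixGAN-generated pairs into one training set.
    enlarged_train_set = ConcatDataset([real_pairs, synthetic_pairs])
    return DataLoader(enlarged_train_set, batch_size=batch_size, shuffle=True)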
Results from Table <ref>, <ref>, and <ref> suggest that all modalities show notable improvements in training scores with the enlarged dataset, apart from a few IoU values that fluctuate to slightly lower scores.
Notably, Table <ref> shows an average of 15.9% increase for DSC and 11.1% increase for IoU.
The aorta, the hardest ROI to segment in FLARE22, improved from a DSC of 0.1490 (IoU of 0.6580) to a DSC of 0.3392 (IoU of 0.7802).
Overall, this result validates the effectiveness of incorporating synthetic data to improve model training outcomes in data-limited scenarios.
§.§ Enhanced Dataset – Combining Generative AI and Crowdsourcing
Finally, we combined the high-quality merged crowd labels with the GAN-enlarged dataset into an enhanced dataset to further evaluate the potential to improve model training in limited-data scenarios.
To ensure the training dataset's quality, only 5 FLARE22 Liver and Aorta merged crowd labels are used for the enhanced dataset due to their high similarity to the ground truth, with DSC above 0.95 and IoU above 0.92 as shown in Table <ref>.
Therefore, as a preliminary evaluation, we trained three segmentation UNet models for the Aorta and Liver using three versions of the FLARE22 dataset: a control dataset, an enlarged dataset, and an enhanced dataset.
The control dataset included 10 real images; the enlarged dataset consisted of 10 real images and 10 synthetic images; and the enhanced dataset comprised 10 real images, 10 synthetic images, and 5 merged crowd labels.
The metrics present in Table <ref> indicate a significant improvement (p<0.001 for both liver and aorta DSC using unpaired t-test) in segmentation accuracy from the control dataset to the enlarged dataset, with further enhancement (p<0.001 for both liver and aorta DSC using unpaired t-test) when utilising the enhanced dataset compared to the enlarged dataset.
Overall, the enhanced dataset shows a significant improvement when compared with the control dataset (p<0.0001 for both liver and aorta DSC using unpaired t-test).
Notably, the enhanced dataset achieves a 12.3% increase in DSC for liver segmentation compared to the control dataset.
Furthermore, the DSC for the aorta increases substantially, from 0.1467 to 0.5045, and the IoU improves from 0.4932 to 0.7291, highlighting enhanced feature extraction for challenging ROIs.
These findings validate the potential of augmenting a limited training dataset with GAN-generated synthetic images and high-quality merged crowd labels, supporting the feasibility of our proposed framework.
§ CONCLUSION
To conclude, it is evident that it is possible to improve the data-gathering rate to create a fully labelled dataset by using crowdsourcing.
We demonstrated that using a flexible zero-shot segmentation AI model such as MedSAM can improve the user experience and efficiency of labelling.
Including synthetic images generated by GAN models like pix2pixGAN to enlarge the dataset has been proven to improve the accuracy of the segmentation model.
A prototype platform is implemented to demonstrate the workflow and can act as a provision for a more robust platform that can effectively collect labelling data from the crowd.
Crowdsourcing can be included as a data-gathering pipeline for future researchers in training their AI models and algorithms.
Building upon the foundation of this research, we demonstrated a framework exploiting the potential of coupling AI and crowdsourcing to resolve the scarcity in the availability of medical images for model training.
Our framework is general and versatile, and can be extended by others to contribute and incorporate for specific modalities.
§ ACKNOWLEDGEMENTS
This study was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC/NSFC/211235),
the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience,
UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1).
|
http://arxiv.org/abs/2409.03197v1 | 20240905023939 | Active Galactic Nuclei in the Green Valley at z$\sim$0.7 | [
"Charity Woodrum",
"Christina C. Williams",
"Marcia Rieke",
"Kevin N. Hainline",
"Raphael E. Hviding",
"Zhiyuan Ji",
"Robert Kennicutt",
"Christopher N. A. Willmer"
] | astro-ph.GA | [
"astro-ph.GA"
] |
0000-0001-5962-7260]Charity Woodrum
0000-0003-2919-7495]Christina C. Williams
NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA
0000-0002-7893-6170]Marcia Rieke
0000-0003-4565-8239]Kevin N. Hainline
0000-0002-4684-9005]Raphael E. Hviding
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
0000-0001-7673-2257]Zhiyuan Ji
0000-0001-5448-1821]Robert Kennicutt
Department of Physics and Astronomy and George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, 4242 TAMU, College Station, TX 77843-4242, USA
0000-0001-9262-9997]Christopher N. A. Willmer
§ ABSTRACT
We present NIR spectroscopy using MMT/MMIRS for a sample of twenty-nine massive galaxies (log M_* / M_⊙≳10) at z∼0.7 with optical spectroscopy from the LEGA-C survey.
Having both optical and NIR spectroscopy at this redshift allows us to measure the full suite of rest-optical strong emission lines, enabling the study of ionization sources and the rest-optical selection of active galactic nuclei (AGN), as well as the measurement of dust-corrected Hα-based SFRs.
We find that eleven out of twenty-nine galaxies host AGN.
We infer the nonparametric star formation histories with the SED fitting code Prospector and classify galaxies as star-forming, green valley, or quiescent based on their most recent sSFRs.
We explore the connection between AGN activity and suppressed star formation and find that 89±15% of galaxies in the green valley or below host AGN, while only 15%±8% of galaxies above the green valley host AGN.
We construct the star-forming main sequence (SFMS) and find that the AGN host galaxies are 0.37 dex below the SFMS while galaxies without detectable AGN are consistent with being on the SFMS. However, when compared to a bootstrapped mass-matched sample, the SFRs of our sample of AGN host galaxies are consistent with the full LEGA-C sample.
Based on this mass-matched analysis, we cannot rule out that this suppression of star formation is driven by other processes associated with the higher mass of the AGN sample.
We therefore cannot link the presence of AGN activity to the quenching of star formation.
§ INTRODUCTION
One of the most outstanding open questions in galaxy evolution is identifying the physical processes that drive the quenching of star formation in massive galaxies. A critical finding is the existence of the galaxy bimodality: the majority of galaxies are either blue, lower mass, and star-forming, or are red, massive galaxies that no longer form stars <cit.>. The star-forming population follows a well-defined correlation between the star formation rate (SFR) and stellar mass (M_*), known as the star forming main sequence <cit.>.
Between the blue cloud and the red sequence lies a transitional region, often called the green valley <cit.>. Due to its sparse population, it has been suggested that galaxies rapidly <cit.> transition through this region, making it a short-lived phase in galactic evolution.
Studying galaxies in the green valley, and any physical processes that preferentially occur there, can provide critical constraints on galaxy quenching <cit.>.
One hypothesized process for quenching involves supermassive black holes (SMBHs), which are now understood to co-evolve and grow in mass alongside the mass growth of their host galaxy. Active galactic nucleus (AGN) activity can induce feedback such as winds, jets, and radiation <cit.>.
AGN feedback is thought to possibly quench galaxies through many different mechanisms, such as heating the gas or removing gas and dust from the host galaxy <cit.>. Also, low-power AGN may inject turbulence, which could potentially stabilize the gas disk from fragmentation and prevent star formation <cit.>. However, simulations show that these mechanisms can also occur as a result of many other processes in the evolution of a galaxy unrelated to AGN activity, such as virial shock heating <cit.>, stellar feedback <cit.>, morphological quenching <cit.>, and cosmological starvation <cit.>. Many empirical studies have attempted to place constraints on these processes <cit.> and have also found evidence for environmental quenching <cit.>.
Modern galaxy formation simulations require incorporating feedback into the galaxy formation process to match observational properties of galaxy populations, such as the mass function and colors of quiescent galaxies. At the most massive end, AGN are thought to be the source of this feedback and without it, both cosmological simulations and semi-analytic models produce too many massive blue galaxies <cit.>. However, the feedback implementations used in these simulations and models vary substantially between each other yet produce relatively similar galaxy populations <cit.>. Even though there is strong theoretical evidence for AGN quenching, it has been challenging to causally link AGN to quenching empirically. Therefore, it is vital to place observational constraints on the impact of AGN on their host galaxies.
AGN can be identified across the entire electromagnetic spectrum. For example, the narrow-line regions surrounding AGN are traced by nebular emission lines at rest-frame optical wavelengths. Some of the most widely used AGN selection techniques are diagnostics called the Baldwin, Phillips, and Terlevich diagrams <cit.>, which are used to determine the dominant source of ionization in galaxies. This method is one of the most sensitive AGN selection techniques, as it is able to probe to lower accretion rates than other techniques that use X-ray emission or IR colors <cit.>. However, BPT-derived optical selection traditionally requires spectroscopic coverage of [O3] and Hβ, as well as the Hα, [N2], or [S2] emission lines, which have longer wavelengths and are redshifted out of the visible spectrum at z≳0.5.
Consequently, large statistical studies using BPT diagrams have been limited to low redshift, such as with the Sloan Digital Sky Survey (SDSS). To look for observational evidence of AGN quenching, we need to study galaxies at higher redshifts while they are in the process of quenching. A solution to this problem is follow-up observations with NIR spectrographs. Sky emission is very bright in this part of the electromagnetic spectrum, making NIR observations challenging from ground-based facilities. Therefore, completing the full suite of diagnostics at higher redshifts currently requires significant investments from both optical and NIR spectrographs, such as with the MOSFIRE DEEP Evolution Survey <cit.> and the Keck Baryonic Structure Survey with MOSFIRE <cit.>.
By studying populations of galaxies at different epochs, it has been shown that massive galaxies form rapidly at z>2 <cit.> and most are fully assembled and quenched by z≈1 <cit.>. We can also explore the individual evolutionary pathways of galaxies by utilizing the archaeological record of their stellar populations <cit.>. However, as the stellar populations age, these star formation history (SFH) signatures fade. It is therefore necessary to study the SFHs of galaxies closer to the peak epoch of quenching at z≈1 to provide constraints on their quenching mechanisms.
Traditionally, parametric models, such as the commonly used delayed-τ model, have been used to infer the detailed SFHs of galaxies using stellar population modeling. Quenching events may cause sharp transitions in SFR(t), which cannot be captured by these models. A solution to this problem is to infer nonparametric SFHs, which use a highly flexible functional form. Nonparametric models have been shown to match the complexity of galaxy SFHs, such as sharp bursts of star formation or quenching events <cit.>. To infer accurate SFHs of galaxies, both high-quality spectra and photometry are needed to break the SFH-dust-metallicity degeneracy <cit.>. Studies with such high quality data have often been limited to the local universe, e.g. SDSS. The Large Early Galaxy Astrophysics Census (LEGA-C) Survey <cit.> has produced similar quality data for galaxies at 0.6 < z < 1 with high-signal-to-noise spectroscopy, enabling the estimation of the properties of galaxies to be characterized in exquisite detail. However, due to their higher redshifts, these galaxy spectra currently lack the full suite of diagnostics available in SDSS, i.e. all of the strong emission lines in the rest-optical (λ_rest≈3000-7000Å).
To complete this full suite of diagnostics, we follow up a subsample of LEGA-C galaxies to obtain deep NIR spectroscopy using the MMT and Magellan infrared spectrograph <cit.> at the 6.5 m MMT Observatory, targeting three strong rest-frame optical emission lines, Hα, [N2], and [S2]. By combining these line fluxes with [O3] and Hβ from LEGA-C, we are thus able to construct the [N2] BPT diagram. We infer the individual, nonparametric SFHs for twenty-nine galaxies at z∼0.7. We compare the SFHs of galaxies that host AGN to galaxies without detectable AGN to determine if there is a link between the presence of an AGN and the suppression of star formation. We thus explore the role AGN have on the quenching of massive galaxies at z∼0.7, soon after the peak epoch of quenching <cit.>.
In Section <ref>, we present our data and data reduction methods. In Section <ref>, we explain our data analysis methods and models. In Section <ref>, we explain our AGN selection methods. In Section <ref>, we discuss the inferred nonparametric SFHs and compare between AGN host galaxies and non-AGN galaxies. In Section <ref> we compare our results to other studies in the literature. We assume a flat Λ cold dark matter (ΛCDM) cosmology with WMAP-9 parameters, Ω_m=0.287, H_0=69.3 km s^-1 Mpc^-1 <cit.>.
§ DATA AND OBSERVATIONS
§.§ LEGA-C and UltraVISTA
LEGA-C <cit.> is an ESO public spectroscopic survey conducted over 128 nights with VIMOS on the Very Large Telescope in the wide-area COSMOS field <cit.>. The survey includes ∼ 3500 K_s-band selected galaxies at redshifts 0.6<z<1.0 with stellar masses M_* > 10^10 M_⊙. The 20 hour long integrations produce continuum spectra with S/N ∼ 20 Å^-1 and high-resolution (ℛ≈ 3500; ) at the observed wavelength range 6300 Å≤λ≤ 8800 Å. We use the spectra from the third data release <cit.>.
In addition, we utilize the exceptional ancillary data for the LEGA-C galaxies from the UltraVISTA catalog <cit.>, which is a collection of photometric data across 30 passbands from 0.2 to 24μ m. This catalog includes UV imaging from the GALEX satellite <cit.>, optical imaging from the Canada–France–Hawaii Telescope (CFHT) and Subaru telescope <cit.>, near-infrared data from the VISTA telescope <cit.>, and mid-infrared data from Spitzer <cit.>. The IRAC and MIPS deblended photometry from Spitzer was measured using a source-fitting code that has been thoroughly tested for many K_s selected catalogs <cit.>. For more details, see Section 3.5 of <cit.>.
Parameters
Parameter Description Prior
log M/M_⊙ Total mass formed Uniform: min=9.5, max=12.5
z Redshift Uniform: min=z_spec-0.005, max=z_spec+0.005
log Z/Z_⊙ Stellar metallicity Uniform: min=-1.0, max=0.19
n Power-law modifier to shape of the <cit.> attenuation
curve of the diffuse dust Uniform: min=-1.0, max=0.4
τ_dust,2 Diffuse dust optical depth Clipped normal: min=0, max=4, μ=0.3, σ=1
τ_dust,1 Birth-cloud dust optical depth Clipped normal in (τ_dust,1/τ_dust,2): min=0, max=2, μ=1, σ=0.3
γ_e Mass fraction of dust in high radiation intensity Log-uniform: min=10^-4, max=0.1
U_min Minimum starlight intensity to which the dust mass is exposed Uniform: min=0.1, max=15
q_PAH Percent mass fraction of PAHs in dust Uniform: min=0, max=7.0
f_AGN AGN luminosity as a fraction of the galaxy bolometric luminosity Log-uniform: min=10^-5, max=3
τ_AGN Optical depth of AGN torus dust Log-uniform: min=5, max=150
log Z_gas/Z_⊙ Gas-phase metallicity Uniform: min=-2.0, max=0.5
log U_gas Gas Ionization Parameter Uniform: min=-4.0, max=-1.0
σ Velocity Dispersion Uniform: min=σ_LEGA-C-50, max=σ_LEGA-C+50
SFR Ratios Ratio of the SFRs in adjacent bins; N_SFH-1 bins total with
N_SFH=10 Student’s t-distribution: σ = 0.3, ν = 2
f_out Spectra outlier fraction Uniform: min=10^-5, max=0.1
j_spec Spectral noise multiplier Uniform: min=0, max=3.0
z_spec and σ are the spectroscopic redshift and velocity dispersion from LEGA-C DR3
§.§ Sample Selection and Follow-up at MMT
In this work, we select a subsample of galaxies drawn from the LEGA-C survey. To complete the full suite of rest-frame diagnostics, we obtained follow-up NIR observations using the MMIRS spectrograph <cit.> at MMT Observatory, targeting three strong rest-frame optical emission lines, including Hα, [N2], and [S2]. For our selection, we prioritized LEGA-C galaxies for which the measured line fluxes for Hβ and [O3] were well-detected with S/N>3 after continuum subtraction and were not likely to be contaminated by sky lines. We chose mask pointings that would maximize the number of these galaxies for each mask. When remaining mask space was available, we assigned slits to other LEGA-C galaxies as fillers if they had a quality flag in the catalog that indicated they were primary survey targets that can be used for scientific purposes (see Section 3.3 of <cit.> for more details). We obtained spectra with four masks for a total of sixty-one galaxies, thirty-five filler galaxies and twenty-six galaxies that met our primary criteria.
Twenty-nine galaxies in total showed Hα emission and are the sample discussed in this paper, seven of which were filler galaxies.
Our NIR spectroscopy was obtained with MMIRS, an IR multi-object spectrograph with a wide field of view (4′× 7′). We observed with the J grism and the zJ filter with a wavelength range of 0.94-1.51μ m and ℛ≈1020 at the mid-wavelength of 1.225μ m. We obtained spectra with a total of 4 masks, each of which was observed for ∼5.5-6.5 hours (combining individual exposures of 300 s), with average seeing of 0.51″-1.08″. The slit widths are 1″ and the slit lengths are 8″ to be consistent with the LEGA-C observations. The orientations of the slits are shown for our observations as well as the LEGA-C observations in Appendix <ref>.
To investigate selection effects related to the local environment of the galaxies in our sample, we use the overdensity value estimates from <cit.>. The overdensity is defined as the local number density of galaxies, which is estimated using a Voronoi tessellation technique. The mean and standard deviation of the overdensity value for the full LEGA-C sample is log(1 + δ) = 0.25 ± 0.28. For the sample studied in this paper, the mean and standard deviation of the overdensity value is log(1 + δ) = 0.03 ± 0.31. Therefore, our sample is not in systematically overdense environments compared to the full LEGA-C sample or the field overall. We therefore find no evidence that potential quenching mechanisms are driven by the local environmental density of the galaxies in our sample.
§.§ Data Reduction
The raw frames were processed through the MMIRS data reduction pipeline <cit.>. The raw data consist of science images and dark exposures of the same duration as the science data. All data use a non-destructive readout such that a 3-dimensional file is created where the third dimension contains the individual readouts. The dark frames are averaged on a frame-by-frame basis and then subtracted from each of the science frames. Subsequently, data rate frames are calculated by using a linear fit for each pixel, which also allows cosmic rays to be rejected. These data rate frames are calibrated in wavelength using the OH sky lines and re-sampled to be linear in wavelength. The linearized spectra are combined on a per-mask basis, taking into account the dither pattern used during the observations, to generate the combined 2-dimensional spectrum.
We extract 1D spectra by first fitting the profile along the spatial axis with a Gaussian function and then using optimal extraction <cit.>. The aperture size for this extraction varies from galaxy to galaxy, but has an average FWHM of 1.2″. The LEGA-C 1D spectra were extracted with an aperture that also varied from galaxy to galaxy; the individual aperture sizes are not published, but they have a very similar typical size of 1″. The absolute flux calibration was done using the photometry from UltraVISTA, as was done for the LEGA-C spectra. To scale the MMT spectra to the photometry, we first fit the photometry alone with Prospector <cit.>, with the fit redshift set to the spectroscopic redshift, to obtain a best-fit photometric SED. We use a median filter, 100Å wide, along the wavelength range for both the MMT spectra and the best-fit photometric SED and find the ratio between the two. We use this ratio to scale the MMT spectra to the best-fit photometric SED in a wavelength-dependent way. We note that any differential wavelength effects should be negligible in our AGN selection technique, which depends on the intensity ratios of emission lines that are sufficiently close in wavelength. We also note that while the flux calibration does compensate for slit losses under the assumption that the spectrum is the same shape across the galaxy, it does not account for any spatial gradients along the slit that may vary depending on wavelength.
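A simplified Python sketch of this wavelength-dependent scaling is given below; it assumes the MMT spectrum and the best-fit photometric SED have already been resampled onto a common wavelength grid, and the function name is illustrative only.
import numpy as np
from scipy.ndimage import median_filter
def scale_to_photometric_sed(wave, flux_mmt, flux_sed_model, window_aa=100.0):
    # Smooth both spectra with a running median ~100 Angstrom wide, take their
    # ratio, and apply it as a smooth wavelength-dependent correction.
    dlam = np.median(np.diff(wave))                 # Angstrom per pixel
    size = max(int(round(window_aa / dlam)), 3)     # filter width in pixels
    smooth_mmt = median_filter(flux_mmt, size=size)
    smooth_sed = median_filter(flux_sed_model, size=size)
    ratio = smooth_sed / smooth_mmt                 # correction factor
    return flux_mmt * ratio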
§ DATA ANALYSIS
We use SED fitting to infer physical properties for our sample including M_*, the detailed SFHs, and dust attenuation. In addition, we fit emission lines to measure Hβ, [O3], [N2], Hα, and [S2] fluxes. In this section we discuss the details of our analysis. The simultaneous fitting of the photometry and LEGA-C optical spectra with Prospector is described in Section <ref>. The emission line fitting of the spectra with the flexible framework GELATO <cit.> is described in Section <ref>.
§.§ Prospector
We simultaneously fit the photometry and LEGA-C optical spectra with the Prospector inference framework. Prospector <cit.> uses the Flexible Stellar Population Synthesis code (FSPS) <cit.> via <cit.>. The posterior distributions are sampled using the dynamic nested sampling code dynesty <cit.>.
We use a similar physical model as in <cit.>, modified for star-forming galaxies. In brief, we employ the MIST stellar evolutionary tracks and isochrones <cit.> which utilizes the MESA stellar evolution package <cit.>. We use MILES for the stellar spectral library <cit.> and adopt a Chabrier IMF <cit.>. The IGM absorption is modeled after <cit.>. For dust attenuation, we assume a flexible attenuation curve with the UV bump tied to the slope of the curve <cit.> and a two-component dust model <cit.>. For nebular emission, we use nebular marginalization, which models the emission lines with a Gaussian fit, therefore decoupling the emission lines from the SFH, see Section <ref>. Compared to our model in <cit.>, in this work we allowed a wider range in priors for the gas-phase metallicity and gas ionization parameter. Our dust emission model assumes energy balance, where all starlight is attenuated by dust and re-emitted in the IR <cit.>. We use the <cit.> dust emission templates, which constrains the shape of the IR SED. We include both noise and calibration models for the simultaneous fitting of both high resolution spectra and photometry. We adopt a ten-component nonparametric SFH using the continuity prior.
We adopt AGN templates from <cit.> and <cit.>, which are CLUMPY AGN models incorporated into FSPS. Only dust emission from the AGN's dusty torus is included in this model. The UV and optical emission from the central engine is mostly obscured by the AGN dust torus, and if any emission leaks out it is then attenuated by the galaxy dust attenuation model. CLUMPY AGN models successfully reproduce the observed MIR characteristics of AGN in the local universe <cit.>; however, they are not the best method to identify emission from unobscured AGN <cit.>. The free parameters for our physical model are listed in Table <ref>.
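As an illustration of how priors of this kind are declared, the following Python sketch writes a few of the entries from Table <ref> as Prospector prior objects; it is a schematic fragment under the assumption of standard Prospector parameter names, not our actual parameter file, and the initial values are placeholders.
import numpy as np
from prospect.models import priors
nbins = 10  # ten-component nonparametric SFH
model_params = {
    "logmass": {"N": 1, "isfree": True, "init": 10.5,
                "prior": priors.TopHat(mini=9.5, maxi=12.5)},
    "dust2": {"N": 1, "isfree": True, "init": 0.3,
              "prior": priors.ClippedNormal(mean=0.3, sigma=1.0, mini=0.0, maxi=4.0)},
    "fagn": {"N": 1, "isfree": True, "init": 0.01,
             "prior": priors.LogUniform(mini=1e-5, maxi=3.0)},
    # Continuity SFH: ratios of the SFRs in adjacent time bins
    "logsfr_ratios": {"N": nbins - 1, "isfree": True, "init": np.zeros(nbins - 1),
                      "prior": priors.StudentT(mean=np.zeros(nbins - 1),
                                               scale=np.full(nbins - 1, 0.3),
                                               df=np.full(nbins - 1, 2))},
}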
§.§ GELATO
To study the source of ionization in our sample, we fit the emission lines in the NIR and optical spectra with the flexible framework GELATO <cit.>. GELATO models the stellar continuum as a combination of simple stellar populations (SSPs) from the Extended MILES stellar library <cit.>. The SSP models assume a Chabrier IMF and isochrones of <cit.> with solar alpha abundance, and span a range of metallicities and ages.
We extract fluxes for Hβ, [O3], [N2], Hα, and [S2]. Each emission line is treated as a Gaussian model and parameterized by its redshift, velocity dispersion (the standard deviation of the Gaussian), and flux. For more information, see <cit.>. GELATO also allows emission lines to be fit with an additional broad line component. However, fitting with broad lines for our sample does not statistically improve the fit. The GELATO fits for our entire sample are shown and the fluxes are listed in Table <ref> in Appendix <ref>.
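The following Python sketch illustrates this parameterization of a single emission line, converting a (redshift, velocity dispersion, flux) triplet into a Gaussian profile on the observed wavelength grid; it mimics the form of the model rather than reproducing GELATO's internal code.
import numpy as np
C_KMS = 299792.458  # speed of light in km/s
def gaussian_line(wave, rest_wave, z, sigma_v, flux):
    # wave      : observed wavelength grid [Angstrom]
    # rest_wave : rest-frame line centre [Angstrom], e.g. 6562.8 for Halpha
    # z         : line redshift
    # sigma_v   : velocity dispersion [km/s]
    # flux      : integrated line flux
    centre = rest_wave * (1.0 + z)                    # observed line centre
    sigma_lam = sigma_v / C_KMS * centre              # dispersion in wavelength units
    amplitude = flux / (sigma_lam * np.sqrt(2.0 * np.pi))
    return amplitude * np.exp(-0.5 * ((wave - centre) / sigma_lam) ** 2)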
§ IDENTIFYING AGN
We identify AGN activity in our sample through the BPT diagnostic diagram. In Figure <ref>, we show the [N2] BPT diagram, which compares the ratio of [O3]λ5007Å to Hβ with [N2]λ6583Å to Hα. We note that these line ratios have been shown to evolve at higher redshift <cit.>, however galaxies at our redshift of z≈ 0.7 have interstellar medium conditions consistent with local galaxies <cit.>. Demarcation lines from <cit.> and <cit.> are shown as solid and dashed curves, respectively, to separate star-forming galaxies from AGN. AGN tend to lie above the <cit.> line, while galaxies below this line but above the <cit.> line are considered to be in an area on the BPT diagram where composite galaxies lie, meaning both star formation and AGN activity may contribute to the ionization. In this work, we consider composite galaxies to be AGN.
Using this diagnostic, we find that five galaxies have line ratios consistent with AGN ionization, and four are in the composite region. Seven galaxies do not have measurements for [O3] or Hβ because the line was redshifted out of range for the LEGA-C observations but not redshifted into the range accessible to MMIRS, lying between the two. These galaxies are therefore not included in the BPT diagram, however their values for log([N2]λ6584/Hα) are shown in the top panel as a histogram; we consider two of these galaxies with log([N2])>-0.2 to be either AGN or composite galaxies. Therefore, in total, our sample contains eleven AGN selected using the [N2] BPT diagram. We note that when using optical diagnostics, it is difficult to separate line emission between excitation sources if shock emission is suspected <cit.>. We therefore cannot rule out that the galaxies in the AGN and composite regions may be indicative of the presence of shocks instead of AGN.
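For reference, this classification can be written compactly in Python using the standard Kewley et al. (2001) and Kauffmann et al. (2003) demarcation curves; the function below is an illustrative sketch assuming all four line fluxes are measured.
import numpy as np
def kewley01_line(log_n2ha):
    # Theoretical maximum-starburst line (Kewley et al. 2001).
    return 0.61 / (log_n2ha - 0.47) + 1.19
def kauffmann03_line(log_n2ha):
    # Empirical star-forming boundary (Kauffmann et al. 2003).
    return 0.61 / (log_n2ha - 0.05) + 1.3
def bpt_class(n2, ha, o3, hb):
    # Classify a galaxy on the [NII] BPT diagram from its four line fluxes.
    x = np.log10(n2 / ha)
    y = np.log10(o3 / hb)
    if x >= 0.47 or y > kewley01_line(x):
        return "AGN"
    if x >= 0.05 or y > kauffmann03_line(x):
        return "composite"
    return "star-forming"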
We match our sample of galaxies with the Chandra COSMOS Legacy Survey catalogs <cit.>. Four galaxies in our sample were X-ray detected, all of which were also identified as AGN or composite galaxies with the BPT diagram. We include the 0.5–7.0 keV rest frame luminosities (L_X,int) in Table <ref>. The criterion to be classified as an X-ray AGN requires L_X,int≥ 3× 10^42 erg s^-1 <cit.>. Three of the four X-ray detections meet this criterion.
In addition, we check the IRAC colors for evidence of obscured AGN that might be missed by X-ray or BPT selection.
We use the IRAC color-color diagram from <cit.> and find that none of the galaxies in our sample meet the criteria for IR AGN. This is not surprising because some optically-selected AGN can have IR colors consistent with star-forming galaxies and IR selection is very dependent on the AGN-to-host luminosity ratio <cit.>. In addition, <cit.> found that only 46% of AGN identified through SED fitting, defined as f_AGN>0.1, would be detected with typical MIR color selections. The two galaxies in our sample with f_AGN>0.1 were also identified as AGN or composite using the BPT diagram.
Some AGN can produce jets which emit synchrotron radiation that can be detected at radio wavelengths. We match our sample with the radio-loud galaxy catalog in <cit.>. They classify radio-loud AGN by using the radio luminosity limit from <cit.>. We find that one of our galaxies is included in their radio-loud AGN catalog. Based on its location in the BPT diagram, this radio-loud AGN is also a composite galaxy.
In summary, all of the galaxies in our sample that were classified as AGN from radio, IR, or X-ray emission were also classified as AGN by the BPT diagram. Seven of the BPT-selected AGN do not have detectable X-ray or radio emission. <cit.> also found that optically-selected AGN were not always detected using alternate methods (X-ray, IR, radio) and explained that this is likely because optical emission lines are one of the most sensitive AGN tracers. All of the results for this section are summarized in Table <ref>.
§ STAR FORMATION HISTORIES
§.§ SFR Measurements
We aim to measure robust SFRs in order to explore the role of AGN in the quenching of massive galaxies. Many AGN indicators are also SFR indicators, making it difficult to disentangle the two. If AGN are present, they may contribute to the flux of the emission lines. Therefore, for the SED fitting, we use nebular marginalization, which decouples the emission lines from the SFR, as discussed in Section <ref>.
Hydrogen recombination lines are thought to be one of the most robust tracers of SFR. In particular, Hα is the SFR indicator of choice, which probes the SFR over the past 10 Myr <cit.>. However, as mentioned in Section <ref>, Hα redshifts out of the visible spectrum at z>0.5 and consequently requires NIR spectroscopy. Our NIR data cover the Hα line, enabling a cross-comparison of SFR tracers between the spectrophotometric SED-modeling of the optical data while accurately correcting for dust attenuation. <cit.> showed that simultaneously fitting photometry and spectroscopy is needed to break the dust-age-metallicity degeneracy, and that the dust attenuation is mainly constrained by the photometry.
To measure SFR_Hα, we first correct the Hα flux for dust attenuation using the best-fit dust attenuation law parameters from our fits. We use the dust-corrected fluxes to derive SFRs using the conversion factor in <cit.>. To check whether the recent SFR inferred from our SED fitting is robust, in Figure <ref> we show the SFR_Hα compared with the SFR_SED. We only include galaxies with no detectable AGN in this comparison because AGN contribute significantly to the Hα flux. We calculate the scatter as the standard deviation of the perpendicular distances to the one-to-one line and find that it is 0.1 dex. Therefore, the results are in excellent agreement.
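A schematic version of this measurement is sketched below. The redshift, observed flux, and attenuation value are placeholders, and the luminosity-to-SFR conversion is written, for illustration, with the Kennicutt & Evans (2012) Hα calibration; the exact conversion factor and cosmology adopted in the text may differ.

import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # illustrative cosmology

def sfr_from_halpha(f_obs, z, A_halpha):
    """SFR from an observed Halpha flux [erg/s/cm^2], given the attenuation
    A_halpha [mag] at the wavelength of Halpha inferred from the SED fit."""
    f_corr = f_obs * 10.0**(0.4 * A_halpha)              # dust correction
    d_L = cosmo.luminosity_distance(z).to(u.cm).value    # luminosity distance in cm
    L_halpha = 4.0 * np.pi * d_L**2 * f_corr             # erg/s
    # Kennicutt & Evans (2012): log SFR [Msun/yr] = log L(Halpha) [erg/s] - 41.27
    return 10.0**(np.log10(L_halpha) - 41.27)

# hypothetical galaxy: F(Halpha) = 2.8e-16 erg/s/cm^2 at z = 0.7 with A_Halpha = 1.0 mag
print(round(sfr_from_halpha(2.8e-16, 0.7, 1.0), 1), "Msun/yr")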
In Figure <ref>, we show the SFHs of individual galaxies inferred with our nonparametric model, displaying a diversity of evolutionary pathways for our sample of galaxies. The top 3 panels show galaxies hosting AGN and the bottom 3 panels show galaxies without detectable AGN. The columns represent different redshift bins. The green shaded region is the transition regime, or the “green valley," defined by <cit.> using cuts in specific SFRs (sSFR = SFR/M_*) based on the mass doubling number, which is a measure of the number of times the stellar mass would double over the age of the universe at the current sSFR. <cit.> verified these cuts empirically using the SFR-M_* plane from <cit.>. Galaxies with sSFRs below this region are classified as quiescent and galaxies with sSFRs above this region are classified as star-forming.
Using this SFH classification scheme, three galaxies are quiescent, six galaxies are in the green valley, and the remaining twenty galaxies are star-forming. All of the quiescent and green valley galaxies have higher masses (log M_*>10.7 M_⊙). Next, we discuss the SFHs of AGN compared to galaxies without detectable AGN.
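A sketch of this classification is given below. The two numerical thresholds, written here as 1/(3 t_H) and 1/(20 t_H) in terms of the age of the universe t_H(z), are illustrative placeholders for the exact sSFR cuts of the cited definition, and the cosmology and galaxy parameters are hypothetical.

import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def ssfr_class(sfr, mstar, z, sf_factor=3.0, q_factor=20.0):
    """Classify a galaxy as star-forming / green valley / quiescent from its sSFR,
    expressed through the mass-doubling number sSFR * t_H(z)."""
    t_H = cosmo.age(z).to(u.yr).value   # age of the universe at redshift z, in yr
    ssfr = sfr / mstar                  # 1/yr
    if ssfr > 1.0 / (sf_factor * t_H):
        return "star-forming"
    if ssfr < 1.0 / (q_factor * t_H):
        return "quiescent"
    return "green valley"

# hypothetical massive galaxy at z = 0.75 with SFR = 2 Msun/yr and M* = 1e11 Msun
print(ssfr_class(2.0, 1.0e11, 0.75))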
Table: AGN
# ID BPT L_X,int Radio-loud f_AGN Galaxy classification
1 151258 AGN no no 0.0 GV
2 254580 AGN no no 0.1 GV
3 165155 AGN 3.3E+42 no 0.0 GV
4 240905 AGN no no 0.1 Q
5 144632 AGN 1.6E+42 no 0.2 GV
6 161993 Composite no no 0.1 SF
7 234067 Composite 5.2E+42 yes 0.0 SF
8 166309 Composite 3.2E+42 no 0.0 GV
9 240675 Composite no no 0.0 Q
10 143863 AGN/Composite no no 0.3 SF
11 162906 AGN/Composite no no 0.1 Q
All of the galaxies in our sample with detectable AGN are included in this table. AGN that were classified from radio, IR, or X-ray emission were also classified as AGN by the BPT diagram. Column 3 indicates whether the galaxy lies above the <cit.> or <cit.> line in the BPT diagram, labeled as AGN or composite, respectively, see Figure <ref>. Galaxies labelled as AGN/Composite do not have [O3] measurements because the line was redshifted out of range for the LEGA-C observations, however their values for log([N2]/Hα) are high enough to consider them either AGN or composite galaxies. Column 4 indicates whether the galaxy was detected in the Chandra Legacy Survey <cit.>. Column 5 indicates whether the galaxy was determined a radio-loud AGN <cit.>. Column 6 shows the inferred AGN luminosity as a fraction of the galaxy's bolometric luminosity from our fits. Column 7 indicates whether the galaxy is classified as star-forming (SF), transitioning in the green valley (GV), or quiescent (Q) based on the cuts in sSFR from <cit.>.
§.§ Star Formation Histories of AGN vs. non-AGN
The left panel of Figure <ref> shows the SFMS for the galaxies in our sample using SFR_SED, with galaxies hosting BPT AGN shown in purple and galaxies without detectable AGN shown in yellow. <cit.> fit the entire LEGA-C sample using a similar physical model to the one used in this work. Their results are shown as contours for the 25th, 50th, and 75th percentiles. We compare SFR_SED to the redshift-dependent SFMS from <cit.>. We calculate the mean and standard error of the mean for log (SFR_SED/SFR_MS(z)) and find that for AGN host galaxies it is -0.37±0.15 and for galaxies without detectable AGN it is 0.14±0.07. Therefore AGN host galaxies lie significantly below the SFMS while galaxies without detectable AGN are consistent with being on or above the SFMS.
The right panel of Figure <ref> shows the sSFR vs. stellar mass, as determined from the SED fitting, with the green valley at z=0.75 shown as a green shaded region. As mentioned in Section <ref>, the majority of galaxies in our sample are star-forming (above the green valley), based on the SFHs inferred from the SED fitting. Six galaxies are in the transition region, five of which are AGN. In addition, the three recently quenched galaxies, which lie below the green valley, are AGN. Therefore, 89±15% of galaxies in the green valley or below host AGN, while 15±8% of galaxies above the green valley host AGN. The uncertainties for these percentages were calculated using the binomial distribution. In Section <ref> we discuss whether this suppression of star formation in AGN host galaxies is consistent with the overall trend with increasing stellar mass of the parent sample.
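The quoted fractions follow from the counts above (8 of the 9 galaxies in or below the green valley host AGN, and 3 of the 20 galaxies above it). A minimal sketch of such a calculation is shown below; it uses the simple normal approximation to the binomial uncertainty, which is not necessarily the exact prescription adopted for the numbers quoted in the text.

import numpy as np

def binomial_fraction(k, n):
    """Fraction of 'successes' and its 1-sigma uncertainty (normal approximation)."""
    p = k / n
    return p, np.sqrt(p * (1.0 - p) / n)

for k, n in [(8, 9), (3, 20)]:        # AGN hosts in/below and above the green valley
    p, dp = binomial_fraction(k, n)
    print(f"{k}/{n}: {100 * p:.0f} +/- {100 * dp:.0f} per cent")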
Using the nonparametric SFHs, we calculate the amount of time spent in the green valley (t_GV) for the five AGN, which differs significantly from galaxy to galaxy and lies in the range t_GV≈ 0.03-2 Gyr. This is consistent with the large diversity of quenching timescales for quiescent galaxies, defined as time spent in the green valley before quiescence, found by <cit.>, whose definition of green valley is used in this work.
§.§ Star Formation History Comparison as a Function of Stellar Mass
The number of AGN in our sample increases with M_*, in agreement with many other studies <cit.>. This is not surprising given the positive correlations between the black hole mass (M_BH), bulge mass, and stellar mass <cit.>. Specifically, galaxies with larger M_* also have larger M_BH with higher absolute accretion rates <cit.>. Therefore, it is expected that AGN are more likely to be detected in more massive galaxies. In addition, the majority of the high mass galaxies in the full LEGA-C sample are below the SFMS. As a result, it is necessary to compare SFRs as a function of M_* when looking for differences between AGN host galaxies and galaxies without detectable AGN to determine if any SFR differences are driven by the AGN and not other physical processes associated with higher stellar masses.
To accomplish this, we need to compare the star formation properties of AGN host galaxies to a mass-matched sample of non-AGN galaxies.
Therefore, we compare the SFRs of our AGN host galaxy sample with a mass-matched bootstrapped sample from the <cit.> catalog. We find that the mean and standard deviation of the SFR_SED for our AGN host galaxies is 2.9±9.1 and for the mass-matched bootstrapped sample it is 3.4±3.1. These are consistent with one another, meaning that the SFRs of our AGN host galaxies follow the same SFR trends at a given stellar mass as the full LEGA-C sample. We therefore cannot connect the lower SFRs with the current AGN activity, as it is possibly driven by other processes associated with the higher mass of the AGN sample.
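The comparison can be sketched as follows; the matching tolerance of 0.1 dex in stellar mass, the number of resamplings, and the mock input catalogs are illustrative assumptions rather than the exact choices made in the text.

import numpy as np

rng = np.random.default_rng(0)

def mass_matched_bootstrap(logm_agn, logm_parent, sfr_parent, dex_tol=0.1, n_boot=1000):
    """For every AGN host, repeatedly draw a parent-sample galaxy within +/- dex_tol in
    log M*, and collect the mean SFR of each resampled control set."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        draws = []
        for lm in logm_agn:
            pool = np.where(np.abs(logm_parent - lm) < dex_tol)[0]
            draws.append(sfr_parent[rng.choice(pool)])
        means[b] = np.mean(draws)
    return means.mean(), means.std()

# hypothetical stellar masses and SFRs standing in for the AGN hosts and the parent sample
logm_agn = rng.normal(11.0, 0.2, 11)
logm_parent = rng.normal(10.8, 0.4, 3000)
sfr_parent = 10.0**rng.normal(0.5, 0.5, 3000)
print(mass_matched_bootstrap(logm_agn, logm_parent, sfr_parent))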
§ DISCUSSION
In this work, we explore the connection between AGN activity and star formation. Many studies have compared the SFRs of AGN host galaxies to galaxies without detectable AGN and find varying results. We find that the green valley galaxies in our sample preferentially host AGN, consistent with other studies <cit.>. Compared to galaxies without detectable AGN, some studies find higher SFRs for the AGN host galaxies <cit.>, others find similar SFRs <cit.>, while others find lower SFRs <cit.>.
To compare to these results, we utilize the entire LEGA-C sample from the <cit.> catalog which uses a similar physical model to the one in this work. We select X-ray, IR, and radio AGN in the full LEGA-C sample using the same selection techniques discussed in Section <ref>. Figure <ref> shows the distributions of the stellar masses and sSFRs for each AGN selection type. Consistent with other results, the IR-selected AGN tend to have enhanced star formation <cit.>, while X-ray AGN have a broad range of sSFRs, showing both enhanced and suppressed star formation <cit.>. The radio-selected AGN tend to have low sSFRs and high stellar masses.
These differences in the SFRs of AGN host galaxies may be due to the AGN selection technique <cit.> or the type of AGN <cit.>. For example, <cit.> suggest that IR-selected AGN may have a higher fraction of mergers compared to the merger fraction of optically selected AGN, and these mergers are also thought to trigger episodes of star formation. Other studies have found a connection between radio-loud AGN and quiescence <cit.>. <cit.> show that BPT-selected AGN are preferentially found in high-mass, green, moderately star-forming hosts due to “star formation dilution," which causes a significant bias against BPT AGN selection in lower mass star-forming galaxies. Therefore, different techniques may select for AGN and host galaxies in different stages of their co-evolutionary pathways, or they may simply be the result of observational selection bias.
Galaxies can reside in the green valley for two main reasons: (1) a quiescent galaxy can move from the red sequence to the green valley via the rejuvenation of star formation or (2) the galaxy is in the process of quenching. Other studies that find lower SFRs for AGN host galaxies have suggested the former scenario, where the build-up of the central bulge through inflowing gas can trigger AGN activity as well as rejuvenate star formation <cit.>. With our detailed individual, non-parametric SFH modeling that has been shown to capture rejuvenation events <cit.>, we are able to test this scenario for the first time. We use the following criteria to define rejuvenation: galaxies that were once quiescent (their median sSFR fell below the green valley) and have risen to either the green valley or above at some time after. We inspect the individual SFHs of the green-valley AGN host galaxies in our sample and find that they do not meet this rejuvenation criteria. We also inspect the SFHs of the rest of our sample and find that none of our sample meets this rejuvenation criteria. Therefore, there is no evidence that the galaxies in the green valley in our sample are rejuvenating and they are more likely in the process of quenching.
However, we cannot attribute this quenching to the AGN activity. In Section <ref>, we found that the SFRs of the AGN host galaxies are consistent with galaxies at similar masses from the full LEGA-C sample. Since the majority of high mass galaxies of the sample are below the SFMS, we cannot rule out that the suppression of star formation is driven by the higher mass of the AGN sample. Due to AGN variability, the timescale for when AGN are actively accreting and visible is much shorter <cit.> than even the most rapid quenching events (≈ 10^8 yr). Also, our SFR tracers probe the most recent SFR on timescales of 10^7 yrs, which is still longer than the timescale of AGN activity. Figure <ref> shows that different empirical AGN selections are identifying evidence of AGN with overlapping distributions across the entire sSFR range of the full LEGA-C sample. This means we see evidence for current AGN activity in galaxies above, through, and below the SFMS. This makes it difficult to determine if the instantaneous AGN activity influences the large-scale star formation activity in galaxies. However, there is growing evidence in recent simulations and observational studies that the integrated history of AGN feedback, on Gyr timescales, rather than instantaneous AGN feedback suppresses star formation <cit.>. <cit.> suggest that AGN release energy over long periods, which prevents gas from cooling and accreting from the CGM into massive galaxies, leading to the quenching of star formation due to the unavailability of fuel. This scenario would explain the difficulty of connecting instantaneous AGN activity with the suppression of star formation in observational studies.
§ SUMMARY AND CONCLUSIONS
We have followed up a subset of LEGA-C galaxies with NIR observations at MMT Observatory, allowing us to select AGN with the [N2] BPT diagram at higher redshift (z∼0.7) to look for observational evidence of AGN quenching during the epoch of cosmic star formation decline. We study the nonparametric SFHs of AGN hosts and compare them to star-forming galaxies to check for observational evidence of AGN quenching. Our main findings are as follows:
* The SFRs inferred from our SED fitting are in excellent agreement with the SFRs measured from the dust-corrected Hα line fluxes, showing only 0.1 dex in scatter.
* 89±15% of galaxies in the green valley or below host AGN, based on sSFRs inferred from the SED fitting.
* The AGN host galaxies are 0.37±0.15 dex below the SFMS while the galaxies without detectable AGN are consistent with being on or above the SFMS. However, when comparing to a bootstrapped mass-matched sample, the SFRs of the AGN host galaxies follow the same SFR trends at a given stellar mass as the full LEGA-C sample. Therefore, we cannot rule out that this suppression of star formation is driven by the higher mass of the AGN sample.
We therefore show that despite having high quality data and state-of-the-art SFH modeling, it remains difficult to connect the suppression of star formation with current AGN activity. This conclusion should guide the design of future experiments with mass-complete distributions and similar quality data at intermediate redshifts of BPT-selected AGN. This is currently difficult to obtain as it requires large time investments on telescopes in both optical and NIR spectrographs. However, in the near future, surveys using instruments such as the Multi-Object Optical and Near-infrared Spectrograph (MOONS) on the European Southern Observatory (ESO) Very Large Telescope <cit.>, the spectroscopic galaxy survey with the Dark Energy Spectroscopic Instrument <cit.>, the galaxy evolution survey with the Prime Focus Spectrograph <cit.>, and the wide field redshift surveys planned with Euclid <cit.> will enable large samples with the complete rest-optical emission line diagnostics used in this work.
This will enable future studies to further investigate the connection between AGN activity and suppressed star formation with larger sample sizes during the peak epoch of quenching.
Observations reported here were obtained at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. We thank Sean Moran and Igor Chilingarian for help with the data reduction pipeline and Joannah Hinz for help with the mask design.
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 194-A.2005 (The LEGA-C Public Spectroscopy Survey). CW and REH are supported by the National Science Foundation through the Graduate Research Fellowship Program funded by Grant Award No. DGE-1746060. CCW and MR were supported by the National Aeronautics and Space Administration (NASA) Contract NAS50210 to the University of Arizona.
This material is based upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department.
We respectfully acknowledge the University of Arizona is on the land and territories of Indigenous peoples. Today, Arizona is home to 22 federally recognized tribes, with Tucson being home to the O'odham and the Yaqui. Committed to diversity and inclusion, the University strives to build sustainable relationships with sovereign Native Nations and Indigenous communities through education offerings, partnerships, and community service.
Facilities: HST, VLT, MMT (MMIRS)
Software: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, GELATO <cit.>
§ ADDITIONAL FIGURES
[Figures: GELATO fits to the optical and NIR spectra for the galaxies in our sample.]
Table: Measurements
# ID Hβ [O3]λ5007Å [N2]λ6584Å Hα [S2]λ6717Å [S2]λ6731Å n τ_dust,1 τ_dust,2
1 150047 543.61 ± 7.64 75.77 ± 10.89 782.15 ± 42.35 2802.66 ± 7.64 433.20 ± 34.87 263.24 ± 36.90 -0.37 ^+ 0.07 _- 0.08 0.57 ^+ 0.24 _- 0.04 0.63 ^+ 0.04 _- 0.04
2 235624 801.64 ± 55.32 148.02 ± 16.17 1491.21 ± 81.58 2846.79 ± 55.32 517.34 ± 100.46 39.34 ± 191.63 -0.26 ^+ 0.05 _- 0.05 0.27 ^+ 0.11 _- 0.10 0.51 ^+ 0.04 _- 0.04
3 240675 242.11 ± 8.49 193.47 ± 17.76 868.93 ± 218.59 1765.34 ± 8.49 297.88 ± 82.24 … 0.18 ^+ 0.13 _- 0.18 0.22 ^+ 0.06 _- 0.05 0.26 ^+ 0.03 _- 0.04
4 236338 476.75 ± 10.65 … 752.35 ± 46.24 2302.21 ± 10.65 686.07 ± 88.32 388.44 ± 47.20 -0.17 ^+ 0.05 _- 0.05 0.93 ^+ 0.16 _- 0.16 1.17 ^+ 0.06 _- 0.06
5 151258 123.97 ± 6.38 178.32 ± 43.96 516.63 ± 68.21 500.53 ± 6.38 251.32 ± 41.61 31.9 ± 37.63 -0.41 ^+ 0.21 _- 0.17 0.30 ^+ 0.05 _- 0.08 0.25 ^+ 0.04 _- 0.03
6 254580 258.25 ± 9.28 741.09 ± 15.02 778.29 ± 45.66 1756.37 ± 9.28 279.19 ± 42.63 192.76 ± 33.56 -0.00 ^+ 0.10 _- 0.09 0.39 ^+ 0.14 _- 0.14 0.62 ^+ 0.06 _- 0.05
7 143863 … … 1722.23 ± 42.40 2224.60 ± 36.96 1162.81 ± 84.06 1351.06 ± 105.38 -0.13 ^+ 0.02 _- 0.00 0.56 ^+ 0.04 _- 0.15 0.70 ^+ 0.02 _- 0.00
8 234911 320.27 ± 7.04 180.63 ± 15.18 169.97 ± 102.60 1676.05 ± 7.04 404.56 ± 126.28 690.20 ± 213.27 -0.05 ^+ 0.08 _- 0.08 1.13 ^+ 0.19 _- 0.18 0.81 ^+ 0.05 _- 0.05
9 164423 1165.39 ± 28.53 … 2096.75 ± 44.87 6437.72 ± 28.53 796.57 ± 62.52 740.77 ± 32.91 -0.07 ^+ 0.04 _- 0.04 0.18 ^+ 0.12 _- 0.10 0.93 ^+ 0.06 _- 0.06
10 235128 345.70 ± 8.10 207.10 ± 14.64 116.02 ± 28.09 1342.14 ± 8.10 567.75 ± 149.20 … 0.02 ^+ 0.03 _- 0.03 0.15 ^+ 0.17 _- 0.10 1.39 ^+ 0.06 _- 0.06
11 148184 1626.36 ± 11.57 426.33 ± 19.99 2850.63 ± 44.02 9151.36 ± 11.57 1593.18 ± 92.59 967.74 ± 62.25 -0.19 ^+ 0.04 _- 0.04 0.02 ^+ 0.03 _- 0.01 0.55 ^+ 0.03 _- 0.03
12 257619 676.78 ± 45.86 356.46 ± 19.11 376.27 ± 48.65 1879.07 ± 45.86 5271.99 ± 417.27 823.77 ± 145.40 0.12 ^+ 0.20 _- 0.21 0.37 ^+ 0.07 _- 0.06 0.32 ^+ 0.05 _- 0.06
13 180320 1537.02 ± 971.65 1023.87 ± 18.30 517.99 ± 48.51 2433.68 ± 971.65 683.81 ± 105.30 729.14 ± 102.26 0.23 ^+ 0.09 _- 0.10 0.18 ^+ 0.10 _- 0.10 0.33 ^+ 0.06 _- 0.06
14 129098 … … 1506.81 ± 58.63 3741.03 ± 56.02 810.42 ± 55.85 517.37 ± 40.53 -0.45 ^+ 0.06 _- 0.05 0.86 ^+ 0.20 _- 0.16 1.03 ^+ 0.07 _- 0.06
15 165155 591.55 ± 10.66 2261.70 ± 23.44 1819.91 ± 64.54 2030.13 ± 10.66 659.01 ± 40.90 448.01 ± 41.96 0.17 ^+ 0.11 _- 0.11 0.62 ^+ 0.09 _- 0.07 0.47 ^+ 0.05 _- 0.05
16 162906 221.65 ± 19.67 … 443.28 ± 55.28 638.55 ± 19.67 105.16 ± 142.37 227.18 ± 121.23 -0.28 ^+ 0.49 _- 0.54 0.07 ^+ 0.01 _- 0.01 0.05 ^+ 0.05 _- 0.03
17 144958 586.93 ± 11.60 170.06 ± 8.69 528.66 ± 35.52 2071.99 ± 11.60 457.35 ± 91.78 170.44 ± 118.27 -0.06 ^+ 0.08 _- 0.07 0.58 ^+ 0.16 _- 0.15 0.66 ^+ 0.05 _- 0.05
18 142793 … … 783.22 ± 46.73 2179.31 ± 43.10 526.15 ± 35.32 325.37 ± 43.33 -0.06 ^+ 0.06 _- 0.08 0.44 ^+ 0.31 _- 0.09 0.59 ^+ 0.07 _- 0.06
19 126247 858.59 ± 14.78 … 798.58 ± 27.64 2782.05 ± 14.78 511.91 ± 40.85 412.43 ± 24.56 -0.27 ^+ 0.05 _- 0.04 0.04 ^+ 0.05 _- 0.03 0.40 ^+ 0.04 _- 0.02
20 166598 948.38 ± 14.91 432.64 ± 8.73 2254.69 ± 68.40 5109.72 ± 14.91 1109.67 ± 98.01 991.29 ± 59.74 -0.11 ^+ 0.06 _- 0.05 0.93 ^+ 0.16 _- 0.14 0.93 ^+ 0.05 _- 0.04
21 124727 345.20 ± 9.21 143.29 ± 8.15 209.51 ± 30.72 936.47 ± 9.21 … 308.94 ± 86.79 -0.02 ^+ 0.05 _- 0.05 0.79 ^+ 0.23 _- 0.41 1.00 ^+ 0.08 _- 0.06
22 240905 289.93 ± 9.38 777.85 ± 16.98 1244.6 ± 128.93 1913.73 ± 9.38 … … -0.24 ^+ 0.30 _- 0.28 0.07 ^+ 0.02 _- 0.02 0.07 ^+ 0.03 _- 0.02
23 234067 380.32 ± 6.61 226.81 ± 8.70 1112.90 ± 71.72 1471.69 ± 6.61 292.13 ± 54.74 … 0.02 ^+ 0.06 _- 0.06 0.77 ^+ 0.24 _- 0.16 0.81 ^+ 0.04 _- 0.05
24 166309 389.88 ± 16.67 170.60 ± 10.00 984.33 ± 82.98 1811.97 ± 16.67 140.21 ± 137.52 371.74 ± 80.64 -0.20 ^+ 0.08 _- 0.08 0.59 ^+ 0.18 _- 0.13 0.62 ^+ 0.04 _- 0.04
25 146965 338.46 ± 4.72 376.19 ± 8.85 … 1252.12 ± 4.72 425.75 ± 67.01 400.84 ± 42.15 -0.17 ^+ 0.05 _- 0.06 0.42 ^+ 0.15 _- 0.24 0.72 ^+ 0.05 _- 0.05
26 257051 1126.76 ± 12.70 2184.56 ± 8.24 657.26 ± 13.46 2706.12 ± 12.70 628.64 ± 36.02 371.64 ± 19.50 -0.01 ^+ 0.07 _- 0.07 0.13 ^+ 0.09 _- 0.08 0.45 ^+ 0.06 _- 0.05
27 255999 804.54 ± 13.06 557.00 ± 10.15 1159.54 ± 127.44 3251.99 ± 13.06 545.97 ± 37.65 506.54 ± 52.00 -0.12 ^+ 0.05 _- 0.06 0.73 ^+ 0.27 _- 0.22 0.96 ^+ 0.05 _- 0.06
28 161993 1864.95 ± 119.22 2514.25 ± 10.84 2986.60 ± 67.86 4057.83 ± 119.22 1025.88 ± 89.62 823.83 ± 66.01 0.22 ^+ 0.07 _- 0.07 0.36 ^+ 0.10 _- 0.11 0.60 ^+ 0.05 _- 0.05
29 144632 298.85 ± 12.77 950.91 ± 34.98 1902.20 ± 90.73 1967.41 ± 12.77 1293.00 ± 95.30 … 0.02 ^+ 0.04 _- 0.05 1.10 ^+ 0.18 _- 0.21 1.45 ^+ 0.06 _- 0.06
Columns 3-8 are flux measurements from the GELATO fitting results in units of 10^-19 erg cm^-2 s^-1 Å^-1. Columns 9-11 are the dust measurements from the fits and includes all components of the dust model: power-law modifier to the shape of the attenuation curve of the diffuse dust (n), the birth cloud dust (τ_dust,1), and the diffuse dust screen (τ_dust,2). The [S2] line fluxes are fitted completely independently. The [N2]λ 6584Åline is fit independently and the [N2]λ 6548Åline is fixed to have 0.34/1 times its flux.
|
http://arxiv.org/abs/2409.02707v1 | 20240904134323 | Search and state transfer between hubs by quantum walks | ["Stanislav Skoupy", "Martin Stefanak"] | quant-ph | ["quant-ph"] |
Search and state transfer between hubs by quantum walks
Department of Physics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 - Staré Město, Czech Republic
§ ABSTRACT
Search and state transfer between hubs, i.e. fully connected vertices, on otherwise arbitrary connected graph is investigated. Motivated by a recent result of Razzoli et al. (J. Phys. A: Math. Theor. 55,
265303 (2022)) on universality of hubs in continuous-time quantum walks and spatial search, we extend the investigation to state transfer and also to the discrete-time case. We show that the continuous-time quantum walk allows for perfect state transfer between multiple hubs if the numbers of senders and receivers are close. Turning to the discrete-time case, we show that the search for hubs is successful provided that the initial state is locally modified to account for a degree of each individual vertex. Concerning state transfer using discrete-time quantum walk, it is shown that between a single sender and a single receiver one can transfer two orthogonal states in the same run-time. Hence, it is possible to transfer an arbitrary quantum state of a qubit between two hubs. In addition, if the sender and the receiver know each other location, another linearly independent state can be transferred, allowing for exchange of a qutrit state. Finally, we consider the case of transfer between multiple senders and receivers. In this case we cannot transfer specific quantum states. Nevertheless, quantum walker can be transferred with high probability in two regimes - either when there is a similar number of senders and receivers, which is the same as for the continuous-time quantum walk, or when the number of receivers is considerably larger than the number of senders. Our investigation is based on dimensional reduction utilizing the invariant subspaces of the respective evolutions and the fact that for the appropriate choice of the loop weights the problem can be reduced to the complete graph with loops.
S. Skoupý and M. Štefaňák
September 9, 2024
§ INTRODUCTION
One of the most promising applications of quantum walks <cit.> is the spatial search
<cit.> which can be seen as an extension of the Grover's algorithm <cit.> for search of an unsorted database on a quantum computer. Quantum walk search was shown to provide quadratic speed-up over classical search on various graphs and lattices <cit.>. In fact, search utilizing continuous-time quantum walk for a single marked vertex was shown to be optimal for almost all graphs <cit.>. More recently, quadratic speed-up over a classical random walk search for any number of marked vertices was achieved in both discrete-time <cit.> and continuous-time <cit.> quantum walk algorithms.
Closely related problem to spatial search is state transfer <cit.> which is the basic tasks of any quantum communication network <cit.>. Apart from direct exchange of quantum information between nodes of a quantum network it can be utilized for distributed quantum computing <cit.> or more generally in quantum internet <cit.>. While in search we evolve the system from the equal weight superposition to a localized state on the marked vertex, in state transfer we aim to evolve from a localized state from the sender to the receiver vertex. One possibility to achieve state transfer is to design the dynamics on the whole graph accordingly. For the continuous-time evolution, where the dynamics is governed by the Schroedinger equation with a given Hamiltonian, the problem was over the years investigated on various types of graphs <cit.>. Discrete-time version was also studied in detail <cit.>. Second possibility to achieve state transfer is to employ spatial search, thus altering the dynamics only locally at the sender and the receiver vertex <cit.>. This approach was extensively investigated especially in the discrete-time quantum walk model <cit.>.
In a recent paper <cit.> the authors have investigated continuous-time quantum walk on an arbitrary graph with fully connected vertices, which we refer to as hubs for short. The Hamiltonian of the studied walk was given by the graph Laplacian, with additional potential on marked hubs. It was shown that such quantum walks have certain universal behaviour when the probability amplitudes at the hubs are concerned, guaranteed by the existence of invariant subspaces <cit.> which are independent of the rest of the graph. Applications to spatial search and quantum transport for single and multiple hubs were discussed, showing that the walk behaves the same as if the graph would be complete.
Our paper elaborates on the ideas of <cit.> and extends the investigation to state transfer between multiple hubs and also to the discrete-time scenario. Utilizing the effective Hamiltonian of the continuous-time quantum walk with marked hubs derived in <cit.>, we show that state transfer from S senders to R receivers can be achieved with high fidelity provided that S≈ R. Moving on to the discrete-time case, we first investigate a spatial search for M marked hubs. The dynamics is given by a coined walk with a flip-flop shift. As the quantum coin we consider a modified Grover coin with a weighted loop at each vertex <cit.>. The hubs which are a solution to the search problem are marked by an additional phase shift of π. We show that there is a five-dimensional invariant subspace of the walk provided that the initial state of the search is locally modified such that it has equal projection onto every vertex subspace. In this case the walk evolves as search on a complete graph with loops, which is known to be optimal. We provide a detailed investigation how do the terms contributing to the total success probability change with the number of solutions M. Next, we proceed to the state transfer between two marked hubs. Properly choosing the weights of the loops of the local coins turns the problem of state transfer between two hubs on an otherwise arbitrary graph to the state transfer on a complete graph with loops, which we have investigated earlier <cit.>. We expand the known results and show that it is possible to transfer multiple orthogonal states. We find that there is a 9-dimensional invariant subspace, which can be further split into symmetric and antisymmetric subspaces with respect to the exchange of the sender and the receiver vertices. Time evolution in the invariant subspace is investigated in detail for two orthogonal initial states, namely loop and the equal weight superposition of all outgoing arcs at the sender vertex. It is shown that in both cases the discrete-time quantum walk achieves transfer to the corresponding state at the receiver vertex in the same run-time. Hence, we can transfer an arbitrary quantum state of a qubit between the hubs. Since our analysis utilizes an approximation of effective evolution operator eigenstates for large graphs, we also numerically investigate the fidelity of qubit state transfer in dependence on the number of vertices N. It is shown that the fidelity behaves as 1-O(1/N). In addition, we show that if the sender and the receiver know each other's location, a third linearly independent state can be transferred in the same run-time, namely the arc on the edge connecting the sender and the receiver. Hence, in this case the sender and the receiver can exchange a qutrit state. Finally, we extend the study to state transfer from S sender hubs to R receiver hubs. For S≠ R the exchange symmetry between senders and receivers is broken and the splitting of the invariant subspace (which is now 11-dimensional) into symmetric and anti-symmetric subspaces is no longer possible. Nevertheless, the system can be investigated analytically. We show that while we can't send a qubit state, transfer of the walker from senders to receivers occurs with high probability in two regimes - either when S≈ R as for the continuous time case, or when the number of receivers is considerably larger than the number of senders.
The rest of the paper is organized as follows: in Section <ref> we review the results of <cit.> for the continuous-time quantum walk and extend the investigation to state transfer between multiple hubs. Section <ref> introduces the discrete-time quantum walk and investigates search for M marked hubs. Section <ref> is dedicated to the state transfer between two hubs by discrete-time quantum walk. In Section <ref> we investigate state transfer between multiple hubs. We conclude and present an outlook in Section <ref>. Technical details concerning the form of the individual evolution operators in the invariant subspaces and their eigenstates are left for Appendices.
§ STATE TRANSFER BETWEEN HUBS BY CONTINUOUS-TIME QUANTUM WALK
We begin with the continuous-time quantum walk. Let us have a graph G=(V,E), where V denotes the set of vertices and E denotes the set of edges. We consider simple graphs, i.e. there are no loops and no multiple edges in E. For a graph with N=|V| vertices the Hilbert space of the continuous-time walk is an N-dimensional complex space spanned by vectors |v⟩, v∈ V, corresponding to a particle being localized at the vertex v. The authors of <cit.> consider M vertices w∈ W to be marked hubs, i.e. fully connected with degrees d_w = N-1. The Hamiltonian of the walk is taken as the weighted Laplacian with additional potential on the marked vertices
Ĥ = γL̂ + ∑_w∈ Wλ_w |w⟩⟨w|.
Note that the Laplacian of a given graph G is determined by its adjacency matrix  and the diagonal degree matrix D̂ as
L̂ = D̂ - Â.
The results of <cit.> show that as far as the amplitudes at the marked vertices are concerned, the system evolves in an (M+1)-dimensional invariant subspace <cit.> spanned by |w⟩, w∈ W, and the equal weight superposition of all remaining vertices
|g⟩ = 1/√(N-M)∑_v∉ W|v⟩ .
The effective Hamiltonian in the invariant subspace acts according to (see Eq. (50) in <cit.> for μ=M)
Ĥ|w⟩ = γ(N-1+λ_w/γ)|w⟩ - γ∑_w'≠ w∈ W|w'⟩ - γ√(N-M)|g⟩, ∀ w∈ W,
Ĥ|g⟩ = -γ√(N-M)∑_w∈ W|w⟩ + γ M |g⟩.
For spatial search, the weights of all marked hubs are chosen as λ_w=-1, and the optimal choice of the hopping amplitude γ is shown to be equal to 1/N. The dimensionality of the problem can be further reduced to 2, since the target state of the search algorithm is the equal weight superposition of the marked vertices
|w̃⟩ = 1/√(M)∑_w∈ W|w⟩.
In the invariant subspace spanned by |w̃⟩ and |g⟩ the Hamiltonian is described by a matrix (see Eq. (72) in <cit.> for γ = 1/N)
H = -1/N[ M √(M(N-M)); √(M(N-M)) -M ].
Search evolves from equal weight superposition of all vertices
|ψ⟩ = 1/√(N)∑_v|v⟩ = √(M/N)|w̃⟩ + √((N-M)/N)|g⟩,
towards the target state |w̃⟩ with unit probability in time given by
T = π/2√(N/M).
Let us now utilize the Hamiltonian (<ref>) for state transfer. Consider S sender and R receiver hubs from sets S and R, such that S+R=M. We keep the hopping rate γ = 1/N and the weights λ_s = λ_r = -1 for all s∈ S and r∈ R. In such a case, we can group together the states localized on the sender and the receiver vertices
| S⟩ = 1/√(S)∑_s∈ S|s⟩,
| R⟩ = 1/√(R)∑_r∈ R|r⟩,
which we consider as the initial and the target state of state transfer. Together with (<ref>) they form a 3D invariant subspace.
The Hamiltonian in the reduced subspace is given by the following 3x3 matrix
H = -1/N[ S √(R S) √(S(N-M)); √(R S) R √(R(N-M)); √(S(N-M)) √(R(N-M)) -M ]
The energy spectrum of (<ref>) is found to be
E_0 = 0, E_± = ±√(M/N),
and the corresponding eigenvectors are
|0⟩ = 1/√(S+R)(√(R)| S⟩ - √(S)| R⟩),
|±⟩ = √((1- E_±)/(2M))(√(S)| S⟩ + √(R)| R⟩) ∓√((1+ E_±)/2)|g⟩.
Given the initial state |ψ(0)⟩ = | S⟩, the state of the walk at time t reads
|ψ(t)⟩ = e^-i H t| S⟩
= √(R/M)|0⟩ + e^-i E_+ t√(S(1-E_+)/(2M))|+⟩ + e^i E_+ t√(S(1+E_+)/(2M))|-⟩.
Overlap with the target state | R⟩ then equals
⟨ R|ψ(t)⟩ = √(RS)/M[1-cos(E_+ t) + i E_+ sin(E_+ t)].
Fidelity of state transfer is then given by
F(t) = |⟨ R|ψ(t)⟩|^2
= 4 RS/M^2 sin^4(E_+ t/2) + RS/(NM) sin^2(E_+ t).
Hence, the maximal fidelity
F_max = 4RS/(R+S)^2,
is reached at time T given by
T = π/E_+ = √(N/(R+S)) π.
We find that state transfer with high fidelity is possible when R≈ S, see Figure <ref> for a visualization. Unit fidelity corresponding to perfect state transfer is achieved when the number of receivers and senders are equal. Run-time of the state transfer is twice of that for search for M=R+S vertices (<ref>). Indeed, in state transfer we evolve the system from the state localized on the sender vertices through the equal weight superposition of all vertices (<ref>) towards the state localized on the receiver vertices, effectively iterating the search dynamics twice. For S and R differing considerably the state transfer is suppressed. In particular, for S≫ R or R≫ S, maximal fidelity (<ref>) tends to zero.
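The behaviour described above is easy to check numerically. A short Python sketch evaluating the reduced 3x3 Hamiltonian above and the resulting transfer fidelity is given below; the values of N, S and R are arbitrary examples.

import numpy as np
from scipy.linalg import expm

def transfer_fidelity(N, S, R, t):
    """Fidelity of continuous-time transfer from S sender to R receiver hubs,
    computed in the reduced basis {|S>, |R>, |g>}."""
    M = S + R
    H = -(1.0 / N) * np.array([
        [S, np.sqrt(R * S), np.sqrt(S * (N - M))],
        [np.sqrt(R * S), R, np.sqrt(R * (N - M))],
        [np.sqrt(S * (N - M)), np.sqrt(R * (N - M)), -M]])
    psi0 = np.array([1.0, 0.0, 0.0])          # walker starts on the sender hubs
    psi_t = expm(-1j * H * t) @ psi0
    return np.abs(psi_t[1])**2                # overlap with the receiver state

N = 1000
print(transfer_fidelity(N, 3, 3, np.pi * np.sqrt(N / 6)))   # close to 1 for S = R
print(transfer_fidelity(N, 5, 1, np.pi * np.sqrt(N / 6)))   # suppressed, ~ 4*5/36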
§ DISCRETE-TIME QUANTUM WALK SEARCH FOR HUBS
We turn to the discrete-time quantum walk search for M marked hubs labeled by indices from a set M. Let us start with the Hilbert space. Given a graph G=(V,E) the corresponding Hilbert space H_G can be decomposed as a direct sum of local Hilbert spaces H_v at each vertex v∈ V. Each H_v is spanned by vectors |v,w⟩ such that there is an edge between vertex v and w, and an additional state |v,v⟩ corresponding to the loop at the vertex v. The dimension of the local space H_v is thus equal to d_v + 1, where d_v is the degree of the vertex v in the simple graph G.
The unitary evolution operator of the walk, Û = ŜĈ, is a product of the shift operator Ŝ and the coin operator Ĉ. We consider the flip-flop shift, which is defined in the following way
Ŝ| v,w⟩ = | w,v⟩, Ŝ|v,v⟩ = |v,v⟩ .
The coin operator is given by a direct sum
Ĉ = ⊕_v Ĉ_v,
of local unitaries Ĉ_v acting on vertex spaces H_v. As the local coins at non-marked vertices we consider the Grover operator with a weighted loop <cit.>. The original Grover operator <cit.> is invariant under all permutations. Hence, when used as a coin in the discrete time quantum walk, it has the advantage that it does not matter how we order the basis states in the local Hilbert space H_v <cit.>. Adding the weighted loop adds a parameter, which can be tuned to improve the success probability of search <cit.>. We can write the Grover operator with a weighted loop as <cit.>
Ĝ_v(l_v)= 2|Ω_v(l_v)⟩⟨Ω_v(l_v)| -Î_v ,
where Î_v is an identity operator at the subspace H_v and |Ω_v(l_v)⟩ is given by
|Ω_v(l_v)⟩=1/√(d_v+l_v)(√(d_v)|Ω_v⟩ + √(l_v)|v,v⟩) .
By |Ω_v⟩ we have denoted the equal weight superposition of all outgoing arcs from the vertex v
|Ω_v⟩=1/√(d_v)∑_w:{ v,w}∈ E| v,w⟩.
The free parameter l_v sets the weight of the loop state at the vertex v. In correspondence to the continuous-time case <cit.> we tune it according to the degree of the vertex and set l_v=N-d_v. On the marked hubs m∈ M this results in l_m = 1. In addition, we multiply the coin on the marked hubs by -1, which is analogous to the marking of the solutions in Grover algorithm <cit.>. Moreover, one can show that the phase shift of π is the optimal choice, as it results in large gap of the avoided crossing which is inversely proportional to the run-time of the search <cit.>. Hence, choosing different phase shift results in slower algorithm with lower success probability. The evolution operator of the search is then given by
Û_ M = Ŝ (⊕_v∉ M(Ĝ_v(N-d_v))⊕_m∈ M (-Ĝ_m(1))) .
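For concreteness, a minimal sketch of the local coin construction is given below; the graph size and vertex degree are arbitrary examples.

import numpy as np

def grover_coin_with_loop(d, l):
    """Local coin G_v(l) = 2|Omega_v(l)><Omega_v(l)| - I on the (d+1)-dimensional vertex
    space; basis ordering: the d outgoing arcs first, the loop state last."""
    omega = np.append(np.ones(d), np.sqrt(l)) / np.sqrt(d + l)
    return 2.0 * np.outer(omega, omega) - np.eye(d + 1)

# a vertex of degree d in a graph of N vertices carries the loop weight l_v = N - d_v
N, d = 100, 37
C = grover_coin_with_loop(d, N - d)
print(np.allclose(C @ C.T, np.eye(d + 1)))   # True: the coin is a real unitary reflection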
The usual choice of the initial state for the search algorithm is the equal weight superposition of all arcs
|Ω̃⟩ = 1/√(∑_v∈ V d_v)∑_v∈ V√(d_v)|Ω_v⟩ .
However, with this initial state the search does not perform well, as can be observed from numerical simulations. In addition, we do not find a simple closed invariant subspace of Û_ M which involves (<ref>). The reason for that is that |Ω̃⟩ has the same overlap with every arc, but not with every vertex of the graph, since the graph is not expected to be regular. Therefore, state (<ref>) is not an eigenstate of the unperturbed walk operator (i.e. without marking of hubs by a π phase shift), which is given by
Û = Ŝ(⊕_v∈ V(Ĝ_v(N-d_v)) ).
Such eigenstate reads
|Ω⟩ = 1/√(N)∑_v∈ V|Ω_v(N-d_v)⟩,
which has the same overlap of 1/√(N) with all vertex subspaces ℋ_v. One can easily check that it satisfies Û|Ω⟩ = |Ω⟩. For this initial state we find that search for hubs behaves analogously as search on the complete graph with loops <cit.>. Let us construct the invariant subspace <cit.> of the evolution operator (<ref>). First, we include states where the walker is localized on the marked hubs
|ν_1⟩ = 1/√(M)∑_m∈ M|m,m⟩ ,
|ν_2⟩ = 1/√(M(M-1))∑_m≠ m'∈ M|m,m'⟩,
|ν_3⟩ = 1/√(M(N-M))∑_m∈ M∑_v∉ M|m,v⟩,
corresponding to superposition of loops (|ν_1⟩), arcs of the edges connecting different marked hubs (|ν_2⟩), and arcs leaving marked hubs to the rest of the graph (|ν_3⟩). Since the evolution operator (<ref>) treats all marked hubs equally we can group them together and form the superpositions of the corresponding internal states. Note that the flip-flop shift operator (<ref>) leaves the states |ν_1⟩ and |ν_2⟩ unchanged, however, it maps the vector |ν_3⟩ onto
|ν_4⟩ = Ŝ|ν_3⟩ = 1/√(M(N-M))∑_m∈ M∑_v∉ M|v,m⟩,
where the walker is localized on the non-marked vertices in the arcs heading towards the marked ones. We add this state as the fourth vector of the invariant subspace I. To complete the invariant subspace we consider the superposition of loops and arcs between non-marked vertices
|ν_5⟩ = 1/(N-M)∑_v∉ M(√(N)|Ω_v(N-d_v)⟩-∑_m∈ M|v,m⟩).
These 5 vectors are required to decompose the initial state of the search (<ref>) according to
|Ω⟩ = 1/N(√(M)|ν_1⟩+√(M(M-1))|ν_2⟩ + √(M(N-M))(|ν_3⟩ + |ν_4⟩) + (N-M)|ν_5⟩).
We note that in search for a single marked hub (M=1) the state |ν_2⟩ vanishes and the invariant subspace I is only 4-dimensional.
The action of the search evolution operator (<ref>) on the states |ν_j⟩ is described in the Appendix <ref>, see Eq. (<ref>). We find that the spectrum of Û_ M reduced to the invariant subspace is composed of ± 1 and pair conjugate eigenvalues λ_±=e^± iω where the phase is given by
ω = arccos(1-2M/N).
For M>1 eigenvalue 1 is doubly degenerate. We denote the corresponding eigenvectors as |(1)_1,2⟩, |-1⟩ and |±ω⟩, their explicit forms are given in the formulas (<ref>), (<ref>) and (<ref>.)
Decomposition of the initial state (<ref>) into the eigenbasis of Û_ M is given by
|Ω⟩ = - √((N-M)/(2N))|(1)_1⟩ + √(M/(2N))|-1⟩ + 1/2(|+ω⟩ + |-ω⟩).
Evolution of the walk after t steps then can be written as
|ψ(t)⟩ = - √((N-M)/(2N))|(1)_1⟩ + (-1)^t√(M/(2N))|-1⟩ +
+ 1/2(e^iω t|+ω⟩ + e^-iω t|-ω⟩).
The total probability to find one of the marked hubs after t steps can be written as a sum of transition probabilities from |ψ(t)⟩ into the basis states |ν_i⟩, i = 1,2,3
P(t) = ∑_i=1^3 P_i(t) = ∑_i=1^3 |⟨ν_i|ψ(t)⟩|^2 .
We find that the corresponding probability amplitudes read
⟨ν_1|ψ(t)⟩ = 1/2√(M)(cos(ω t) - 1 + (1+(-1)^t)M/N),
⟨ν_2|ψ(t)⟩ = √(M-1)⟨ν_1|ψ(t)⟩ ,
⟨ν_3|ψ(t)⟩ = 1/2sin(ω t) + (1+(-1)^t)√(M(N-M))/2N .
Considering first the case N≫ M, we can neglect the rapidly oscillating terms and obtain the transition probabilities
P_1(t) ≈ 1/Msin^4(ω t/2),
P_2(t) ≈ M-1/Msin^4(ω t/2) ,
P_3(t) ≈ 1/4sin^2(ω t) .
The total success probability is then equal to
P(t) ≈sin^4(ω t/2) + 1/4sin^2(ω t),
which is close to unity provided that we choose the number of steps t as the closest integer to
T = π/ω≈π/2√(N/M)+O(√(M/N)) .
For N≫ M the maximum success probability comes from P_1(T) and P_2(T), while P_3(T) vanishes. When M=1 we find the particle in the loop at the unique marked vertex, since there is no |ν_2⟩ state and P_2(t) equals zero. As the number of marked hubs grows the contribution of P_2(t) increases, so we are more likely to find the walker on an edge connecting two marked hubs.
When the number of marked hubs is non-negligible relative to the total number of vertices the approximations (<ref>) are no longer valid. While P_1(t) vanishes the contribution of P_3(t) to the total success probability becomes more important. One can show that for all values of N and M<N the total success probability of search evolves with the number of steps according to
P(2t) = P(2t+1) = sin^2(ω (2t+1)/2).
Indeed, as was shown in <cit.> two steps of the quantum walk search on a complete graph with loops are equivalent to a single iteration of the Grover algorithm. In conclusion, we find one of the marked hubs with probability close to one for the number of steps derived earlier (<ref>).
For comparison of analytical and numerical results see Fig. <ref>, where we display the course of the success probability on a graph with N = 100 vertices with different number of marked hubs. The upper plot shows the case of a single marked hub, i.e. M=1. In the middle plot we have considered M=3, and the lower plot shows the case M=15. The plot highlights how the contribution of different terms P_i(t) changes as the number of marked hubs increases. One can also notice that the total success probability changes only after each two steps, in accordance with (<ref>).
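The closed-form expressions above are summarized in the following short sketch, which evaluates the total success probability after each step and the optimal number of steps; the values of N and M are arbitrary examples.

import numpy as np

def search_success_probability(N, M, steps):
    """Total probability of finding a marked hub after each step, using the exact relation
    P(2t) = P(2t+1) = sin^2(omega (2t+1) / 2)."""
    omega = np.arccos(1.0 - 2.0 * M / N)
    s = np.arange(steps + 1)
    return np.sin(0.5 * omega * (2 * (s // 2) + 1))**2

N, M = 100, 3
omega = np.arccos(1.0 - 2.0 * M / N)
T = int(round(np.pi / omega))                 # optimal number of steps
P = search_success_probability(N, M, T + 2)
print(T, round(P[T], 4))                      # success probability close to one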
§ STATE TRANSFER BETWEEN TWO HUBS
Let us now turn to the state transfer between two hubs, sender and receiver vertices s and r. We consider the same local coins for the marked and non-marked vertices as for the search. The evolution operator for the state transfer then has the form
Û_s,r = Ŝ (⊕_v≠ s,r(Ĝ_v(N-d_v)) ⊕ (-Ĝ_s(1)) ⊕ (-Ĝ_r(1))) .
As for the search, the choice of the loop weights allows to map the problem of state transfer between hubs on otherwise arbitrary graph to state transfer on complete graph with loops. We have investigated this model earlier <cit.>, however, only a single initial state was considered. In the present paper we extend this investigation and show that it is possible to transfer two orthogonal states, namely the loop and the equal weight superposition of all outgoing arcs, in the same run-time. Hence, one can transfer a state of a qubit from one hub to another. In fact, we show that if the sender and the receiver knows their locations, then they can exchange another linearly independent state, allowing for exchange of a qutrit.
First, we perform the dimensional reduction, i.e. determine the invariant subspace I of the walk evolution operator Û_s,r containing both pairs |s,s⟩, |r,r⟩ and |Ω_s⟩, |Ω_r⟩. Construction of the invariant subspace follows the same logic as for the search problem in the previous Section. We begin with the loops on the marked vertices and the connecting arcs
|ν_1⟩ = |s,s⟩ , |ν_2⟩ = |r,r⟩,
|ν_3⟩ = |s,r⟩ , |ν_4⟩ = |r,s⟩.
Next, we add states connecting sender and receiver with the rest of the graph
|ν_5⟩ = 1/√(N-2)∑_v≠ s,r|s,v⟩ , |ν_6⟩ = 1/√(N-2)∑_v≠ s,r|r,v⟩,
|ν_7⟩ = 1/√(N-2)∑_v≠ s,r|v,s⟩, |ν_8⟩ = 1/√(N-2)∑_v≠ s,r|v,r⟩.
To complete the invariant subspace I we consider vectors corresponding to loops and arcs between non-marked vertices
|ν_9⟩ = 1/(N-2)∑_v≠ s,r(√(N)|Ω_v(N-d_v)⟩ - |v,r⟩-|v,s⟩) .
Note that the equal weight superpositions can be expressed in the form
|Ω_s⟩ = 1/√(N-1)|ν_3⟩ + √(N-2/N-1)|ν_5⟩,
|Ω_r⟩ = 1/√(N-1)|ν_4⟩ + √(N-2/N-1)|ν_6⟩,
which tends to |ν_5⟩ and |ν_6⟩ for large graph size N. One can show by direct calculation that the 9-dimensional space is closed under action of Û_s,r. We note that in <cit.> we considered state transfer on the complete graph with loops from the state corresponding to the equal weight superposition of the loop and arcs leaving the sender vertex, which in the present notation reads
|ψ⟩ = 1/√(N)(|ν_1⟩ + |ν_3⟩ + √(N-2)|ν_5⟩) .
For this particular initial state it was sufficient to consider a smaller 5 dimensional invariant subspace <cit.>.
We provide the detailed investigation of the evolution operator (<ref>) in the invariant subspace in Appendix <ref>. We find that the spectrum of the reduced operator consists of ± 1 and three conjugated pairs λ_j^(±) = e^± i ω_j where the phases ω_j are given by
ω_1 = arccos(1-4/N),
ω_2 = arccos(√(1-2/N)) = ω_1/2,
ω_3 = arccos(-√(1-2/N)) = π - ω_2 .
Eigenvalue 1 is doubly degenerate. The corresponding eigenstates are denoted by |(1)_1,2⟩, |-1⟩ and |±ω_j⟩, j=1,2,3. Their explicit forms are given by equations (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). We proceed with the calculation of the time evolution of the walk for different choices of the initial and target states.
§.§ Transfer of equal weight superposition states
Let us first consider the initial state as the equal superposition of all outgoing arcs at the sender vertex (<ref>). As follows from (<ref>), for large graph the initial and the target states, |Ω_s⟩ and |Ω_r⟩, can be approximated by
|Ω_s⟩ ≈ |ν_5⟩ , |Ω_r⟩≈|ν_6⟩ .
In terms of the eigenbasis of Û_s,r the vectors |Ω_s⟩ and |Ω_r⟩ can be estimated by
|Ω_s⟩ ≈ 1/2√(2)(√(2)|-1⟩ + i(|+ω_1⟩-|-ω_1⟩) + i(|+ω_2⟩ - |-ω_2⟩) - i(|+ω_3⟩ - |-ω_3⟩)),
|Ω_r⟩ ≈ 1/2√(2)(√(2)|-1⟩ + i(|+ω_1⟩-|-ω_1⟩) - i (|+ω_2⟩ - |-ω_2⟩) + i (|+ω_3⟩ - |-ω_3⟩)),
where we have utilized the approximations of eigenstates (<ref>) and (<ref>) for large N. The state of the walk after t steps is then given by
|ϕ_Ω(t)⟩ = Û_s,r^t|Ω_s⟩ = (-1)^t/2|-1⟩ +
+ i/2√(2)(e^2iω_2 t|+ω_1⟩ - e^- 2iω_2 t|-ω_1⟩) +
+ i/2√(2)(e^ iω_2 t|+ω_2⟩ - e^- iω_2 t|-ω_2⟩) -
- i(-1)^t/2√(2)(e^ -iω_2 t|+ω_3⟩ - e^ iω_2 t|-ω_3⟩).
For the amplitude of state transfer to |Ω_r⟩ we obtain
⟨Ω_r|ϕ_Ω (t)⟩ = 1/4[(-1)^t+cos(2ω_2 t)-cos(ω_2 t)(1+(-1)^t)].
We see that there are rapid changes between odd and even steps. For odd number of steps 2t+1 the amplitude turns into
⟨Ω_r|ϕ_Ω (2t+1)⟩ = -1/2sin^2(ω_2 (2t+1)),
so the fidelity can reach 1/4 at most. For even number of steps 2t the amplitude (<ref>) equals
⟨Ω_r|ϕ_Ω (2t)⟩ =
-cos(2ω_2 t)sin^2(ω_2 t).
Hence, for the closest even integer to
T = 2t ≈π/ω_2≈π√(N/2)+O(1/√(N)) .
the amplitude (<ref>) is close to one, i. e. we obtain state transfer from |Ω_s⟩ to |Ω_r⟩ with high fidelity. We note that (<ref>) results in the same fidelity as for the initial state (<ref>) studied in <cit.> (see eq. (17) of the aforementioned paper).
§.§ Transfer of loop states
In the case of transfer of the loop states the initial and target states are
|s,s⟩ = |ν_1⟩ , |r,r⟩ = |ν_2⟩ .
In the eigenvector basis we find that the for large N the loop states can be approximated by
|s,s⟩ ≈ 1/2|(1)_1⟩ + 1/2√(2)|(1)_2⟩ + 1/4(|+ω_1⟩+|-ω_1⟩)+
+ 1/2(|+ω_2⟩ + |-ω_2⟩),
|r,r⟩ ≈ 1/2|(1)_1⟩ + 1/2√(2)|(1)_2⟩ + 1/4(|+ω_1⟩+|-ω_1⟩) -
-1/2(|+ω_2⟩ + |-ω_2⟩).
The state of the walk after 2t steps equals
|ϕ_l(2t)⟩ = Û_s,r^2t|s,s⟩
= 1/2|(1)_1⟩+1/2√(2)|(1)_2⟩+
+ 1/4(e^4 i ω_2 t|+ω_1⟩+e^- 4 i ω_2 t|-ω_1⟩)+
+ 1/2(e^ 2i ω_2 t|+ω_2⟩ + e^- 2 iω_2 t|-ω_2⟩).
The scalar product of |ϕ_l(2t)⟩ and the target state is then given by
⟨r,r|ϕ_l(2t)⟩ = sin^4(ω_2 t).
Hence, we achieve state transfer with high fidelity in the same number of steps as for the equal weight superposition states (<ref>).
For comparison of state transfer of loops and equal weight superposition states see Fig. <ref>.
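The two transfer amplitudes derived above can be evaluated directly, as in the following sketch; the graph size is an arbitrary example.

import numpy as np

def transfer_probabilities(N, steps):
    """Transfer probabilities after an even number of steps for the loop state and for the
    equal weight superposition state, from the closed-form reduced dynamics."""
    w2 = np.arccos(np.sqrt(1.0 - 2.0 / N))
    t = steps // 2                                           # number of double steps
    p_loop = np.sin(w2 * t)**8                               # |<r,r|phi_l(2t)>|^2
    p_omega = (np.cos(2.0 * w2 * t) * np.sin(w2 * t)**2)**2  # |<Omega_r|phi_Omega(2t)>|^2
    return p_loop, p_omega

N = 1000
w2 = np.arccos(np.sqrt(1.0 - 2.0 / N))
T = 2 * int(round(0.5 * np.pi / w2))       # closest even integer to pi / omega_2
print(T, transfer_probabilities(N, T))     # both probabilities close to one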
§.§ Transfer of qubit state and finite-size effects of the graph
Our results show that we can achieve state transfer of both |s,s⟩ to |r,r⟩ and |Ω_s⟩ to |Ω_r⟩ in the same run-time. Moreover, we can perform a state transfer of any superposition of these two orthonormal states. Hence, it is possible to transfer an arbitrary state of a qubit between two hubs in time given by the closest even integer to (<ref>). Consider a general qubit state at the sender encoded as
|ψ_s⟩ = a |s,s⟩ + b|Ω_s⟩≡ a|0⟩ + b |1⟩.
The target qubit state at receiver vertex is
|ψ_r⟩ = a |r,r⟩ + b|Ω_r⟩.
Time evolution of the initial state (<ref>) after 2t steps is given by
|ψ(2t)⟩ = a|ϕ_l(2t)⟩ + b |ϕ_Ω(2t)⟩,
where |ϕ_l(2t)⟩ and |ϕ_Ω(2t)⟩ are given by (<ref>) and (<ref>). Amplitude of state transfer from (<ref>) to (<ref>) then reads
⟨ψ_r|ψ(2t)⟩ = |a|^2⟨r,r|ϕ_l(2t)⟩ + |b|^2⟨Ω_r|ϕ_Ω(2t)⟩ +
+ a b^*⟨Ω_r|ϕ_l(2t)⟩ + a^* b ⟨r,r|ϕ_Ω(2t)⟩,
where ⟨Ω_r|ϕ_Ω(2t)⟩ and ⟨r,r|ϕ_l(2t)⟩ were determined in (<ref>) and (<ref>). The remaining transition amplitudes are found to be
⟨Ω_r|ϕ_l(2t)⟩ = 1/√(2)sin^2(ω_2 t)sin(2ω_2 t) ,
⟨r,r|ϕ_Ω(2t)⟩ = - ⟨Ω_r|ϕ_l(2t)⟩,
which both vanish when the number of steps T is taken as the closest even integer to (<ref>). Since ⟨Ω_r|ϕ_Ω(T)⟩ and ⟨r,r|ϕ_l(T)⟩ both tend to one for a large graph, we can transfer an arbitrary qubit state from the sender to the receiver vertex.
We note that our results were derived utilizing the approximation of eigenstates (<ref>), (<ref>) for a large graph, which neglects the O(1/√(N)) terms in the eigenvectors. For smaller values of N these contributions might be non-negligible and affect the fidelity of state transfer. In addition, the influence can be state dependent. To elucidate the finite-size effects we numerically investigated the fidelity of state transfer of a qubit state
|ψ⟩ = ρ|0⟩ + e^iφ√(1-ρ^2)|1⟩,
on a graph with 10 vertices. The results are displayed in the upper plot of Fig. <ref>. For both parameters ρ and φ we split the corresponding interval range into 100 elements and evaluate the fidelity at the number of steps given by the closest even integer to (<ref>). The plot indicates that the worst fidelity is obtained for the state |1⟩, i.e. |Ω_s⟩. On a graph of 10 vertices the fidelity ranges between 0.82 and 0.9.
In the lower plot of Fig. <ref> we show the fidelity of state transfer for the basis states as a function of the graph size N. The plot is on the log-log scale to unravel the power-law behaviour F(T) = 1- O(1/N). Note that the rapid drops and increases in the figure are due to the discreteness of the optimal time T, which has to be chosen as the closest even integer to (<ref>). The plot indicates that changing the qubit state to be transferred alters the pre-factor but not the power law dependence.
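The finite-size behaviour can also be checked by brute force on the complete graph with loops, to which the hub problem reduces for the chosen loop weights. A sketch of such a simulation is given below; the graph sizes and the qubit amplitudes are illustrative.

import numpy as np

def walk_operator(N, s, r):
    """Evolution operator U = S.C of the complete graph with loops; basis states are
    ordered pairs (v, w), i.e. the arc from v to w, with (v, v) the loop at v."""
    dim = N * N
    idx = lambda v, w: v * N + w
    # local coin: Grover coin with loop weight 1 at every vertex (d_v = N - 1),
    # multiplied by -1 at the sender s and the receiver r
    omega = np.ones(N) / np.sqrt(N)
    G = 2.0 * np.outer(omega, omega) - np.eye(N)
    C = np.zeros((dim, dim))
    for v in range(N):
        cols = [idx(v, w) for w in range(N)]
        C[np.ix_(cols, cols)] = -G if v in (s, r) else G
    # flip-flop shift: |v,w> -> |w,v>, loops invariant
    S = np.zeros((dim, dim))
    for v in range(N):
        for w in range(N):
            S[idx(w, v), idx(v, w)] = 1.0
    return S @ C

def qubit_state(N, v, a, b):
    """State a|v,v> + b|Omega_v> localized at the vertex v."""
    idx = lambda x, w: x * N + w
    psi = np.zeros(N * N)
    psi[idx(v, v)] = a
    for w in range(N):
        if w != v:
            psi[idx(v, w)] = b / np.sqrt(N - 1)
    return psi

def qubit_transfer_fidelity(N, a, b):
    s, r = 0, 1
    U = walk_operator(N, s, r)
    w2 = np.arccos(np.sqrt(1.0 - 2.0 / N))
    T = 2 * int(round(0.5 * np.pi / w2))      # closest even integer to pi / omega_2
    psi = qubit_state(N, s, a, b)
    for _ in range(T):
        psi = U @ psi
    return np.abs(np.vdot(qubit_state(N, r, a, b), psi))**2

for N in (10, 20, 40):
    print(N, round(qubit_transfer_fidelity(N, 1.0 / np.sqrt(2), 1.0 / np.sqrt(2)), 3))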
§.§ Transfer of the connecting arcs
Let us now consider the situation when the sender and the receiver know their positions. In such a case, they can transfer the states corresponding to the arcs on the connecting edge, i.e. the state |ν_3⟩ will be mapped onto |ν_4⟩. We show that this is achieved in the same runtime as for the loop and the equal weight superposition.
The initial and the target states decomposed into the eigenvectors of Û_s,r have the following form
|ν_3⟩ ≈ -1/2|(1)_1⟩ + 1/2√(2)|(1)_2⟩ + 1/4(|+ω_1⟩ + |-ω_1⟩) +
+1/2 (|+ω_3⟩ + |-ω_3⟩),
|ν_4⟩ ≈ -1/2|(1)_1⟩ + 1/2√(2)|(1)_2⟩ + 1/4(|+ω_1⟩ + |-ω_1⟩) -
- 1/2 (|+ω_3⟩ + |-ω_3⟩).
In even steps 2t, the state of the walk is given by
|ϕ_sr(2t)⟩ = -1/2|(1)_1⟩ + 1/2√(2)|(1)_2⟩ +
+ 1/4(e^4i ω_2 t|+ω_1⟩ + e^-4i ω_2 t|-ω_1⟩) +
+ 1/2 (e^-2i ω_2 t|+ω_3⟩ + e^2i ω_2 t|-ω_3⟩).
For the amplitude of state transfer in even steps we find
⟨ν_4|ϕ_sr(2t)⟩ = sin^4(ω_2 t),
i.e., it behaves the same as for the transfer of loop states (<ref>). Hence, the arc from the sender to the receiver is transferred into its opposite in the runtime given by (<ref>).
We note that the vectors |ν_3,4⟩ and |Ω_s,r⟩ are linearly independent but not mutually orthogonal. Nevertheless, from (<ref>) we find
⟨ν_3|Ω_s⟩ = ⟨ν_4|Ω_r⟩ = 1/√(N-1)
so the overlap vanishes with increasing size of the graph N. Within the approximations we have used in our derivations, the states |s,s⟩ = |ν_1⟩, |Ω_s⟩≈|ν_5⟩ and |ν_3⟩ are orthogonal. Their superposition represent a qutrit state, which can be transferred from the sender to the receiver vertex, provided that the two communicating parties know which edge connects them. Note that |ν_1⟩, |ν_3⟩ and |ν_5⟩ are the only basis states which are localized on the sender vertex. Hence, in this model we cannot transfer a higher dimensional qudit state with d≥ 4.
We have also tested numerically the effect of finite graph size on the fidelity of state transfer of the connecting arcs. The simulation reveals that for every N the fidelity is exactly equal to the one for the state transfer of the loop state, which is depicted in the lower plot of Figure <ref> with the blue triangles.
§ STATE TRANSFER BETWEEN MULTIPLE SENDER AND RECEIVER HUBS
We now expand the investigation of the previous Section by considering S>1 senders and R>1 receivers from sets S and R, respectively, with S+R=M. We consider the same dynamics as for the search and state transfer between two marked vertices, i.e. the local loop weights are chosen as l_v = N-d_v and on the marked hubs the coin is multiplied by -1. Having multiple senders and receivers leads to the following modification of the invariant subspace we have constructed in the previous Section for S=R=1. First, states (<ref>) corresponding to the loops and connecting arcs between senders and receivers are adjusted according to
|ν_1⟩ = 1/√(S)∑_s∈ S|s,s⟩,
|ν_2⟩ = 1/√(R)∑_r∈ R|r,r⟩
|ν_3⟩ = 1/√(RS)∑_s∈ S∑_r∈ R|s,r⟩ ,
|ν_4⟩ = 1/√(RS)∑_s∈ S∑_r∈ R|r,s⟩ ,
where we have utilized the fact that the evolution operator treats all senders and receivers equally and we can group them together. Next, states connecting sender and receiver vertices to the rest of the graph (<ref>) are changed into
|ν_5⟩ = 1/√(S(N-M))∑_s∈ S∑_v ∉ S∪ R|s,v⟩ ,
|ν_6⟩ = 1/√(R(N-M))∑_r∈ R∑_v ∉ S∪ R|r,v⟩,
|ν_7⟩ = 1/√(S(N-M))∑_s∈ S∑_v∉ S∪ R|v,s⟩ ,
|ν_8⟩ = 1/√(R(N-M))∑_r∈ R∑_v ∉ S∪ R|v,r⟩ .
The state (<ref>) corresponding to the loops and arc between non-marked vertices is modified as follows
|ν_9⟩ = 1/(N-M)∑_v∉ S∪ R(√(N)|Ω_v(N-d_v)⟩ - ∑_s∈ S|v,s⟩ - ∑_r∈ R|v,r⟩).
To complete the invariant subspace we have to add two states describing the superpositions of arcs connecting sender hubs together, and the same for receiver hubs, i.e.
|ν_10⟩ = 1/√(S(S-1))∑_s ≠ s'∈ S|s,s'⟩,
|ν_11⟩ = 1/√(R(R-1))∑_r ≠ r'∈ R|r,r'⟩,
which are not present when we have only one sender and one receiver vertex.
We leave the detailed investigation of the evolution operator Û_ S, R
Û_ S, R = Ŝ (⊕_v∉ S∪ R(Ĝ_v(N-d_v)) ⊕_s∈ S (-Ĝ_s(1)) ⊕_r∈ R (-Ĝ_r(1)))
for Appendix <ref>. The spectrum of Û_ S, R has a similar form as for S=R=1, i.e. it consists of eigenvalues ± 1 and three complex conjugated pairs λ_j^(±) = e^± i ω_j, where the phases are given by
ω_1 = arccos(1-2(R+S)/N),
ω_2 = ω_1/2, ω_3 = π - ω_2 .
Eigenvalue 1 now has a degeneracy of 4. The corresponding eigenvectors are denoted by |(1)_m⟩ , m = 1,…,4, (see formulas (<ref>)), |-1⟩ (equation (<ref>)) and |±ω_j⟩, j=1,2,3. Eigenvectors |±ω_1⟩ are given by (<ref>), and approximations of |±ω_2⟩ and |±ω_3⟩ for large N≫ R,S can be found in the formula (<ref>).
To investigate state transfer we consider evolution from two initial states supported on the sender vertices corresponding to the loops (|ν_1⟩) and the equal weight superposition of outgoing arcs (|ν_5⟩). We analyze the transition amplitudes to the states supported on the receiver vertices (|ν_k⟩, k=2,4,6,11). As we will see, for S,R≠ 1 it is not possible to transfer specific quantum states from the senders to the corresponding states on the receiver vertices. Nevertheless, if we relax the requirements and focus solely on transfer of the quantum walker from senders to receiver vertices irrespective of the internal state, we find that this task can be achieved in certain regimes of S and R.
Let us begin with the superposition of loop states on the sender vertices |ν_1⟩. In this case the state of the walk after t steps reads
|ϕ_l(t)⟩ ≈ 1/M(R/√(S)|(1)_1⟩ + √(S/2)|(1)_2⟩ + M√(S-1/S)|(1)_3⟩ + √(2)/2[e^i 2ω_2 t|+ω_1⟩ + e^-i 2ω_2 t|-ω_1⟩] + √(R)[e^i ω_2 t|+ω_2⟩ + e^-i ω_2 t|-ω_2⟩]) ,
within the approximations of (<ref>) and (<ref>) for N≫ R,S. Transition amplitudes to the states at the receiver vertices are given by
⟨ν_2|ϕ_l(t)⟩ = 4√(RS)/M^2sin^4(ω_2 t/2),
⟨ν_4|ϕ_l(t)⟩ = -2√(R)/M^2 (R + Scos(ω_2 t))sin^2(ω_2 t/2),
⟨ν_6|ϕ_l(t)⟩ = -2√(RS/M^3)sin(ω_2 t) sin^2(ω_2 t/2),
⟨ν_11|ϕ_l(t)⟩ = 4√(RS(R-1))/M^2sin^4(ω_2 t/2).
We see that all amplitudes are vanishing with increasing number of sender and receiver vertices. The total probability of transfer to an arbitrary loop or arc originating from one of the receiver vertices is given by
P_l(t) = ∑_k=2,4,6,11 |⟨ν_k|ϕ_l(t)⟩|^2 = 4R/(R+S)^2sin^4(ω_2 t/2) .
Hence, from the loop state |ν_1⟩ one can achieve unit transfer probability only in the case S=R=1 treated in the previous Section. Notice the asymmetry - for R≫ S the maximal transfer probability decreases as R^-1, while for S≫ R the decrease is faster and follows S^-2.
Concerning the superposition of all outgoing arcs from all senders
|Ω_ S⟩ = √(RS/N-1)|ν_3⟩ + √(S(N-M)/N-1)|ν_5⟩,
we focus on a large graph with N≫ R,S and approximate this initial state as |ν_5⟩. After t steps the walk is described by the following vector
|ϕ_Ω(t)⟩ ≈ 1/2√(R+S)((-1)^t √(2S)|-1⟩ + i √(S)[e^2iω_2 t|+ω_1⟩ - e^-2iω_2 t|-ω_1⟩] + i√(R)[e^iω_2 t|+ω_2⟩ - e^-iω_2 t|-ω_2⟩] - i(-1)^t√(R)[e^-iω_2 t|+ω_3⟩ - e^iω_2 t|-ω_3⟩]) .
Turning to the transition amplitudes, we investigate odd and even steps separately. For even number of steps 2t we obtain
⟨ν_2|ϕ_Ω(2t)⟩ = -⟨ν_6|ϕ_l(2t)⟩ ,
⟨ν_4|ϕ_Ω(2t)⟩ = √(S)⟨ν_2|ϕ_Ω(2t)⟩ ,
⟨ν_6|ϕ_Ω(2t)⟩ = -2√(RS)/Mcos(2ω_2 t)sin^2(ω_2 t),
⟨ν_11|ϕ_Ω(2t)⟩ = √(R-1)⟨ν_2|ϕ_Ω(2t)⟩ .
The total transfer probability in even steps is given by
P_Ω(2t) = 4RS/(R+S)^2sin^4(ω_2 t),
which reaches the maximum
P_max^(even) = 4RS/(R+S)^2,
for number of steps given by the closest even integer to
T = 2t ≈π/ω_2≈π√(N/R+S).
We point out that the result (<ref>) coincides with the one for the continuous time quantum walk (<ref>).
Considering odd steps, the amplitudes at time 2t+1 are equal to
⟨ν_2|ϕ_Ω(2t+1)⟩ = 2√(RS/M^3)sin(ω_2 (2t+1)) ×
×sin^2(ω_2 (2t+1)/2) ,
⟨ν_4|ϕ_Ω(2t+1)⟩ = -√(R/M^3)[R + S cos(ω_2(2t+1))] ×
×sin(ω_2 (2t+1)) ,
⟨ν_6|ϕ_Ω(2t+1)⟩ = -√(RS)/Msin^2(ω_2 (2t+1))
⟨ν_11|ϕ_Ω(2t+1)⟩ = √(R-1)⟨ν_2|ϕ_Ω(2t+1)⟩ .
The total transfer probability in odd steps is then given by
P_Ω(2t+1) = R/(R+S)sin^2(ω_2(2t+1)).
The maximum value
P_max^(odd) = R/(R+S),
is reached in a number of steps given by the closest odd integer to T/2.
The derived results show that the discrete time quantum walk allows the walker, initialized at the sender vertices in the state |Ω_ S⟩, to be transferred to the receiver vertices with high probability in two regimes. The first regime occurs for R≈ S, which is the same as for the continuous time quantum walk. Here the probability in even steps reaches close to one (<ref>) for T given by (<ref>). The second regime is given by R≫ S, which is absent for the continuous time case. In this case the transfer probability approaches one in odd steps (<ref>) in half the runtime. We illustrate our results in Figures <ref>, <ref> and <ref>.
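The two regimes can also be read off directly from the closed-form maxima derived above; the short sketch below (Python/NumPy) evaluates P_max^(even) = 4RS/(R+S)^2, P_max^(odd) = R/(R+S) and the even-step runtime for a few choices of R and S.

import numpy as np

def p_max_even(R, S):
    return 4.0 * R * S / (R + S) ** 2      # peak transfer probability in even steps

def p_max_odd(R, S):
    return R / (R + S)                     # peak transfer probability in odd steps

def runtime_even(N, R, S):
    # Closest even integer to T = pi * sqrt(N / (R + S)); the odd-step peak
    # occurs near T / 2.
    return int(2 * round(np.pi * np.sqrt(N / (R + S)) / 2.0))

for R, S in [(1, 1), (5, 5), (20, 1), (1, 20)]:
    print(R, S, round(p_max_even(R, S), 3), round(p_max_odd(R, S), 3), runtime_even(1000, R, S))

For R = S the even-step peak is close to one and the odd-step peak is 1/2, for R ≫ S the odd-step peak approaches one, and for S ≫ R both peaks are suppressed, in line with the discussion above.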
In Figure <ref> we show the maximal transfer probabilities in odd steps (<ref>) as a function of the number of senders S and the number of receivers R. For even steps, we refer to the continuous time case and Figure <ref>, since P_max^(even) (<ref>) and F_max (<ref>) are equal.
Figure <ref> displays the course of the transfer probability for the case of equal number of senders and receivers. We see that the transfer probability is close to 1 for even number of steps T given by (<ref>). In odd number of steps, the maximal value reached is 1/2 in approximately T/2 steps.
In Figure <ref> we consider transfer from a single sender vertex to multiple receivers. The two plots display the course of the transfer probability on a graph of N=1000 vertices with 3 receivers (upper plot) and 20 receivers (lower plot). We observe that for R=3 the peaks for even (<ref>) and odd (<ref>) steps reach almost the same maximal value close to 0.75. For R=20 the peak for even steps is suppressed considerably, while in odd number of steps the transfer probability reaches close to 1.
§ CONCLUSIONS
Search and state transfer between multiple hubs on otherwise arbitrary graphs were investigated in detail. We have shown that both continuous-time and discrete-time quantum walks can be utilized for this purpose, since the models can be mapped to the complete graph with loops for a proper choice of the loop weights. In the continuous-time case the state transfer is achieved with high fidelity when the number of senders S is close to the number of receivers R. The discrete-time walk has proven to be more versatile. Indeed, state transfer between multiple hubs is achieved also in the regime R≫ S, which is not possible in the continuous-time case. Moreover, when only a single sender and a single receiver are considered, the coin degree of freedom allows us to transfer multiple orthogonal states in the same run-time, expanding our earlier results <cit.>. We have shown that sender and receiver can always exchange an arbitrary qubit state, and in the case they know each other's position this can be extended to a qutrit. This is not possible in the continuous time model, since there the walker does not have an internal degree of freedom, only the position.
There are several directions for generalizing our results. Our aim is to investigate whether one can achieve state transfer from an ordinary vertex (i.e. not fully connected) to a hub. If the answer is positive, we can perform state transfer between arbitrary vertices through the hub by switching the marked coin from the sender to the receiver after the transfer to the hub is completed, while keeping the hub vertex marked throughout the whole duration. Another approach to state transfer on arbitrary graphs is to utilize the framework based on the ergodic reversible Markov chains developed for both discrete-time <cit.> and continuous-time models <cit.>. This method was used to prove that quantum spatial search is quadratically faster than the classical search for an arbitrary number of marked vertices. We aim to modify this approach to investigate state transfer between multiple vertices which are not fully connected, where the reduction to a complete graph as in this paper is not possible.
MŠ acknowledges the financial support from Czech Grant Agency project number GAČR 23-07169S. SS is grateful for financial support from SGS22/181/OHK4/3T/14.
§ EVOLUTION OPERATOR OF THE SEARCH
In this Appendix we treat the search evolution operator (<ref>) in the invariant subspace I spanned by vectors (<ref>), (<ref>) and (<ref>). The action of Û_ M on the basis states is given by
Û_ M|ν_1⟩ = (1-2/N) |ν_1⟩ - 2√(M-1)/N|ν_2⟩ -2√(N-M)/N|ν_4⟩ ,
Û_ M|ν_2⟩ = - 2√(M-1)/N|ν_1⟩ + (1 - 2M-1/N)|ν_2⟩ - 2√((M-1)(N-M))/N|ν_4⟩ ,
Û_ M|ν_3⟩ = -2√(N-M)/N|ν_1⟩ -2√((M-1)(N-M))/N|ν_2⟩ - (1-2M/N) |ν_4⟩ ,
Û_ M|ν_4⟩ = - (1-2M/N) |ν_3⟩ + 2√(M(N-M))/N|ν_5⟩ ,
Û_ M|ν_5⟩ = 2√(M(N-M))/N|ν_3⟩ + (1-2M/N) |ν_5⟩,
which shows that I is indeed invariant.
The spectrum of the reduced evolution operator in the invariant subspace consists of eigenvalues ± 1 and e^± i ω with the phase ω given in (<ref>). We find that eigenvalue 1 is doubly degenerate and the corresponding eigenvectors read
|(1)_1⟩ = 1/√(2 N M)(√(N-M)(|ν_1⟩ + √(M-1)|ν_2⟩) - M (|ν_3⟩ + |ν_4⟩) - √(M(N-M))|ν_5⟩),
|(1)_2⟩ = 1/√(M)(√(M-1)|ν_1⟩ - |ν_2⟩).
Eigenvector corresponding to -1 is given by
|-1⟩ = 1/√(2N)(|ν_1⟩ + √(M-1)|ν_2⟩ - √(M)|ν_5⟩ + √(N-M)(|ν_3⟩+|ν_4⟩) ).
For the conjugate pair λ_± we find the eigenvectors
|±ω⟩ = 1/2(1/√(M)|ν_1⟩ + √(M-1/M)|ν_2⟩ + |ν_5⟩∓ i(|ν_3⟩-|ν_4⟩)).
Note that for a single marked hub the state |ν_2⟩ vanishes. In that case we have only a 4-dimensional invariant subspace. Nevertheless, all results remain valid for M=1, except that eigenvalue 1 is simple as the state |(1)_2⟩ equals zero.
§ EVOLUTION OPERATOR OF STATE TRANSFER BETWEEN TWO HUBS
In this Appendix we investigate the evolution operator of the state transfer between two hubs (<ref>) in the invariant subspace spanned by vectors |ν_j⟩ given in equations (<ref>), (<ref>) and (<ref>). To simplify further calculations we utilize the invariance of Û_s,r under exchange of sender and receiver vertices <cit.>. This symmetry reduces the dimensionality of the problem, since we can decompose Û_s,r into a direct sum of lower-dimensional operators. Consider the swap operator P̂ which acts on the basis states |ν_j⟩ as
P̂|ν_1⟩ = |ν_2⟩, P̂|ν_3⟩ = |ν_4⟩, P̂|ν_5⟩ = |ν_6⟩,
P̂|ν_7⟩ = |ν_8⟩, P̂|ν_9⟩ = |ν_9⟩.
Clearly, P̂ commutes with Û_s,r and therefore they have common eigenvectors. Since the square of P̂ is the identity it has eigenvalues ± 1. Hence, it divides the invariant subspace I = I_+⊕ I_- into a direct sum of symmetric states I_+ and antisymmetric states I_-. We find that I_+ has dimension 5 and I_- has dimension 4. Subspace I_+ is spanned by vectors
|σ_1⟩ = 1/√(2)(|ν_1⟩+|ν_2⟩),
|σ_2⟩ = 1/√(2)(|ν_3⟩+|ν_4⟩),
|σ_3⟩ = 1/√(2)(|ν_5⟩+|ν_6⟩),
|σ_4⟩ = 1/√(2)(|ν_7⟩+|ν_8⟩),
|σ_5⟩ = |ν_9⟩
Let us denote the restriction of the evolution operator Û_s,r onto the symmetric subspace as Û_+. We find that it acts on the basis states |σ_j⟩ as follows
Û_+|σ_1⟩ = (1-2/N)|σ_1⟩-2/N|σ_2⟩-2√(N-2)/N|σ_4⟩,
Û_+|σ_2⟩ = -2/N|σ_1⟩+(1-2/N)|σ_2⟩-2√(N-2)/N|σ_4⟩,
Û_+|σ_3⟩ = -2√(N-2)/N|σ_1⟩-2√(N-2)/N|σ_2⟩ -
-(1-4/N)|σ_4⟩,
Û_+|σ_4⟩ = -(1-4/N) |σ_3⟩+2√(2)√(N-2)/N|σ_5⟩,
Û_+|σ_5⟩ = 2√(2)√(N-2)/N|σ_3⟩ + (1-4/N)|σ_5⟩ .
The spectrum of Û_+ is composed of eigenvalues ± 1 and a conjugate pair λ_1^(±)=e^± iω_1 with eigenphase given by (<ref>). Eigenvalue 1 has degeneracy 2, and the corresponding eigenvectors are given by
|(1)_1⟩ = 1/√(2)(|σ_1⟩-|σ_2⟩),
|(1)_2⟩ = 1/2√(N)(√(N-2)(|σ_1⟩+|σ_2⟩) - 2(|σ_3⟩+|σ_4⟩) - √(2)√(N-2)|σ_5⟩).
Eigenvector corresponding to -1 reads
|-1⟩ = 1/√(2N)(|σ_1⟩+|σ_2⟩+√(N-2)(|σ_3⟩+|σ_4⟩) - √(2)|σ_5⟩).
For large N the eigenvectors |(1)_2⟩ and |-1⟩ can be approximated by neglecting the O(1/√(N)) terms with
|(1)_2⟩ ≈ 1/2(|σ_1⟩+|σ_2⟩) - 1/√(2)|σ_5⟩ ,
|-1⟩ ≈ 1/√(2)(|σ_3⟩+|σ_4⟩).
Concerning the conjugated pair we find that the corresponding eigenvectors read
|±ω_1⟩ = 1/2(1/√(2)(|σ_1⟩ + |σ_2⟩) + |σ_5⟩± i(|σ_4⟩ - |σ_3⟩)).
Let us now move to the antisymmetric subspace I_-, which is spanned by vectors
|τ_1⟩ = 1/√(2)(|ν_1⟩-|ν_2⟩),
|τ_2⟩ = 1/√(2)(|ν_3⟩-|ν_4⟩),
|τ_3⟩ = 1/√(2)(|ν_5⟩-|ν_6⟩),
|τ_4⟩ = 1/√(2)(|ν_7⟩-|ν_8⟩).
We denote by Û_- the restriction of Û_s,r to I_-, which acts as follows
Û_-|τ_1⟩ = (1-2/N) |τ_1⟩ + 2/N|τ_2 ⟩ -2√(N-2)/N|τ_4⟩,
Û_-|τ_2⟩ = -2/N|τ_1⟩ -(1-2/N)|τ_2⟩-2√(N-2)/N|τ_4⟩,
Û_-|τ_3⟩ = -2√(N-2)/N|τ_1⟩+2√(N-2)/N|τ_2⟩ -
- (1-4/N)|τ_4⟩,
Û_-|τ_4⟩ = -|τ_3⟩.
We find that Û_- has two pairs of conjugated eigenvalues λ_2^(±)=e^± iω_2 and λ_3^(±)=e^± iω_3 with eigenphases given by equations (<ref>). The corresponding eigenvectors read
|±ω_2⟩ = cos(ω_2/2)/√(2)|τ_1⟩∓ i sin(ω_2/2)/√(2)|τ_2⟩±
±i/2(e^± i ω_2/2|τ_4⟩ - e^∓ i ω_2/2|τ_3⟩),
|±ω_3⟩ = ± i sin(ω_2/2)/√(2)|τ_1⟩ + cos(ω_2/2)/√(2)|τ_2⟩±
±i/2( e^± i ω_2/2|τ_3⟩ + e^∓ i ω_2/2|τ_4⟩).
For large N the phase ω_2 tends to zero as √(2/N). Omitting the O(1/√(N)) terms the eigenvectors can be approximated by
|±ω_2⟩ ≈ 1/√(2)|τ_1⟩±i/2(|τ_4⟩ - |τ_3⟩) ,
|±ω_3⟩ ≈ 1/√(2)|τ_2⟩±i/2(|τ_3⟩ + |τ_4⟩) .
§ EVOLUTION OPERATOR OF STATE TRANSFER BETWEEN MULTIPLE HUBS
In this Appendix we present the analysis of the evolution operator Û_ S, R for state transfer between multiple senders and receivers (<ref>). We find that Û_ S, R acts on the basis vectors of the invariant subspace (<ref>), (<ref>), (<ref>) and (<ref>) according to
Û_ S, R|ν_1⟩ = (1-2/N)|ν_1⟩ - 2√(R)/N|ν_4⟩ - 2 √(N-M)/N|ν_7⟩ - 2√(S-1)/N|ν_10⟩,
Û_ S, R|ν_2⟩ = (1-2/N) |ν_2⟩ -2√(S)/N|ν_3⟩ - 2 √(N-M)/N|ν_8⟩ - 2 √(R-1)/N|ν_11⟩,
Û_ S, R|ν_3⟩ = - 2√(R)/N|ν_1⟩ + (1-2R/N)|ν_4⟩ -2 √(R(N-M))/N|ν_7⟩ - 2 √(R(S-1))/N|ν_10⟩,
Û_ S, R|ν_4⟩ = -2√(S)/N|ν_2⟩ + (1-2/N)|ν_3⟩ - 2 √(S(N-M))/N|ν_8⟩ - 2 √(S(R-1))/N|ν_11⟩ ,
Û_ S, R|ν_5⟩ = -2 √(N-M)/N(|ν_1⟩ + √(R)|ν_4⟩ + √(S-1)|ν_10⟩) - (1-2M/N) |ν_7⟩ ,
Û_ S, R|ν_6⟩ = - 2 √(N-M)/N(|ν_2⟩ + √(S)|ν_3⟩ + √(R-1)|ν_11⟩) -(1-2M/N) |ν_8⟩,
Û_ S, R|ν_7⟩ = -(1-2S/N)|ν_5⟩ - 2√(RS)/N|ν_6⟩ - 2 √(S(N-M))/N|ν_9⟩,
Û_ S, R|ν_8⟩ = 2√(RS)/N|ν_5⟩ - (1-2R/N)|ν_6⟩ - 2 √(R(N-M))/N|ν_9⟩,
Û_ S, R|ν_9⟩ = 2 √(N-M)/N(√(S)|ν_5⟩ + √(R)|ν_6⟩) + (1-2M/N)|ν_9⟩,
Û_ S, R|ν_10⟩ = - 2 √(S-1)/N(|ν_1⟩ + √(R)|ν_4⟩ + √(N-M)|ν_7⟩) + (1-2(S-1)/N)|ν_10⟩,
Û_ S, R|ν_11⟩ = - 2 √(R-1)/N(|ν_2⟩ + √(S)|ν_3⟩ + √(N-M)|ν_8⟩) + (1-2(R-1)/N)|ν_11⟩ .
For S≠ R the splitting of the invariant subspace into the symmetric and antisymmetric subspaces is no longer possible. Nevertheless, the evolution operator Û_ S, R can be diagonalized analytically. The eigenvalues are again ± 1 and three conjugated pairs λ_j = e^± i ω_j, j=1,2,3, where the phases ω_j are given in (<ref>). Eigenvalue 1 has a degeneracy of 4, one can choose the basis in the degenerate subspace e.g. according to
|(1)_1⟩ = 1/M(R/√(S)|ν_1⟩ + S/√(R)|ν_2⟩ - √(RS)(|ν_3⟩
+ |ν_4⟩) + R √(S-1/S)|ν_10⟩ + S √(R-1/R)|ν_11⟩) ,
|(1)_2⟩ = 1/√(2N)M[√(N-M)(√(S)|ν_1⟩ + √(R)|ν_2⟩ + √(RS)(|ν_3⟩ + |ν_4⟩) - M|ν_9⟩ + √(S(S-1))|ν_10⟩ + √(R(R-1))|ν_11⟩) - M(√(S)(|ν_5⟩ + |ν_7⟩) + √(R) (|ν_6⟩ + |ν_8⟩))],
|(1)_3⟩ = 1/√(S) (√(S-1)|ν_1⟩ - |ν_10⟩),
|(1)_4⟩ = 1/√(R) (√(R-1)|ν_2⟩ - |ν_11⟩).
The eigenvector corresponding to -1 reads
|-1⟩ = 1/√(2NM)(√(S)|ν_1⟩ + √(R)|ν_2⟩ + √(RS)(|ν_3⟩ + |ν_4⟩) + √(N-M)[√(S)(|ν_5⟩ + |ν_7⟩) + √(R)(|ν_6⟩ + |ν_8⟩)] - M|ν_9⟩ + √(S(S-1))|ν_10⟩ + √(R(R-1))|ν_11⟩).
In the limit N≫ R,S we can approximate the eigenstates |(1)_2⟩ and |-1⟩ with
|(1)_2⟩ ≈ 1/√(2)M(√(S)|ν_1⟩ + √(R)|ν_2⟩ + √(RS)(|ν_3⟩ + |ν_4⟩) - M|ν_9⟩ + √(S(S-1))|ν_10⟩ + √(R(R-1))|ν_11⟩),
|-1⟩ ≈ 1/√(2M)[√(S)(|ν_5⟩ + |ν_7⟩) + √(R)(|ν_6⟩ + |ν_8⟩) ].
For the complex conjugated pair λ_1^(±) we find the following
|±ω_1⟩ = 1/2M(√(S)|ν_1⟩ + √(R)|ν_2⟩ + √(RS)(|ν_3⟩ + |ν_4⟩) + M|ν_9⟩ + √(S(S-1))|ν_10⟩ + √(R(R-1))|ν_11⟩ ± i√(M)[√(S)(|ν_7⟩ - |ν_5⟩) + √(R)(|ν_8⟩ - |ν_6⟩)]).
The exact form of the eigenvectors corresponding to λ_2,3^(±) is rather complicated, however, in the limit of N ≫ R,S they can be approximated by
|±ω_2⟩ ≈ 1/2M(2√(R)|ν_1⟩ - 2√(S)|ν_2⟩ + (R-S)(|ν_3⟩ + |ν_4⟩) + 2√(R(S-1))|ν_10⟩ - 2√(S(R-1))|ν_11⟩ ± i√(M)[√(R)(|ν_7⟩ - |ν_5⟩) + √(S)(|ν_6⟩ - |ν_8⟩)]),
|±ω_3⟩ ≈ 1/2M(M(|ν_3⟩ - |ν_4⟩) ± i √(M)[√(R)(|ν_5⟩ + |ν_7⟩) - √(S)(|ν_6⟩ + |ν_8⟩)]).
We point out that for S=R=1 the spectrum and the eigenvectors reduce to those derived in Appendix <ref>. Note that in this case the states |ν_10⟩ and |ν_11⟩ need to be eliminated, since there are no connecting edges when we have only a single sender and a single receiver.
|
http://arxiv.org/abs/2409.02439v1 | 20240904043033 | Pressure-Tunable Targets for Light Dark Matter Direct Detection: The Case of Solid Helium | [
"Omar A. Ashour",
"Sinéad M. Griffin"
] | hep-ph | [
"hep-ph",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"physics.ins-det"
] |
Department of Physics, University of California, Berkeley, California 94720, USA
Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
[email protected]
Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
§ ABSTRACT
We propose hydrostatic pressure—a well-established tool for tuning properties of condensed matter—as a novel route for optimizing targets for light dark matter direct detection, specifically via phonons. Pressure dramatically affects compressible solids by boosting the speed of sound and phonon frequencies. Focusing on helium—the most compressible solid—our ab initio calculations illustrate how high pressure elevates helium from lacking single-phonon reach to rivaling leading candidates. Our work establishes pressure as an unexplored tuning knob for accessing lower dark matter mass regimes.
Pressure-Tunable Targets for Light Dark Matter Direct Detection:
The Case of Solid Helium
Sinéad M. Griffin
September 9, 2024
==========================================================================================
Introduction
Proposals for next-generation dark matter (DM) searches are extending into lower mass ranges, driven by advances in detector sensitivities that allow probing low-energy excitations in condensed matter systems <cit.>. Traditional direct detection experiments based on nuclear recoils are ineffective in the sub-GeV DM mass range, as the kinetic energy deposition from the light DM to the heavier nuclei is minimal. This limitation has catalyzed research into low-energy excitations across various condensed matter systems including electronic transitions in semiconductors <cit.>, phonons in dielectrics <cit.> and superfluids <cit.>, as well as more exotic quantum materials such as superconductors <cit.>, Dirac semimetals <cit.>, multiferroics <cit.>, and topological axion insulators <cit.>. For instance, phonons are the key target in the in-development TESSERACT collaboration for low-mass DM detection <cit.>.
Phonons are a promising avenue for light (sub-GeV) DM detection due to their sensitivity to various DM models, and their kinematic and energetic match with DM masses as low as a few keV through scattering <cit.> and a few meV through absorption <cit.>. Notably, dark photon mediators couple to optical (gapped) phonons in polar semiconductors, such as Al_2O_3, with potential directional responses <cit.>. In contrast, hadrophilic scalar mediators are more strongly coupled to acoustic (gapless) phonons in both crystals and superfluids <cit.>. At low detector thresholds and small momentum transfer, single-phonon production is generally the dominant interaction in crystalline materials, whereas multiphonon processes dominate in superfluids, with considerable phase space suppression <cit.>. However, single phonons usually have energies 𝒪(10-100) meV in crystals and thus require low sensing thresholds.
Hydrostatic pressure has long been exploited for exploring new phases in condensed matter and tuning properties of materials otherwise inaccessible at ambient conditions. By directly altering interatomic distances and electronic interactions, pressure can dramatically change the electronic, structural, and vibrational properties of matter. For instance, pressure is critical for accessing high-temperature superconductivity in hydrides <cit.> and for the discovering of topological phases in elemental lithium <cit.> and tellurium <cit.>. Pressure is also used to control the performance of functional materials, with applications ranging from space-based photovoltaics <cit.> to optoelectronic devices <cit.>.
In this Letter, we explore the use of hydrostatic pressure as a new tuning knob for controlling the reach of compressible solid targets as DM detectors. Pressure typically enhances the speed of sound and phonon frequencies in solids, with this effect being especially pronounced in highly compressible materials. We demonstrate this concept with solid helium-4, selected for its especially high compressibility and existing interest in superfluid helium as a DM target <cit.>. At near-ambient pressures, solid helium-4 has limited single-phonon reach owing to its small phonon energies (≲ 3 meV <cit.>) and low speed of sound (∼ 10^-6 <cit.>). Our calculations show that by applying pressures up to 40 GPa, we can significantly enhance the speed of sound in helium by more than tenfold and phonon frequencies by nearly two orders of magnitude, dramatically improving its prospects for light DM detection. We perform density functional theory (DFT) calculations on two solid phases of helium-4, elucidating the effect of pressure on the structural and vibrational properties. We calculate the single-phonon reach and compare our results to the best-performing targets for a hadrophilic scalar mediator. Finally, we address the challenges and limitations of pressurized targets and examine how this unexplored degree of freedom could be best used in direct detection experiments.
Modeling Solid Helium
The pressure-temperature (PT) phase diagram of helium-4 is shown in Fig. <ref>(a), indicating its three known solid phases. The hexagonally close-packed (hcp) α phase is the dominant solid phase, stable at pressures down to a few at sub-1 K temperatures, while the face-centered cubic (fcc) β phase, is only stable at higher temperatures, 𝒪(10 K), and much higher pressures, 𝒪(100 ) <cit.>. Finally, the body-centered cubic (bcc) γ phase is only stable in a minute sliver of the PT phase diagram <cit.>, so we will focus exclusively on the α and β phases.
Studying the structural and vibrational properties of helium with DFT presents difficulties due to helium's significant zero-point motion and long-range interactions. The Born-Oppenheimer approximation used in DFT does not capture the former, and the latter is underestimated by standard semi-local exchange-correlation (XC) functionals. We tested 30 combinations of XC functionals and nonlocal corrections and verified our approach by comparing our DFT results to a variety of experimental measurements, including inelastic neutron scattering and Brillouin scattering. We find that the semi-local PBE functional <cit.> combined with the D3M dispersion correction <cit.> provides excellent agreement for both phases across a large range of pressures, except for at low pressures which we do not include in this work (see the Supplemental Material (SM) <cit.> for details).
Single-Phonon Scattering Rates
We consider a benchmark model where real scalar DM χ couples to the proton p and neutron n via a real scalar mediator ϕ (neglecting any DM-electron coupling). The Lagrangian is given by
ℒ= 1/2(∂_μϕ)^2-1/2 m_ϕ^2 ϕ^2+f_p ϕp̅ p+f_n ϕn̅ n
+(1/2(∂_μχ)^2-1/2 m_χ^2 χ^2+1/2 y_χ m_χϕχ^2),
where m_χ (m_ϕ) is the mass of the DM (mediator), f_p (f_n) is the coupling to the proton (neutron), and y_χ is the mediator-DM coupling. We set f_p = f_n for the remainder of this work.
The dominant DM-matter interactions depend on the range of DM under consideration: for high DM masses, these are nuclear recoils and multiphonon excitations, whereas for low masses, it is single-phonon generation, our focus here. Following the effective field theory of Refs. <cit.>, the scattering rate per DM particle with velocity v is given by
Γ(v)=πσ̅_n/μ_χ n^2∫d^3 q/(2 π)^3ℱ_med ^2(q) S(q, ω_q).
Here, we integrate over the momentum transfer q≡p - p', where p = m_χv and p' are the momentum of the incident and scattered DM, respectively. The mediator form factor ℱ_med(q) is given by
ℱ_med (q)= 1 (heavy mediator, m_ϕ≫ q_0)
(q_0 / q)^2 (light mediator, m_ϕ≪ q_0)
We make the conventional choice for the reference momentum transfer q_0 = m_χ v_0, with v_0 the DM's velocity dispersion <cit.>, and present our results in terms of a reference cross section σ̅≡μ^2/π|ℳ(q_0)|, where ℳ is the target-independent χ n →χ n vacuum matrix element (see SM).
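For orientation, the reference momentum and mediator form factor can be written as a small helper (Python; natural units with c = 1, and the numerical value of the velocity dispersion below is only a typical halo value chosen for illustration, not a number fixed in the text).

import numpy as np

KM_S = 3.336e-6  # 1 km/s expressed in units of c

def f_med(q, m_chi, v0=230.0 * KM_S, light=True):
    # F_med(q) = 1 for a heavy mediator and (q0/q)^2 for a light one,
    # with reference momentum q0 = m_chi * v0.
    q = np.asarray(q, dtype=float)
    if not light:
        return np.ones_like(q)
    return (m_chi * v0 / q) ** 2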
The dynamic structure factor, S(q, ω_q), encapsulates the target's response to the momentum transfer q and energy deposition ω_q = q·v - q^2/2 m_χ, and depends explicitly on the target's properties. For single-phonon excitations and the hadrophilic scalar mediator, we have <cit.>
S(q, ω_q)=
π/Ω∑_ν1/ω_ν, k|∑_j A_j F_N_j(q)/√(m_j) e^-W_j(q) e^i G·x_j^0(q·ϵ_j,ν,k^*)|^2 δ(ω_q-ω_ν, k)
The sum runs over all phonon branches ν and atoms j in the primitive cell, with masses m_j, atomic mass numbers A_j, and equilibrium positions x_j^0. Ω is the volume of the primitive cell, and k is the phonon momentum in the first Brillouin zone (1BZ), constrained to k = q + G by momentum conservation, for some reciprocal lattice vector G. ω_ν,k and ϵ_j,ν,k are the phonon frequencies and polarization vectors of a given phonon mode (ν, k) and ion j. Finally, F_N_j(q) and W_j(q) are the nuclear form factor (see SM) and the Debye-Waller factor for atom j, respectively, with the latter given by
W_j(q)=Ω/4 m_j∑_ν∫_1 BZd^3 k/(2 π)^3|q·ϵ_j, ν, k|^2/ω_ν, k.
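A discretized sketch of this Brillouin-zone average is shown below (Python/NumPy). It assumes phonon frequencies and polarization vectors for one atom are available on a regular k-grid (e.g. from a DFT phonon calculation), uses natural units with ħ = 1 and a consistent unit system, and replaces Ω∫_1BZ d^3k/(2π)^3 by an average over the grid points.

import numpy as np

def debye_waller(q, freqs, polvecs, mass):
    # q: (3,) momentum transfer; freqs: (Nk, Nnu) phonon frequencies on the
    # k-grid; polvecs: (Nk, Nnu, 3) polarization vectors for the atom;
    # mass: atomic mass in the same unit system.
    proj = np.abs(np.einsum('i,kni->kn', q, polvecs)) ** 2   # |q . eps|^2
    n_k = freqs.shape[0]
    return (proj / freqs).sum() / (4.0 * mass * n_k)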
Now, we can compute the total rate per unit target mass
R=1/ρ_Tρ_χ/m_χ∫ d^3 v f_χ(v) Γ(v),
where ρ_χ = 0.3 /cm^3 is the local DM energy density, ρ_T is the target mass density, and f_χ is the incoming DM's velocity distribution in the target's rest frame (see SM).
We used our package <cit.> for all calculations. The projected 95% C.L. (3 events) exclusion reach on σ̅_n, assuming zero background and a 100% efficient detector, for an target with kg·yr exposure is shown in Fig. <ref>a (<ref>b) for a light (heavy) scalar mediator, and pressures ranging from 1 GPa to 40 GPa. For comparison, we include the reach of the best-performing materials identified in Ref. <cit.>, as well as silicon and superfluid helium through multiphonon scattering <cit.>. We consider three detector thresholds, = 1, 20, 100. The same results are presented for in Fig. <ref>fig:fcc_He_reach in the SM, for pressures ranging from 1 GPa to 12 GPa, the highest pressure at which this phase is stable.
In the low-mass regime, both and are competitive with silicon, even at the lower end of the pressure range. At much higher pressures near the maximum 40 GPa, is competitive with the best target identified in Ref. <cit.>, diamond, and both phases outperform silica (SiO_2) at the lower pressures. For both mediators, lower pressure targets lose reach as increases, and only the 40 GPa target has any reach at the highest threshold.
For = 1, higher pressure targets have better reach to lower m_χ. For this benchmark model, the lowest accessible DM mass with acoustic phonons scales as m_χ,min∼/ <cit.>, and Fig. <ref>d shows an increase in the speed of sound with pressure. For the light scalar mediator in Figs. <ref>a and <ref>fig:fcc_He_reacha, the reach at high m_χ≳ 0.1 is independent of target properties <cit.>, explaining the convergence of the curves at different pressures in that regime.
We now focus on the heavy mediator, Figs. <ref>b and <ref>fig:fcc_He_reachb. At m_χ∼ 20, we find an inflection point in the = 1 curves, beyond which low-pressure targets have better reach on σ̅_n. Here, q is still small enough to be inside the 1BZ, but is far from the Γ-point, and a parametric expansion of the rate shows a 1/ scaling <cit.>. At m_χ∼ 200, the slope of the curves changes, but the ordering with pressure remains. Now q is outside the 1BZ, leading to a rapidly oscillating k (and thus ω_νk), and the average phonon frequency in the 1BZ, , is the relevant quantity. An analytic estimate of the rate shows ^-1 scaling <cit.>, and in Fig. <ref>fig:freq_pres we find that is an increasing function of pressure. In the final regime, with m_χ≳ 8, q becomes large enough that the Debye-Waller factor (Eq. <ref>) is nonnegligible, cutting off the integral in Eq. (<ref>). The rate scales as (A)^2, resulting in high-pressure targets having better reach. In these last three regimes, both helium phases are outperformed by targets with heavy atoms, such as PbTe and CaWO_4, which have low and , and much larger mass numbers A. However, multiphonon scattering and nuclear recoil, which we do not include here, will dominate single-phonon scattering at large m_χ. In all cases, single-phonon scattering in solid helium outperforms two-phonon scattering in superfluid helium due to the phase-space suppression in the latter.
These trends generally hold at higher thresholds but with increasing contributions from optical phonons. In Fig. <ref>b, we find an inflection point in the reach of at = 20. Optical phonons have reach down to m_χ, min∼ω_LO, scaling with the frequency of the longitudinal optical mode near the Γ point, where the dispersion is flat. Conversely, m_χ,min∼/ for acoustic phonons. Consequently, the lowest m_χ regime is dominated by optical phonons at large , leading to the inflection point. This feature is crucially missing from , which lacks optical modes (see Fig. <ref>fig:fcc_He_reach).
We also calculate the daily modulation rates for both phases as a function of pressure. Fig. <ref>fig:daily_mod shows a daily modulation in at the lowest accessible DM masses, ranging from 5% (3%) below and 10% (5%) above the average daily rate for a light (heavy) mediator. This modulation is approximately independent of pressure (see Fig. <ref>fig:deltaR). Conversely, the highly isotropic exhibits no daily modulation. However, since our DM rate is dependent on pressure (Figs. <ref> and <ref>fig:fcc_He_reach), it could potentially be used as a new method for background discrimination.
Discussion Our results show that pressure is an enticing route for novel tunable DM detector designs. We demonstrate this in highly compressible solid helium <cit.> up to 40 GPa but expect the trend to hold at higher pressures. While our findings are generally applicable to any solid, the impact of pressure is most pronounced in materials characterized by very high compressibility, offering an unexplored avenue for detector tunability and background discrimination with applied pressure. Nonetheless, we note that pressure generally reduces phonon lifetimes, usually scaling as P^-1 in, e.g., inorganic semiconductors <cit.>.
However, pressurized cryogenic detectors pose several engineering challenges. Older piston-cylinder-type cells are capable of reaching pressures beyond 6 GPa with relatively large sample volumes, 𝒪(few cm^3) <cit.>. In contrast, Kawai-type multi-anvil presses <cit.> can reach tens of gigapascals with sample volumes 𝒪(few mm^3). Diamond anvil cells (DACs), considered the workhorse of modern high-pressure physics, can operate at pressures beyond 400 GPa <cit.>. DACs offer excellent optical access to the sample and can be used at cryogenic temperatures <cit.>. However, the feasibility of incorporating a phonon sensor, such as a TES, into such apparatuses is uncertain. Nonetheless, the strategic design of a direct detection experiment incorporating a high-pressure cell, carefully balancing target size with achievable pressure, holds promise for optimizing the reach of compressible solid targets.
In the case of solid helium specifically, single crystals of the hcp phase can be grown from helium gas directly inside DACs <cit.> or from the superfluid at cryogenic temperatures <cit.>, with some approaches yielding almost defect-free but fragile samples <cit.>. Further, it is possible to isotopically enrich helium-4 to less than 0.5 parts per trillion helium-3 <cit.>. Integrating these high-quality growth techniques with DACs could be advantageous for solid helium, as ultrahigh-quality crystals would significantly reduce scattering from dislocations and isotopic impurities, thereby improving phonon lifetimes. One potential approach involves using helium-3, characterized by a spin-1/2 nucleus, in a DAC with embedded nitrogen-vacancy (NV) centers. These NV centers have promising applications in quantum sensing <cit.>, and it might be possible to use them for phonon detection in materials with magnetic nuclei.
Conclusion
In conclusion, our findings highlight the effectiveness of hydrostatic pressure in enhancing the speed of sound and phonon frequencies in solid helium. This effect is particularly profound in highly compressible solids and elevates solid helium from a target without single-phonon reach to one that competes with leading targets like diamond, particularly at low DM masses. Despite the challenges involved in designing such experiments, exploring this additional degree of freedom is promising, as it not only introduces the possibility of using neglected targets but also offers the potential to fine-tune their properties to target different DM mass regimes and provide background discrimination.
We are grateful to S. Knapen for feedback on our manuscript and acknowledge helpful discussions with N. Taufertshöfer, M. Garcia-Sciveres, and D. McKinsey. We thank K. Inzani and T. Trickle for sharing their reach data. This work was supported by the US Department of Energy under the Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics grant KA2401032. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science User Facility using NERSC award BES-ERCAP0028926.
|
http://arxiv.org/abs/2409.02891v1 | 20240904173120 | Regional data-driven weather modeling with a global stretched-grid | [
"Thomas Nils Nipen",
"Håvard Homleid Haugen",
"Magnus Sikora Ingstad",
"Even Marius Nordhagen",
"Aram Farhad Shafiq Salihi",
"Paulina Tedesco",
"Ivar Ambjørn Seierstad",
"Jørn Kristiansen",
"Simon Lang",
"Mihai Alexe",
"Jesper Dramsch",
"Baudouin Raoult",
"Gert Mertes",
"Matthew Chantry"
] | physics.ao-ph | [
"physics.ao-ph",
"cs.LG"
] |
Regional data-driven weather modeling with a global stretched-grid
September 9, 2024
=====================
§ ABSTRACT
A ddm suitable for regional weather forecasting applications is presented. The model extends the aifs by introducing a stretched-grid architecture that dedicates higher resolution over a regional area of interest and maintains a lower resolution elsewhere on the globe. The model is based on gnns, which naturally affords arbitrary multi-resolution grid configurations.
The model is applied to short-range weather prediction for the Nordics, producing forecasts at 2.5km spatial and 6h temporal resolution. The model is pre-trained on 43 years of global ERA5 data at 31km resolution and is further refined using 3.3 years of 2.5km resolution operational analyses from the meps. The performance of the model is evaluated using surface observations from measurement stations across Norway and is compared to short-range weather forecasts from meps. The ddm outperforms both the control run and the ensemble mean of meps for 2m temperature. The model also produces competitive precipitation and wind speed forecasts, but is shown to underestimate extreme events.
[1]Norwegian Meteorological Institute, Oslo, Norway
[2]ECMWF, Reading, UK
[3]ECMWF, Bonn, Germany
^*Equal contribution
§ BACKGROUND
ddms are rapidly emerging as an alternative to nwp models. Global ddms have been shown to produce weather forecasts that are competitive in accuracy with state-of-the-art global nwp models <cit.>. Unlike nwp models that simulate the weather by solving physical equations, ddms are typically based on artificial neural networks with parameters that are optimized by training on historical reanalyses. Although training costs are significant, ddms can be several orders of magnitude cheaper to run than nwp models of similar resolution, due to the efficient mapping of neural networks to gpus and their ability to use a much longer timestep when integrating forward in time.
This progress has been made possible by the availability of free and reliable datasets like ERA5 <cit.>, which provides a long archive of consistent reanalyses at approximately 31km horizontal grid resolution. Several global ddms have been trained using ERA5, including FourCastNet <cit.>, Pangu-Weather <cit.>, GraphCast <cit.>, FuXi <cit.>, and the aifs <cit.>.
The success of global ddms is driving efforts to develop ddms for regional purposes. In their mission to save lives and protect property, nmhses, such as the metno, currently rely on high-resolution regional nwp models to deliver real-time weather forecasts to the public and weather-affected industries.
Additionally, many nmhses produce regional reanalyses, with consistent high-resolution data spanning long time periods. These high-resolution datasets, often at a kilometer scale, offer an enormous potential for training regional ddms. Due to their high resolution, these datasets provide additional information at finer spatial scales than ERA5 can represent.
Regional data-driven modeling efforts have so far focused on driving a lam with initial conditions from a regional analysis and lateral boundaries from an external global model <cit.>. This is analogous to the common separation of regional and global models in nwp.
This article presents a different approach, where a global model is developed with higher resolution over the region of interest and lower resolution elsewhere. The model is initialized from both global and high-resolution regional analyses. The goal of this stretched-grid approach is for weather systems to move seamlessly from the global domain into the regional domain and out again without any explicit treatment of the boundary between the domains. The model is based on gnns <cit.>, which is also the architecture used in GraphCast and aifs, and was introduced to weather prediction by <cit.>. This flexible architecture allows arbitrary grids to be used, supporting our need to have variable resolution across the globe.
We target the data-driven model for use by metno's public weather forecasting app Yr (<https://www.yr.no>). Given Norway's intricate topography and coastline, the model must be capable of providing high-resolution forecasts. Currently, short-range weather forecasts on Yr for the Nordics are provided by the MetCoOp Ensemble Prediction System (MEPS) <cit.> with a horizontal resolution of 2.5km. The performance of the model is therefore compared to the operational MEPS and the evaluation of the model is done with a focus on the needs of our end-users.
This article is organized as follows: Section <ref> describes the stretched-grid approach to regional data-driven modeling. Section <ref> describes the hyper-parameters we chose and how the model was trained. Section <ref> evaluates the data-driven model against current operational nwp systems. Section <ref> draws conclusions and presents future work.
§ STRETCHED-GRID MODEL
gnn-based models, such as GraphCast and aifs, use an encoder-processor-decoder architecture (<ref>). Weather parameters are provided to the model on an input grid (<ref>a,b). Typically, this includes data on multiple pressure levels for the current time and one or more recent times in the past. This data is then encoded into a compact latent representation of the state of the atmosphere (<ref>c). The latent space is typically on a lower resolution grid, which we will call the processor mesh (<ref>d). This latent state is advanced forward one time step by the processor. The updated state is then decoded back to the weather parameters on the input grid (<ref>e). This process can be repeated to create forecasts of arbitrary length, which is called rollout.
Graphs are central building blocks of gnns, consisting of a set of nodes connected by edges. Nodes are used to represent the points in the input grid and in the processor mesh. As the nodes in the graphs are represented by a one-dimensional vector (i.e. is not required to be on a regular grid), arbitrary grid configurations can be defined. Graphs are well suited for a problem with variable resolution because the structural information in the model is encoded in the model features while the model weights do not depend explicitly on the shape of the graph. This allows us to dedicate a finer grid spacing in parts of the domain, which we call a stretched grid. Although the stretched-grid approach can be used with any number of different resolution domains, we focus here on applications with a single global and a single regional domain.
Graphs are used by the encoder, processor, and decoder. These components update the nodes they are connected to by processing information from any node connected by the edges. This update is called message passing. To aggregate the information from the contributing nodes, we use a graph transformer, which is a generalization of the transformer architecture <cit.> to graphs, where message passing is done via a multi-head attention mechanism <cit.>, the same GNN architecture used in the encoder and decoder of AIFS <cit.>. Our motivation for choosing the graph transformer over a pure transformer is that the graph transformer explicitly embeds directional and positional information via the edge features into the calculation of attention scores, which can be important for a multi-resolution model.
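As a rough illustration of this kind of message passing (a single attention head only; this is not the Anemoi or AIFS implementation, and the weight matrices and inputs below are placeholders), the keys can be built from both the source-node features and the edge features so that directional and positional information enters the attention scores:

import torch

def graph_attention(x_src, x_dst, edges, edge_attr, Wq, Wk, Wv, We):
    # x_src: (Ns, d) source-node features; x_dst: (Nd, d) destination-node
    # features; edges: (2, E) long tensor of (source, destination) indices;
    # edge_attr: (E, de) edge features (e.g. direction and length).
    # Every destination node is assumed to have at least one incoming edge.
    src, dst = edges
    q = x_dst @ Wq                             # one query per destination node
    k = x_src[src] @ Wk + edge_attr @ We       # keys carry the edge information
    v = x_src[src] @ Wv
    logits = (q[dst] * k).sum(-1) / k.shape[-1] ** 0.5
    w = (logits - logits.max()).exp()
    denom = torch.zeros(x_dst.shape[0], dtype=w.dtype).index_add_(0, dst, w)
    alpha = w / denom[dst]                     # softmax over each node's incoming edges
    out = torch.zeros(x_dst.shape[0], v.shape[1], dtype=v.dtype)
    return out.index_add_(0, dst, alpha.unsqueeze(-1) * v)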
The stretched-grid model is built using the Anemoi framework first created by ecmwf and now co-developed by several meteorological organisations across Europe, including metno (<https://anemoi.ecmwf.int/>).
§.§ Grid and processor mesh design
The input grid is created to match the exact grid points from the two input models. Grid points from the global model that overlap with the regional model are removed, which is a configuration referred to as cutout. To construct the processor mesh, we follow <cit.> and start with an initial regular icosahedron with 20 faces, creating a graph with 12 nodes and 30 edges. This icosahedron is gradually refined by adding nodes and edges that divide each triangular face into four smaller triangles (<ref>f). When a layer of refinement is added to the mesh, we keep the edges of the original lower-resolution mesh so that different levels of refinement facilitate communication on different length scales in the final multi-layer mesh. We add extra levels of refinement over the regional domain, with the aim of keeping the number of edges from the input grid to each mesh node constant between the regional and global domains.
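A sketch of a single refinement step is given below (Python/NumPy); the construction of the initial icosahedron and the restriction of the extra refinement levels to the regional domain are omitted.

import numpy as np

def refine(vertices, faces):
    # Split every triangular face into four by adding midpoint nodes,
    # projected back onto the unit sphere.
    verts = [np.asarray(v, dtype=float) for v in vertices]
    cache = {}

    def midpoint(i, j):
        key = tuple(sorted((i, j)))
        if key not in cache:
            m = (verts[i] + verts[j]) / 2.0
            cache[key] = len(verts)
            verts.append(m / np.linalg.norm(m))
        return cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_faces

def edge_set(faces):
    # Undirected edges of a triangulation.
    return {tuple(sorted(p)) for a, b, c in faces for p in ((a, b), (b, c), (c, a))}

The multi-layer processor mesh described above corresponds to taking the union of the edge sets from all refinement levels, so that coarse (long-range) and fine (short-range) edges coexist in a single graph.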
The encoder maps data on the input grid to a hidden representation that encodes the structural information in the graph. The first step is the source node embedding, which consists of a linear layer that transforms the raw features of the source nodes (in the grid) into a latent space representation. Similarly, the destination node embedding transforms the raw features from the destination nodes (in the mesh) to a hidden dimension. Layer normalization is applied to both source and destination node embeddings. The embedded source and destination nodes, along with processed edge attributes, are passed through a graph transformer block to perform the core processing. Each processor mesh node is connected to the nearest 12 input grid points (<ref>c).
§.§ Processor
The processor evolves the latent space representation of the data on the processor mesh through a series of message-passing steps. Each message-passing step is done through a graph transformer block where every node receives information from all one-hop neighbors in the multi-layer mesh. Consequently, communication on different mesh resolutions happens simultaneously <cit.> with spatial relations embedded in the multi-head attention mechanism. In total, the processor consists of 16 such message-passing steps, in contrast to one in the encoder and decoder.
§.§ Decoder
The decoder maps the latent space representation back to weather parameters on the input grid. To achieve this, the embedding process performed by the encoder is reversed by a graph transformer block, followed by embedding of the input features via a skip connection. Each point in the output grid receives data from its three nearest processor mesh nodes (<ref>e).
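The nearest-neighbour connectivity used by the encoder and decoder can be sketched as follows (Python with SciPy; this is an illustration, not the Anemoi graph-building code, and node positions are assumed to be unit vectors on the sphere so that chordal distance orders neighbours in the same way as great-circle distance).

import numpy as np
from scipy.spatial import cKDTree

def knn_edges(src_xyz, dst_xyz, k):
    # Connect every destination node to its k nearest source nodes.
    tree = cKDTree(src_xyz)
    _, idx = tree.query(dst_xyz, k=k)
    dst = np.repeat(np.arange(len(dst_xyz)), k)
    src = np.asarray(idx).reshape(-1)
    return np.stack([src, dst])       # (2, E) edge index, source -> destination

# Encoder: each processor mesh node gathers from its 12 nearest grid points.
# encoder_edges = knn_edges(grid_xyz, mesh_xyz, k=12)
# Decoder: each output grid point gathers from its 3 nearest mesh nodes.
# decoder_edges = knn_edges(mesh_xyz, grid_xyz, k=3)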
§ MODEL TRAINING
§.§ Data
To train the stretched-grid model, we use both global and regional NWP analyses. As global data, we use ERA5 reanalysis <cit.> on its native 31km resolution reduced Gaussian grid for the period 1979–2022. We also use the operational analyses of the ifs <cit.>, interpolated to the ERA5 resolution for the time period 2020–2024, as this is what the model will use as input when run operationally.
For the regional domain, we use the analysis of the meps <cit.> with 2.5km horizontal grid spacing, which covers Norway, Sweden, Finland, Denmark, and the Baltic countries. As the model underwent substantial changes, including a change to the grid configuration, on February 5, 2020, we only use analyses after this time.
<ref> summarizes the input and output variables we used from these datasets.
When performing hyper-parameter exploration, we used any data available in the time period 1979-01-01 to 2022-05-31 for training, reserving 2022-06-01 to 2023-05-31 for validation to help select the best model. We reserved a separate period for independent testing of the best model for 2023-06-01 to 2024-05-31, with results presented in <ref>.
§.§ Training configuration
Many of the settings used for the model follow <cit.>, including two input time steps, the choice of loss function (squared error), per-variable weights, and optimizer. We used 1024 channels in the encoder, processor, and decoder, resulting in a total of 246 million trainable parameters.
Early results showed that training a stretched-grid model using only 2-3 years of data leads to forecasts with poor synoptic developments. Therefore, we pre-trained the model on the 43 years of ERA5 data, before fine-tuning on the combined datasets. A consequence of this is that we cannot use any learnable features (<cit.>), as these are graph-specific.
We used a four-stage training procedure, as illustrated in <ref> and with full details in <ref>. After each stage, the scheduler was reset with a smaller starting learning rate to prevent catastrophic forgetting.
In stage A, the model was pre-trained on ERA5 upsampled to 100 km resolution. This is a relatively cheap procedure that subsequently speeds up the training of Stage B, where we trained on ERA5's full resolution.
In Stage C, we switched to using IFS upsampled to the native ERA5 resolution and added MEPS at 2.5km resolution in a stretched-grid configuration. In <cit.>, each output grid point's contribution to the loss is made proportional to the area that the point covers. We found that increasing the contribution of the grid points in the regional domain improved verification scores for the regional domain. In particular, we let the regional domain contribute 33% to the total loss value despite it covering only 1.2% of the earth's surface.
Finally, in Stage D we perform a 24 h auto-regressive rollout training to improve scores over longer lead times. This is done by incrementally increasing the rollout from two to four 6h time steps with 100 iterations each. Here, the scheduler is reset for each rollout step, reaching its starting learning rate after 10 warm-up iterations. We calculate the loss aggregated across the rollout steps, as in <cit.>.
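A schematic of the rollout loss used in this stage is given below (Python pseudocode in PyTorch style; the model, the per-variable loss weighting, the optimizer and the scheduler reset are placeholders rather than the actual training code, and whether the aggregated loss is summed or averaged over the window is an implementation detail not specified here).

def rollout_loss(model, x_prev, x_curr, targets, loss_fn):
    # Auto-regressive rollout: each 6 h prediction is fed back in as input,
    # and the per-step losses are aggregated over the rollout window.
    total = 0.0
    for target in targets:                    # one target state per 6 h step
        pred = model(x_prev, x_curr)
        total = total + loss_fn(pred, target)
        x_prev, x_curr = x_curr, pred         # feed the prediction back in
    return total / len(targets)

# Stage D schedule: rollout length increased from 2 to 4 steps, roughly 100
# iterations each, with the learning-rate scheduler reset (10 warm-up
# iterations) at the start of every rollout length.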
§.§ Hardware
The ddm was trained on compute nodes equipped with four AMD Instinct MI250X gpus, each with 128 GB memory. To speed up training, we used data parallelism across 32 model instances for stages A–C and 16 for stage D. For stages B–D, we used model parallelism to run one instance of the model on each node. This splits the nodes and edges of the graphs across multiple GPUs and is necessary for fitting the 31km global and 2.5km regional stretched-grid model within the available gpu memory. The model parallel approach is described in <cit.>.
§ EVALUATION
This section aims to evaluate the quality of the forecasts from the perspective of a public weather forecast provider. A majority of the forecast parameters we provide on our public weather forecasting app Yr are deterministic. The key properties we seek for our forecasts are accuracy, measured by a low squared error, and that the frequency distribution of the forecasts matches that of the observations. The latter is important to ensure that aggregated statistics over longer time periods (e.g. maximum daily wind speed) are accurate; such statistics are important for users who are not sensitive to the exact timing of weather events. As these properties are often in conflict with each other, we evaluate both separately.
To evaluate the ddm from <ref> against competing nwp systems, we reserved a separate test period that we did not use when selecting the model configuration. The test period spans the time period from June 1, 2023 to May 31, 2024. The final model in Section <ref> was retrained on the combined training and validation period from Feb 6, 2020 to May 31, 2023 in order to increase the training period.
§.§ Baseline NWP models
The ddm is evaluated against state-of-the-art nwp models used at MET Norway. Operationally, MET Norway relies on meps to provide high-resolution short-range weather forecasts for lead times up to 61 hours. This system is used for both automated weather forecasts for the general public and public weather warnings issued by duty forecasters. Additionally, these forecasts are used by downstream users such as energy, transportation, and agriculture. We extracted the control run and the ensemble mean from this modelling system.
We also include the control run from ifs at 0.1^∘ resolution, as this is a model we use to assess the added value of high-resolution models.
§.§ SYNOP observation network
The models are evaluated against observations from MET Norway's network of synop stations. To allow for a fair comparison of models at different resolutions, the gridded data is interpolated bilinearly to the station locations. For temperature, both the temperature and the model terrain height are interpolated to the station point. The temperature is then adjusted based on a lapse rate of 6.5 ^∘C/km applied to the difference between the station elevation and the bilinearly interpolated model terrain height.
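A sketch of the height adjustment (Python); the bilinear interpolation of the temperature and model terrain height to the station location is assumed to have been done beforehand.

LAPSE_RATE = 6.5e-3  # K per metre, i.e. 6.5 degC/km

def station_temperature(t2m_interp, model_elev_interp, station_elev):
    # t2m_interp and model_elev_interp are the bilinearly interpolated 2 m
    # temperature and model terrain height at the station location.
    return t2m_interp - LAPSE_RATE * (station_elev - model_elev_interp)

For example, a station located 400 m above the interpolated model terrain has its forecast temperature lowered by 2.6 K.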
§.§ Overall evaluation
The parameters we investigate are t, ws, and p6h. Additionally, we are interested in 24h aggregations, including daily minimum and maximum temperature, daily maximum wind speed, and 24h accumulated precipitation. Such aggregated values are used to summarize a day and are heavily used by our users. We also look at mslp as a diagnostic for the model's ability to capture general synoptic developments.
<ref> demonstrates how the stretched grid approach is able to seamlessly move weather systems from the global into the limited area domain. This case shows the storm Ingunn for a forecast initialized at 06Z on January 27th, 2024. Ingunn can clearly be seen to move into and develop within the limited area domain. Overall, the DDM appears in this case to have a similar capability to the current operational IFS and MEPS system in simulating large-scale features of wind speed and mslp. A notable difference is the slight spatial smoothing of the data-driven fields.
<ref> shows a forecast from the same period zoomed in on the mountains in northern Norway. Like MEPS, the DDM also simulates strong winds on the mountain tops and weaker winds in the valleys. This is in contrast to the coarse resolution IFS, which fails entirely to represent the stronger winds in the mountains. This indicates that the DDM, despite some spatial smoothing, is able to represent many of the systematic wind speed features in mountainous areas.
In terms of rmse, the ddm outperforms both nwp systems for temperature (<ref>a) and 6h precipitation (<ref>c). For wind speed, the model performs similarly to the MEPS control, but worse than the MEPS ensemble mean (<ref>b). This is because the ddm has been trained on analyses from the control run, which tries to represent meteorological features that are unpredictable. The ensemble mean is significantly smoother and therefore scores better for all lead times. For mslp, the ddm is comparable to both nwp systems for lead times below 24 hours (<ref>d). However, the error growth is larger, and it is outperformed by both nwp systems for longer lead times. We found that mslp is sensitive to the area weighting of the regional domain, with a lower weight giving better mslp scores but worse scores for the other parameters. This could indicate some over-fitting, which we will investigate further.
§.§ Temperature
t is greatly affected by the complex topography and coastline of Scandinavia. This makes it a challenging parameter to predict.
For instantaneous temperatures, the ddm outperforms meps for RMSE (<ref>a). The ddm improves the rmse by around 24 hours compared to meps control and ensemble mean. Improvements of this magnitude usually require many years of model development. A major contribution to this improvement likely comes from the way meps assimilates surface temperature observations and carries that state over from the analysis into the forecast. This is evident in the large increase in error from lead time 0h to lead time 6h. meps assimilates the same observations we use to verify the forecast with. This leads to a low RMSE score for lead time 0h. However, a large part of this is lost already at lead time 6h. The ddm does not suffer from this problem because it is trained against analyses that have these observations assimilated and is better able to propagate information from the assimilated state forward in time.
As an example, <ref> shows that when the analysis increment is large, and the NWP model is not able to carry the increment forward, a large systematic error is present for the remainder of the forecast. Note that we verify the forecasts using the same observations employed in the data assimilation. This implies that we cannot expect to get the same forecast improvements over areas where no observations are assimilated.
The ddm underestimates the standard deviation of temperatures across a 24-hour period (not shown). The consequence of this is that daily minimum temperatures are overestimated and daily maximum temperatures are underestimated. Despite these biases, the ddm outperforms meps in RMSE for both aggregated variables (<ref>a–b).
§.§ Wind speed
To assess the model's ability to predict extreme events, we use the ets, also known as the Gilbert skill score. For a particular wind speed threshold, the frequency of forecasts and observations exceeding or not exceeding the threshold are computed, thereby classifying events into hits (a), false alarms (b), missed events (c), and correct rejections (d). The ets for a particular wind speed threshold is defined by:
ETS = (a-â)/(a+b+c-â),
where
â = (a+b)(a + c)/(a+b+c+d).
The ets penalises forecasts with a high number of false alarms and event misses.
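A direct implementation of the score (Python), taking the contingency counts for a given exceedance threshold:

def equitable_threat_score(a, b, c, d):
    # a = hits, b = false alarms, c = misses, d = correct rejections.
    a_hat = (a + b) * (a + c) / (a + b + c + d)   # hits expected by chance
    return (a - a_hat) / (a + b + c - a_hat)

def contingency(forecasts, observations, threshold):
    # Event counts for exceedance of a threshold (e.g. a wind speed in m/s).
    a = b = c = d = 0
    for f, o in zip(forecasts, observations):
        fe, oe = f >= threshold, o >= threshold
        a += fe and oe
        b += fe and not oe
        c += (not fe) and oe
        d += (not fe) and (not oe)
    return a, b, c, d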
Spatial smoothing is significant for wind speed because of the double-penalty problem associated with mse loss. When a model predicts an event slightly off its actual position, it gets penalized twice: once for predicting the event at the incorrect location, and again for not predicting the event at the correct location. Smoothing the wind field improves the rmse but decreases the forecast's effectiveness in warning against strong wind events. The ddm outperforms the IFS in terms of ETS for most thresholds, which is largely due to IFS's poor ability to represent strong winds in the Norwegian mountains. It also has ETS scores close to meps, except for wind speeds above 15m/s (<ref>a). We also see that the ddm produces fewer strong wind events than what is observed (<ref>c). This is problematic for public weather forecasting. A simple solution is to adjust the distribution of winds to better match the distribution of climatology by a separate post-processing step.
For weather warnings, the timing of an event is often less important. To evaluate this need, we computed the 24h maximum wind speed (from the four instantaneous values within the time period). The meps control outperforms the ddm for an even greater range of wind speed thresholds (<ref>b) than for instantaneous wind speed. This is because the ddm has an increasing number of misses for higher thresholds, and has skill closer to the ensemble mean.
§.§ Precipitation
For 6h accumulated precipitation, the ddm performs similarly to the NWP baseline models with respect to ETS (<ref>a). As with wind speed, we see that the ddm underestimates higher precipitation amounts (<ref>c), which can be attributed to spatial smoothing.
Many of our users are interested in aggregated precipitation over longer periods, particularly at a daily timescale. Although the ddm is not directly trained to produce accurate 24h precipitation accumulations, we see that the model is competitive compared to the baseline models (<ref>b). The underestimation of extreme precipitation is less evident at the 24h timescale (<ref>d) than at the 6h time scale, and is likely due to the fact that any temporal smoothing at the 6h time scale does not negatively affect the 24h aggregated values.
For public weather forecasting, discriminating between a dry period and a rainy period is important as this affects the weather symbol we use to represent the period. For this, we use a threshold of 0.5mm, above which rain will appear in our weather symbol. <ref> shows that the ddm is better able to discriminate between the occurrence and non-occurrence of precipitation than the NWP models for lead times beyond 6h.
§ CONCLUSIONS AND FUTURE WORK
We have shown that a stretched-grid approach to regional data-driven modelling outperforms a state-of-the-art high-resolution NWP model for certain key parameters used in public weather forecasting, including instantaneous 2m temperature, 24h minimum and maximum temperature, and whether 6h precipitation exceeds 0.5mm. The model also provides competitive forecasts of instantaneous 10m wind speed, 24h maximum wind speed, and 6h and 24h accumulated precipitation, though the model tends to underestimate extremes.
The performance of the ddm is promising enough that it warrants use in public weather forecasting. However, several challenges must be overcome before we can use the model operationally. Firstly, forecasts at an hourly time scale are needed to serve the needs of our users. This will pose an even bigger challenge regarding GPU memory, and it is still an open question whether such a model can perform as well as a model trained with a 6h time step. Alternatives to auto-regressive approaches could be considered, such as training a separate model that interpolates 6-hour scenarios in time.
A second challenge is the need to provide probabilistic forecast parameters <cit.>, such as the quantiles of the probability distribution, which we use in our app to indicate the risk of heavy rainfall and strong winds. This can be solved by training an ensemble model or by directly modelling the quantiles using an appropriate loss function for scoring quantile forecasts.
To further improve the model in general, using more observational data could be important. For temperature, the ddm was able to utilize the information from assimilated observations better than the NWP model. Additional observations could be assimilated into the input grid before training, but a more general approach would be to use them directly with a separate encoder. Using crowdsourced observations, which have previously been shown to add value in operational weather prediction <cit.>, is an avenue we will explore further.
Due to their low computational requirements, ddms will allow us to run models for longer lead times than is typically affordable with high-resolution nwp models. This potentially allows us to use a single model for a range of timescales, instead of merging model runs from short-range, medium-range, and extended-range NWP systems as we do today. Operationally, these systems require separate post-processing as the models have different resolution and biases. End users would benefit from seamless forecasts without noticeable jumps across timescales.
§.§ Acknowledgments
This work was supported by computing and storage resources provided by Sigma2 – the National Infrastructure for High-Performance Computing and Data Storage in Norway (Grant No. NN10090K).
§.§ Data availability statement
The verification data (observations and forecasts) in this article are made accessible in Verif-format <cit.> on Zenodo (<https://zenodo.org/communities/verif/>).
|
http://arxiv.org/abs/2409.03429v1 | 20240905112012 | Reinforcement Learning Approach to Optimizing Profilometric Sensor Trajectories for Surface Inspection | [
"Sara Roos-Hoefgeest",
"Mario Roos-Hoefgeest",
"Ignacio Alvarez",
"Rafael C. González"
] | cs.RO | [
"cs.RO",
"cs.AI"
] |
Reinforcement Learning Approach to Optimizing Profilometric Sensor Trajectories for Surface Inspection
Sara Roos-Hoefgeest,
Mario Roos-Hoefgeest,
Ignacio Alvarez,
and Rafael C. GonzálezMember, IEEE
Sara Roos-Hoefgeest, Ignacio Alvarez and Rafael C. González are with the Department of Electrical, Electronic, Communications and Systems Engineering, Oviedo University, Gijón, Spain
Mario Roos-Hoefgeest is with CIN Systems, Gijón, Spain
September 9, 2024
§ ABSTRACT
High-precision surface defect detection in manufacturing is essential for ensuring quality control. Laser triangulation profilometric sensors are key to this process, providing detailed and accurate surface measurements over a line. To achieve a complete and precise surface scan, accurate relative motion between the sensor and the workpiece is required, typically facilitated by robotic systems. It is crucial to control the sensor's position to maintain optimal distance and orientation relative to the surface, ensuring uniform profile distribution throughout the scanning process. Reinforcement Learning (RL) offers promising solutions for robotic inspection and manufacturing tasks. This paper presents a novel approach using RL to optimize inspection trajectories for profilometric sensors. Building upon the Boustrophedon scanning method, our technique dynamically adjusts the sensor's position and tilt to maintain optimal orientation and distance from the surface, while also ensuring a consistent profile distance for uniform and high-quality scanning. Utilizing a simulated environment based on the CAD model of the part, we replicate real-world scanning conditions, including sensor noise and surface irregularities. This simulation-based approach enables offline trajectory planning based on CAD models.
Key contributions include the modeling of the state space, action space, and reward function, specifically designed for inspection applications using profilometric sensors. We use Proximal Policy Optimization (PPO) algorithm to efficiently train the RL agent, demonstrating its capability to optimize inspection trajectories with profilometric sensors.
To validate our approach, we conducted several experiments in which a model trained on a specific training piece was tested on various parts in simulation. We also conducted a real-world experiment by executing the optimized trajectory, generated offline from a CAD model, to inspect a part using a UR3e robotic arm.
§ NOTE TO PRACTITIONERS
This paper addresses the problem of integrating robot manipulators and laser profilometric sensors in surface inspection tasks. Despite the relevance of both technologies in the manufacturing industry, little research has been done on integrating them. In this work, we present a Reinforcement Learning-based trajectory generation algorithm that generates feasible sensor trajectories, optimizing the sensor's position and orientation at each point along the scanning path. The input to the system is a CAD model of the object to be inspected. Therefore, the system adapts dynamically to the geometrical characteristics of the objects. Experiments were conducted using simulations and real hardware. Objects with different geometric shapes and complexities were used to validate the method, proving the effectiveness of this approach.
Index Terms: Industrial robots, Trajectory Planning, Reinforcement Learning, Automatic optical inspection
§ INTRODUCTION
Surface inspection is a critical aspect of quality control in many industries, ensuring that manufactured components meet strict standards and function reliably. Accurate detection and characterization of surface defects are essential to maintaining product integrity and quality.
Many industries still rely on manual inspection processes performed by human operators. However, manual inspection can no longer keep pace with industrial demands for accuracy and efficiency, since human inspectors cannot reliably detect micron-scale defects. Advanced sensor technologies are therefore needed to detect small imperfections that would be impossible to spot by human vision. A well-known case study is car body parts <cit.>. While body components have dimensional tolerances of several tenths of a millimeter, the defects, whether functional, such as stretching, or purely aesthetic, such as protrusions or unevenness, can be as small as one hundredth of that size.
Furthermore, to meet the stringent requirements of modern industrial inspection, advanced sensor technologies have emerged as indispensable tools. Among them, laser triangulation is a widely adopted technique due to its superior precision and efficiency <cit.>, <cit.>. This method involves projecting a laser line onto the surface of an object and capturing the reflected light using a camera or sensor. Through the analysis of the distortion of the projected line, detailed information about the surface topography can be obtained with high accuracy.
To achieve an integral scan of the entire surface of the part to be inspected, relative motion between the part and the sensor is required. Robotic systems, including robotic arms, <cit.>, <cit.>, unmanned aerial vehicles (UAVs) <cit.>, drones or unmanned ground vehicles (UGVs) <cit.>, <cit.>, or autonomous underwater vehicles (AUVs) <cit.>, have been increasingly integrated into inspection procedures in different applications to address this requirement. These systems facilitate precise and controlled travel between the inspected part and the sensor, enabling complete surface coverage and efficient inspection processes.
Effective and accurate inspection requires meticulous planning of the sensor paths over the part surface. While manual planning is sufficient for simpler scenarios, more complex geometries or stringent accuracy standards require the implementation of automated methods. The generation of inspection paths for robotic systems represents a significant challenge, requiring predefined paths that take into account surface geometry, defect characteristics and inspection requirements.
Although several studies related to automated inspection path planning can be found in the literature, highlighting the use of robotic arms, there is a significant gap in research specifically addressing the integration of robotics and profilometric sensors for surface inspection tasks.
Chen et al. highlight this gap in their study<cit.>, where they propose a novel approach for automatically detecting surface defects on freeform 3D objects using a 6-degree-of-freedom manipulator equipped with a line scanner and a depth sensor. Their method involves defining local trajectories for precise defect inspection and optimizing global time for efficient scanning.
Li et al. propose a method for planning scanning trajectories in automated surface inspection <cit.>. Their approach is based on a trajectory planning algorithm using a triangular mesh model. They divide the workpiece surface area into regions, determine scanning orientations and points in each region, and then generate scanning trajectories using the minimum enclosing rectangle. This method involves developing a section division algorithm to determine the scanning orientation, followed by generating trajectories that comply with system constraints.
Recently, a new trend has emerged for trajectory generation in robotics using Reinforcement Learning (RL) methods. While several papers have explored the potential of RL in various applications, few have focused specifically on its application in inspection tasks. This lack of research highlights the need for further exploration and research in this area.
For example, Lobbezoo et al. compile in <cit.> different strategies present in the literature that use RL algorithms for pick and place applications. On the other hand, Elguea-Aguinaco et al. provides in <cit.> a comprehensive review of the current research on the use of RL in tasks involving intensive physical interactions. These tasks refer to activities where object manipulation involves direct and meaningful contact between the robot and its environment. This study covers research related to a variety of areas, including rigid object manipulation tasks (e.g., assembly, disassembly, or polishing and grinding) and deformable object manipulation tasks (e.g., rope or garment and fabric folding, tensioning and cutting, or object manipulation). The approaches and methodologies employed in each of these areas are compiled and analyzed.
Han et al. present in <cit.> a comprehensive investigation of different applications of Deep RL in robotic manipulators, highlighting the main problems faced by RL in robotic applications. One of the most important problems is that the models used often do not perfectly replicate the real system. For example, in machine vision-guided robots, the simulation of RGB images can differ significantly from the actual images captured by cameras, a problem known as Sim-to-Real. This is because the simplified models do not fully capture the system dynamics. These discrepancies make it difficult to transfer simulation-trained models to real environments.
To address this issue, we use a realistic simulator presented in our previous work <cit.>, which allows us to accurately represent the measurements obtained by the profilometric laser triangulation sensor.
Another problem they highlight is that trajectory generation in robotics is an inherently multidimensional problem, which further complicates the learning and optimization process. Also Ji et al. emphasize in <cit.> this problem, highlighting that in the field of robotics, most work using RL focuses on the field of mobile robot navigation due to its simple and well-developed theory <cit.>.
Surface inspection using profilometric sensors is typically performed in a straight line. If the workpiece is too large to be covered in a single scan, parallel passes are used, generally following Boustrophedon-like trajectories <cit.>. In our approach, we will start with these linear or Boustrophedon trajectories and enhance them using reinforcement learning techniques to optimize sensor movements and ensure comprehensive surface coverage. During each pass, the sensor advances in a predetermined direction while it can adjust its height and pitch angle over the piece, keeping the other orientations constant. Our approach effectively tackles the multidimensional challenge in surface inspection by concentrating on three critical parameters: the sensor's position along the scanning direction, its height adjustment, and pitch orientation. This focus simplifies the problem significantly, eliminating the need to individually manage each robot axis and streamlining sensor control and trajectory planning during inspections.
Reinforcement learning (RL) techniques offer effective solutions for problems with limited action spaces, making them well-suited for optimizing surface inspection tasks. RL's ability to learn from interactions and improve control policies based on rewards makes it a promising tool for addressing challenges in this field. Despite advancements in RL algorithms, its application in inspection tasks remains relatively underexplored compared to other robotics applications. This gap highlights the necessity for further exploration and research in this area.
In the realm of inspection applications, Xiangshuai Zeng's research presents PIRATE (Pipe Inspection Robot for Autonomous Exploration) <cit.>, a robot designed for internally inspecting pipes using reinforcement learning. Equipped with multiple bending joints and wheels, PIRATE offers adaptive and efficient mobility. By employing algorithms like PPO and deep neural networks, the robot learns to navigate sharp corners, adjusting its behavior in real-time and adapting to various pipe configurations. The system defines actions, rewards, and observations, with actions including wheel movements and adjustments in bending joints, and observations coming from a 2D laser scanner.
Another work focused on inspection applications is presented by Jing et al. in <cit.>, centering on the automatic generation of robotic trajectories for surface inspection in production lines. They use techniques such as Monte Carlo algorithms and greedy search for Coverage Path Planning (CPP), enabling the automatic generation of inspection trajectories adapted to objects of different sizes and geometries. Using a 3D structured light scanner, they generate a 3D point cloud representing the scanned object's geometry. The main objective is to minimize total cycle time, combining View Planning Problem (VPP) and trajectory planning to minimize the total sum of inspection and displacement costs after meeting surface coverage requirements. The trajectory generation process includes random viewpoint selection, robot movement calculation, collision-free path planning, evaluation of visibility for each viewpoint and covered surface portion, and application of an RL-based planning algorithm to select inspection actions until completion. Actions involve selecting a viewpoint and moving the robot to place the 3D scanner at the selected viewpoint. The proposed RL algorithm automatically generates the online inspection policy, taking as input the robot model, target object model, and sensor specifications.
Aligned with prior research, Christian Landgraf, Bernd Meese et al. introduce in <cit.> an approach for automatic viewpoint planning in surface inspection. They propose a strategy to find optimal sets of viewpoints for 3D inspection of specific workpieces. Their goal is to automate this process for any industrial robotic arm available in ROS and any 3D sensor specification; in their application they employ a stereo camera. Using advanced reinforcement learning algorithms such as Q-learning, Proximal Policy Optimization (PPO) and Deep Q-Networks (DQN), each action involves selecting the viewpoint and planning and executing the robot's movement towards this pose. Upon reaching its goal, the sensor generates a 3D point cloud at this specific pose, with the state of the environment constructed using 3D measurement observations and the robot's current pose.
For case studies like the one in this article, which uses laser triangulation profilometric sensors for measurements along a line, traditional trajectory planning approaches, such as the mentioned View Planning Problem (VPP), are not suitable. The VPP is intended for finding optimal poses for capturing images or making measurements of objects or surfaces using 3D vision sensors that cover a wide area. This highlights the need for a trajectory planning approach adapted to the specific characteristics of profilometric sensors, focusing on optimizing surface exploration rather than capturing a complete three-dimensional view.
To date, no research has been found regarding the generation of inspection trajectories using Reinforcement Learning and profilometric sensors, highlighting a significant gap in the literature and a promising area for robotics and automation research. This study addresses this gap by presenting an RL-based approach designed for generating inspection trajectories using profilometric sensors. Key contributions involve the modeling of the state space, action space, and reward function. We use the PPO (Proximal Policy Optimization) algorithm to efficiently train the RL agent, demonstrating its ability to optimize inspection trajectories with profilometric sensors.
PPO is an algorithm proposed by OpenAI in <cit.>. The authors highlight its ability to balance three key aspects in reinforcement learning: ease of implementation, sampling efficiency, and simplicity in hyperparameter tuning. PPO not only offers competitive performance comparable to or better than more advanced reinforcement learning algorithms, but also stands out for its significantly simpler implementation and tuning.
The effectiveness of PPO has been demonstrated across a wide range of applications, including robot control such as the PIRATE robot <cit.> or the development of view planning systems for inspection, as presented in the work of Landgraf, Meese et al. <cit.>. Additionally, it has been successfully applied in pick-and-place tasks, such as training 7-degree-of-freedom robotic arms for precise object manipulation, as described in <cit.>, as well as in other pick-and-place applications, as discussed in <cit.>.
Furthermore, comparisons between PPO and other reinforcement learning algorithms, such as SAC and TD3, reveal interesting patterns in terms of training efficiency, performance, and convergence. For example, in <cit.>, it was found that PPO tends to perform better in smaller state spaces, while SAC shows advantages in larger state spaces. On the other hand, in <cit.>, PPO and SAC were compared, where SAC demonstrated greater efficiency in sampling, but PPO exhibited greater insensitivity to hyperparameters and more stable convergence in complex problems. These findings support the choice of PPO as the main algorithm for the proposed research.
§ MATERIALS AND METHODS
In this section, we present the proposed approach for generating inspection trajectories using profilometric sensors and Reinforcement Learning techniques. The goal is to optimize each pass of the Boustrophedon scanning method, seeking an optimal orientation and distance of the sensor relative to the part at each position. This involves dynamically adjusting the position and tilt (pitch) of the sensor to maintain a constant and appropriate pose between the sensor and the surface at all times. The other two sensor orientations will be fixed beforehand, allowing for precise and uniform data capture. Additionally, we aim to optimize the spacing between profiles. This approach ensures complete and detailed coverage of the part's surface, thereby maximizing the quality and accuracy of the captured data.
To train the RL algorithms, we use a simulated environment that replicates the conditions of the real system developed in our previous work <cit.>. This simulator emulates the measurements of a laser triangulation profilometric sensor, including sensor noise and speckle noise generated by the object's surface. Thus, a realistic and controlled training environment is obtained.
The state space is constructed using the position and orientation of the robot's end-effector. This allows for generalization of the approach and facilitates the transfer of the method to different robotic configurations. Additionally, the state also includes other parameters, such as the mean profile distance, the direction angle, and the advance between consecutive scans.
The action space is defined by relative increments in the sensor's position and tilt angle, allowing for precise adjustments and smooth movements of the sensor. The reward function consists of three key components: the distance between the sensor and the surface, the alignment of the sensor with the surface normal, and the spacing between consecutive scans. This comprehensive reward function encourages optimal behaviors in terms of distance, orientation, and sensor advancement.
Next, each component of the proposed method is described in detail, providing a thorough understanding of its design and implementation.
§.§ Reinforcement Learning (RL): Basic Concepts
Reinforcement Learning (RL) is a branch of machine learning inspired by behavioral psychology, which focuses on how agents make decisions within an environment to maximize some measure of accumulated reward over time <cit.>.
In RL, the agent dynamically interacts with the environment, observing its current state and selecting actions in response. These actions affect the environment's state and generate a reward signal that guides the agent's behavior. The primary goal of the agent is to maximize the accumulation of these rewards over time, thereby optimizing its performance in the environment. Figure <ref> illustrates the basic interaction cycle between an agent and the environment.
The agent refers to the machine learning algorithm that interacts with the environment. The environment is the adaptive problem space with attributes such as variables, boundary values, rules, and valid actions. Each action represents a step taken by the RL agent to navigate through the environment. The set of all valid actions in a given environment is referred to as the action space 𝒜, defined mathematically as 𝒜 = {a_1, a_2, ..., a_n}, where a_i represents a specific action in the set, and n is the total number of actions. A state represents the environment at a given point in time. The set of all possible states in an environment is 𝒮 = {s_1, s_2, ..., s_m}. The reward is the positive, negative, or neutral value the agent receives as a consequence of an action, assessing its quality. The reward at each time step r_t depends on each state-action pair r_t = r(s_t, a_t). The accumulated reward is the sum of all rewards obtained over time.
In most cases, the problem to be solved is formally modeled as a Markov Decision Process (MDP). An MDP can be defined as a tuple of 5 elements (𝒮, 𝒜, r, P, ρ_0), representing respectively the set of all valid states, the set of all valid actions, the reward function, the state-action transition probability function, and the initial state distribution.
Another key parameter is the policy (π), which is the strategy guiding the agent's decision-making within the environment. The policy can be deterministic, where for each state in the environment the agent selects a specific action predictably, or stochastic, meaning that the agent chooses actions based on probabilities, introducing uncertainty and allowing for exploration.
The goal of any reinforcement learning algorithm is to select a policy that maximizes the expected return when the agent acts according to it. The expected return J(π) is represented by equation <ref>. The expected reward 𝔼_π[r(s_t, a_t)] is the average of the rewards the agent expects to receive by following the policy π in each state s. The objective is to adjust the policy parameters to maximize this reward, using optimization methods such as policy gradient to continuously improve the policy and the agent's performance in the specified task.
J(π) = 𝔼_π[r(s_t,a_t)]
Therefore, the final optimization problem in an RL algorithm can be expressed as equation <ref>, where π^* is the optimal policy:
π^* = arg max_π J(π)
§.§ Scanning Characteristics for Surface Inspection with Profilometric Sensors
During the scanning process using laser triangulation profilometric sensors, the quality of the measured data is directly affected by various parameters associated with the relative position between the sensor and the inspected piece, as detailed in <cit.>. These parameters are critical to ensuring precise and thorough surface inspection. Therefore, it is essential to carefully consider these factors during the planning of scanning trajectories in order to achieve effective results in surface inspection.
One of these crucial parameters is the Optimal Working Distance (W_d), which denotes the ideal distance between the sensor and the object's surface. This distance ensures optimal precision of the captured data by positioning the laser source at the scanning reference plane, typically located at the midpoint of the depth of field.
The Depth of Field (Z_r) refers to the range of distances within which the sensor can accurately capture surface data during a single scan (Figure <ref>(b)). Assuming a point in the scanner's coordinate system is (x_s, 0, z_s), the equation <ref> must be satisfied. Operating within this range is critical as it minimizes noise levels associated with deviations from the optimal working distance. Studies <cit.> have demonstrated that maintaining proximity to the optimal working distance reduces noise, thereby enhancing measurement accuracy.
W_d - Z_r/2≤ z_s ≤ W_d + Z_r/2
Another crucial parameter is the Direction Angle (α), which signifies the angle between the sensor's orientation vector l and the normal vector n of the workpiece surface (Figure <ref>(c)). This angle is computed using Equation <ref>. As the direction angle increases, there is a higher likelihood of introducing noise into the capture. This phenomenon occurs because the scanner may capture unwanted reflections of the laser light and variations in surface reflectivity, negatively impacting the data quality. Previous studies <cit.> have empirically shown how noise levels correlate with the direction angle, highlighting its significance in achieving precise surface capture.
α = acos(-l·n)
Additionally, the Distance Between Profiles (Δ s) determines the density of points between consecutive scan profiles. Adequate point density ensures comprehensive coverage and accuracy of the inspected surface, particularly in areas with small features or irregular surfaces where a lower density might compromise inspection quality. See figure <ref>(d).
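For illustration, the sketch below evaluates the depth-of-field constraint and the direction angle for a single measured point, following the two equations above; all numerical values are hypothetical.

```python
import numpy as np

def in_depth_of_field(z_s, W_d, Z_r):
    """Check the depth-of-field constraint for a measured distance z_s."""
    return (W_d - Z_r / 2.0) <= z_s <= (W_d + Z_r / 2.0)

def direction_angle(sensor_dir, surface_normal):
    """Direction angle (degrees) between the sensor direction l and the surface normal n."""
    cos_a = np.clip(-np.dot(sensor_dir, surface_normal), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

# Hypothetical values: sensor looking straight down onto a slightly tilted surface
l = np.array([0.0, 0.0, -1.0])
n = np.array([0.05, 0.0, 1.0])
n = n / np.linalg.norm(n)
print(in_depth_of_field(z_s=310.0, W_d=300.0, Z_r=100.0),
      round(direction_angle(l, n), 2))
```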
In addition to considering these parameters, it is crucial to choose the appropriate type of trajectory to achieve a complete scan of the surface of the piece. In laser profilometer inspection applications, one of the most common strategies is to use Boustrophedon paths <cit.>, <cit.>.
In a Boustrophedon scan, the sensor moves in a straight line along one axis until it reaches the edge of the surface to be inspected. Then, it shifts laterally a predetermined distance and changes direction to move in the opposite direction along the initial axis. This pattern of movements is repeated until the entire surface is covered. In the context of surface inspection, this method is highly efficient in ensuring that every area of the piece's surface is scanned without omissions, thereby maximizing coverage and inspection accuracy.
Considering these types of trajectories, the profilometric sensor collects data only during the parallel passes along the surface of the piece. In Figure <ref>, these trajectories are shown in red, from the initial point of a pass (P_ini_i) to its end point (P_fin_i), where i denotes the number of parallel passes. The movement of the robot between each pass is shown in black. The distance between passes, d, is carefully adjusted to ensure that the scans overlap adequately, thereby completely covering the piece.
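A minimal sketch of how the start and end points of the parallel passes could be generated for a rectangular region is given below; the scanning axis, region bounds, pass spacing and scanning height are illustrative assumptions rather than the exact procedure used in the experiments.

```python
def boustrophedon_passes(x_min, x_max, y_min, y_max, d, z_scan):
    """Return (P_ini_i, P_fin_i) pairs for parallel passes spaced d apart along X.

    Scanning is assumed to proceed along Y; successive passes alternate direction
    so the sensor sweeps the surface in a back-and-forth (Boustrophedon) pattern.
    """
    passes = []
    x, forward = x_min, True
    while x <= x_max + 1e-9:
        y_start, y_end = (y_min, y_max) if forward else (y_max, y_min)
        passes.append(((x, y_start, z_scan), (x, y_end, z_scan)))
        x += d
        forward = not forward
    return passes

# Hypothetical 1440 x 1060 mm region covered with 100 mm between passes
for p_ini, p_fin in boustrophedon_passes(0, 1440, 0, 1060, d=100, z_scan=300):
    print(p_ini, "->", p_fin)
```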
§.§ Simulated Environment
To train the reinforcement learning algorithms, it is essential to have an environment that simulates the conditions of the real system. Conducting tests directly on the real system can be costly, dangerous, or simply impractical in many situations. Therefore, simulators are used to virtually recreate the environment of interest.
We will use the simulator detailed in our previous work <cit.>, designed to replicate the conditions of the real system in a virtual environment. This simulator recreates the measurements of a laser triangulation profilometric sensor, emulating the parameters of any commercial sensor according to its specification sheet. It allows for precise reproduction of measurements on a CAD model of the part to be inspected, including the introduction of inherent sensor noise and speckle noise generated by the object's surface. See Figure <ref>.
In each iteration of the simulator, several critical parameters are obtained that will be used later by the RL algorithm. First, the distance profile is captured, a fundamental representation provided by any profilometric sensor, see Figure <ref>. Additionally, the 3D position of the scanned points of the CAD model is collected, providing detailed information about the surface geometry of the object, see Figure <ref>. Furthermore, the simulator also provides data on the normals at those points on the object's surface.
§.§ State Space
As previously mentioned, the position and orientation of the end-effector are used instead of relying on the positions and velocities of the robot's joints. This choice simplifies the state space and facilitates the transfer of the method to different robotic configurations without the need for specific adjustments in the joints.
Mathematically, the state S is defined as a tuple as follows:
S = {P(x, y, z), θ, D, α, Δ s}
Here, P(x,y,z) represents the position of the end-effector, while θ denotes its tilt. The parameters D, α, and Δ s correspond to the mean profile distance obtained from the scan, the angle between the sensor and the surface, and the advance between consecutive scans in the 3D space, respectively.
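A minimal sketch of how this observation could be assembled from the simulator outputs is shown below; aggregating the per-point measurements by their mean over the profile is our assumption, and all variable names are illustrative.

```python
import numpy as np

def build_state(ee_pos, ee_tilt, profile_dist, sensor_dir, normals, pts_3d, prev_pts_3d):
    """Assemble the state tuple S = {P(x, y, z), theta, D, alpha, delta_s}.

    ee_pos, ee_tilt     : end-effector position (3,) and pitch angle
    profile_dist        : per-point distances measured along the current profile
    sensor_dir          : unit viewing direction l of the sensor
    normals             : per-point surface normals of the scanned profile, shape (N, 3)
    pts_3d, prev_pts_3d : 3D points of the current and previous profiles, shape (N, 3)
    """
    D = float(np.mean(profile_dist))                        # mean profile distance
    cos_a = np.clip(-normals @ sensor_dir, -1.0, 1.0)
    alpha = float(np.degrees(np.mean(np.arccos(cos_a))))    # mean direction angle
    delta_s = float(np.mean(np.linalg.norm(pts_3d - prev_pts_3d, axis=1)))  # advance
    return np.concatenate([ee_pos, [ee_tilt, D, alpha, delta_s]]).astype(np.float32)
```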
§.§ Action Space
The action space is defined by the increments in the position and tilt angle of the inspection sensor. These increments are defined relative to the sensor's own coordinate system. Mathematically, the action space is represented by equation <ref>.
A = {Δ y, Δ z, Δθ}
Where Δ y refers to the increment in position in the sensor's forward direction (Y), which will be previously defined by a unit vector indicating the scanning direction. Δ z refers to the increment in position in the sensor's vertical direction (Z), controlling the height of the end-effector relative to the part. Δθ denotes the change in the sensor's pitch orientation, which is the rotation around the X-axis. This is represented in Figure <ref>.
The action space is defined as continuous, meaning that actions span a continuous range of values rather than discrete ones. This approach ensures smooth and controlled sensor movements to avoid abrupt changes that could affect measurement accuracy or cause collisions with the workpiece. Equation <ref> establishes the limits for each type of action in the continuous space. Here, Δ y, Δ z, and Δθ are constrained to values between ±Δ y_max millimeters, ±Δ z_max millimeters, and ±Δθ_max degrees, respectively.
Δ y ∈ [ - Δ y_max, Δ y_max]
Δ z ∈ [ - Δ z_max, Δ z_max]
Δθ∈ [ - Δθ_max, Δθ_max]
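Since the learning library used later in the Results (stable-baselines3) expects a Gym-style environment, these observation and action spaces could be declared as in the following minimal sketch; the class name, the unbounded observation limits, and the default increment limits of ±1 (mm and degrees) are illustrative assumptions.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class InspectionEnv(gym.Env):
    """Skeleton of the inspection environment: spaces only, dynamics omitted.

    reset() and step() would wrap the profilometric-sensor simulator (not shown).
    """

    def __init__(self, dy_max=1.0, dz_max=1.0, dtheta_max=1.0):
        # State: end-effector position (x, y, z), tilt theta, mean profile
        # distance D, direction angle alpha, and profile spacing delta_s.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(7,), dtype=np.float32)
        # Action: relative increments (dy, dz, dtheta) in the sensor frame.
        self.action_space = spaces.Box(
            low=np.array([-dy_max, -dz_max, -dtheta_max], dtype=np.float32),
            high=np.array([dy_max, dz_max, dtheta_max], dtype=np.float32))
```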
§.§.§ Dynamic Action Limitation
To ensure smooth and safe movement of the inspection sensor, the selection of actions is dynamically adjusted based on the environment's observations. This action limitation accelerates the convergence of the reinforcement learning algorithm, enabling the system to learn more efficiently and effectively.
When the sensor is farther from the part surface than the optimal working distance W_d, limits are applied to the sensor's displacement in the Z direction Δ z to bring it closer to the surface in a controlled manner. Conversely, if the sensor is too close, displacements in the negative direction are limited, as per equation <ref>.
Δ z = clip(Δ z, 0, Δ z_max), if (D - W_d) ≥ 0
Δ z = clip(Δ z, -Δ z_max, 0), if (D - W_d) < 0
Here, clip(x,a,b) limits the value of x between a and b, ensuring that the actions are within the permitted range, according to equation <ref>.
clip(x, a, b) = a, if x ≤ a;  b, if x ≥ b;  x, otherwise.
Similarly, if the sensor's direction angle (α) with respect to the surface normal is positive, indicating excessive tilt, limits are applied to the angular displacement Δθ to correct the sensor's orientation. Conversely, if the tilt angle is negative, limits are applied to the angular displacement in the opposite direction to keep the sensor properly aligned with the inspected surface. This is represented in equation <ref>.
Δθ = clip(Δθ, 0, Δθ_max), if α ≥ 0
Δθ = clip(Δθ, -Δθ_max, 0), if α < 0
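A compact sketch of this dynamic limitation is shown below, mirroring the two clipping rules above; variable names are illustrative.

```python
import numpy as np

def limit_action(dz, dtheta, D, W_d, alpha, dz_max, dtheta_max):
    """Dynamically restrict the vertical and angular increments of the sensor."""
    if (D - W_d) >= 0:                       # sensor too far: only move towards the surface
        dz = np.clip(dz, 0.0, dz_max)
    else:                                    # sensor too close: only move away
        dz = np.clip(dz, -dz_max, 0.0)

    if alpha >= 0:                           # tilt correction towards the surface normal
        dtheta = np.clip(dtheta, 0.0, dtheta_max)
    else:
        dtheta = np.clip(dtheta, -dtheta_max, 0.0)
    return dz, dtheta
```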
§.§ Reward Function
In reinforcement learning, creating an effective reward model is crucial as it guides the agent toward desirable behaviors within the environment. This model assigns a value to each state-action pair, reflecting the immediate benefit or cost associated with the agent's decision. This section details the reward strategy designed in this research.
The proposed reward function R(s,a) consists of three distinct components, each capturing different aspects of the inspection process. Mathematically, this function is expressed as shown in equation <ref>.
R(s,a) = w_D R_D + w_α R_α + w_Δ s R_Δ s
R_D represents the reward related to the distance between the sensor and the inspected surface, R_α denotes the reward related to the alignment of the sensor's orientation with the normal of the inspected object's surface, and R_Δ s captures the reward associated with the sensor's movement between consecutive scans in the 3D space corresponding to the point cloud of the inspected piece. w_D, w_α, w_Δ s represent the weights that each component contributes to the overall reward function.
The proposed rewards are in the range [0, -1], as the reward function aims to incentivize the agent to perform actions that improve the inspection process. The maximum value of 0 is assigned when the optimal goal is reached, while negative values indicate penalties for deviations from the desired behavior.
§.§.§ Distance Reward (R_D)
To ensure that the sensor maintains an optimal distance from the inspected surface, a distance reward function R_D is defined as a continuous penalty function that decreases as the absolute difference between the observed distance and the optimal working distance W_d increases. The reward function is formulated as follows:
R_D = - (W_d - D)^2/(Z_r/2)^2
Where W_d represents the optimal working distance, D the observed distance during scanning, and Z_r the specified working range of the sensor. This results in a parabolic function with values between [-1,0], corresponding to 0 when operating at the optimal working distance and -1 at the sensor's range limits, as shown in Figure <ref>. If the distance is outside this range, the penalty is maximum (-1).
§.§.§ Orientation Reward (R_α)
To induce the agent to align its orientation with the surface normal, we introduce an orientation reward model (R_α). This model is designed to minimize the angular disparity between the sensor direction and the surface normal vector. The function is defined as a continuous penalty function that approaches 0 as the absolute orientation difference decreases, see Figure <ref>:
R_α = max(-1, - α^2/α_max^2)
Where α is the angular difference between the sensor's orientation and the surface normal, and α_max is the maximum allowed angular disparity threshold. This model encourages the agent to maintain close alignment with the surface normal, optimizing the quality of the inspection.
§.§.§ Movement Reward (R_Δ s)
In addition to optimizing the distance and orientation of the sensor, ensuring smooth forward movement is crucial for comprehensive surface coverage. Forward scanning movement ensures that each scanned 3D profile extends beyond the previous one, facilitating thorough inspection. The reward function R_Δ s is expressed as:
R_Δ s = max(-1, - (Δ s - Δ s_opt)^2/Δ s_opt^2)
This function penalizes the agent when the scanning spacing Δ s is negative, indicating backward movement within the inspection area. Likewise, it behaves parabolically with respect to the scanning spacing Δ s. When the spacing is equal to twice the optimal value Δ s_opt, the reward reaches its minimum value of -1. This indicates a strong penalty for excessively large spacings. As the spacing decreases from this point, the reward gradually increases, reaching a maximum value of 0 when the spacing is exactly equal to the optimal value. Therefore, the reward function motivates the agent to maintain spacing close to the optimal, as both above and below-optimal values result in a decrease in reward, see Figure <ref>.
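Putting the three terms together, a minimal sketch of the composite reward is given below; the clipping of the distance term at -1 follows the description above, and the values in the example call are hypothetical.

```python
def reward(D, alpha, delta_s, W_d, Z_r, alpha_max, ds_opt,
           w_D=1/3, w_alpha=1/3, w_ds=1/3):
    """Composite reward combining distance, orientation and advance terms."""
    # Distance term: 0 at the optimal working distance, -1 at the range limits.
    r_D = max(-1.0, -((W_d - D) ** 2) / ((Z_r / 2.0) ** 2))
    # Orientation term: penalise deviation of the direction angle from zero.
    r_alpha = max(-1.0, -(alpha ** 2) / (alpha_max ** 2))
    # Advance term: 0 at the optimal profile spacing, -1 at twice that spacing.
    r_ds = max(-1.0, -((delta_s - ds_opt) ** 2) / (ds_opt ** 2))
    return w_D * r_D + w_alpha * r_alpha + w_ds * r_ds

# Hypothetical values: slightly off the optimal distance, well aligned, good spacing
print(reward(D=305.0, W_d=300.0, Z_r=100.0, alpha=2.0, alpha_max=30.0,
             delta_s=0.5, ds_opt=0.5))
```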
§.§ RL Algorithm: PPO
The Proximal Policy Optimization (PPO) algorithm <cit.> is a policy gradient technique designed to provide faster and more efficient updates than previously developed reinforcement learning algorithms, such as Advantage Actor-Critic (A2C) or Deterministic Policy Gradient (DPG).
PPO was designed as an improvement over Trust Region Policy Optimization (TRPO). It simplifies and accelerates the training process by using first-order gradients and a clipped objective function that stabilizes policy updates.
PPO employs a clipped surrogate loss function, penalizing excessive changes in the policy, stabilizing training, and preventing significant divergence between new and old policies. The clipped objective function of PPO is defined by equation <ref>.
J_clip(π) = 𝔼[ min( r(π) Â, clip( r(π), 1 - ϵ, 1 + ϵ) Â) ]
Here, π represents the policy, 𝔼 denotes the expectation over time, r is the ratio of probability under the new and old policies, respectively, Â is the estimated advantage, and ϵ is a hyperparameter controlling how much the new policies are allowed to differ from the old policies during the optimization process. It is used to compute a penalty function that limits the policy change at each optimization iteration. The probability ratio r(π) is calculated as the probability of selecting an action under the new policy divided by the probability of selecting the same action under the old policy, i.e.:
r(π) = π_new(a_t|s_t)/π_old(a_t|s_t)
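For illustration, the clipped surrogate objective can be evaluated for a batch of samples from log-probabilities and advantage estimates as in the following sketch (plain NumPy, purely illustrative and not the implementation used here).

```python
import numpy as np

def ppo_clipped_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective J_clip averaged over a batch of samples."""
    ratio = np.exp(logp_new - logp_old)                 # r(pi) = pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))      # maximised during training

# Hypothetical batch: log-probabilities under the new/old policies and advantages
logp_new = np.array([-0.9, -1.2, -0.4])
logp_old = np.array([-1.0, -1.0, -0.5])
adv = np.array([0.5, -0.3, 1.1])
print(ppo_clipped_objective(logp_new, logp_old, adv))
```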
For the PPO algorithm, the hyperparameter configuration involves several key aspects. The neural network architecture defines the structure of the neural network, including the number of hidden layers and units per layer. An activation function, typically ReLU, is applied to the outputs of each hidden layer to introduce non-linearities into the model. The learning rate determines the size of the step taken during weight updates, influencing the speed and stability of learning. Additionally, the clip ratio limits policy changes between updates to ensure training stability. The epoch parameter denotes the number of complete passes through the training dataset during training.
§ RESULTS
In this section, we present the experiments conducted to validate and evaluate the proposed methods in the context of generating inspection trajectories using reinforcement learning (RL) algorithms. These algorithms were implemented using the open-source library stable-baselines3 <cit.>, which provides reliable implementations of reinforcement learning algorithms building on OpenAI Baselines. To analyze and process the obtained results, we used MATLAB 2023b.
First, we detail the training process of the RL model, including the architecture of the neural network used, the configured hyperparameters, and the training methodology employed.
Subsequently, the trained RL model is employed to generate inspection trajectories for two different parts using their CAD models: a car door and the body of a Parrot drone, see figure <ref>. Due to its dimensions, the car door will be scanned using a Boustrophedon trajectory base, whereas the drone will be scanned with a single straight-line scan.
The trained RL model takes the CAD model of the part as input and produces a sequence of movements that the sensor must follow to efficiently scan its surface. This trajectory is designed to minimize the error between the actual distance of the sensor to the part and the sensor's optimal working distance, as well as the direction angle. Additionally, it ensures a constant separation between scan profiles, guaranteeing uniform and precise coverage of the entire surface.
We perform different experiments, both in simulation and in a real environment. Simulation results obtained during the execution of the trajectories in our inspection simulator are presented and analyzed. The results of the scans generated by RL-optimized trajectories are compared with conventional methods such as straight-line or Boustrophedon-type trajectories.
Finally, an experiment is conducted using a UR3e robot in a real-world environment to execute the inspection trajectory generated offline by the trained RL model for the Parrot drone. We analyze the results obtained to validate the transferability of the solution from the simulated environment to practical real-world applications.
§.§ Training Process
The training process of the RL model for trajectory optimization in robotic inspection was developed using a detailed simulation environment, the characteristics of which are explained in <cit.>. In this context, a profilometric sensor was simulated with the same specifications as the Automation Technology model AT-C5-4090CS30-495, whose main parameters are detailed in Table <ref>. It is important to note that this setup is designed to generalize based on input parameters, allowing for adjustments to different working distances, for example.
The design of the training piece was aimed at representing a wide variety of conditions that could be encountered in real inspection applications. This piece, created in 3D modeling software, features changes in orientation, height variations, and flat surfaces. Its dimensions are 1050x150x50mm, as shown in Figure <ref>.
During training, each episode corresponds to a starting point and an ending point, determined by the scanning direction and the piece's dimensions, as illustrated in Figure <ref>, which shows the CAD model of the piece used for training.
In the experiments, the action space is continuous, meaning actions are expressed as values within a continuous range rather than discrete values. Specifically, these actions are limited within the interval of [-1, 1], where position increments are measured in millimeters and pitch angles in degrees.
Table <ref> summarizes the hyperparameters used for the PPO algorithm. These parameters were selected based on default values recommended by the authors of the stable-baselines3 library. The neural network architecture consists of two hidden layers with 64 units each and ReLU activation function. A learning rate of 0.0003 was employed, with updates performed every 2048 steps and a batch size of 64. The discount factor (γ) was set to 0.99, and a clip ratio of 0.2 was used to stabilize training by limiting policy updates. Training proceeded over 10 epochs to refine the policy effectively. These hyperparameters were chosen to balance exploration and exploitation, ensuring robust and efficient learning within the RL framework.
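As an indication of how these settings map onto the stable-baselines3 interface used in our experiments, a sketch is shown below; the environment class refers to the placeholder skeleton introduced earlier (completed with reset/step backed by the simulator), and the total number of training timesteps is illustrative.

```python
import torch
from stable_baselines3 import PPO

model = PPO(
    policy="MlpPolicy",
    env=InspectionEnv(),           # placeholder for the simulator-backed environment
    policy_kwargs=dict(net_arch=[64, 64], activation_fn=torch.nn.ReLU),
    learning_rate=3e-4,
    n_steps=2048,                  # environment steps collected per update
    batch_size=64,
    n_epochs=10,
    gamma=0.99,
    clip_range=0.2,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)   # training budget is a placeholder
model.save("ppo_inspection")
```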
During the training process of the algorithm, various metrics are employed to assess its performance and convergence capability. These metrics include the reward per episode, the length of episodes, and the number of episodes required to reach a certain performance level. The reward per episode is a crucial metric indicating the total amount of reward accumulated by the agent in each training episode. Generally, a higher reward reflects better performance of the agent in the task.
However, in this specific training context, evaluating solely the accumulated reward per episode might not be the most appropriate approach. This is because the length of episodes can vary significantly depending on the step size, defined as the distance between profiles. Therefore, instead of focusing solely on the accumulated reward, it is preferred to evaluate the globally normalized reward by the length of the episode. This metric provides a more comprehensive assessment of the agent's performance, as it considers both the accuracy of measurements and the efficiency of the trajectory. By doing so, a more precise insight into the overall effectiveness of the model in trajectory optimization and inspection quality is obtained, regardless of the specific length of episodes.
Additionally, the total number of episodes required for the agent to reach a certain performance level or convergence is analyzed. This number gives us an idea of how quickly the RL algorithm can learn and improve its performance in the given task.
In this experiment, the weight of each partial reward has been set so that they have the same influence on the overall reward, that is, w_D = w_α = w_Δ s = 1/3.
§.§ Car door
The first piece used to evaluate the model was a car door with dimensions 1440x1060x190mm. The initial trajectory, serving as a base for the optimization, is depicted in Figure <ref>. In this figure, the initial Boustrophedon-type trajectory is observed, with inspection passes marked in red and movements between passes in white.
The reinforcement learning model is applied throughout the different passes, dynamically adjusting the orientation and distance of the profilometric sensor. Each pass of the Boustrophedon scan is optimized to maintain an appropriate pose between the sensor and the door surface, ensuring precise and uniform data capture. The model is exclusively applied to the red passes, where inspection is performed, optimizing the sensor's orientation and distance in each of these passes. The white movements represent transitions between passes where the sensor is not scanning and, therefore, not optimized.
Here are the results obtained when applying the trained model to the trajectory defined in Figure <ref>. The definition of the start and end points of each pass is manually performed using the simulator interface.
Figure <ref> (a) displays the point cloud obtained during the scanning using the profilometric sensor. The point cloud is depicted with a color map indicating the error in the measured distance for each point. This error is defined as the difference between the optimal working distance of the sensor and the actual distance obtained during the measurement at each point.
The aim of the analysis is to evaluate the accuracy and consistency of the profilometric sensor under real scanning conditions. The use of the color map facilitates the visualization of areas with higher and lower errors, providing a clear insight into the spatial distribution of the error across the scanned surface.
To provide a meaningful comparison, a scan was also carried out using a more traditional method. In this approach, the scanning was performed following the boustrophedon trajectory with a static configuration of height and orientation, without dynamic adjustments. This method is commonly employed in industrial applications due to its simplicity and ease of implementation. Figure <ref> (b) shows the representation of the distance error map in a traditional scan.
In addition to the distance error map, an orientation error map was generated, which displays angular deviations from the optimal sensor orientation at each point. This deviation refers to the direction angle of the sensor on the surface. See Figure <ref>.
§.§ Parrot drone
The results obtained from the optimization of the inspection trajectory of the Parrot drone are presented below. Figure <ref> shows an image of the drone's body and its CAD model. The dimensions of the part are 350x95x65mm.
A scan of one side of the drone body will be performed. Due to its dimensions, the base trajectory used will be a simple straight line from the beginning of the part to the end, see figure <ref>.
Using the trained reinforcement learning (RL) model, the optimized scanning trajectory for the drone is generated.
§.§.§ Real-world Validation Experiment
By leveraging the proposed simulation framework to generate and validate the scanning trajectories from the CAD model, the experiment aims to transition these optimized paths effectively to the real-world setup. This process ensures that the developed methods can be reliably applied to the actual drone body, achieving high precision and efficiency in industrial inspection scenarios.
The results obtained after executing the generated trajectory for scanning the Parrot drone in a real environment are presented below. The scanning system is composed of a 6-DOF UR3e robotic arm equipped with a triangulation laser profilometer model AT-C5-4090CS30-495. The complete configuration of the inspection system, which includes the UR3e robotic arm, the profilometric sensor, and the drone body, can be seen in Figure <ref>.
We begin with a trajectory composed of sensor poses. Using the RoboDK framework <cit.>, we generate the motion program for the UR3e robot to follow the optimized path. This program is then transferred to the robot's controller, configured to execute the motion commands. As a result, the UR3e robotic arm tracks the planned trajectory to scan the Parrot drone, faithfully replicating the precise movements defined during simulation.
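As an indication only, the sketch below shows how such a pose sequence could be streamed to the robot through RoboDK's Python API; the item name, units, import paths, and pose values are assumptions that depend on the RoboDK version and station setup.

```python
# Sketch only: assumes the classic RoboDK Python API and a station already
# containing a robot item named "UR3e"; import paths and units (mm, degrees)
# may differ depending on the RoboDK version and station configuration.
from math import radians
from robolink import Robolink, ITEM_TYPE_ROBOT
from robodk import transl, rotx

RDK = Robolink()
robot = RDK.Item("UR3e", ITEM_TYPE_ROBOT)

# Hypothetical pose sequence (x, y, z in mm, pitch in degrees) produced offline by the RL model
trajectory = [(0.0, 200.0, 300.0, 0.0), (0.0, 200.5, 299.8, 0.5)]

for x, y, z, pitch in trajectory:
    pose = transl(x, y, z) * rotx(radians(pitch))   # homogeneous target pose
    robot.MoveL(pose)                               # linear move to the scanning pose
```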
§ DISCUSSION
In the real-world validation tests of the scanning trajectories, we used a smaller piece that required only a single linear scanning path, eliminating the need for a boustrophedon pattern. This choice was dictated by the physical limitations of the UR3e robotic arm, which has a relatively restricted reach. Given the limited range of the UR3e, it was not feasible to scan larger surfaces that would necessitate multiple passes. Consequently, we opted for a smaller and more suitable piece that allowed for effective validation within the operational capabilities of the available equipment. This decision ensures that, although the UR3e cannot cover extensive surfaces, the methodology and trajectory optimization principles we developed are applicable and verifiable in a controlled and representative environment.
Despite its smaller size, the real piece chosen for the tests has sufficient geometric diversity to validate our proposed methodology. Its varied surface features ensured that the trajectory optimization and scanning techniques could be robustly tested. If we were to work with a larger piece, a boustrophedon scanning pattern would simply be employed. In such cases, our reinforcement learning model would be applied to each pass within the boustrophedon path, just as it was used in the simulated experiment with the door. This approach ensures that our methods are versatile and can be adapted to both small and large-scale scanning tasks.
The scanning trajectory generated by our approach is designed to be highly versatile and adaptable to any robotic system with sufficient reach. This versatility arises from the fact that the trajectory increments in both position and orientation are small and precise enough to be executed by any industrial robot. When these incremental commands are input into the robot’s control software, it calculates the necessary joint movements to achieve the specified end-effector positions.
This adaptability is a key feature of our method. By focusing on end-effector coordinates, our system avoids the need for specific kinematic adjustments for different robots, making the trajectory compatible with a wide range of robotic platforms. Whether the robot has a simple or complex joint configuration, the internal control software translates the end-effector trajectory into joint movements effectively.
This approach not only simplifies the integration process with various robots but also ensures that the trajectory optimization remains effective regardless of the robot used. This makes our system particularly beneficial in diverse industrial settings where different robots may be utilized for various tasks. Thus, our trajectory generation and optimization method provides a flexible solution, generalizing across multiple types of robotic systems and enhancing the applicability of automated inspection processes.
§ CONCLUSIONS
In this paper, we have presented a Reinforcement Learning (RL)-based method for generating inspection trajectories for laser profilometric sensors. Our goal was to optimize the scanning process by dynamically adjusting the sensor's position and tilt to maintain an optimal pose relative to the surface of the part being inspected.
We employed a simulated environment that replicates the conditions of a real system, as developed in our previous work. In our method, the state space was defined using the position and orientation of the sensor, which normally coincides with the robot's end-effector; this allows the approach to be generalized and easily transferred to different robotic configurations. We also included additional parameters in the state space, such as the mean profile distance, the direction angle, and the spacing between consecutive scans, providing a comprehensive understanding of the inspection process.
The action space was designed to include relative increments in the sensor's position and tilt angle, allowing for precise adjustments and smooth sensor movements. Our reward function was designed by incorporating three key components: the distance between the sensor and the surface, the alignment of the sensor with the surface normal, and the spacing between consecutive scans. This detailed reward function encourages optimal behavior in terms of maintaining appropriate distance, correct orientation, and efficient sensor progression across the surface.
Through our experiments in a simulated environment, we validated the capability of the RL-trained model to adapt to different parts and maintain optimal scanning trajectories not seen during the training phase. Furthermore, we tested the method in a real-world scenario using a UR3e robotic arm. The optimized trajectory, generated offline from a CAD model, was executed successfully, demonstrating that our method can produce high-quality, precise inspection trajectories that ensure effective surface coverage.
Sara Roos-Hoefgeest
Sara Roos Hoefgeest is currently a PhD student at University of Oviedo, in Escuela Politécnica de Ingeniería of Gijón, Spain. She is working at the Department of Electrical, Electronics, Communications, and Systems Engineering. Her research work is focused on the area of Robotics, 3D Computer Vision and Visual Inspection of Manufactured Products. She holds a Bachelor's Degree in Electronic and Automation Engineering and a Master's Degree in Automation and Industrial Computing both from the University of Oviedo.
Mario Roos-Hoefgeest
Mario Roos Hoefgeest Toribio is currently a PhD student at University of Oviedo, in Escuela Politécnica de Ingeniería of Gijón, Spain. He is working at CIN Systems, applying the results of the research in production lines in industrial facilities. His research work is focused on the area of 3D Computer Vision and Visual Inspection for defects detection in production lines. He holds a Bachelor's Degree in Electronic and Automation Engineering and a Master's Degree in Automation and Industrial Computing both from the University of Oviedo.
Ignacio Alvarez
Ignacio Alvarez is PhD in Industrial Engineering at University of Oviedo (1997), and Associate Professor at the Electrical, Electronics and Automatic Control Department of the same University (1999). His research work is focused in 3D automatic inspection systems for dimensional control and defects detection in production lines, using several technologies (laser triangulation, stereo vision, conoscopic holography). He has participated in more than 20 public financed projects, more than 30 private financed projects, and published more than 15 papers in scientific journals; several prototypes developed or inspired by the author are in production in industrial facilities.
Rafael C. González
Rafael Corsino González de los Reyes was born in Gijón, Spain, in 1968. He received an Engineering degree and a Ph.D. degree in Electronics and Automation from the Escuela Superior de Ingenieros Industriales de Gijón, University of Oviedo, Spain, in 1993 and 1999, respectively. He is currently an Associate Professor in the Department of Electrical, Electronics, Communications, and Systems Engineering, Universidad de Oviedo. He is a member of IEEE and his research interests include 3D Computer Vision, Robotic Path Planning and Visual Inspection of Manufactured Products.
|
http://arxiv.org/abs/2409.02477v1 | 20240904065859 | Parameter estimation of hidden Markov models: comparison of EM and quasi-Newton methods with a new hybrid algorithm | [
"Sidonie Foulon",
"Thérèse Truong",
"Anne-Louise Leutenegger",
"Hervé Perdry"
] | math.OC | [
"math.OC",
"stat.CO"
] |
§ ABSTRACT
Hidden Markov Models (HMM) model a sequence of observations that are dependent on a hidden (or latent) state that follow a Markov chain. These models are widely used in diverse fields including ecology, speech recognition, and genetics.
Parameter estimation in HMM is typically performed using the Baum-Welch algorithm, a special case of the Expectation-Maximisation (EM) algorithm. While this method guarantees convergence to a local maximum, its convergence rate is usually slow.
Alternative methods, such as the direct maximisation of the likelihood using quasi-Newton methods (such as L-BFGS-B), can offer faster convergence but can be more complicated to implement due to the challenge of dealing with bounds on the space of parameters.
We propose a novel hybrid algorithm, QNEM, that combines the Baum-Welch and the quasi-Newton algorithms. QNEM aims to leverage the strengths of both algorithms by switching from one method to the other based on the convexity of the likelihood function.
We conducted a comparative analysis between QNEM, the Baum-Welch algorithm, an EM acceleration algorithm called SQUAREM (Varadhan, 2008, Scand J Statist), and the L-BFGS-B quasi-Newton method by applying these algorithms to four examples built on different models. We estimated the parameters of each model using the different algorithms and evaluated their performance.
Our results show that the best-performing algorithm depends on the model considered. QNEM performs well overall, always being faster than or equivalent to L-BFGS-B. The Baum-Welch and SQUAREM algorithms are faster than the quasi-Newton and QNEM algorithms in certain scenarios with multiple optima. In conclusion, QNEM offers a promising alternative to existing algorithms.
§ KEYWORDS
Hidden Markov models, Baum-Welch, quasi-Newton, L-BFGS-B, SQUAREM, optimisation, computational statistics
§ INTRODUCTION
Hidden Markov Models (HMMs) are widely used in time series analysis, with application across diverse fields, including ecology, speech recognition and many others. A particular interest is found in biology, especially in genetics: HMM are often used for genome annotation. For example, CpG islands identification (regions of the genome with a high frequency of cytosine-guanine dinucleotides) <cit.>, can be performed through the use of HMM. It can also be used to differentiate exons (protein-coding sequences) and introns (non-coding sequences) within genes <cit.>.
To compute the maximum likelihood estimators of the parameters of HMM, two approaches are naturally employed: the Baum-Welch algorithm <cit.>, which is a special case of the Expectation-Maximisation (EM) algorithm <cit.> (cf section <ref> below) and direct maximisation of the likelihood using a quasi-Newton method (section <ref>), especially BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm <cit.>. Both methods have advantages and drawbacks. In particular, the Baum-Welch algorithm accommodates naturally the presence of bounds on the space of parameters, which can be a hurdle for the quasi-Newton, in particular when some parameters live in an open interval. On the other hand, the Baum-Welch algorithm usually converges more slowly than the quasi-Newton method.
In this paper we propose a new algorithm, QNEM, for HMM parameter estimation. This algorithm starts with one or more EM steps, and switches to a quasi-Newton algorithm, the BFGS, as soon as a criterion on local convexity of the log-likelihood is met. Quasi-Newton iterations are then performed, until convergence or until the aforementioned criterion on local convexity is no longer satisfied, in which case it switches back to the EM algorithm, etc.
We compare its performances with several algorithms: the Baum-Welch algorithm, an accelerated EM algorithm called SQUAREM (section <ref>) and the L-BFGS-B (Limited-memory BFGS with Bound constraints) algorithm, the “off-the-shelf” quasi-Newton method optimized for problems with bounded parameters.
We first review the theory of HMMs, and then expose the four algorithms of interest as well as the examples on which the comparisons are performed. Finally, we present the results obtained and discuss them.
§ HIDDEN MARKOV MODELS
A sequence of random variables measured at successive moments (X_i)_i ∈ℕ is a Markov chain if it satisfies the Markov property: for any time i, the distribution of X_i+1 conditional to (X_j)_j≤ i is equal to the distribution of X_i+1 conditional to the single value X_i.
The model is classically illustrated by a directed acyclic graph (DAG) in Figure <ref>.
One of the properties often assumed for a Markov chain is the homogeneity of the sequence: a sequence is homogeneous if the transition probabilities ℙ(X_i+1 | X_i) are identical at every point of the sequence. This property is sometimes contradicted by a given sequence of observations, making the Markov model inappropriate. A more complex model, in which an additional hidden layer (S_i)_i ∈ℕ is a Markov chain and the distribution of X_i is assumed to depend only on the "hidden state" S_i, can be more appropriate. This model is called the Hidden Markov Model (HMM) and is illustrated by the DAG in Figure <ref>.
In some situations, the Markov model is known to be adequate, but it can be impossible to directly access the Markov chain, but only variables depending on the current state of the Markov chain. In this case, the HMM provides a natural framework for analysis. HMMs allow to retrieve the states S_i of the latent Markov chain, using the observations X_i whose distribution depends on the hidden states.
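As an illustration, a two-state HMM and a sequence of observations can be simulated with a few lines of Python; the transition matrix, emission matrix and initial distribution below are arbitrary illustrative values, not taken from any of the examples of this paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state HMM (placeholder values)
pi = np.array([0.5, 0.5])            # initial distribution of S_1
trans = np.array([[0.9, 0.1],        # trans[s, t] = P(S_{i+1}=t | S_i=s)
                  [0.2, 0.8]])
emis = np.array([[0.8, 0.2],         # emis[s, x] = P(X_i=x | S_i=s)
                 [0.3, 0.7]])

def simulate_hmm(L):
    """Draw a hidden path S_1..S_L and the corresponding observations X_1..X_L."""
    states, obs = np.empty(L, dtype=int), np.empty(L, dtype=int)
    states[0] = rng.choice(2, p=pi)
    for i in range(1, L):
        states[i] = rng.choice(2, p=trans[states[i - 1]])
    for i in range(L):
        obs[i] = rng.choice(2, p=emis[states[i]])
    return states, obs

states, obs = simulate_hmm(200)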
§.§ Direct maximisation of the likelihood with a quasi-Newton type method
Direct maximisation of the likelihood (DML) with quasi-Newton methods is a possible way to estimate the HMM parameters.
Quasi-Newton methods are particularly suited for maximisation of differentiable scalar functions, such as likelihood functions. They have the advantage of avoiding the calculation of the Hessian matrix by approximating the Hessian matrix (or its inverse) at each iteration.
The off-the-shelf methods are BFGS or L-BFGS-B when some parameters are constrained, standing respectively for Broyden-Fletcher-Goldfarb-Shanno <cit.>, and for Limited-memory BFGS with Bound constraints.
This method relies on a forward step, computing probabilities of the hidden states, either conditionally to the X_i, or jointly with the X_i. These probabilities allow to compute the likelihood. The most common choice for the DML is using joint probabilities going forward <cit.>. A log-transformation is often employed to get rid of the numerical underflow implied by the joint probabilities <cit.>. In this paper, we chose to use the conditional version of the algorithm for the forward step (cf below section <ref>), which avoids the numerical underflow and therefore going through the log-transformation.
§.§ Baum-Welch algorithm, an EM algorithm
The Baum-Welch algorithm <cit.> was developed to estimate the HMM parameters.
The respective contributions of Baum and Welch in the development of the Baum-Welch algorithm are explained in Welch's Shannon lecture, in 2003 <cit.>. After working separately on the calculation of a posteriori probabilities of the hidden states, Baum and Welch worked together on the re-estimation of the parameters from these a posteriori probabilities.
The Baum-Welch algorithm was later recognised as a particular case of the more general Expectation-Maximisation (EM) algorithm formalised by Dempster, Laird and Rubin in 1977 <cit.>.
The Baum-Welch algorithm relies on a forward-backward algorithm for calculating the a posteriori probabilities of the hidden states (E-step). This algorithm is constructed with a forward step as sketched before. We chose to use the forward algorithm based on conditional probabilities, which is more commonly used for the Baum-Welch algorithm. It is also possible to use the forward algorithm based on the joint probabilities, together with an adequate version of the backward step (cf <cit.>, chapter 6).
Then the parameters are re-estimated from the a posteriori probabilities (M-step). This step depends on the model considered.
§ MATERIAL AND METHODS
For the following, let X = (X_1, ..., X_L) = X_1^L the observations and S = (S_1, ..., S_L) = S_1^L the hidden states along the chain of length L.
Let (s, t) = ℙ_θ(S_i+1 = t| S_i = s) be the transition probability from state s to state t and (x,s) = ℙ_θ(X_i = x| S_i = s) the emission probability of observation x from state s.
We consider four maximum likelihood methods to estimate the parameter θ.
§.§ Direct maximisation of the likelihood
As seen in for example Zucchini <cit.> and Turner <cit.>, a variant of the forward algorithm based on joint probabilities can be used to compute the likelihood. If a_i(s) = ℙ(S_i = s, X_1^i-1 = x_1^i-1) and b_i(s) = ℙ_θ(S_i = s,X_1^i = x_1^i), the quantities a_i(s) and b_i(s) can be computed recursively as follows:
* Initialisation :
a_1(s) = π(s)
b_1(s) = a_1(s) (x_1, s)
with π(s) the initial distribution of state s.
* Recursion :
a_i(s) = ∑_t b_i-1(t) (t, s)
b_i(s) = a_i(s) (x_i, s)
for i ∈2, ..., L
Finally the likelihood can be computed as
L(θ ; X) = ℙ_θ(X_1^L = x_1^L) = ∑_s b_L(s)
As the values of the a_i(s) and b_i(s) become very small when i increases, this algorithm is prone to numerical underflow. The usual solution is to compute the values on the logarithmic scale <cit.>, which involves rewriting the above equations accordingly. In this paper, we prefer to use the version of the forward algorithm based on conditional probabilities.
This forward algorithm computes classically two conditional probabilities alternatively:
α_i(s) = ℙ_θ(S_i=s | X_1^i-1 = x_1^i-1)
the “forecast” probability and
β_i(s) = ℙ_θ(S_i=s | X_1^i = x_1^i)
the “filtering” probability.
To compute the log-likelihood we need to compute also for each i
γ_i = log(ℙ_θ(X_1^i = x_1^i) ). These quantities can be computed recursively as follows:
* Initialisation :
α_1(s) = π_s
β_1(s) = (x_1,s) ×α_1(s)/∑_t (x_1,t) ×α_1(t)
γ_1 = log(∑_s α_1(s) (x_1, s)).
* Recursion :
α_i(s) = ∑_t (t,s) ×β_i-1(t)
β_i(s) = (x_i,s) ×α_i(s)/∑_t (x_i,t) ×α_i(t)
γ_i = γ_i-1 + log(∑_s (x_i, s) α_i(s))
for i ∈2, ..., L
Finally, the log-likelihood ℓ(θ ; 𝐗) is equal to γ_L. The likelihood is still computed on the logarithmic scale, but the α_i(s) and β_i(s) can be computed without rewriting all the equations on the logarithmic scale, which makes the computation simpler.
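In code, this recursion can be sketched as follows; pi, trans and emis are assumed to hold the initial distribution, the transition probabilities trans[s, t] and the emission probabilities emis[s, x], and the function returns the forecast probabilities, the filtering probabilities and the log-likelihood γ_L, with no risk of underflow.

import numpy as np

def forward_conditional(obs, pi, trans, emis):
    """Forward pass with conditional probabilities (recursion above).

    Returns alpha (forecast), beta (filtering) and the log-likelihood gamma_L.
    """
    L, S = len(obs), len(pi)
    alpha = np.empty((L, S))
    beta = np.empty((L, S))
    # Initialisation
    alpha[0] = pi
    w = emis[:, obs[0]] * alpha[0]
    beta[0] = w / w.sum()
    loglik = np.log(w.sum())
    # Recursion
    for i in range(1, L):
        alpha[i] = beta[i - 1] @ trans          # sum_t trans[t, s] * beta_{i-1}(t)
        w = emis[:, obs[i]] * alpha[i]
        beta[i] = w / w.sum()
        loglik += np.log(w.sum())
    return alpha, beta, loglik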
The L-BFGS-B algorithm is then applied to the likelihood computed in the last step, using its standard implementation in R. The gradient of the likelihood with respect to θ can be computed using automatic differentiation (our implementation relies on the R package salad <cit.>).
§.§ Baum-Welch algorithm
The Baum-Welch EM algorithm is composed by two steps:
* E step: knowing θ, compute the a posteriori probabilities of the hidden states along the chain (forward-backward algorithm)
* M step: re-estimate θ using these probabilities
The Baum-Welch algorithm alternates these 2 steps until convergence. As mentioned in section <ref>, we will use here the forward-backward algorithm presented in <cit.> or <cit.>.
§.§.§ E step : Forward-Backward algorithm
The forward-backward algorithm runs through the observations twice: the first time forward, ie. from first to last observations ; the second backward, ie. from last to first observation.
After computing the forward quantities shown in equations (<ref>) and (<ref>), the following “backward quantities” will be computed:
δ_i(s,t) = ℙ_θ(S_i-1=s, S_i=t | X_1^L = x_1^L)
the “smoothing" probability and
φ_i(s) = ℙ_θ(S_i=s | X_1^L = x_1^L)
the “marginal" probability.
* Initialisation:
φ_L(s) = β_L(s)
* Recursion:
δ_i(s,t) = (s,t) ×β_i-1(s)/α_i(t)×φ_i(t)
φ_i-1(s) = ∑_t δ_i(s,t)
for i ∈L, ..., 2.
At the end of the forward-backward, we have computed the probabilities φ_i(s) of the hidden states, and the probability δ_i(s,t) of two consecutive hidden states, given all the observations. These probabilities will be used in the M step.
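A sketch of this backward pass, reusing the forward quantities and array names of the previous sketch:

import numpy as np

def backward_smoothing(alpha, beta, trans):
    """Backward pass computing delta[i, s, t] and phi[i, s] (recursion above)."""
    L, S = beta.shape
    phi = np.empty((L, S))
    delta = np.zeros((L, S, S))        # delta[i] is only defined for i >= 1
    phi[L - 1] = beta[L - 1]
    for i in range(L - 1, 0, -1):
        # delta_i(s, t) = trans(s, t) * beta_{i-1}(s) * phi_i(t) / alpha_i(t)
        delta[i] = trans * beta[i - 1][:, None] * (phi[i] / alpha[i])[None, :]
        phi[i - 1] = delta[i].sum(axis=1)
    return delta, phi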
It is worth to note that the backward algorithm has a higher computational cost than the forward algorithm, due to the necessity to calculate the δ_i(s,t) for all pairs of states s and t, at each time point i.
As mentioned previously, there exists a variant of the backward algorithm based on the forward quantities a_i and b_i involving joint probabilities, as seen in <cit.>.
§.§.§ M step
The aim of the M step is to re-estimate the parameter θ that will be used in the following E step.
Let θ^(k) denote the parameter at the k-th iteration. At each iteration k of the Baum-Welch algorithm, the M step estimates the parameter θ^(k+1) used in the next iteration.
The complete likelihood of θ, ie the likelihood assuming the hidden states are observed, is:
L(θ ; S = s, X = x) = ℙ_θ(S_1 = s_1) ·∏_i=2^L ℙ_θ(S_i = s_i | S_i-1 = s_i-1)
·∏_i=1^L ℙ_θ(X_i = x_i | S_i = s_i)
Now the log-likelihood of θ is:
ℓ(θ ; S = s, X = x) = logℙ_θ(S_1 = s_1) + ∑_i=2^L logℙ_θ(S_i = s_i | S_i-1 = s_i-1)
+ ∑_i=1^L logℙ_θ(X_i = x_i | S_i = s_i)
The M step relies on the expected value of the log-likelihood:
Q(θ ; θ^(k)) = 𝔼( ℓ(θ ; S = s, X = x) | θ^(k))
= ∑_s ( logℙ_θ(S_1 = s) ·φ_1(s) ) + ∑_i=2^L[ ∑_s,t( logℙ_θ(S_i = t | S_i-1 = s) ·δ_i(s,t) ) ]
+ ∑_i=1^L [ ∑_s ( logℙ_θ(X_i = x_i | S_i = s) ·φ_i(s) ) ]
The M step ends with finding the parameters that maximise Q(θ ; θ^(k)) :
θ^(k+1) = _θ Q(θ ; θ^(k)).
In concrete examples, the δ_i(s,t) will be used to compute the transition parameters of the Markov chain, and the φ_i(s) the parameters relative to the emission probabilities.
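For instance, for an unconstrained discrete HMM the maximisation of Q(θ ; θ^(k)) has the familiar closed form sketched below; the structured models of the examples instead plug φ and δ into model-specific update formulas, so this is a generic sketch rather than the update used for any particular example.

import numpy as np

def m_step_discrete(obs, delta, phi, n_obs_symbols):
    """Closed-form M step for an unconstrained discrete HMM (generic case)."""
    L, S = phi.shape
    # Transition probabilities: expected transition counts, normalised by row
    trans_new = delta[1:].sum(axis=0)
    trans_new /= trans_new.sum(axis=1, keepdims=True)
    # Emission probabilities: expected counts of each symbol in each state
    emis_new = np.zeros((S, n_obs_symbols))
    for i in range(L):
        emis_new[:, obs[i]] += phi[i]
    emis_new /= emis_new.sum(axis=1, keepdims=True)
    # Initial distribution
    pi_new = phi[0]
    return pi_new, trans_new, emis_new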
§.§ EM acceleration
The SQUAREM method was presented in Varadhan, 2008 <cit.>. It is a method created to accelerate the EM algorithm. This method seems interesting when we aim to apply EM to high-dimensional data or data based on complex statistical models. In a nutshell, the SQUAREM makes two EM iterations, and uses these values to extrapolate the trajectory of the algorithm, and estimate a future point several iterations ahead. It then iterates again from this extrapolated point. The constraints on the parameters, if any, are simply dealt with by constraining the extrapolated point to be in the feasible space. We applied this method to our examples to test its performances as well.
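One SQUAREM step can be sketched as follows, using one of the steplengths proposed by Varadhan and Roland; em_update stands for a full Baum-Welch iteration and project clips the extrapolated point back into the feasible box (both assumed to be provided). The steplength choice below is one common variant and may differ in detail from the implementation used in our experiments.

import numpy as np

def squarem_step(theta0, em_update, project):
    """One SQUAREM acceleration step (sketch)."""
    theta1 = em_update(theta0)
    theta2 = em_update(theta1)
    r = theta1 - theta0                        # first EM increment
    v = (theta2 - theta1) - r                  # change of the increment
    alpha = -np.sqrt(r @ r) / np.sqrt(v @ v)   # extrapolation steplength
    theta_new = theta0 - 2 * alpha * r + alpha**2 * v
    theta_new = project(theta_new)             # keep parameters in the feasible box
    # Safeguard: fall back to the plain EM point if extrapolation fails
    return theta_new if np.isfinite(theta_new).all() else theta2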
§.§ QNEM: a mix of the two algorithms
We propose to mix the Baum-Welch algorithm and the quasi-Newton algorithm BFGS as presented in the algorithm <ref>.
If the current value of the parameter is θ^(k), the BFGS algorithm finds θ^(k+1) by performing a line search along a search direction p_k = -H_k ∇ f_k, where H_k is the current approximation of the inverse of the Hessian of the objective function, and ∇ f_k is its gradient in θ^(k). The matrix H_k is updated using the formula
H_k+1 = H_k - H_k · y_k · y'_k · H_k/y'_k · H_k · y_k + s_k · s'_k/y'_k · s_k
where s_k = θ^(k+1) - θ^(k) and y_k = ∇ f_k+1 - ∇ f_k. If H_k is positive definite, H_k+1 will be positive definite as soon as the scalar product of s_k and y_k is positive:
s'_k · y_k > 0
This is the curvature condition. If the function is strongly convex, it always holds after a line search which successfully decreases the value of the objective function. In the general case, it may not hold, leading to the use of more elaborate line search methods to avoid this situation, such as the Nocedal line search, see <cit.> chapter 3.
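A direct transcription of the update formula above and of the curvature check might look as follows.

import numpy as np

def bfgs_update(H, s, y):
    """Update of the inverse-Hessian approximation, as written above.

    Returns the updated matrix, or None when the curvature condition
    s'y > 0 fails (in which case QNEM falls back to Baum-Welch steps).
    """
    sy = s @ y
    if sy <= 0:                      # curvature condition violated
        return None
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / sy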
The QNEM starts by performing Baum-Welch algorithm steps, until the curvature condition (equation <ref>) is met; it then switches to the BFGS algorithm. Our BFGS iterations use a simple backtracking line search, until the Armijo condition is met. This condition ensures a sufficient decrease of the objective function.
As long as the curvature condition holds, BFGS iterations are continued. If at some point it does not hold, the QNEM falls back to the Baum-Welch algorithm, until the curvature condition is met and the quasi-Newton iterations are performed again, until convergence.
The box constraints are simply dealt with by projecting the gradient on the box when θ is on the border.
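A sketch of the resulting iteration is given below; em_update, negloglik, grad and project_gradient are assumed helpers (one Baum-Welch step, the negative log-likelihood, its gradient, and the projection of the gradient onto the box at the current point), bfgs_update is the function of the previous sketch, and the constants are illustrative.

import numpy as np

def qnem(theta, em_update, negloglik, grad, project_gradient,
         reltol=1.49e-8, max_iter=500):
    """Hybrid EM / quasi-Newton iteration (sketch of the QNEM algorithm)."""
    f_old, g_old, H = negloglik(theta), grad(theta), np.eye(len(theta))
    use_em = True                                        # start with Baum-Welch steps
    for _ in range(max_iter):
        if use_em:
            theta_new = em_update(theta)
        else:
            g = project_gradient(theta, g_old)
            p = -H @ g                                   # quasi-Newton search direction
            step = 1.0
            while negloglik(theta + step * p) > f_old + 1e-4 * step * (g @ p):
                step *= 0.5                              # backtracking until Armijo holds
                if step < 1e-12:
                    break
            theta_new = theta + step * p
        f_new, g_new = negloglik(theta_new), grad(theta_new)
        s, y = theta_new - theta, g_new - g_old
        if s @ y > 0:                                    # curvature condition holds
            H = bfgs_update(H, s, y)
            use_em = False                               # (re)start quasi-Newton steps
        else:
            use_em = True                                # fall back to Baum-Welch
        converged = abs(f_old - f_new) / (abs(f_old) + reltol) < reltol
        theta, f_old, g_old = theta_new, f_new, g_new
        if converged:
            break
    return theta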
§.§ Stopping criterion
For the four algorithms, we choose to use the same stopping criterion to decide when the convergence point is reached. The criterion is the classic relative convergence tolerance (reltol), which is based on the likelihood of the current iteration and the previous one. At the k-th iteration, the algorithm stops if
|ℓ^(k-1) - ℓ^(k)|/(|ℓ^(k-1)| + reltol) < reltol
where ℓ^(k) is the likelihood of the k-th iteration and reltol is a small constant, depending on the machine precision; in our experiment it is 1.49 · 10^-8.
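In code, this criterion is a one-line test shared by all four implementations:

def converged(ll_prev, ll_curr, reltol=1.49e-8):
    """Relative convergence criterion common to the four algorithms."""
    return abs(ll_prev - ll_curr) / (abs(ll_prev) + reltol) < reltol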
§.§ Examples
§.§.§ Umbrella example
We will first use a simple example based on a fictitious situation. Every day of the year, in a secret underground installation, a security guard observes whether the director comes in with an umbrella or not <cit.>. During 56 days, the security guard keeps track of his observations on his notepad as follows: U if the director carries an umbrella; N otherwise.
The guard aims to predict the daily weather conditions (hidden states) based on the umbrella observations: rainy (noted R) or dry (noted D) day. Here the hidden state is the weather. Since the security guard is in an underground installation, he cannot observe the weather directly: he has to rely only on his observations of the umbrella status to determine the weather. The parameters to estimate are a, the probability of a weather state transition, and b, the probability of error in the umbrella status (not carrying an umbrella on a rainy day or carrying an umbrella on a dry day). The transition and emission matrices are presented respectively in Tables <ref> and <ref>.
To apply the quasi-Newton algorithm (L-BFGS-B) to this example, it was necessary to bound the values of a and b to the interval [0.01, 0.99]. Using the interval [0, 1] would be preferable, but the likelihood is not always defined on the boundary, which leads to a failure of the algorithm. The algorithm cannot accommodate open intervals such as (0,1).
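For this model the HMM matrices are simple functions of (a, b), so the likelihood can be evaluated by plugging them into the conditional forward pass sketched earlier (forward_conditional). The 0/1 coding, the symmetric transition structure and the uniform initial distribution below are our reading of the model; the exact tables of the paper are not reproduced here.

import numpy as np

def umbrella_matrices(a, b):
    """Transition and emission matrices of the umbrella model.

    States: 0 = rainy (R), 1 = dry (D); observations: 0 = umbrella (U), 1 = none (N).
    The uniform initial distribution is an assumption of this sketch.
    """
    trans = np.array([[1 - a, a],
                      [a, 1 - a]])
    emis = np.array([[1 - b, b],      # rainy day: umbrella expected, error b
                     [b, 1 - b]])     # dry day: no umbrella expected, error b
    pi = np.array([0.5, 0.5])
    return pi, trans, emis

def umbrella_negloglik(params, obs):
    a, b = params
    pi, trans, emis = umbrella_matrices(a, b)
    return -forward_conditional(obs, pi, trans, emis)[2]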
§.§.§ Old Faithful geyser example
Following an example presented in Zucchini <cit.>, we work with the Old Faithful geyser data <cit.>, available in the R package , considering the 272 duration of the eruptions.
We make two uses of the data. First, we dichotomise at three minutes of eruption to divide the observations into two groups, "Duration < 3min" and "Duration ≥ 3 min" (noted "Dinf3" and "Dsup3" respectively), to work in a discrete observation case. This is similar to what is done in <cit.>.
Second, we work with the continuous values, assuming they are drawn in Gaussian distributions whose parameters depend on the hidden states.
In the sequence of dichotomised values, it appears that each short eruption was followed by a long eruption, but there are sequences of long eruptions. We decided to assume that the hidden Markov chain has three states: "short", "long" and "steady long", noted respectively "S", "L" and "Sl".
The parameters to estimate in the dichotomised model are a the probability to change from state "short" to "steady long", b the probability to stay in "steady long" state, c, d and e the probability to observe a "Duration ≥ 3 min" in respectively "short", "long" and "steady long" states. This emission matrix is presented in Table <ref>.
For the continuous model, the parameters to estimate are the same a and b and six other parameters: μ_s, μ_l and μ_sl, the means of the Gaussian distributions from which the durations of eruptions are drawn, and σ_s, σ_l and σ_sl, their standard deviations.
The transition matrix, common to both the dichotomised and continuous models is presented in Table <ref>
Again, for the quasi-Newton L-BFGS-B algorithm, we had to constrain the probabilities a, b, c, d, and e to the interval [0.01, 0.99], and to set a lower bound of 0.01 for σ_s, σ_l, and σ_sl.
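For the continuous model, only the emission term changes: the discrete emission probability is replaced by a Gaussian density, and the same forward recursion applies, as in the sketch below (state ordering and variable names are ours).

import numpy as np
from scipy.stats import norm

def gaussian_emission(x, mu, sigma):
    """Density of the observed duration x under each hidden state."""
    return norm.pdf(x, loc=mu, scale=sigma)   # vector of length n_states

def forward_gaussian(obs, pi, trans, mu, sigma):
    """Conditional-probability forward pass with Gaussian emissions."""
    loglik = 0.0
    beta = pi.copy()
    for i, x in enumerate(obs):
        alpha = beta @ trans if i > 0 else pi
        w = gaussian_emission(x, mu, sigma) * alpha
        loglik += np.log(w.sum())
        beta = w / w.sum()
    return loglik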
§.§.§ HBD segments example
Consanguineous individuals are offspring of related parents. In the absence of pedigree information, consanguineous individuals can be identified through their genome, which carries Homozygous-By-Descent (HBD) segments. HBD segments are regions of the genome where the two copies of the genome are identical and come from a common ancestor of the parents. These segments are characteristic of consanguineous individuals. Reconstructing these regions allows one to determine whether an individual is consanguineous and his/her degree of consanguinity. The coefficient of consanguinity f can be defined as the expected proportion of HBD genome.
HMMs are used to compute, for each variant of the genome, its probability of being in an HBD segment. These probabilities can be used to reconstruct the HBD segments, but this requires first estimating the parameters of the HMM. The parameters to estimate are the coefficient of consanguinity f and another parameter a related to the mean length of HBD segments <cit.>.
cf. Figure <ref>
To test the algorithms, we will generate the genome of an individual from a given set of parameters θ = (f = 0.0625, a = 0.064), which corresponds to the expected values for an offspring of first cousins. The data were created with an in-house R package (available at <https://github.com/genostats/Mozza>, still in development). We then rounded the distances between positions to 0.1 centiMorgan (cM). For each of these new positions, we randomly selected only one observation, to create submaps of positions. This corresponds to a total of 1050 observations.
The transition and emission probabilities are presented in Tables <ref> and <ref>. We note d the distance between positions (fixed here at 0.1 cM), pA and pa the allele frequency of the reference and alternative allele respectively and ϵ the genotyping or mutation error.
In this example, the probability f was constrained to the interval [0.01, 0.99] and the lower-bound was 0.01 for the parameter a when applying the quasi-Newton algorithm.
§.§ Comparison of the algorithms
We compare the likelihood maximisation methods by comparing
* the number of iteration to convergence
* the number of forwards step runs
* the number of backwards step runs
* the running time
* the percentage of runs that failed to converge
* the running time of forward step
* the running time of backward step
We also studied the final estimations of the parameters and calculated the percentage of appearance of each corresponding likelihoods.
For all four examples, the tests were made on 1000 uniform draws within a wide range of values for the initial θ, some of them being unlikely.
§ RESULTS
§.§ Umbrella example
The results are presented in Table <ref>.
We can see that, overall, the QNEM algorithm achieves convergence with the lowest mean number of iterations, averaging 11.32 across all runs. However, the quasi-Newton algorithm has the lowest maximum number of iterations (37 iterations vs. 118-338 iterations for the other three algorithms). The number of iterations of the Baum-Welch and the SQUAREM algorithms are comparable and systematically higher than those of the QNEM and the quasi-Newton algorithms.
Apart from the total number of iterations, it is interesting to look at the mean number of forward and backward steps used by the four algorithms. The quasi-Newton algorithm does not perform any backward steps. For the quasi-Newton and Baum-Welch algorithms, the number of forward steps corresponds to the mean number of iterations on the 1000 draws of the initial starting point. The Baum-Welch algorithm compute the same number of forward and backward steps. For the SQUAREM algorithm, the mean number of forward and backward steps is slightly different, with 17.70 forward steps and 17.67 backward steps. Finally, the QNEM algorithm executes an average of 12.63 forward steps and 2.80 backward steps, where the backward steps correspond to the calls to the Baum-Welch algorithm. We note that a forward step takes about 1.5ms for the Baum-Welch and the SQUAREM algorithms and around 11.5ms for the other two, which is longer because of the additional computation of the derivatives. A backward step takes about 8ms for all algorithms.
The total mean time spent in forward and backward steps for a run is almost the same for the quasi-Newton algorithm and the QNEM (∼ 18ms). It is higher for the SQUAREM, and even longer for the Baum-Welch algorithm (∼ 39ms and 61.5ms respectively). This trend is in accordance with the total mean execution time, where the Baum-Welch algorithm is the slowest one and the quasi-Newton and the QNEM algorithms are the fastest.
We also examined the final values obtained for θ. They can be divided into three subsets, corresponding to three different regions of the likelihood as shown in Figure <ref>. Two of these regions are points with a = 0.06 and b = 0.18 or b = 0.82, corresponding to the highest likelihood value (negative log-likelihood equal to 33.3). The third region is the segment b = 0.5 and a ∈ [0.5, 1] which corresponds to a crest line (negative log-likelihood equal to 33.8). The parameters and the corresponding proportion of runs converging to these parameters for each algorithm are presented in Table <ref>. The θ with the best likelihood is reached almost 90% of the time with the quasi-Newton algorithm but only 70% of the time for the other three algorithms.
§.§ Old Faithful geyser example
We will first present the results obtained with the dichotomised data, then with the continuous values.
§.§.§ Dichotomised values
As previously done, we tested the algorithms on 1000 random initial sets of parameters. The results are presented in Table <ref>.
The quasi-Newton and the QNEM algorithms had a lower number of iterations compared to the SQUAREM and the Baum-Welch algorithms. The latter two showed a mean number of iterations almost 6 times higher. The lowest number of iterations was always reached by the QNEM.
Moreover, the quickest algorithm was the QNEM as well, with a mean of 0.26 seconds per run, compared to the quasi-Newton, the Baum-Welch and the SQUAREM algorithms, which had respectively 0.33 seconds, 1.21 seconds and 0.71 seconds. The mean time spent in the forward and backward steps is the lowest for the QNEM as well, with 215ms compared to 280ms for the quasi-Newton algorithm, 1197ms for the Baum-Welch algorithm and 693ms for the SQUAREM.
We note that the QNEM did not converge for two runs.
We then present the table of the convergence points in Table <ref>. We see that the optimum is obtained most of the time by all algorithms, but it represents only ∼ 50% of the runs for all algorithms except the QNEM, for which it is closer to 40%. There are two other local optima reached by the algorithms. Moreover, we observe that the QNEM converges to other local optima not reached by the other algorithms.
§.§.§ Continuous values
For the continuous case, the summary of the results is presented in Table <ref>.
The quasi-Newton algorithm showed the largest number of iterations, with a maximum of 159 iterations. The QNEM keeps showing the lowest number of iterations globally.
In terms of running time, the lowest values are reached by the EM-based algorithms: the Baum-Welch algorithm and the SQUAREM, with a mean running time near 0.23s. In contrast, the quasi-Newton was slow on this continuous case, with a mean running time of 1.47s. The hybrid QNEM showed an intermediate time of 0.66s. This difference in time can be explained by the time spent in the forward step, which is 1.5 milliseconds for the forward used in the SQUAREM and the Baum-Welch (conditional probabilities forward) against 17.5 milliseconds for the forward used in the other two algorithms (conditional probabilities forward with derivatives), for a comparable mean number of forward steps.
The quasi-Newton algorithm did not converge for 68 sets of initial parameters and the QNEM for 5 sets of parameters.
In Table <ref>, the best likelihood is reached most of the time by the Baum-Welch, the SQUAREM and the QNEM algorithms, but not by the quasi-Newton algorithm. The latter converges more often to local optima. Most of the time, the quasi-Newton converges to the point with the third best likelihood. This behavior can explain its slow convergence in this example.
§.§ HBD segments example
For this last example, the results are presented in Table <ref>.
The smallest number of iterations is always achieved by the QNEM, with a maximum of 27 iterations. It is closely followed by the quasi-Newton algorithm. In contrast, the Baum-Welch algorithm shows poorer performance in terms of number of iterations, with a mean of 68.53 iterations. The SQUAREM algorithm is a bit better than the Baum-Welch algorithm, with a mean of 56.19 iterations.
In terms of running time, the QNEM and the quasi-Newton show a similar mean running time of 0.32s, whereas the SQUAREM and the Baum-Welch algorithms are considerably slower, with respectively 1.37s and 2.40s of running time.
These running times are linked to the time spent in the forward and backward steps. Even if the time spent in the forward step by the two EM-based algorithms (5.3ms) is lower than the time spent in the QNEM and quasi-Newton forward step (around 20ms), the latter two need fewer forward steps (respectively a mean of 12.83 and 15.36 iterations) than the Baum-Welch and the SQUAREM algorithms (respectively a mean of 68.53 and 38.66 iterations).
For all algorithms, only one likelihood optimum is reached with all 1000 draws of the starting value θ. In fact, this likelihood admits only one optimum.
§ DISCUSSION
The Baum-Welch (or EM) algorithm is a natural choice when it comes to HMM parameter estimation, offering several advantages: its conceptual simplicity, the ease of implementation, and the fact that it naturally accommodates the possible constraints on the parameters. However, while the first iterations may move quickly in the space of parameters, once it is close to the solution its convergence is usually slow.
Another possibility is to use a direct maximisation of the likelihood, particularly via a quasi-Newton method. It is expected to be more efficient than the Baum-Welch algorithm. However, the computation of the likelihood, and of its derivatives, on the logarithmic scale, can be tedious. In the literature, it is usually done using a joint probability version of the forward algorithm (equations <ref> to <ref>), in which all quantities need to be computed in the logarithmic scale. We showed that it is possible to use the conditional probability version of the forward algorithm (equations <ref> to <ref>) to compute the log-likelihood in a simpler manner.
A practical problem with the use of the quasi-Newton algorithm is the presence of bounds on the parameters. Typically, in our examples, many parameters take their values in [0,1], and others in [0, +∞), with possibly an infinite value of the log-likelihood on the border of the space of parameters. The off-the-shelf quasi-Newton method for this case is L-BFGS-B, which tends to favour in its first iterations points on the border of the space, leading to a failure of the algorithm. A common solution, but not totally satisfying, is to use constraints such as [0.01, 0.99] instead.
An appealing alternative is to use an EM acceleration, such as the SQUAREM algorithm. It is simple to implement from an existing Baum-Welch implementation. Bounds on the parameters are easily dealt with, including when the likelihood is infinite on the border.
We propose a new algorithm, QNEM, which alternates between Baum-Welch algorithm iterations and BFGS iterations. Our intuition was that the Baum-Welch algorithm can help to explore the space of parameters efficiently, and the BFGS can achieve fast convergence once the algorithm is near a local optimum. The algorithm switches between the two kinds of iterations using a convexity criterion. The box constraints are simply managed with a projection of the gradient on the box when the parameter θ reaches the border.
We compared the four algorithms on four different examples. The umbrella example (cf. sections <ref> and <ref>) is a toy example with two hidden states and two possible observed states. As expected, the quasi-Newton was much faster than the Baum-Welch algorithm. The SQUAREM succeeds in accelerating the Baum-Welch algorithm, but is still much slower than the quasi-Newton. Our algorithm, QNEM, is equivalent to the quasi-Newton in terms of computational time.
Our second example was based on the Old Faithful data, with three hidden states and two possible observed states (cf. sections <ref> and <ref>). In that case, the QNEM is the fastest of all algorithms, followed by the quasi-Newton, then by the SQUAREM, and the Baum-Welch is the slowest.
In our third example, the Old Faithful data were used again, with the same model for the hidden Markov chain, but now the observed states are continuous (cf. sections <ref> and <ref>). Surprisingly, for this model, the quasi-Newton is by far the slowest method. The Baum-Welch algorithm is faster, and the SQUAREM is even slightly faster. The QNEM ranks third, with intermediate performances.
Our last example, which was our motivating example, addresses a concrete question in genetics: making inference about HBD segments on the genome. It has two hidden states, and the observations are discrete (cf. sections <ref> and <ref>). In that case, just as in the first example, the QNEM and the quasi-Newton were equivalent, and performed better than the SQUAREM and the Baum-Welch in terms of computational time. Our results are in line with those of <cit.>, who also compared the L-BFGS-B algorithm to the EM algorithm in terms of number of iterations and running time on a more complex HMM model aimed at identifying HBD segments. In their work, the quasi-Newton method is several times faster than the EM algorithm.
In all examples except the HBD segments example, there are multiple local likelihood optima, leading to multiple final estimations of the parameters. In a concrete situation, to deal with this issue, one would need to run the algorithms multiple times from different initialisations of the parameters, and retain the highest likelihood. In our examples, the quasi-Newton and the QNEM seem more prone to converge to a local optimum which is not a global optimum. It is worth noting that in our numeric experiments, the starting points are drawn in very large intervals, which may exacerbate this behaviour.
No algorithm was uniformly more efficient. In particular, it was surprising to us to find that on our third example, the Baum-Welch and the SQUAREM are the most efficient ones. The presence of multiple optima in this example seems to play a role in this behaviour. The QNEM was the most efficient on our second example, and equivalent to the quasi-Newton on the first and fourth examples. This new optimization algorithm for HMM parameter estimation seems to be an interesting alternative to the other methods, offering a good compromise in most situations.
§ CODE AVAILABILITY
Related to this paper, we created a R package available at <https://github.com/SidonieFoulon/steveHMM>.
§ ACKNOWLEDGMENT
This work is part of the Inserm cross-cutting program GOLD (GenOmics variability in health & Disease)
§ FUNDING SOURCES
Sidonie Foulon is funded for this work by a doctoral grant from the French Ministry of Research.
plainnat
|
http://arxiv.org/abs/2409.02991v1 | 20240904180002 | Asymptotic curvature divergences and non-gravitational theories | [
"Fernando Marchesano",
"Luca Melotti",
"Max Wiesner"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2409.03103v1 | 20240904220307 | Leveraging Interpretability in the Transformer to Automate the Proactive Scaling of Cloud Resources | [
"Amadou Ba",
"Pavithra Harsha",
"Chitra Subramanian"
] | cs.LG | [
"cs.LG"
] |
§ ABSTRACT
Modern web services adopt cloud-native principles to leverage the advantages of microservices. To consistently guarantee high Quality of Service (QoS) according to Service Level Agreements (SLAs), ensure satisfactory user experiences, and minimize operational costs, each microservice must be provisioned with the right amount of resources. However, accurately provisioning microservices with adequate resources is complex and depends on many factors, including workload intensity and the complex interconnections between microservices. To address this challenge, we develop a model that captures the relationship between an end-to-end latency, requests at the front-end level, and resource utilization. We then use the developed model to predict the end-to-end latency. Our solution leverages the Temporal Fusion Transformer (TFT), an attention-based architecture equipped with interpretability features. When the prediction results indicate SLA non-compliance, we use the feature importance provided by the TFT as covariates in Kernel Ridge Regression (KRR), with the response variable being the desired latency, to learn the parameters associated with the feature importance. These learned parameters reflect the adjustments required to the features to ensure SLA compliance. We demonstrate the merit of our approach with a microservice-based application and provide a roadmap to deployment.
§ INTRODUCTION
One of the primary motivations driving application developers toward cloud systems is the possible access to large-scale infrastructure and automation platforms, guaranteeing scalability, flexibility, and cost-effectiveness, among a myriad of other advantages. To ensure high Quality of Service (QoS) for application developers, cloud providers attempt to strike an optimal balance between resource usage and QoS. However, provisioning resources to meet application performance while minimizing wastage and costs presents a challenge <cit.>. This challenge is further amplified by the rise of cloud-native tools that promote microservice architectures, which seek to ensure end-to-end QoS in production environments <cit.>. Microservice architectures are characterized by complex request execution paths that traverse multiple microservices. The latency from one microservice impacts the latency of downstream microservices, thereby affecting the end-to-end latency of the entire application trace. To prevent high latency in microservices-based applications and ensure QoS, autoscaling approaches have been developed. These approaches are either reactive or proactive <cit.>. Reactive autoscaling solutions make scaling decisions by analyzing current system metrics, such as CPU utilization and memory usage. The autoscaling mechanism is triggered when these metrics show abnormalities. However, reactive autoscaling is limited due to its inability to act beforehand to prevent QoS degradation. This limitation led to the development of proactive autoscaling approaches, which are based on either predicted workload <cit.> or predicted end-to-end latency <cit.>. However, existing prediction approaches for autoscaling are not interpretable. Our work lies in this line of research, where we propose a new approach aiming to achieve interpretable prediction of end-to-end latency for the implementation of informed, fine-grained autoscaling mechanisms. This allows us to determine and scale specific microservices experiencing volatile demand, rather than scaling the entire application. To this end, we make the following contributions in this paper.
(1) We develop a model that captures the relationship between an end-to-end latency, requests at the front-end level and resource utilization. Then, (2) we use the developed model to predict the end-to-end latency. Our solution leverages the Temporal Fusion Transformer (TFT) <cit.>. The TFT utilizes recurrent layers for local processing to learn temporal relationships at different scales and is equipped with interpretable self-attention layers for capturing long-term dependencies. This dual-layered structure enables our approach to simultaneously provide accurate predictions and interpretable results, offering us insights across the entire execution path and paving the way for advanced resource provisioning. (3) Whenever the prediction results lead to SLA non–compliance, we use the feature importance provided by the TFT as covariates in a Kernel Ridge Regression (KRR), with the response variable being the desired latency, to learn the parameters associated with the feature importance. The learned parameters allow us to perform autoscaling when they are associated to resource usage. (4) We demonstrate the viability of our approach in a practical setting characterized by a cloud-native application.
To the best of our knowledge, this is the first paper to develop a mechanism for interpretable prediction of end-to-end latency for microservices-based applications, and to use the interpretability results as a basis for autoscaling cloud resources.
§ RELATED WORK
To proactively provision resources to microservices, machine learning (ML) and deep learning (DL) approaches are increasingly being utilized <cit.>. These approaches aim to efficiently and optimally adjust resource allocation <cit.>. To enhance the relevance of the ML and DL models used for autoscaling, domain knowledge represented by causal mechanisms is gradually being considered in these models. Their objective is to capture the interrelations between the components of the microservices <cit.>. Generally, these causal mechanisms are represented by a graph, and Graph Neural Networks are employed to model the causal relations <cit.>. However, their operationalization presupposes perfect knowledge and a complete representation of all interconnections characterizing the topology of microservices. Furthermore, these approaches do not provide interpretable predictions of end-to-end latency for proactive autoscaling. This is where the contributions of our work lie, where we provide interpretable and actionable predictions of end-to-end latency.
§ APPROACH TO PROACTIVE AUTOSCALING
Figure <ref> presents our approach to proactive autoscaling of cloud resources. The approach starts with using the TFT to predict an end-to-end latency, then it considers the statistical significance of the feature importance provided by the multi-head attention of the TFT to fit KRR. To estimate the parameters associated with each feature importance, we use an optimization algorithm from the class of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm. The estimated parameters are then used to perform autoscaling.
§.§ Data representation for latency prediction
We categorize the resources used in our approach into two types, vertical and horizontal. Vertical resources include those used in the container host, such as CPU and memory. Horizontal resources refer to the number of pod replicas used. Let y_t,m denote the latency at time t at the front-end level of an application, with y ∈ℝ^N × M, where m=1, ⋯, M represents a trace ID and t=0, ⋯, N represents the time instant. The trace ID is used to track the flow of a single call as it traverses the various microservices. We consider that the input data at the trace level are given by 𝒳 = {x_t,1, ⋯, x_t,M}, where x_t,m∈ℝ^N × M are the features associated with the calls at the front-end level. The features at the microservices level are given by 𝒳^' = {X_t,1^L_1, ⋯, X_t,P^L_P}, where X_t,p^L_p∈ℝ^N× L_P, and L_p, p =1, ⋯, P, is the number of features L at the microservice p. Our objective is to learn the function that maps an end-to-end latency to the calls at the front-end level and the features at the microservices level associated with the end-to-end latency. To this end, we use the TFT because of its advanced modeling capability and interpretability features.
§.§ Temporal Fusion Transformer for the prediction
We adopt an interpretable AI method based on the Transformer <cit.> for the interpretable prediction of end-to-end latency and the determination of the influential features. For this purpose, we use the Temporal Fusion Transformer (TFT), where we adopt certain modifications to suit our application. For example, we ignore the static covariates. The TFT is an AI model designed for time series prediction. It integrates the Transformer architecture with Temporal Fusion mechanisms to capture temporal patterns in sequential data. The TFT is composed of the multi–head attention mechanism from the Transformer with Recurrent Neural Networks (RNNs). Three main building blocks are present in the TFT, the variable selection networks, the Long Short-Term Memory (LSTM) encoder-decoder, and the interpretable multi-head attention. The encoder receives the data required for training and the decoder provides the predictions. The interpretability of the approach is given by the multi-head attention. The input layer to our TFT architecture is composed of the features 𝒳 and 𝒳^' and the output layer combines the processed inputs with learned parameters to produce the quantile prediction of y_t, m. For the training phase, the inputs include past features {𝒳∪𝒳^', y}_t-k:t, characterized by the calls at the front-end level and the vertical and/or horizontal resources at the microservices level, along with the target variable, which is the end-to-end latency at the front-end level. These past features and the response variable are processed by the variable selection network before being passed to the LSTM encoder.
The output of the LSTM encoder is then fed to the Gated Recurrent Networks (GRN) after applying the gating mechanism, addition, and normalization operations. Subsequently, the output from the GRN, which receives the past information, is passed to the interpretable multi-head attention. For the scoring phase, the TFT architecture receives as input the future known features, {𝒳∪𝒳^'}_t+1:t+τ, and after applying the variable selection, these features are fed to the LSTM decoder and their output to another GRN for processing, after gating, normalization and addition operations are applied. They produce the quantile prediction of the end-to-end latency.
§.§ Kernel Ridge Regression for parametric estimation
Besides the quantile predictions of the end-to-end latency produced by the TFT, the multi-head attention mechanism of the TFT provides interpretability associated with these predictions. These interpretations are presented in the form of feature importance scores. The feature importance represents scores associated with the input features used in the TFT, based on their contributions to the prediction. Each score associated with a feature becomes a new sample for the KRR. The objective is to determine how much the features need to be readjusted after an SLA violation. Whenever an SLA violation occurs, we determine the percentage of violation in latency and define a desired latency by subtracting a corrective factor from the predicted latency; this new value becomes the desired latency. We then learn the parameters linking the new features, composed of scores from the multi-head attention interpretation, with the desired latency. These learned parameters represent autoscaling factors when they relate to actionable features such as the number of pods, CPU, memory, and so on.
We create K KRR models, each corresponding to a new feature derived from the feature importance scores. Each KRR is trained on one of the new features, with the target variable being the newly defined latency. This step involves learning a function that maps each new feature to the desired latency. At this stage, we define a function that takes parameters θ_0, θ_1, ⋯, θ_K, where K is the number of features, and computes predictions by combining the outputs of the K KRR models. Each KRR model predicts a nonlinear transformation of its corresponding feature, and the predictions are weighted and summed according to the parameters.
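A sketch of this construction with scikit-learn is given below, where Z denotes the matrix of feature-importance scores (one column per feature) and y_desired the vector of desired latencies; the kernel hyperparameters shown are placeholders.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_krr_models(Z, y_desired, alpha=1.0, gamma=1.0):
    """Fit one RBF kernel ridge regression per importance-score feature."""
    models = []
    for k in range(Z.shape[1]):
        krr = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
        krr.fit(Z[:, [k]], y_desired)
        models.append(krr)
    return models

def combined_prediction(theta, models, Z):
    """theta_0 + sum_k theta_k * f_k(z_k), with f_k learned by the k-th KRR."""
    pred = np.full(Z.shape[0], theta[0])
    for k, krr in enumerate(models):
        pred += theta[k + 1] * krr.predict(Z[:, [k]])
    return pred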
§.§ The autoscaling mechanism
We define the objective function to compute the squared difference between the combined model predictions of the desired end-to-end latency and the actual end-to-end latency. We then minimize this function to find the best parameters. To determine the parameters θ_0, θ_1, ⋯, θ_K, we use the Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints (L-BFGS-B) algorithm, which is an algorithm widely used for solving optimization problems with constraints on the variables. The advantage of using Box constraints in the optimization approach—corresponding to specifying bounds on the variables—lies in its ability to control how the autoscaling of cloud resources is performed.
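The minimisation itself can be sketched with SciPy's L-BFGS-B, reusing combined_prediction from the previous sketch; the bounds are placeholders illustrating how the box constraints limit how far each scaling factor may move, and y_target stands for the target latency vector constructed above.

import numpy as np
from scipy.optimize import minimize

def fit_theta(models, Z, y_target, bounds):
    """Estimate theta by minimising the squared prediction error (L-BFGS-B)."""
    def objective(theta):
        resid = combined_prediction(theta, models, Z) - y_target
        return np.sum(resid ** 2)

    theta0 = np.zeros(len(models) + 1)          # initial guess
    res = minimize(objective, theta0, method="L-BFGS-B", bounds=bounds)
    return res.x

# Example: allow each scaling factor to move only within [0, 2]
# bounds = [(None, None)] + [(0.0, 2.0)] * len(models)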
§ EXPERIMENTS
Our use case focuses on Robot Shop, an e-commerce website that provides a comprehensive environment and functionalities. Some of the components present are the catalog, cart, payments, and shipping. Each of these components within the Robot Shop is represented by a distinct microservice. This architectural approach demonstrates the practical implementation of microservices and provides a robust platform for testing various resource provisioning mechanisms specific to a microservice environment. Robot Shop illustrates the advantages and intricacies of a microservices-based application. The microservices in Robot Shop are built using a variety of tools and technologies, reflecting a polyglot programming environment. This includes services developed in NodeJS, Java, Python, Golang, and PHP. The application leverages databases and messaging systems like MongoDB, Redis, MySQL, and RabbitMQ. Additionally, web server technologies like Nginx and front-end frameworks such as AngularJS are utilized. This diverse technological stack showcases how different programming languages and frameworks can be integrated to create a cohesive and functional application. Each microservice is encapsulated in a Docker container, for deployment and management by ensuring consistent runtime environments. These Dockerized services communicate with each other through REST APIs and messaging queues. The design emphasizes scalability and resilience, ensuring that the application can handle varying loads. In our use case, Robot Shop is deployed on IBM Cloud. The deployment is orchestrated with Kubernetes, which automates the deployment, scaling, and management of the pod replicas. Figure <ref> presents an example of traces execution and their duration, whereas Figure <ref> presents a call graph of the Robot Shop that shows multiple execution paths as a result of concurrent client calls. In this example, we observe 5 traces, each following a specific execution path:
* (The execution paths are chains of three or four microservice calls; the trace labels and service names, typeset as inline code in the original, are not recoverable here.)
These traces show that the requests are characterized by call paths through service dependency graphs, which capture communication-based dependencies between microservice instances. The call paths illustrate how requests flow among microservices by following parent-child relationship chains. This demonstrates that service dependency graphs are important tools for discerning the complex interplay between services.
We consider this use case to demonstrate the performance of our proposed approach, which involves predicting an end-to-end latency at the trace level. For illustration purposes, we focus our experiments on the green and purple traces.
§.§ Selected end-to-end latency
We consider the latency p95 as our response variable. The latency p95 (95th percentile) represents the latency value below which 95% of the measured latency values fall. Compared to the latency p99, it provides a broader view of the latency distribution. Generally, latency p95 is employed to understand the performance of a system and is usually less impacted by extreme outliers compared to p99. If the latency p95 for the Robot Shop is 100 milliseconds, it means that 95% of responses are received in 100 milliseconds or less, while 5% of responses may experience longer latencies. As mentioned earlier, we developed our TFT model to take as input the calls at the front-end level across all the traces, in addition to the number of pods at the microservices level associated with the latency we want to predict if we are interested in scaling horizontal resources, or CPU and memory if we are interested in autoscaling vertical resources.
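Concretely, the p95 over a window of measured latencies is simply the 95th percentile of the sample (the array name latencies is ours):

import numpy as np

# latencies: measured end-to-end latencies over an observation window
p95 = np.percentile(latencies, 95)   # 95% of requests complete within this value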
§.§ End-to-end latency prediction
Our experiments do not rely on any autoregressive features to predict the latency. For the TFT model, we include a time index that increments by one for each time step. We standardize each time series separately, ensuring that the values are always positive. To achieve this, we use the EncoderNormalizer, which dynamically scales each encoder sequence during model training. Our model training is conducted using PyTorch Lightning. The distinctive characteristic of the TFT model is its attention mechanism, which attributes different levels of importance to various points in time during latency prediction. This feature provides interpretability to the end-to-end predicted latency. The TFT is designed for multi-horizon prediction, meaning that it can predict future values at multiple time horizons simultaneously. To achieve this, the model incorporates output layers that predict values for each time horizon of interest, allowing it to generate quantile predictions of the end-to-end latency. Additionally, we tune parameters such as a batch size of 32, a learning rate of 0.03, and 20 epochs. Our model architecture includes a hidden size of 8, an attention head size of 1, and a dropout rate of 0.1. Furthermore, we set a maximum prediction length of 50 and a maximum encoder length of 400. We keep these tuning parameters intact across all our experiments. To evaluate the performance of our model, we utilize a quantile loss function. We use early stopping to avoid overfitting and to achieve faster convergence. The variable selection process chooses the relevant data for each time step, encompassing both current and past features and the latency. To handle past metrics, an encoder is employed to incorporate the selected features along with an index indicating their relative time. The encoder processes historical time series data and captures temporal dependencies. It consists of multiple layers of self-attention mechanisms and feedforward neural networks, similar to the encoder in the Transformer model. This encoder encodes the calls at the front-end and the infrastructure metrics into a meaningful representation, which then serves as input to the decoder. Additionally, the decoder takes the features for which latency prediction is desired. In TFT, the decoder primarily generates quantile predictions of the end-to-end latency. Figure <ref> presents an example of the results of applying the TFT to the presented features and latency. For illustration, we focus on the green and purple traces and the utilization of horizontal resources in the end-to-end latency prediction. In Table <ref>, we present the application of the TFT to our data and analyze its performance compared to some of the most established regression methods, namely XGBoost, Decision Tree Regressor (DTR), and Random Forest (RF). For this purpose, we use two performance metrics, the Root Mean Square Error (RMSE) and R^2. The lowest RMSE and the R^2 closer to 1 are associated to the best prediction results. The good performance observed with the green trace in terms of the end-to-end latency prediction is mainly due to the high variability in the number of pods allocated to the microservices belonging to this trace, compared to the other traces, along with the structure of these microservices. This is due to the fact that, in our use case, one of the objectives was to analyze the influence of horizontal resources on the variability of end-to-end latency. 
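A sketch of this configuration with the pytorch-forecasting implementation of the TFT is given below; the data frame df and its column names are hypothetical, while the hyperparameter values are those listed above.

import pytorch_lightning as pl
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
from pytorch_forecasting.data import EncoderNormalizer
from pytorch_forecasting.metrics import QuantileLoss

# df: long-format frame with one row per (trace, time step); column names are ours
training = TimeSeriesDataSet(
    df,
    time_idx="time_idx",                      # time index incremented by one per step
    target="latency_p95",
    group_ids=["trace_id"],
    max_encoder_length=400,
    max_prediction_length=50,
    time_varying_known_reals=["cps_green", "cps_blue", "cps_purple", "cps_red",
                              "pod_cart", "pod_catalogue"],
    time_varying_unknown_reals=["latency_p95"],
    target_normalizer=EncoderNormalizer(),    # per-sequence scaling of the target
)
train_loader = training.to_dataloader(train=True, batch_size=32)

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=8,
    attention_head_size=1,
    dropout=0.1,
    loss=QuantileLoss(),                      # quantile predictions of the latency
)
trainer = pl.Trainer(max_epochs=20)
trainer.fit(tft, train_loader)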
Irrespective of the performance metric used, whether RMSE or R^2, the results are consistent regarding which traces yield good end-to-end latency predictions. This consistency also holds across the feature sets used, whether horizontal, vertical, or both. Table <ref> highlights the advantage of the TFT: in addition to providing good prediction results for the end-to-end latency, it offers interpretability of these predictions, which the other state-of-the-art regression methods considered in this experiment do not.
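For completeness, the sketch below shows how such a comparison can be set up with scikit-learn and XGBoost on a chronological train/test split; the synthetic features and latency are placeholders for the monitored metrics, and the reported RMSE and R^2 follow the same definitions as in Table <ref>.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

# Synthetic stand-in: 6 features (front-end calls and pod counts) and a latency target.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = 100 * X[:, 0] + 20 * X[:, 4] + rng.normal(0, 5, 500)
split = 400                                   # chronological split, no shuffling
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

baselines = {
    "XGBoost": XGBRegressor(n_estimators=200),
    "DTR": DecisionTreeRegressor(),
    "RF": RandomForestRegressor(n_estimators=200),
}
for name, model in baselines.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
    print(f"{name}: RMSE = {rmse:.2f}, R^2 = {r2_score(y_test, pred):.3f}")
```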
Table <ref> presents the application of KRR to the feature importance scores and the desired latency. The results observed in Table <ref> are consistent with those provided in Table <ref>, where the best performance is obtained when the green trace is considered, and the horizontal resources are used.
§.§ Feature importance associated with the predictions
We present in Figure <ref> the feature importance associated with the prediction results shown in Figure <ref>. One of the primary advantages of the TFT over other DL models is its inherent interpretability, largely attributable to its interpretable multi-head attention mechanism. With the TFT, we can determine the significance of the computing metrics, along with the calls at the front end, in the end-to-end latency prediction, a capability present in both the encoder and decoder components. Across our experiments, the consensus is that the number of pods at the cart level is the most influential feature in the latency predictions when the features are composed of horizontal resources; for vertical resources, the memory at the cart level is the most influential. The multi-head attention is crucial for interpretability because it enables the model to focus on different parts of the input data and learn complex temporal dependencies. It also allows the TFT to compute attention weights for the different pod counts and front-end calls at various time steps. With these weights, we can determine which pod counts and front-end calls are most relevant for the end-to-end latency prediction at each time step. This provides insights into the relative importance of the different features and helps us understand how the model processes and weighs input information when generating the end-to-end latency predictions. Additionally, the multi-head attention enables the TFT to capture both local and global context when making predictions: by attending to different parts of the input sequence, the model can integrate information from nearby and distant time steps to make more informed end-to-end latency predictions.
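Assuming the pytorch-forecasting TFT trained in the earlier sketch (the tft model and val_loader), the variable importances and attention weights discussed above can be extracted roughly as follows; the exact return type of predict(..., mode="raw") differs slightly between library versions, which the sketch accounts for.

```python
# Raw (quantile) output is required for interpretation.
raw = tft.predict(val_loader, mode="raw", return_x=True)
raw_output = raw.output if hasattr(raw, "output") else raw[0]

# Aggregate attention and variable-selection weights over the validation windows.
interpretation = tft.interpret_output(raw_output, reduction="sum")

# Relative importance of the encoder covariates (e.g. pods_cart, calls_front_end).
weights = interpretation["encoder_variables"]
weights = (weights / weights.sum()).tolist()
for name, w in sorted(zip(tft.encoder_variables, weights), key=lambda t: -t[1]):
    print(f"{name:20s} {w:.2%}")

# Built-in plots: attention over time plus encoder/decoder variable importances.
tft.plot_interpretation(interpretation)
```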
§.§ Autoscaling cloud resources
We exploit the interpretability results provided by the TFT to implement corrective actions whenever an SLA violation is detected. Our corrective actions either adjust the number of pods (horizontal autoscaling) or the CPU and memory (vertical autoscaling), or provide insights into how to adapt the characteristics of the calls at the front end to prevent SLA violations. Our approach to corrective actions starts with redefining the target variable: for example, if the predicted latency corresponds to an SLA violation of a given percentage, we subtract this percentage from the latency to obtain a new target variable. We then use the feature importance scores provided by the decoder as our new features. The objective becomes estimating the parameters that relate these features to the new target latency. To this end, we use KRR to estimate the parameters required to prevent any SLA violations. For example, for horizontal autoscaling, our KRR estimates the parameters θ_1, ⋯, θ_6 in θ_1 f_1(cps_green) + θ_2 f_2(cps_blue) + θ_3 f_3(cps_purple) + θ_4 f_4(cps_red) + θ_5 f_5(pod_cart) + θ_6 f_6(pod_catalogue) = desired_latency, where cps means calls per second. The functions f_k are determined by the KRR. The input matrix to the KRR is composed of the feature importance scores provided by the decoder, as shown in Figure <ref>, and the target variable is the desired latency. We choose a Radial Basis Function (RBF) kernel to capture the relationship between the features and the desired latency. We vary the regularization parameter α and the kernel parameter β of the RBF from 0.01 to 10 in multiplicative steps of 10 and perform a 3-fold cross-validation to determine the best hyperparameters α and β for the KRR. We build six separate KRR models, each corresponding to one feature (e.g., pod_cart).
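A minimal sketch of this step with scikit-learn's KernelRidge (our choice of implementation, not stated in the text) is shown below; the importance matrix and desired latency are synthetic placeholders for the decoder importance scores and the SLA-corrected target, and the grid {0.01, 0.1, 1, 10} reflects our reading of the α and β ranges above.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

# Placeholder decoder feature-importance scores, one column per feature:
# cps_green, cps_blue, cps_purple, cps_red, pod_cart, pod_catalogue.
rng = np.random.default_rng(0)
importance = rng.random((200, 6))
# Placeholder SLA-corrected target (predicted latency minus the violation margin).
desired_latency = 80.0 - 10.0 * importance[:, 4] + rng.normal(0, 1, 200)

# alpha is the KRR regularization; gamma plays the role of the RBF parameter beta.
param_grid = {"alpha": np.logspace(-2, 1, 4), "gamma": np.logspace(-2, 1, 4)}

# One KRR model per feature: f_k maps the k-th importance score to the desired latency.
feature_models = []
for k in range(importance.shape[1]):
    search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid, cv=3)
    search.fit(importance[:, [k]], desired_latency)
    feature_models.append(search.best_estimator_)
```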
We use scipy.optimize to obtain θ_1, ⋯, θ_6 that minimize our objective function.
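Continuing the sketch above (and therefore inheriting its placeholder data and the fitted feature_models), the combination weights θ_1, …, θ_6 can be obtained with scipy.optimize by minimizing the squared deviation between the weighted sum of the per-feature KRR predictions and the desired latency.

```python
import numpy as np
from scipy.optimize import minimize

# f_k applied to the k-th feature, evaluated by the k-th fitted KRR model.
per_feature_pred = np.column_stack(
    [m.predict(importance[:, [k]]) for k, m in enumerate(feature_models)]
)

def objective(theta):
    # Squared deviation of theta_1 f_1(.) + ... + theta_6 f_6(.) from the desired latency.
    residual = per_feature_pred @ theta - desired_latency
    return float(np.sum(residual ** 2))

result = minimize(objective, x0=np.ones(per_feature_pred.shape[1]), method="L-BFGS-B")
theta_hat = result.x   # suggests how strongly each actionable feature should be adjusted
print(theta_hat)
```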
Table <ref> shows that the best corrective action is obtained for the green trace using the horizontal resources, which is congruent with the results obtained for the end-to-end latency prediction using the green trace and the horizontal resources.
§ ROADMAP TO DEPLOYMENT
This work represents the development of the AI component of our multi-cloud manager system. It needs to be integrated into the overall multi-cloud manager system, where our developed approach will operate in conjunction with several other components. These include the monitoring system that collects data from the microservices, the ontology and semantic system that provides a formal representation of the data for convenient access and analysis, the resource exposure and discovery component that reports the status of resources across the multi-cloud environment in order to decide where to run the microservices-based architecture, and the network programmability component that will effectively enforce the autoscaling results presented in this work. Following the integration of the AI component into the multi-cloud manager system, further evaluations in various settings are required to enable an effective, large-scale deployment of our framework. Although the approach provides good results that can be used to adjust the actionable features, its behaviour in diverse environments and the reduction in SLA violations it delivers still need to be quantified before an extensive rollout.
§ CONCLUSIONS
This work establishes a foundation for the efficient autoscaling of cloud resources in microservices-based applications. To achieve this, we developed an innovative approach consisting of three key steps. The first step provides an interpretable end-to-end latency prediction, which enables the detection of potential SLA violations. In the case of an SLA violation, we utilize the interpretability results from the multi-head attention of the TFT, combined with the KRR, to identify the parameters that need adjustment and the extent of the adjustments required to correct the SLA violations. Following this step, we implement the autoscaling process. The performance metrics of our results demonstrate the effectiveness of our approach and its practical merit. This work is the first to use the interpretability of the Transformer to build autoscalers and makes a significant contribution to cloud providers by enhancing their ability to maintain SLA compliance efficiently through dynamic resource scaling.
|
http://arxiv.org/abs/2409.03681v1 | 20240905163353 | Space-Efficient Algorithm for Integer Programming with Few Constraints | [
"Lars Rohwedder",
"Karol Węgrzycki"
] | cs.DS | [
"cs.DS"
] |
Space-Efficient Algorithm for Integer Programming with Few Constraints
Lars Rohwedder
Karol Węgrzycki
================================================================================================================================================================
§ ABSTRACT
Integer linear programs min{c x : A x =
b, x ∈ℤ^n_≥ 0}, where A ∈ℤ^m × n, b
∈ℤ^m, and c ∈ℤ^n, can be
solved in pseudopolynomial time for any fixed number of constraints m = O(1).
More precisely, in time (mΔ)^O(m)·poly(I),
where Δ is the maximum absolute
value of an entry in A and I the input size.
Known algorithms rely heavily on dynamic programming, which
leads to a space complexity of similar order of magnitude as the running time.
In this paper, we present a polynomial space algorithm that solves integer linear programs in
(mΔ)^O(m (log m + loglog Δ))·poly(I) time, that is,
in almost the same time as previous dynamic programming algorithms.
§ INTRODUCTION
We consider integer linear programs (ILP) in equality form
min{ c x : A x = b, x ∈ℤ^n_≥ 0 }
where A ∈ℤ^m × n, b ∈ℤ^m, and c ∈ℤ^n.
Integer linear programs are a widely used model to describe problems in combinatorial optimization and
the algorithms available for them are used as general purpose optimization suites.
Unlike its continuous counterpart, where x∈ℝ^n, integer linear programming is NP-complete even in the case of m=1. Hence, theoretical research is mainly focused on identifying structural assumptions under which ILPs can still be
solved efficiently. Here, two dominant directions can be distinguished.
Based on the geometry of numbers, Lenstra <cit.> showed that ILPs of the form min{c x : A x ≤ b, x∈ℤ^n} can be solved in polynomial time for constant n, in fact, that they are fixed-parameter tractable (FPT) in parameter n.
Note that this form is equivalent to (<ref>) by a standard transformation, but this would lead to an increase in
the parameter n.
Subsequent improvements by Kannan <cit.>
and most recently Reis and Rothvoss <cit.> lead to the currently best running time of (log n)^O(n)·poly(I), where I is the input size.
The second direction is mainly based on dynamic programming approaches and goes
back to Papadimitriou <cit.>. He considered (<ref>) in the case when the absolute values of the entries of A and b are bounded by Δ and designed a pseudopolynomial time algorithm when m is fixed. This generalizes the classical pseudopolynomial time algorithm for Knapsack problems <cit.>. Later, his algorithm was improved by Eisenbrand and Weismantel <cit.>. The current best algorithm is due to Jansen and Rohwedder <cit.> with a running time of O(√(m)Δ)^2m·poly(I).
In the latter two results the bound of Δ is only required on A, but not b.
Similar approaches were also applied successfully to constraint matrices A, where both dimensions n and m can
be large but satisfy certain structures of their non-zero entries, see e.g. <cit.>.
Space efficiency of integer linear programs.
NP-completeness tells us that any algorithm that solves general ILPs needs to have an exponential running
time at least in some parameter unless P=NP. But not only running time, also space usage can be a limiting factor.
It is well known that for practical ILP solvers based on Branch-and-Bound it is often a bottleneck to store
the list of active nodes, see e.g. <cit.>.
This motivates the theoretical question of whether the aforementioned
results can also be achieved in polynomial space or, more generally, what tradeoffs between running
time and space one can achieve.
Frank and Tardos <cit.> showed that Lenstra's algorithm and the improvement by Kannan
with running time n^O(n)·poly(I) can be implemented in polynomial space.
For the improvement by Reis and Rothvoss this is not known.
Regarding the algorithms based on dynamic programming, the space usage is also pseudopolynomial, which
is inherent in the method. Alternative, space-efficient methods are known for Subset Sum, a very
restricted variant of ILP, where the task is to find a solution to
{x ∈{0,1}^n : a x = b } with a ∈ℤ^n_≥ 0 and b ∈ℤ_≥ 0. Note that here m=1 and there is
no objective function.
For the Subset Sum problem, Lokshtanov and Nederlof <cit.> introduced an algebraization technique and designed an Õ(n^3 b log b) time and Õ(n^2) space algorithm (see also <cit.>). This algorithm was later improved by Bringmann <cit.>, and the current best low-space algorithm for Subset Sum is due to Jin et al. <cit.>, which runs in Õ(nb) time and O(log n loglog n + log b) space.
Despite the intensive research on polynomial space algorithms for Subset Sum, little is known for
more general problems.
The seminal result of Lokshtanov and Nederlof <cit.> is the only one that considers the more general Knapsack problem, that is, max{c x : a x ≤ b, x∈{0,1}^n} with c, a∈ℤ^n_≥ 0, and b∈ℤ_≥ 0. Nevertheless, their method suffers from a pseudopolynomial factor in both b and c_∞ in the running time, see <cit.>, whereas the classical dynamic programming algorithm only requires pseudopolynomial time in one of the two parameters. A poly(n, b) time and polynomial space algorithm is widely open for the Knapsack problem and Unbounded Knapsack (the variant with x∈ℤ^n_≥ 0) <cit.>.
Savitch's theorem can be used to avoid the high space usage of dynamic programming, but often comes at a significant
cost for the running time.
In the case of (<ref>), this method leads to an algorithm in time (mΔ)^O(m^2 log(mΔ))·poly(I)
and polynomial space, see <Ref> for details.
Our contribution.
In this paper, we show that integer programming (<ref>) can be solved in polynomial space
and a running time that nearly matches the best known running time of the dynamic programming approach by <cit.>.
Let A ∈ℤ^m × n, b ∈ℤ^m, and c ∈ℤ^n and
suppose that the absolute value of each entry of A is bounded by
Δ > 1. Then, in time
(mΔ)^O(m(log m + loglog Δ))·poly(I)
and polynomial space, one can compute a solution
to (<ref>) or conclude that such a solution does not exist.
Here, I is the encoding size of A, b, and c.
Compared to the running time in <cit.>, this is an additional factor of
O(log m + loglogΔ) in the exponent of mΔ, whereas all known techniques lead to an additional factor of at
least O(mlogΔ), see <Ref>.
§ OVERVIEW OF PREVIOUS AND NEW TECHNIQUES
Let us briefly introduce some notation and the model of computation.
Throughout this paper, we assume all logarithms are base 2. We write Õ(f) = O(f ·polylog(f)). Further, we make use of the shorthand notation [n] = {1, …, n}. For a vector x ∈ℤ^n, let x_p denote the ℓ_p norm of vector x. In the special case of p = 0, recall that x_0 = |supp(x)|, where supp(x) = { i ∈ [n] : x_i ≠ 0} is the set of non-zero coefficients.
For two vectors x,y ∈ℤ^n, we say that x is lexicographically smaller than y (written x ≺ y) if there exists an index i ∈ [n] such that
(i) x_j = y_j for every j < i, and (ii) x_i < y_i.
As in related works, see <cit.>, we work with a random-access model with word length Θ(logn + logW), where W is the largest absolute value of an input integer. We measure space complexity in terms of the total number of words in the working memory.
We say that an algorithm runs in polynomial space if the space complexity is bounded by a polynomial in the
input size, that is, in poly(I) := poly(n, m, log(Δ), log(b_∞), log(c_∞)).
Similarly, the running time of an algorithm is measured in the number of arithmetic operations
on words.
Before we explain the ideas behind <ref> we will briefly review
previous techniques used to achieve polynomial space dependency.
The standard tool to reduce space complexity usage are the ideas in Savitch's theorem, which transforms
algorithms with non-deterministic space complexity to slightly higher deterministic space complexity.
Here, the key technique is to decide (s, t)-reachability in a directed graph with N vertices in
N^O(logN) time and O(log N) space. This is straight-forward by recursively
guessing the middle vertex of the path from s to t. The space complexity comes from the O(log N)
vertices stored in the stack.
This can easily be adapted to finding the shortest
path in a weighted graph in the same time and space.
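The following minimal sketch illustrates this recursive midpoint-guessing technique for bounded-length reachability; it is meant only to make the time/space trade-off concrete, with the adjacency structure and the toy graph chosen arbitrarily.

```python
def reach(adj, s, t, length):
    """Is there a walk of length at most `length` from s to t?

    The walk is split at its middle vertex, which is guessed by brute force.
    Only the recursion stack of depth O(log length) is stored, mirroring the
    O(log N) space bound, while the running time is N^O(log N).
    """
    if s == t:
        return True
    if length <= 1:
        return t in adj[s]
    half = (length + 1) // 2
    return any(
        reach(adj, s, mid, half) and reach(adj, mid, t, length - half)
        for mid in adj
    )

# Toy example: the path 0 -> 1 -> 2 -> 3 is found without storing any path explicitly.
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(reach(adj, 0, 3, len(adj)))  # True
```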
One can already apply this to ILP. Namely,
the algorithm of Eisenbrand and Weismantel <cit.>
constructs a directed acyclic graph with N = (m Δ)^O(m)·b_∞ vertices and
then computes the shortest path between two of the vertices.
With standard polynomial time preprocessing described in <cit.>, one can reduce b_∞ to (mΔ)^O(m).
This shortest path query is the main source of the high space usage of <cit.> and
using Savitch's theorem, the ILP can therefore be solved in N^O(logN) =
(mΔ)^O(m^2 log(mΔ)) time and O(logN) = O(m log(mΔ)) space.
A more intricate technique is the algebraization technique <cit.>, which relies on the following coefficient test lemma.
Consider a polynomial f(x) = ∑_i=0^d α_i x^i ∈𝔽[x] of degree at most d
and assume there is an algorithm running in time T and space S for
evaluating f(a) for a given a ∈𝔽. Then, there
is an algorithm running in Õ(d T) time and Õ(S) space
that for a given t ∈{0,…,d} decides if α_t = 0.
The idea to solve the Subset Sum problem is to consider the polynomial p(x) =
∏_i=1^n (1+x^a_i) and use <ref> to test whether the coefficient
α_t is nonzero, where a_1,…,a_n ∈ℤ_≥ 0 are the input integers,
t ∈ℤ_≥ 0 is the target, and α_t is the coefficient of the monomial
x^t of the polynomial p. Note that the polynomial p(x) can be evaluated in
poly(n, a_∞) time, which, combined with <ref>, yields the result
of <cit.>.
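To make the evaluation step concrete, the sketch below evaluates p(x) = ∏_i (1+x^a_i) at a point modulo a large prime using fast modular exponentiation; this only illustrates why a single evaluation is cheap and space-efficient, whereas the full coefficient test of the lemma additionally requires a carefully chosen field and evaluation points, which we do not reproduce here.

```python
def eval_subset_sum_poly(a, x, p):
    """Evaluate prod_i (1 + x**a_i) modulo the prime p.

    Fast modular exponentiation gives a running time polynomial in n and
    log(max a_i) per evaluation, using only a constant number of field
    elements of working memory; the coefficient-test lemma turns many such
    evaluations into a test of whether the coefficient of x**t vanishes.
    """
    result = 1
    for ai in a:
        result = result * (1 + pow(x, ai, p)) % p
    return result

# Toy instance a = (3, 5, 7): the target t = 8 is attainable as 3 + 5.
print(eval_subset_sum_poly([3, 5, 7], x=2, p=(1 << 61) - 1))
```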
This technique can be applied to check feasibility of (<ref>)
by aggregating the m constraints of A into a single one a x = b' where
a_∞ = (mΔ)^O(m^2), see e.g. <cit.>,
and reducing the non-negative integer variables to binary ones,
which can be done by applying proximity techniques <cit.>.
We omit the details here, since
the method has two inherent disadvantages for our application:
it has a higher running time with an additional factor of O(m) in the exponent compared to <cit.> and
to use <ref> to solve
the optimization variant in (<ref>) one would need to additionally encode the
coefficient of the objective function c in a, which would yield a pseudopolynomial running time
in c_∞ that is not necessary in <cit.>.
Another straight-forward approach is based on the notable property of (<ref>) that it always has sparse optimal solutions.
If (<ref>) is bounded and feasible, then there exists an optimal solution x^∗∈ℤ^n_≥ 0 such that
x^∗_0 ≤Γ,
where Γ := 2(m+1)(log(m+1) + logΔ + 2). Moreover x^∗ is the lexicographically minimal solution.[Eisenbrand and Shmonin <cit.> did not state that x^∗ is the lexicographically minimal solution, but this can be derived easily:
one can replace the objective by c'_i = c_i + ϵ^i for each i∈ [n], where ϵ > 0 is sufficiently small. This simulates a lexicographical tie-breaking rule.]
This property is sometimes used in the following way: one can guess the support of an optimal solution
from the O(n^Γ) potential variable sets and then apply Lenstra's algorithm for integer linear programming
in small dimension. This idea was used for example in <cit.>.
Recall that Lenstra's algorithm and the improvement by Kannan can be implemented in polynomial space.
There is an algorithm that solves
max{c x : Ax ≤ b, x∈ℤ^n}
in time n^O(n)·poly(I) and polynomial space.
Note that (<ref>) can be translated to the form in <Ref> without increasing the number
of variables, hence the same result holds there.
Together with the bound on Γ, this yields
a running time of n^Γ·Γ^Γ·poly(I) ≤ (mΔ)^O(m^2 logΔ)·poly(I) and polynomial space, using that without loss of generality all columns of A are different and, in particular,
n ≤ (2Δ + 1)^m.
For our result we introduce two new ideas.
In <ref> we introduce the following branching algorithm that can be seen as an application of
Savitch's theorem to the support of the solution. Consider
the lexicographically minimal solution x^∗ and let x^∗ = x^∗_ℓ +
x^∗_r be such that x^∗_ℓ and x^∗_r have half the support each.
We will then guess a vector b_ℓ∈^m such
that Ax^∗_ℓ = b_ℓ and for each such guess, recurs into subproblems
with the right-hand sides b_ℓ and b - b_ℓ and then retrieve both x_ℓ^∗
and x_r^∗. Note, that in each recursive call, the support halves,
so the total depth of the recursion is O(logΓ) =
O(log m + loglogΔ).
This idea already yields a (mΔ)^O(m^2 loglog Δ)·poly(I) time and
polynomial space algorithm. However, the dependence on the number
of constraints m is much higher than what is promised in <ref>.
The bottleneck of the earlier mentioned idea of guessing the support and applying Lenstra's algorithm
is that one enumerates O(n^Γ) support sets.
In <ref> we show that the number of relevant support sets is much
lower and that they can be enumerated using the branching algorithm as a subroutine
on small instances where it runs more efficiently. This is then combined with running
Lenstra's algorithm on each support set.
§ BRANCHING ALGORITHM
As a subroutine in <ref>, we will first introduce an algorithm based on the idea
of branching on the support, which has the following properties.
Let A ∈ℤ^m × n, b ∈ℤ^m, c ∈ℤ^n, and Δ≥A_∞.
Assume that (<ref>) is bounded and feasible and let x^* be
the lexicographically minimal optimal solution.
For any σ∈ℕ we can in time (σΔ)^O(m logΓ)·poly(n)
and polynomial space, compute x^* under the promise that x^*_1 ≤σ. If this promise is not satisfied, there is no guarantee of correctness.
Note that the running time matches <ref> except for the dependence on σ.
There are known bounds on the ℓ_1-norm of sparse optimal solutions, see e.g., <cit.>, and
after the preprocessing in <cit.> one can guarantee a bound of σ = (mΔ)^O(m).
This would therefore give a polynomial space algorithm with running time (mΔ)^O(m^2 logΓ)·poly(I),
but this is larger than the running time claimed in <ref>. We will
apply it more efficiently in <Ref>.
Now, we present the algorithm behind <ref>. For the pseudocode, see <ref>.
This recursive algorithm has an additional parameter s, which is a bound on the support.
On the initial call we use the bound Γ from <ref>.
The base case consists of the
scenario when s ≤ 1. If s = 0, we return the all-zero vector if and only
if b is an all-zero vector; otherwise, we return ⊥. If s = 1, let e_i, i
∈ [n], be the indicator vector, i.e., a vector with 1
at position i and 0 otherwise. We consider the set {z · e_i : z · A e_i = b, z
∈ℤ_≥ 0, i ∈ [n]} and return the last element
that minimizes the dot product with c (or ⊥ if no such element exists).
Observe that for each i, such an integer z can be determined in O(m)
time; hence, in total, the base case can be decided in O(mn) time.
When s > 1, we intuitively split the support of the solution into two (almost) equal-size parts
and guess the target b_ℓ for the first
part in the original solution. More precisely, we guess a
vector b_ℓ∈{-σΔ, …, σΔ}^m, recursively compute
x_ℓ and x_r, which are lexicographically minimal optimal solutions to A x_ℓ = b_ℓ and A x_r = b -
b_ℓ with half the support of the previous call (rounded up for one recursion and down for the other),
and if both of them exist,
return x_r + x_ℓ, breaking ties according to lexicographical order.
If all of the guesses for b_ℓ fail in at least one recursion, we return ⊥.
This concludes the description of the algorithm.
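A compact Python sketch of this recursion is given below. It follows the description above (and the pseudocode in <ref>) under the stated promise, but it is purely illustrative: the enumeration of the guesses b_ℓ is written for clarity rather than speed, and the helper names are ours.

```python
from itertools import product

def _cost(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

def _better(cand, best, c):
    """cand wins if it has smaller cost, or equal cost and is lexicographically smaller."""
    return best is None or (_cost(c, cand), cand) < (_cost(c, best), best)

def branch(A, b, c, sigma, s):
    """Lexicographically minimal optimal solution of min{c.x : A x = b, x in Z^n_{>=0}},
    under the promise that it has l1-norm at most sigma and support at most s;
    returns None (playing the role of ⊥ above) if no such solution exists.
    Only the recursion stack of depth O(log s) is kept in memory."""
    m, n = len(A), len(A[0])

    if s == 0:  # only the all-zero vector has empty support
        return [0] * n if all(v == 0 for v in b) else None

    if s == 1:  # solutions of the form z * e_i with z a non-negative integer
        best = None
        for i in range(n):
            col = [A[r][i] for r in range(m)]
            rows = [r for r in range(m) if col[r] != 0]
            if rows:
                if b[rows[0]] % col[rows[0]] != 0:
                    continue
                z = b[rows[0]] // col[rows[0]]
            else:
                z = 0
            if z < 0 or any(z * col[r] != b[r] for r in range(m)):
                continue
            cand = [0] * n
            cand[i] = z
            if _better(cand, best, c):
                best = cand
        return best

    # Recursive case: guess the right-hand side contributed by one half of the support.
    delta = max((abs(v) for row in A for v in row), default=1) or 1
    box = range(-sigma * delta, sigma * delta + 1)
    best = None
    for b_left in product(box, repeat=m):            # (2*sigma*delta + 1)^m guesses
        x_left = branch(A, list(b_left), c, sigma, (s + 1) // 2)
        if x_left is None:
            continue
        b_right = [br - bl for br, bl in zip(b, b_left)]
        x_right = branch(A, b_right, c, sigma, s // 2)
        if x_right is None:
            continue
        cand = [xl + xr for xl, xr in zip(x_left, x_right)]
        if _better(cand, best, c):
            best = cand
    return best

# Toy instance: minimise x1 + x2 + x3 subject to x1 + 2*x2 + 3*x3 = 7, x >= 0 integral.
print(branch([[1, 2, 3]], [7], [1, 1, 1], sigma=7, s=3))  # [0, 2, 1]
```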
For the running time, observe
that the integer s decreases by a constant factor in each recursive call and starts with
value Γ.
Thus, the depth of the recursion is O(logΓ). The number of possible b_ℓ
vectors at each recursion depth is (2 σΔ + 1)^m, so the total
size of the branching tree is (σΔ)^O(m logΓ).
The base case takes O(mn) time; hence, the total running time is as claimed.
Similarly, the space complexity is poly(n, logΓ) as the algorithm needs to store the
stack of the recursion of height O(logΓ); the base case requires
(n) space.
Next, we argue about the correctness. We prove by induction over s that
branch(A,b,c,σ,s) returns the lexicographically minimal optimal solution x^* ∈ℤ^n_≥ 0 to
(<ref>), provided that x^*_1 ≤σ and x^*_0 ≤ s.
In the base case, s ≤ 1, the
algorithm straightforwardly checks over all possible non-zero coefficients of
x^* and returns a lexicographically minimal one. Note that if b is the all-zero vector then
assuming the integer program is bounded there cannot be a non-zero solution with negative objective
value. Hence, the all-zero solution is optimal and lexicographically minimal.
Now assume that s > 1. Let x^∗∈ℤ^n_≥ 0 be the lexicographically
minimal optimal solution to (<ref>) and assume that x^*_1 ≤σ and x^*_0 ≤ s. Let x_ℓ^∗, x_r^∗∈ℤ^n_≥ 0 be
arbitrary vectors such that x_ℓ^∗ + x_r^∗ = x^∗, x_ℓ^∗_0
≤⌈x^∗_0/2⌉, and x_r^∗_0 ≤⌊x^∗_0/2⌋. For example, x_ℓ^∗ could be the first half
of the non-zero coefficients of x^∗, and x_r^∗ could be the remaining
non-zero coefficients. Finally, let b^∗_ℓ = Ax^∗_ℓ. Observe
that x^*_ℓ_1 ≤x^*_1 ≤σ, x^*_r_1 ≤x^*_1 ≤σ, and b^∗_ℓ∈{-σΔ,…,σΔ}^m.
Furthermore, x^*_ℓ and x^*_r must be lexicographically minimal optimal solutions
for right-hand sides b^∗_ℓ and b - b^∗_ℓ since otherwise this would
contradict x^∗ being the lexicographically minimal optimal solution for b.
Hence, if in <ref> we guess b_ℓ = b_ℓ^∗, the induction hypothesis guarantees
that branch(A,b_ℓ,c,σ,⌈s/2⌉) and
branch(A,b-b_ℓ,c,σ,⌊s/2⌋) return x^*_ℓ and x^*_r.
In particular, x^* = x^*_ℓ + x^*_r must be returned by the algorithm, since no
lexicographically smaller optimal solution exists that could dominate x^*.
§ CANDIDATE SUPPORT SETS AND MAIN THEOREM
A naive enumeration of all support sets of cardinality at most Γ would require
O(n^Γ). In this section we will design a more efficient enumeration and prove <Ref>.
For a matrix A ∈ℤ^m × n, we say that a set 𝒞⊆{0,1}^n is a candidate support set if:
* x_1 ≤Γ for every x ∈𝒞, and
* for every b ∈ℤ^m and c ∈ℤ^n such that (<ref>) is bounded and feasible there exists an optimal solution x^∗ to (<ref>) with supp(x^∗) ∈𝒞.
For every A ∈ℤ^m× n with A_∞≤Δ, there exists a candidate support set of size (m Δ)^O(m). Moreover, this set can be enumerated in time (m Δ)^O(m logΓ)·poly(I) and polynomial space.
We note that, with <ref>, the proof of <ref> is a straightforward application of known results.
We enumerate a candidate support set 𝒞 of A using <ref>. Then for every c ∈𝒞, we consider the instance truncated to the variables in supp(c) and solve it using the exact algorithm for integer programming by Lenstra and Kannan, i.e., <Ref>.
The correctness follows from Property <ref> of 𝒞. For the running time, note that by <ref> we have |𝒞| ≤ (m Δ)^O(m) and it requires an overhead of (m Δ)^O(m logΓ)·poly(I) to enumerate 𝒞. Recall that the algorithm of Lenstra and Kannan <cit.> solves (<ref>) in Γ^O(Γ)·poly(I) time and polynomial space for at most Γ variables. In total, the running time of the algorithm is
(m Δ)^O(m logΓ)·poly(I) + |𝒞| ·Γ^O(Γ)·poly(I)
≤ (mΔ)^O(m logΓ)·Γ^O(Γ)·poly(I)
≤ (mΔ)^O(m (log m + loglog Δ))·poly(I).
It remains to prove <ref>.
This relies on the following property that Deng, Mao, and Zhong <cit.> noticed for Unbounded Knapsack
and which easily generalizes to ILP.
Let x^* ∈ℤ^n_≥ 0 be the lexicographically minimal optimal solution to (<ref>). Then, there exist x' ∈{0,1}^n and b' ∈ℤ^m such that (i) x' is the lexicographically minimal optimal solution to min{c x : A x = b', x ∈ℤ^n_≥ 0} and (ii) supp(x') = supp(x^*).
Let x' be the indicator vector of supp(x^*) and let b' = Ax'. Clearly, Property (ii) holds. Hence, it remains to prove (i). For the sake of contradiction, assume that y is a solution to the integer program with right-hand side b' and c y < c x'.
Consider z := y + x^* - x' and note that z ≥ 0 since x^* ≥ x' coordinate-wise. Then z is a solution to {Ax = b, x∈ℤ^n_≥ 0}. But c z < c x^*, which contradicts the optimality of x^*.
Similarly, suppose there is a solution y with c y = c x' and y ≺ x'. Then
z = y + x^* - x' is an optimal solution to min{c x : Ax = b, x∈ℤ^n_≥ 0} that is lexicographically smaller than x^*, again a contradiction.
First, we iterate over every b' ∈ℤ^m such that b'_∞≤Γ·Δ. Note that the number of such b' is (2ΓΔ + 1)^m. Next, for each such b', we invoke <ref> with σ = Γ, which returns the lexicographically minimal optimal solution x^* to
min{c x : Ax = b', x∈ℤ^n_≥ 0 }
if x^*_1≤σ.
Finally, for each of these guesses, we output x^* if x^* ∈{0,1}^n.
This concludes the description of the algorithm. The algorithm runs in time
(2ΓΔ+1)^m · (ΓΔ)^O(m logΓ)·poly(I) ≤ (m Δ)^O(m logΓ)·poly(I)
and uses polynomial space.
Property <ref> of <ref> holds because every x^* considered
is lexicographically minimal and therefore by <ref> has support at most Γ. For Property <ref> consider some b∈^m, c∈^n for which (<ref>) is bounded and feasible. Let x^* be the lexicographically minimal optimal solution. By <ref>
we have x^*_0 ≤Γ. Furthermore, by <Ref> there is some x'∈{0,1}^n with supp(x') = supp(x^*), which is the lexicographically minimal optimal solution for b' = Ax'. Because b'_∞≤ΓΔ and x'_1 ≤σ, the algorithm above will consider b' and correctly output x'.
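The sketch below mirrors this enumeration and the overall algorithm of <ref>, reusing the branch() helper from the sketch in the previous section; following the proof above, the enumeration is run with the instance's objective c. It is illustrative only: the sparsity bound Γ can be overridden with an artificially small value to keep toy runs fast, and a bounded brute-force search stands in for the polynomial-space Lenstra–Kannan solver used in the actual proof.

```python
import math
from itertools import product

def candidate_supports(A, c, gamma=None):
    """Enumerate candidate supports: for every b' with |b'|_inf <= gamma*delta,
    compute the lexicographically minimal optimal solution with the branch()
    sketch and keep its support whenever that solution is a 0/1 vector."""
    m, n = len(A), len(A[0])
    delta = max((abs(v) for row in A for v in row), default=1) or 1
    if gamma is None:  # Eisenbrand-Shmonin sparsity bound Gamma
        gamma = math.ceil(2 * (m + 1) * (math.log2(m + 1) + math.log2(delta) + 2))
    supports = set()
    box = range(-gamma * delta, gamma * delta + 1)
    for b_prime in product(box, repeat=m):
        x = branch(A, list(b_prime), c, sigma=gamma, s=gamma)
        if x is not None and all(v in (0, 1) for v in x):
            supports.add(tuple(i for i, v in enumerate(x) if v == 1))
    return supports

def solve_on_support(A, b, c, supp, bound=20):
    """Bounded brute force over the support variables; a stand-in for Lenstra-Kannan."""
    m, n = len(A), len(A[0])
    best = None
    for vals in product(range(bound + 1), repeat=len(supp)):
        x = [0] * n
        for i, v in zip(supp, vals):
            x[i] = v
        if all(sum(A[r][j] * x[j] for j in range(n)) == b[r] for r in range(m)):
            if best is None or _cost(c, x) < _cost(c, best):
                best = x
    return best

def solve_ilp(A, b, c, gamma=None):
    """Overall scheme behind the main theorem: one small solve per candidate support."""
    best = None
    for supp in candidate_supports(A, c, gamma):
        x = solve_on_support(A, b, c, supp)
        if x is not None and (best is None or _cost(c, x) < _cost(c, best)):
            best = x
    return best

# Toy run with an artificially small Gamma (the true bound makes the toy run slow).
print(solve_ilp([[1, 2, 3]], [7], [1, 1, 1], gamma=3))  # an optimal solution of cost 3
```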
§ CONCLUSION
In this paper, we show that an integer linear program can be solved
in time (mΔ)^O(m(log m + loglog Δ))·poly(I)
and polynomial space.
It remains open whether the additional factor of O(log m + loglogΔ) in the exponent
compared to the algorithm without space limitation can be avoided.
We note that our algorithm has two bottlenecks that lead to this running time:
(i) the depth of O(log m + loglogΔ) of our branching and
(ii) the use of the algorithm of Lenstra and
Kannan, which runs in time N^O(N)·poly(I) on an instance with N = O(m log(mΔ)).
Thus, even if the improvement to (log N)^O(N)·poly(I) by Reis and
Rothvoss <cit.> could be implemented in polynomial space, this would not immediately
lead to an improvement.
Already for the special case of Unbounded Knapsack it is unclear how to avoid
the exponent of loglogΔ and achieve a pseudopolynomial running time.
Finally, we want to mention the case of binary integer linear programs, i.e., where
we substitute the domain constraint in (<ref>) by x∈{0,1}^n.
Here, the best known algorithm runs in time
(mΔ)^O(m^2)(I) <cit.> and naive application of
Savitch's theorem yields polynomial
space at significant higher running time of (mΔ)^O(m^3 log(mΔ))(I).
It is unclear how to come close to
an exponent of O(m^2) in polynomial space and the ideas in this paper do not seem to be
applicable, since the solutions need not be sparse.
|