_id (string, length 36) | text (string, 5 to 665k chars) | marker (string, 3 to 6 chars) | marker_offsets (sequence) | label (string, 28 to 32 chars)
---|---|---|---|---|
736125ad-44a9-4af5-8450-d588e6c6b63b | On this basis, Eq. 1 is false because it implies two contradictory statements. How can there be an event at \(t_1\) if S can, before \(t_2\) , physically reverse it as if it never happened? Ref. [1]} is correct: a reversible measurement is a contradiction. The von Neumann amplification chain of Eq. 1, on which the in-principle possibility of SC rests, is internally inconsistent and thus false.
| [1] | [[196, 199]] | https://openalex.org/W3142343317 |
67c5b681-3ee2-47c1-8fed-30c96fb2b544 | A large part of the (vast) literature on metric dimension reduction focuses on showing that a typical low-rank linear operator chosen randomly from a specific ensemble acts as an approximate isometry on a given set \({S}\) with high probability. For subsets \({S}\) of Euclidean space, this principle has been confirmed for random projections [1]}, [2]}, [3]}, [4]}, matrices with Gaussian [5]}, [6]}, [7]}, Rademacher [8]}, [9]} and subgaussian [10]}, [11]}, [12]}, [13]} entries, randomizations of matrices with the RIP [14]} as well as more computationally efficient models [15]}, [16]}, [17]}, [18]}, [19]} which are based on sparse matrices. Beyond its inherent interest as an \(\ell _p\) -dimension reduction theorem (albeit, for specific configurations of points), Theorem REF also differs from the aforementioned works in its method of proof. The core of the argument, rather than sampling from a random matrix ensemble, relies on Maurey's empirical method [20]} (see Section REF ) which is a dimension-free way to approximate points in bounded convex subsets of Banach spaces by convex combinations of extreme points with prescribed length. An application of the method to the positive cone of \(L_p\) -distance matrices (the use of which in this context is inspired by classical work of Ball [21]}) equipped with the supremum norm allows us to deduce (see Proposition REF ) the conclusion of Theorem REF under the stronger assumption that
\( K\ge \max _{i\in \lbrace 1,\ldots ,n\rbrace } \Vert x_i\Vert _{L_\infty (\mu )}.\)
| [1] | [[345, 348]] | https://openalex.org/W2979473749 |
4cada0b3-94d2-48bf-90e2-8fe5f893d7ad | A large part of the (vast) literature on metric dimension reduction focuses on showing that a typical low-rank linear operator chosen randomly from a specific ensemble acts as an approximate isometry on a given set \({S}\) with high probability. For subsets \({S}\) of Euclidean space, this principle has been confirmed for random projections [1]}, [2]}, [3]}, [4]}, matrices with Gaussian [5]}, [6]}, [7]}, Rademacher [8]}, [9]} and subgaussian [10]}, [11]}, [12]}, [13]} entries, randomizations of matrices with the RIP [14]} as well as more computationally efficient models [15]}, [16]}, [17]}, [18]}, [19]} which are based on sparse matrices. Beyond its inherent interest as an \(\ell _p\) -dimension reduction theorem (albeit, for specific configurations of points), Theorem REF also differs from the aforementioned works in its method of proof. The core of the argument, rather than sampling from a random matrix ensemble, relies on Maurey's empirical method [20]} (see Section REF ) which is a dimension-free way to approximate points in bounded convex subsets of Banach spaces by convex combinations of extreme points with prescribed length. An application of the method to the positive cone of \(L_p\) -distance matrices (the use of which in this context is inspired by classical work of Ball [21]}) equipped with the supremum norm allows us to deduce (see Proposition REF ) the conclusion of Theorem REF under the stronger assumption that
\( K\ge \max _{i\in \lbrace 1,\ldots ,n\rbrace } \Vert x_i\Vert _{L_\infty (\mu )}.\)
| [13] | [[469, 473]] | https://openalex.org/W2288884963 |
e73339d7-7f45-4b73-ba47-2e77ed4a255a | Sign language recognition is the task of taking the visual content performed by sign language signers and producing the associated glosses.
Early studies on SLR focused on recognizing isolated signs or gestures to produce word-level or phrase-level outputs, which followed a pattern recognition pipeline: various hand-crafted visual features such as SIFT and SURF were obtained from an input signing video and a trained classifier took these features to produce signing labels [1]}, [2]}.
With the advances of deep learning, convolutional neural networks (CNN) and recurrent neural networks (RNN) were also adopted for isolated SLR [3]}. However, recognizing isolated signs provides limited understanding of a complete sign language sentence.
Therefore, continuous SLR has been investigated at the sentence level by treating continuous signings as a sequence of signing poses.
Recently, various deep architectures have been proposed to perform the continuous SLR task [4]}, [5]}. To consider the domain knowledge of sign languages, fine-level regional patterns were investigated for accurate SLR in an independent manner [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
To explore the interactions between the local regions of a human body in a signing pose, graph-based neural networks were proposed to formulate the spatial relations between the regions or the temporal relations within a region across frames [12]}, [13]}, [14]}, [15]}, [16]}.
However, the spatio-temporal relations have been seldom explored with hierarchical structures, which could miss important sign language patterns.
| [8] | [[1119, 1122]] | https://openalex.org/W2188882108 |
407595e5-0f1a-4b94-8af6-57c5268e4caf | Sign language recognition is the task of taking the visual content performed by sign language signers and producing the associated glosses.
Early studies on SLR focused on recognizing isolated signs or gestures to produce word-level or phrase-level outputs, which followed a pattern recognition pipeline: various hand-crafted visual features such as SIFT and SURF were obtained from an input signing video and a trained classifier took these features to produce signing labels [1]}, [2]}.
With the advances of deep learning, convolutional neural networks (CNN) and recurrent neural networks (RNN) were also adopted for isolated SLR [3]}. However, recognizing isolated signs provides limited understanding of a complete sign language sentence.
Therefore, continuous SLR has been investigated at the sentence level by treating continuous signings as a sequence of signing poses.
Recently, various deep architectures have been proposed to perform the continuous SLR task [4]}, [5]}. To consider the domain knowledge of sign languages, fine-level regional patterns were investigated for accurate SLR in an independent manner [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
To explore the interactions between the local regions of a human body in a signing pose, graph-based neural networks were proposed to formulate the spatial relations between the regions or the temporal relations within a region across frames [12]}, [13]}, [14]}, [15]}, [16]}.
However, the spatio-temporal relations have been seldom explored with hierarchical structures, which could miss important sign language patterns.
| [13] | [[1394, 1398]] | https://openalex.org/W2463640844 |
f8c54597-a553-4271-ae5c-452456ae5e51 | Sign language recognition is the task of taking the visual content performed by sign language signers and producing the associated glosses.
Early studies on SLR focused on recognizing isolated signs or gestures to produce word-level or phrase-level outputs, which followed a pattern recognition pipeline: various hand-crafted visual features such as SIFT and SURF were obtained from an input signing video and a trained classifier took these features to produce signing labels [1]}, [2]}.
With the advances of deep learning, convolutional neural networks (CNN) and recurrent neural networks (RNN) were also adopted for isolated SLR [3]}. However, recognizing isolated signs provides limited understanding of a complete sign language sentence.
Therefore, continuous SLR has been investigated at the sentence level by treating continuous signings as a sequence of signing poses.
Recently, various deep architectures have been proposed to perform the continuous SLR task [4]}, [5]}. To consider the domain knowledge of sign languages, fine-level regional patterns were investigated for accurate SLR in an independent manner [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
To explore the interactions between the local regions of a human body in a signing pose, graph-based neural networks were proposed to formulate the spatial relations between the regions or the temporal relations within a region across frames [12]}, [13]}, [14]}, [15]}, [16]}.
However, the spatio-temporal relations have been seldom explored with hierarchical structures, which could miss important sign language patterns.
| [16] | [[1415, 1419]] | https://openalex.org/W2997931247 |
b76fd16f-0fc9-4c6e-a86f-336b216cfe14 | Although there have been impressive SLR results, the gap between glosses and spoken language sentences still exists. To address this problem, sign language translation (SLT) has been studied to take the rich grammatical structures in spoken languages into consideration. It aims to generate spoken language sentences rather than a sequence of glosses [1]}.
For example, RNN [2]} and Hierarchical LSTM [3]} were adopted to extract visual information and to generate spoken language sentences. Moreover, the above-mentioned deep learning based SLR methods were also explored for SLT [4]}, [5]}, [6]}. However, domain knowledge of sign languages, such as the interactions between human body regions, has not been adequately investigated yet. Missing such fine-grained patterns could result in less accurate sign language representations and thus negatively impact the quality of the generated spoken language scripts.
| [2] | [[367, 370]] | https://openalex.org/W2799020610 |
7383f1e4-b8a1-4035-babf-ba02c7c981d5 | Although there have been impressive SLR results, the gap between glosses and spoken language sentences still exists. To address this problem, sign language translation (SLT) has been studied to take the rich grammatical structures in spoken languages into consideration. It aims to generate spoken language sentences rather than a sequence of glosses [1]}.
For example, RNN [2]} and Hierarchical LSTM [3]} were adopted to extract visual information and to generate spoken language sentences. Moreover, the above-mentioned deep learning based SLR methods were also explored for SLT [4]}, [5]}, [6]}. However, domain knowledge of sign languages, such as the interactions between human body regions, has not been adequately investigated yet. Missing such fine-grained patterns could result in less accurate sign language representations and thus negatively impact the quality of the generated spoken language scripts.
| [3] | [[394, 397]] | https://openalex.org/W2954798773 |
ce9b139c-7006-466e-bda8-7045d6a3e972 | Although there have been impressive SLR results, the gap between glosses and spoken language sentences still exists. To address this problem, sign language translation (SLT) has been studied to take the rich grammatical structures in spoken languages into consideration. It aims to generate spoken language sentences rather than a sequence of glosses [1]}.
For example, RNN [2]} and Hierarchical LSTM [3]} were adopted to extract visual information and to generate spoken language sentences. Moreover, the above-mentioned deep learning based SLR methods were also explored for SLT [4]}, [5]}, [6]}. However, domain knowledge of sign languages, such as the interactions between human body regions, has not been adequately investigated yet. Missing such fine-grained patterns could result in less accurate sign language representations and thus negatively impact the quality of the generated spoken language scripts.
| [4] | [[574, 577]] | https://openalex.org/W3114337930 |
c89d4d80-0b4b-442c-953b-80eec3921f07 | A high-level graph characterizes the spatio-temporal relationships between the three key regions of a human body in video frames.
That is, a high-level graph can be constructed with three vertices, which denote the facial region, the left-hand region, and the right-hand region, respectively.
The three key regions can be obtained by pose estimation algorithms (e.g., HRNet [1]}). To characterize each vertex, a bounding box is used to extract the associated frame sub-patch, and the vertex-level feature is computed from the image patch with a pre-trained CNN. Note that the frame used to extract the visual features can be in the modality of appearance (RGB) or motion (e.g., optical flow [2]}) in this paper. As a result, two high-level graphs are constructed, one for each modality.
| [1] | [[361, 364]] | https://openalex.org/W3014641072 |
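As an editorial illustration of the high-level graph construction described in the excerpt above, here is a minimal sketch. It assumes pose-estimated bounding boxes are already available and uses a torchvision ResNet-18 backbone as a stand-in for the pre-trained CNN; the function and variable names are illustrative and not taken from the cited work (input normalization is omitted for brevity).

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in for the pre-trained CNN mentioned in the excerpt.
backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # keep the 512-d pooled feature
backbone.eval()

REGIONS = ["face", "left_hand", "right_hand"]  # the three high-level vertices

def region_vertex_features(frame, boxes):
    """frame: (3, H, W) tensor in [0, 1]; boxes: dict region -> (x1, y1, x2, y2)."""
    feats = []
    for name in REGIONS:
        x1, y1, x2, y2 = boxes[name]
        patch = frame[:, y1:y2, x1:x2].unsqueeze(0)
        patch = F.interpolate(patch, size=(224, 224), mode="bilinear", align_corners=False)
        with torch.no_grad():
            feats.append(backbone(patch).squeeze(0))  # (512,)
    return torch.stack(feats)  # (3, 512) vertex features of one high-level graph

# Fully connected adjacency over the three region vertices; a second graph of the
# same form would be built from the motion (optical-flow) modality.
adjacency = torch.ones(len(REGIONS), len(REGIONS))
```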
1d9e0889-f90c-4fa9-a496-eece11142c5f | The proposed method was evaluated on two widely used benchmark sources: PHOENIX-2014, PHOENIX-2014-T [1]} and Chinese Sign Language Recognition (CSL) [2]}, [3]}, [4]}.
PHOENIX-2014 contains videos from the PHOENIX TV station, covering weather forecast content performed by signers over a period of three years. The videos were collected at a resolution of 210 by 260 at 25 frames per second (fps) using a stationary color camera.
The dataset was annotated with sign language glosses and texts in German spoken language. The vocabulary size is 1,115 for sign glosses and 3,000 for German.
The CSL dataset contains two subsets: Split I and Split II. In this study, Split II was adopted, which contains 100 sign language sentences related to daily life with a vocabulary size of 178. Each sentence in the split was performed by 50 signers, each of whom repeated the signing 5 times. These sentences were recorded as RGB videos with a spatial resolution of 1280 by 720 at 30 fps.
Among the 100 sentences, 94 sentences were in the training set and 6 sentences were in the test set.
| [1] | [[101, 104]] | https://openalex.org/W2759302818 |
cb726154-ab49-4805-a3e9-f9dbc77b327b | To demonstrate the effectiveness of the proposed method, a number of recently proposed methods were compared as shown in Table REF : iterative alignment network (IAN) [1]}, which adopts an iterative alignment network to reduce the gap between videos and generated glosses, enabling a better correspondence between glosses and frames;
DenseTCN [2]}, which introduced temporal convolutions to efficiently explore temporal patterns; CNN-LSTM-HMM [3]}, which combined LSTM and HMM in language modeling for the construction of gloss sequences and intermediate synchronization constraints, respectively.
Deep neural frame (DNF) [4]}, which incorporates RGB and motion features of body regions to explore fine-level sign language patterns;
Spatial-temporal multi-cue network (STMC) [5]}, which involved a dense network to characterize the spatial and the temporal dependencies to improve the recognition quality;
Hierarchical LSTM (HLSTM) [6]}, which was proposed to extract multiple levels of attention with adaptive online key clip mining;
Neural sign language translation (NSLT) [7]}, which was the first study using CNN-LSTM sign language translation in an end-to-end manner;
Neural language translation with transformer (SLTT) [8]}, which utilized transformers for sign language translation based on STMC [5]}.
| [1] | [[167, 170]] | https://openalex.org/W2948139159 |
915fb1d5-d5ca-45dc-99b8-f9330775805d | To demonstrate the effectiveness of the proposed method, a number of recently proposed methods were compared as shown in Table REF : iterative alignment network (IAN) [1]}, which adopts an iterative alignment network to reduce the gap between videos and generated glosses, enabling a better correspondence between glosses and frames;
DenseTCN [2]}, which introduced temporal convolutions to efficiently explore temporal patterns; CNN-LSTM-HMM [3]}, which combined LSTM and HMM in language modeling for the construction of gloss sequences and intermediate synchronization constraints, respectively.
Deep neural frame (DNF) [4]}, which incorporates RGB and motion features of body regions to explore fine-level sign language patterns;
Spatial-temporal multi-cue network (STMC) [5]}, which involved a dense network to characterize the spatial and the temporal dependencies to improve the recognition quality;
Hierarchical LSTM (HLSTM) [6]}, which was proposed to extract multiple levels of attention with adaptive online key clip mining;
Neural sign language translation (NSLT) [7]}, which was the first study using CNN-LSTM sign language translation in an end-to-end manner;
Neural language translation with transformer (SLTT) [8]}, which utilized transformers for sign language translation based on STMC [5]}.
| [6] | [[935, 938]] | https://openalex.org/W2788334925 |
8180e8b3-e23d-4d70-921d-90f7a32c36de | The interfoam solver has been validated in previous studies [1]}, and it showed good agreement in tracking interfaces for highly inertial flows and surface-tension-driven capillary flows. Standard tests showed that the CSF formulation in interfoam led to a proper discrete balance between pressure and surface tension [1]}.
| [1] | [[59, 62], [316, 319]] | https://openalex.org/W1973206369 |
98737d65-7eb3-4a3c-9c34-b55078baf934 | Although details are not presented here, we independently performed validation studies to assess the quantitative predictive ability of the interfoam solver on two-phase problems with oscillating liquid-gas interfaces. We evaluated extensive simulation results from interfoam against reference solutions in canonical multiphase flow test cases in the high capillarity limit, obtained by analytical investigations [1]}, [2]}, [3]}, [4]} with matched experimental investigations [5]}, [6]}, or numerical solutions obtained by other numerical methods on highly resolved grids (level set [7]} or phase field [8]}). These canonical validation test cases involve damped oscillations of initially perturbed two-dimensional circular (cylinder) and spherical droplets [8]}, [7]}, of capillary waves in a periodic domain [7]}, and of the brimful cylindrical tank with water [5]}, [2]}, [4]}. These simulations were performed with material properties of either the water-air or the liquid metal-argon systems. We confirmed that the oscillation frequency and the damping rate on perturbed capillary flows calculated by the interfoam solver showed good agreement with the reference solutions within \(\sim 10\%\) errors. We note that the numerical dissipation of interfoam, especially on some coarse grids, contributed to the overestimation of the damping rate, and we acknowledged that the grid convergence study was a critical component of the validation. We tested three-step grid refinements and demonstrated the grid convergence on both damping rate and oscillation frequency.
| [2] | [[417, 420], [867, 870]] | https://openalex.org/W2037356874 |
15494c25-c4be-4556-969c-26c0928468bd | Although details are not presented here, we independently performed validation studies to assess the quantitative predictive ability of the interfoam solver on two-phase problems with oscillating liquid-gas interfaces. We evaluated extensive simulation results from interfoam against reference solutions in canonical multiphase flow test cases in the high capillarity limit, obtained by analytical investigations [1]}, [2]}, [3]}, [4]} with matched experimental investigations [5]}, [6]}, or numerical solutions obtained by other numerical methods on highly resolved grids (level set [7]} or phase field [8]}). These canonical validation test cases involve damped oscillations of initially perturbed two-dimensional circular (cylinder) and spherical droplets [8]}, [7]}, of capillary waves in a periodic domain [7]}, and of the brimful cylindrical tank with water [5]}, [2]}, [4]}. These simulations were performed with material properties of either the water-air or the liquid metal-argon systems. We confirmed that the oscillation frequency and the damping rate on perturbed capillary flows calculated by the interfoam solver showed good agreement with the reference solutions within \(\sim 10\%\) errors. We note that the numerical dissipation of interfoam, especially on some coarse grids, contributed to the overestimation of the damping rate, and we acknowledged that the grid convergence study was a critical component of the validation. We tested three-step grid refinements and demonstrated the grid convergence on both damping rate and oscillation frequency.
| [4] | [[429, 432], [873, 876]] | https://openalex.org/W1982595563 |
4a3744ec-2303-445b-8438-2c10a68ececd | The rest of the paper is organized as follows. The system model is described in Sec. II. Then in Sec. III we propose two fast optimal time allocation algorithms for the sum-throughput maximization and the min-throughput maximization, respectively. Sec. IV presents the simulation results for comparison with the algorithm in [1]}. Finally, Sec. V concludes the paper.
| [1] | [[326, 329]] | https://openalex.org/W2094000142 |
924a805e-b494-4b44-b442-45125088dbef | The first term in Eq. (REF ) corresponds to the so-called tadpole (reducible) contribution explored in numerous studies (see, e.g., Refs. [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}). It predicts emission of photons similar to those constituting the external field or higher harmonics. The most robust technique for computing this term is based on the locally-constant field approximation (LCFA) [6]}, [7]}, [8]}, [9]}, [10]}, [11]}. This technique is expected to be accurate once the external-field frequency \(\omega \) is much less than \(m\) , which was evidently confirmed in our recent investigation [2]}, where it was also shown that the LCFA prediction may considerably differ from the exact values of the photon yield if \((\omega /m)^2 \gtrsim 0.3\) .
| [3] | [[150, 153]] | https://openalex.org/W1584972625 |
2600d80e-6237-49f6-ba9a-223b5d0a2c36 | Computing now the number of pairs (REF ), we will identify the threshold value of the field amplitude \(E_0\) depending on \(\tau \) by the condition \(N^{(\text{pairs})} = 10\) . Finally, we point out that the expression (REF ) is quite universal with respect to the choice of the temporal profile of the external field. For instance, in the case of an oscillating background with duration \(T = \tau \) , one obtains a similar expression which differs from Eq. (REF ) only by factor \(2^{3/2}/\pi \approx 0.9\) (see, e.g., Refs. [1]}, [2]}, [3]}, [4]}, [5]}).
| [1] | [[534, 537]] | https://openalex.org/W3205166059 |
937f908c-34ed-413f-9b3c-61a3441c70f2 | It follows from the previous equation that the temporal factor of the ponderomotive force is nonzero only when we consider the effects of the temperature in the dielectric tensor [1]}. Hence, the temporal term of the Washimi and Karpman ponderomotive force for unmagnetized plasmas is a thermal effect. Figure REF shows the relative difference of the Kappa and Maxwellian ponderomotive temporal factor for transverse waves as a function of \(\bar{k}\) . As we can see in this figure, the non-thermal effect on the temporal term of the ponderomotive force is very significant and remains almost constant with respect to the scaled wavenumber. We can also deduce from the expression (REF ) that for any value of \(\kappa \) the ratio between the forces does not vary significantly with thermal velocity. Also, the effect of the kappa parameter is given mainly by the term \(\kappa /(\kappa -3/2)\) that accompanies the other dispersion-relation-dependent terms because, as shown in the previous section, if \(\alpha _e/c \ll 1\) then \(\bar{\omega }_{\kappa \perp }/\bar{\omega }_{\textsf {M}\perp } \approx 1\) . So for non-relativistic velocities we can use the following approximation \( f^{\kappa \perp }_{(t)}/f^{\mathsf {M}\perp }_{(t)} \approx \kappa /(\kappa -3/2) > 1\) .
| [1] | [[179, 182]] | https://openalex.org/W3016053968 |
bc997727-aeb9-4483-bf19-ed13a2a3e971 | where we have written them in terms of an orthonormal
basis \(\left|{x}\right\rangle \) and \(\left|{y}\right\rangle \) of a two dimensional Hilbert
space and an angle \(\theta \) between them.
Let us denote the overlap between them as \(\langle {\psi _0}|{\psi _1}\rangle =\cos 2\theta =:c\) .
We use a three outcome POVM because the protocol considered here is unambiguous [1]}.
Following [2]} we have that the sequential probability of success
for unambiguously discriminating 2 hypotheses when \(N\) copies are available goes as
\(P_S^{UA}=1-c^N\) . Remarkably, this is a result that applies for a global strategy
as well as for online strategies. This equivalence implies a negative answer to our
original question. To see this last statement imagine that Bob makes
chunks of size \(l\) from the original set of \(N\) states.
We would therefore have the states \(\left|{\Psi _a^l}\right\rangle =\left|{\psi _a}\right\rangle ^{\otimes l}\) .
The effective overlap between the redefined copies is \(C=\langle {\Psi _0^l}|{\Psi _1^l}\rangle =\langle {\psi _0}|{\psi _1}\rangle ^l=c^l\) .
We would therefore
have \(N/l\) chunks. Since we have chunks of size \(l\) , we can see this as
a redefinition of a copy. We therefore have the probability of success
for unambiguous discrimination of these chunks as
\(P_s^{UA}=1-C^{N/l}=1-c^N.\)
| [1] | [[378, 381]] | https://openalex.org/W2041907204 |
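The chunking argument in the excerpt above boils down to the identity \(1-(c^{l})^{N/l}=1-c^{N}\) : regrouping the copies cannot change the unambiguous success probability. A tiny numeric check (purely illustrative, with arbitrary \(\theta \) and \(N\) ):

```python
import numpy as np

theta, N = 0.3, 12
c = np.cos(2 * theta)              # overlap <psi_0|psi_1>
for l in (1, 2, 3, 4, 6, 12):      # chunk sizes dividing N
    C = c ** l                     # effective overlap of a chunk of l copies
    print(l, 1 - C ** (N // l), 1 - c ** N)  # the two values coincide for every l
```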
c6596492-f39f-49ea-8b91-633b0259a79c | The reason for the simple substitution in Eq. (REF ) is that the global
performance of the unambiguous protocol is achieved by an online strategy [1]}.
The online strategy is to apply an unambiguous POVM for each available copy.
Being an unambiguous measurement then the probability of success with \(N\)
copies coincides with probability of
stopping at step \(N\) because this measurement yields a zero error answer. Only if we get
an inconclusive outcome would we have to keep on measuring.
However, we can wait to have all the \(N\) copies and make a global unambiguous measurement
and have a result with the same success probability,
therefore there is no gain by the ordering strategy by Bob in the unambiguous protocol.
| [1] | [[145, 148]] | https://openalex.org/W3108274925 |
894e7e7d-2da5-458f-bb26-1110d7a6be56 | Now we can ask for collective measurements. Returning to our original question: Bob has \(N\) copies of pure states
and makes collective measurements such that he has \(N_0=N/l\) chunks of \(l\) copies. Bob uses an
ordering given by a function \(f:{N}\rightarrow {N}\) that for our purposes can be any function that assigns order to a
finite set. The probabilities \(p\) and \(q\) depend also on the chunk size \(l\) .
The reason for this dependence comes from
the fact that the effective “angle” between chunks is a function of \(l\) , i.e. \(\Theta (l)\) .
Recall that the angle \(\Theta \) goes exponentially fast to \(\pi /2\) with respect to \(l\) [1]},
\(\cos 2\Theta = \cos ^l 2\theta =:c^l.\)
| [1] | [[662, 665]] | https://openalex.org/W1963779635 |
8cf7e069-0623-479b-81cb-bb48010f0719 |
By Chen's theory of iterated integrals (see e.g., [1]}) we know that
AMMVs must satisfy the integral shuffle relations. For example,
\(&M(\check{\bar{1}})M(\check{\bar{2}})=M_{-1}^{\operatorname{od}}(1)M_{-1}^{\operatorname{od}}(2)\\=&\displaystyle \int _{0}^{1} w_{-1}^{-1} \displaystyle \int _{0}^{1} w_0w_{-1}^{-1}=\displaystyle \int _{0}^{1} w_{-1}^{-1}w_0w_{-1}^{-1}+2\displaystyle \int _{0}^{1} w_0w_{-1}^{-1}w_{-1}^{-1}=-M(\bar{1},\check{2})-2M(\bar{2},\check{1})\)
| [1] | [[51, 54]] | https://openalex.org/W4230781866 |
d033ae73-778a-4f3e-bb48-d75bcf3fd9a4 | The proof is the same as that for the corresponding results of MMVs. See [1]}.
| [1] | [[73, 76]] | https://openalex.org/W3082022163 |
5f9cf838-6533-4e34-bf37-8ba2f73af7c9 | In [1]}, Flajolet and Salvy used residue computations on large circular contours and specific functions to obtain more independent relations for Euler sums. These functions are of the form \(\xi (s)r(s)\) , where \(r(s):=1/{s^q}\) and \(\xi (s)\) is a product of cotangent (or cosecant) and polygamma functions. Applying Lemmas REF -REF and residue computations, we can establish some new identities. The main results are as follows.
| [1] | [[3, 6]] | https://openalex.org/W2083779556 |
c4709049-da7a-4c0c-81b7-e6269a57d0d6 | For each weight \(w\) , we let \({\bf MB}_w\) be the conjectural basis of \(\mathbb {Q}\) -vector space generated by all AMMVs of weight \(w\) . Using Au's Mathematica package [1]}, [2]}, we obtain
\(&{\bf MB}_1=\lbrace \pi ,\log (2)\rbrace ,\\&{\bf MB}_2=\lbrace G,\pi ^2,\pi \log (2),\log ^2(2)\rbrace ,\\&{\bf MB}_3=\left\lbrace \zeta (3),\Im \operatorname{Li}_3\Big (\displaystyle \frac{1+i}{2}\Big ),G\pi ,G\log (2),\pi ^3,\pi ^2\log (2),\pi \log ^2(2),\log ^3(2)\right\rbrace ,\\&{\bf MB}_4=\left\lbrace \operatorname{Li}_4(1/2),\beta (4), \Im \operatorname{Li}_4\Big (\displaystyle \frac{1+i}{2}\Big ),\pi \zeta (3),\log (2)\zeta (3),\pi \Im \operatorname{Li}_3\Big (\displaystyle \frac{1+i}{2}\Big ),\log (2)\Im \operatorname{Li}_3\Big (\displaystyle \frac{1+i}{2}\Big ),\atop G^2,G\pi ^2,G\pi \log (2),G\log ^2(2),\pi ^4,\pi ^3\log (2),\pi ^2\log ^2(2),\pi \log ^3(2),\log ^4(2)\right\rbrace .\)
| [1] | [[177, 180]] | https://openalex.org/W3041180731 |
59969008-8cd2-491d-82d6-b341c1d4a314 | In this section, we briefly review end-to-end trainable Bilinear pooling (BP) [1]}.
This method extracts representations from the same image with two CNNs and computes the cross-covariance as the representation.
This representation outperforms its first-order version and other second-order representations such as Fisher Vectors [2]} once the global architecture is fine-tuned.
Most recent works on bilinear pooling focus only on computing the covariance of the features extracted with a single CNN, that is:
\({\mathbf {y}}= \sum _{i} {\mathbf {x}}_i {\mathbf {x}}_i^T = {\mathbf {X}}{\mathbf {X}}^T \in {\mathbb {R}}^{d \times d}\)
| [2] | [[326, 329]] | https://openalex.org/W1606858007 |
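For concreteness, the single-CNN bilinear pooling written above is just one matrix product over spatial locations. A minimal sketch (feature dimensions are arbitrary; the signed square-root and L2 normalization at the end are common practice in the bilinear pooling literature, not part of the excerpt):

```python
import torch

d, n = 512, 49                  # channels, spatial locations (e.g. a 7x7 feature map)
X = torch.randn(d, n)           # columns x_i are local CNN features

Y = X @ X.T                     # (d, d) second-order representation: sum_i x_i x_i^T

y = Y.flatten()
y = torch.sign(y) * torch.sqrt(torch.abs(y))   # signed square-root
y = torch.nn.functional.normalize(y, dim=0)    # L2 normalization
```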
0e928f42-b40e-4723-9fb8-fe47bc0bd9b1 | In this section, we compare our method to the state-of-the-art on 3 retrieval datasets: Stanford Online Products [1]}, CUB-200-2011 [2]} and Cars-196 [3]}.
For Stanford Online Products and CUB-200-2011, we use the same train/test split as [1]}.
For Cars-196, we use the same split as [5]}.
We report the standard recall@K with \(K \in \lbrace 1, 10, 100, 1000\rbrace \) for Stanford Online Products and with \(K \in \lbrace 1, 2, 4, 8\rbrace \) for the other two.
We implement the codebook factorization from Eq. (REF ) with a codebook size of 32 (denoted JCF-32).
While JCF-32 outperforms state-of-the-art methods on the three datasets, our low-rank approximation JCF-32-8, which costs 4 times less, also leads to state-of-the-art performance on two of them, with a loss of 1-2% consistent with our ablation studies.
In the case of Cars-196, however, the performance is much lower than that of the full model.
We argue that the variety introduced by the colors, the shapes, etc. in cars requires more projections to be estimated, as is observed for the full model.
<TABLE><TABLE> | [1] | [[113, 116], [239, 242]] | https://openalex.org/W2963026686 |
ddacac0e-0a02-4ab3-b2bb-fe386b26783d | Li and Panigrahi [1]} gave an algorithm for finding exact
isolating cuts using \(O(\log n)\) max-flow/mincut computations that crucially relies
on the uncrossing property of mincuts. This property ensures
that if we take a minimum isolating cut \(X\) containing a terminal
vertex \(s\) and a crossing mincut \(Y\) , then their
intersection \(X\cap Y\) or difference \(X\setminus Y\) (depending on which set
the terminal vertex \(s\) is in) is also a minimum isolating cut.
This allows us to partition the
graph by removing edges corresponding to a set of mincuts, such that each
terminal is in one of the components of this partition. For each terminal,
we can now find the corresponding minimum isolating cut by simply contracting
the rest of the components and running a max-flow algorithm on this contracted
graph. The advantage of this contraction is that the total size of all the graphs
on which we are running max-flows is only a constant times the size of the overall
graph.
| [1] | [[17, 20]] | https://openalex.org/W3129188913 |
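To make the notion of minimum isolating cuts in the excerpt above concrete, here is a naive reference implementation that runs one max-flow per terminal (the excerpt's algorithm achieves the same with only \(O(\log n)\) max-flows plus contractions). This is a hedged sketch using networkx, not the cited algorithm.

```python
import networkx as nx

def naive_isolating_cuts(G, terminals):
    """For each terminal s, the minimum cut separating s from all other terminals."""
    cuts = {}
    for s in terminals:
        H = nx.DiGraph()
        for u, v, data in G.edges(data=True):
            c = data.get("capacity", 1)
            H.add_edge(u, v, capacity=c)   # model each undirected edge by two arcs
            H.add_edge(v, u, capacity=c)
        sink = "__super_sink__"
        for t in terminals:
            if t != s:
                H.add_edge(t, sink, capacity=float("inf"))
        value, (side_s, _) = nx.minimum_cut(H, s, sink)
        cuts[s] = (value, side_s)
    return cuts

# Example: a path 0-1-2-3-4 with unit capacities and terminals at the two ends.
print(naive_isolating_cuts(nx.path_graph(5), [0, 4]))
```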
858d4662-a884-4e91-b5eb-9184a8fd3c42 | The more complicated application of approximate isolating
cuts is the approximate Gomory-Hu tree problem. In the recursive algorithm of Li and Panigrahi [1]}, it is crucial to ensure that
the approximation is one-sided in the sense that
the “large” recursive subproblem preserves mincut
values exactly. But, in general,
if we use the approximate isolating cuts subroutine
as a black box, this would not be the case. To alleviate
this concern, we augment the approximate isolating cuts
procedure using an additional fairness condition for
the isolating cuts returned by the algorithm. This fairness
condition ensures that although we
do not have one-sided approximation, the approximation
factor in the “large” subproblem can be controlled using
a much finer parameter than the overall approximation
factor of the algorithm, which then allows us to run
the recursion correctly. The details of the Gomory-Hu tree
algorithm establishing thm:ghtree are presented in
sec:ghtree.
| [1] | [[153, 156]] | https://openalex.org/W3170659189 |
4110044f-5517-47fd-92fd-2f952bb45449 | When we want to argue that flow \(f\) \(\epsilon \) -satisfies a source
function \(\Delta \) , it can be inconvenient to ensure that \(|\Delta ^{f}(S)|\le \epsilon \cdot \delta (S)\)
for all \(S\subseteq V\) because there are exponentially many sets.
Surprisingly, there is a collection \(\mathcal {S}\) of linearly many
sets of vertices, where if \(|\Delta (S)|\le \epsilon \cdot \delta (S)\) for each \(S\in {\cal S}\) ,
then this is also true for all \(S\subseteq V\) with some \(\mathrm {polylog}(n)\)
blow-up factor. Moreover, \(\mathcal {S}\) can be computed in near-linear
time.
[Congestion approximator [1]}, [2]}]There
is a randomized algorithm that, given a graph \(G=(V,E)\) with \(n\)
vertices and \(m\) edges, constructs in \(\tilde{O}(m)\) time with high probability a laminar
family \(\mathcal {S}\) of subsets of \(V\) such that
| [2] | [[626, 629]] | https://openalex.org/W4307609184 |
8e75c235-fdeb-48f6-9795-4f90422a9000 | The second approach uses the current graph technique. This method provided the first exponential lower bound (of type \(2^{dv}\) ) on the number of non-isomorphic face 2-colorable triangular embeddings of \(K_v\) for infinitely many values of \(v\) . Then, similar results were also given in the cases of genus and quadrangular embeddings (see [1]}, [2]}, [3]}, [4]}).
The approach used in this paper belongs to this second family.
The main tool we will use is the concept of Heffter array, introduced by Archdeacon in [5]} to provide constructions of current graphs.
Section 2 of this paper will be dedicated to introducing this kind of array, to reviewing the literature on this topic
and to further investigating the connection with biembeddings.
Then, in Section 3, we will deal with the following problem: given a family of embeddings each of which admits \(\mathbb {Z}_v\) as a regular automorphism group (i.e. embeddings that are \(\mathbb {Z}_v\) -regular), how many of its elements can be isomorphic? Proposition REF will provide an upper bound on this number.
In the last two sections we will consider, and we will hugely extend, two families of \(\mathbb {Z}_v\) -regular embeddings obtained via Heffter arrays, respectively, in [6]}, [7]} and in [8]}, [9]}.
These families, together with Proposition REF , allow us to achieve the existence of an exponential number of non-isomorphic \(k\) -gonal biembeddings of \(K_v\) and \(K_{\frac{v}{t}\times t}\) in several situations.
In particular, in Section 4 we obtain that, when \(k\) is congruent to 3 modulo 4 and \(v\) belongs to an infinite family of values, there are \(k^{\frac{k}{2}+o(k)} \cdot 2^{g(k)v+o(v)}\) non-isomorphic \(k\) -gonal biembeddings of \(K_v\) where \(g(k)\) is a rational function of \(k\) . Finally, in Section 5, we consider the embeddings of \(K_{\frac{v}{t}\times t}\) . In this case, for \(t\in \lbrace 1,2,k\rbrace \) , we provide a construction of \(2^{h(k)v+o(v,k)}\) non-isomorphic \(k\) -gonal biembeddings whenever \(k\) is odd, \(v\) belongs to a wide infinite family of values and where \(h(k)\) is a rational function of \(k\) .
| [3] | [[377, 380]] | https://openalex.org/W2023344889 |
02c76d2c-ee56-4772-84a1-f425e14770d2 | An \(m\times n\) partially filled (p.f., for short) array on a given set \(\Omega \) is an \(m\times n\) matrix with elements in \(\Omega \)
in which some cells can be empty. Archdeacon [1]} introduced a class of p.f. arrays, called Heffter arrays, and showed how it is related to several other mathematical concepts such as difference families, graph decompositions, current graphs and biembeddings.
These arrays were then generalized by Costa et al. in [2]} as follows.
| [1] | [[190, 193]] | https://openalex.org/W2963868215 |
e6b2e4a6-b467-4bdc-a706-f71f635a6feb | Reasoning as in [1]}, we get the following.
| [1] | [[16, 19]] | https://openalex.org/W3034195266 |
616184ad-dab5-4012-96f4-86573531150f | The Crazy Knight's Tour Problem for a given array \(A\) is denoted by \(P(A)\) ; known results can be found in [1]}.
Also, given a filled cell \((i,j)\) , if \(L_{\mathcal {R},\mathcal {C}}(i,j)\) covers all the filled positions of \(A\) , we will
say that \((\mathcal {R},\mathcal {C})\) is a solution of \(P(A)\) .
The relationship between the Crazy Knight's Tour Problem and (globally simple) relative Heffter arrays is explained in the following result,
see [2]}.
| [1] | [[112, 115]] | https://openalex.org/W3134379180 |
d9b68799-f47a-434e-af66-917a84522755 | We consider now the family of embeddings of \(K_v\) obtained by Cavenagh, Donovan, and Yazıcı in [1]}. In their constructions, all the face boundaries are cycles of length \(k\) .
| [1] | [[98, 101]] | https://openalex.org/W3090730147 |
13d89151-3bb6-449d-b46a-701feed00561 | (1)
\(t\in \lbrace 1,2\rbrace \) and \(k\equiv 0\pmod {4}\) [1]}, [2]};
(2)
\(t\in \lbrace 1,2\rbrace \) , \(k\equiv 1\pmod {4}\) and \(n\equiv 3\pmod {4}\) [3]}, [4]};
(3)
\(t\in \lbrace 1,2\rbrace \) , \(k\equiv 3\pmod {4}\) and \(n\equiv 0,1\pmod {4}\) [1]};
(4)
\(t=k\) , \(k\equiv 1\pmod {4}\) and \(n\equiv 3\pmod {4}\) [6]};
(5)
\(t=k\) , \(k\equiv 3\pmod {4}\) and \(n\equiv 0,3\pmod {4}\) [6]};
(6)
\(t\in \lbrace n,2n\rbrace \) , \(k=3\) and \(n\) is odd [8]}.
| [3] | [[161, 164]] | https://openalex.org/W2755624844 |
6f30262e-f312-43a3-ae96-a6bdc2eda6b0 | The second class of techniques are overlay-based set visualizations.
Here, all the elements are explicitly shown at pre-defined positions in the plane, which could represent geolocations in spatial datasets or positions of nodes in a node-link diagram of underlying network data.
The techniques create visual overlays that indicate the set memberships of the input point set.
Bubble Sets [1]} create isocontours for each set, which enclose their corresponding points similarly to regions in Euler diagrams.
Pairs of isocontours, however, may intersect even if their two sets have an empty intersection in the data.
LineSets [2]} is based on the idea of connecting the elements of each set by a smooth and short curve reminiscent of lines in metro maps.
Kelp Diagrams [3]} and KelpFusion [4]} extend and combine ideas from LineSets and Bubble Sets by parameterizing set representations from sparse spanning graphs to convex hulls, with the middle range resulting in bubble shapes for local point clusters in a set and edge-like links to connect elements further apart.
All overlay techniques inherently require a given embedding of the set elements and so these methods are not directly applicable for abstract set systems.
| [2] | [[624, 627]] | https://openalex.org/W2159205441 |
38c0f828-7c50-49b4-bc75-158233373b45 | Visualizations based on bipartite graphs represent elements and sets as two types of vertices.
Each element is connected by an edge to all sets containing that element, which yields a bipartite graph.
General visualization techniques for (bipartite) graphs can then be employed and have been integrated in several systems [1]}, [2]}, [3]}, [4]}.
However, for complex set systems, the resulting layouts are dense with many edge crossings and there is little support for set-related tasks [5]}.
| [5] | [[487, 490]] | https://openalex.org/W2180041965 |
cca408c8-99db-45dc-a027-e88122dfd40c | The archetypal analysis formulation we consider adds a constraint
and forces the endmembers to be convex combinations of the pixels present in
\(\mathbf {X}\) . Formally, it simply means that there exists a
matrix \(\mathbf {B}\) in \(\mathbb {R}^{N \times p}\) such that \(\mathbf {E}= \mathbf {X}\mathbf {B}\) and the columns of \(\mathbf {B}\)
are in the simplex \(\Delta _N\) (their entries are non-negative and sum to one).
This formulation yields the archetypal analysis formulation [1]}:
\(\min _{\mathbf {A},\mathbf {B}} \ \frac{1}{2}\Vert \mathbf {X}- \mathbf {X}\mathbf {B}\mathbf {A}\Vert _F^2\)
\(\text{s.t. } \mathbf {a}_i \in \Delta _p \text{ for } 1\le i \le N,\)
\(\text{and } \mathbf {b}_j \in \Delta _N \text{ for } 1\le j \le p,\)
where \(\mathbf {A}= [\mathbf {a}_{1}, \ldots , \mathbf {a}_{N}]\) and \(\mathbf {B}= [\mathbf {b}_{1}, \ldots ,\mathbf {b}_{p}]\) .
| [1] | [[494, 497]] | https://openalex.org/W4253763636 |
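A small numeric illustration of the archetypal analysis objective reconstructed above, with random simplex-constrained factors (dimensions are arbitrary; this only evaluates the objective, it is not the optimization algorithm of the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, p = 30, 200, 5                       # bands, pixels, archetypes
X = rng.random((m, N))

B = rng.dirichlet(np.ones(N), size=p).T    # (N, p): columns lie in the simplex Delta_N
A = rng.dirichlet(np.ones(p), size=N).T    # (p, N): columns lie in the simplex Delta_p

E = X @ B                                  # archetypes are convex combinations of pixels
objective = 0.5 * np.linalg.norm(X - E @ A, "fro") ** 2
print(objective)
```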
69308d7f-c6ba-4b62-84e6-778ed671b2c5 | We will now see how the negative entropy \(h\) yields
explicit steps that effectively enforce the simplicial constraints.
Indeed, after a short classical calculation (see, for instance [1]}),
it is possible to show that the update (REF ) is equivalent to the
following one: for all \(j\) in \(\lbrace 1, \ldots , d\rbrace \) ,
\( \mathbf {z}_j^{k+1} = \frac{\mathbf {z}_j^k e^{-\eta _k \nabla f(\mathbf {z}^k)_j}}{\sum _{l=1}^d \mathbf {z}_l^k e^{-\eta _k \nabla f(\mathbf {z}^k)_l}},\)
| [1] | [[186, 189]] | https://openalex.org/W2016384870 |
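The multiplicative update above (exponentiated-gradient / entropic mirror descent) is easy to state in code; a sketch with a generic gradient, where the step keeps the iterate on the simplex without any explicit projection:

```python
import numpy as np

def entropic_md_step(z, grad, eta):
    """One step: z_j <- z_j * exp(-eta * grad_j), then renormalize to sum to one."""
    w = z * np.exp(-eta * grad)
    return w / w.sum()

# Usage on f(z) = 0.5 * ||z - c||^2 over the simplex (c already lies in the simplex).
c = np.array([0.7, 0.2, 0.1])
z = np.full(3, 1.0 / 3.0)
for _ in range(200):
    z = entropic_md_step(z, grad=z - c, eta=0.5)
print(z)   # approaches c
```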
1c8de755-e50e-4343-8c97-d1f13157a160 | In this section, we introduce the developed dataset and compare pre-trained language models [1]}, [2]} on our dataset DocBank-TB to understand the extent to which table captions can be generated automatically.
Specifically, we compared which table parts to use
and which retrieval methods to find relevant sentences,
via the caption prediction task,
in which,
given the first sentence in a caption,
the prediction accuracy on the remaining sentences was measured.
| [1] | [[92, 95]] | https://openalex.org/W3082274269 |
8903cc70-0426-4d5a-8525-6fcbdfa4e6c6 | In this work, we propose a fine-tuning technique, Custom Diffusion for text-to-image diffusion models. Our method is computationally and memory efficient. To overcome the above-mentioned challenges, we identify a small subset of model weights, namely the key and value mappings from text to latent features in the cross-attention layers [1]}, [2]}. Fine-tuning these is sufficient to update the model with the new concept. To prevent model forgetting, we use a small set of real images with similar captions as the target images. We also introduce augmentation during fine-tuning, which leads to faster convergence and improved results. To inject multiple concepts, our method supports training on all of them simultaneously or training them separately and then merging.
| [1] | [[336, 339]] | https://openalex.org/W2963403868 |
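A minimal sketch of the weight selection described in the excerpt above, i.e. making only the cross-attention key/value projections trainable. It assumes diffusers-style module names (`attn2`, `to_k`, `to_v`); for other U-Net implementations the name test would need to change, and this is not the authors' released code.

```python
import torch

def enable_crossattn_kv_finetuning(unet: torch.nn.Module):
    """Freeze all parameters, then unfreeze only the key/value projections of
    cross-attention blocks (named `attn2.to_k` / `attn2.to_v` in diffusers)."""
    for p in unet.parameters():
        p.requires_grad_(False)
    trainable = []
    for name, p in unet.named_parameters():
        if "attn2.to_k" in name or "attn2.to_v" in name:
            p.requires_grad_(True)
            trainable.append(name)
    return trainable

# optimizer = torch.optim.AdamW((p for p in unet.parameters() if p.requires_grad), lr=1e-5)
```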
c538cf62-0205-4324-b190-9da0f34cf8f4 | Deep generative models Generative models aim to synthesize samples from a data distribution, given a set of training examples. These include GANs [1]}, [2]}, [3]}, VAEs [4]}, auto-regressive [5]}, [6]}, [7]}, flow-based [8]}, [9]}, and diffusion models [10]}, [11]}, [12]}. To improve controllability, these models can be conditioned on a class [3]}, [14]}, image [15]}, [16]}, [17]}, [18]}, or text prompt [19]}, [20]}, [21]}. Our work mainly relates to text-conditioned synthesis [22]}. While earlier works [23]}, [24]}, [19]}, [20]}, [27]}, [28]} were limited to a few classes [29]}, [30]}, recent text-to-image models [21]}, [32]}, [33]}, [34]}, [12]}, [36]}, [37]}, trained on extremely large-scale data, have demonstrated remarkable generalization ability. However, such models are by nature generalists and struggle to generate specific instances like personal toys or rare categories, e.g., “moongate”.
We aim to adapt these models to become specialists in new concepts.
| [10] | [[255, 259]] | https://openalex.org/W3036167779 |
1ce5eaca-b46f-420d-843b-32cb23c5ce70 | Deep generative models Generative models aim to synthesize samples from a data distribution, given a set of training examples. These include GANs [1]}, [2]}, [3]}, VAEs [4]}, auto-regressive [5]}, [6]}, [7]}, flow-based [8]}, [9]}, and diffusion models [10]}, [11]}, [12]}. To improve controllability, these models can be conditioned on a class [3]}, [14]}, image [15]}, [16]}, [17]}, [18]}, or text prompt [19]}, [20]}, [21]}. Our work mainly relates to text-conditioned synthesis [22]}. While earlier works [23]}, [24]}, [19]}, [20]}, [27]}, [28]} were limited to a few classes [29]}, [30]}, recent text-to-image models [21]}, [32]}, [33]}, [34]}, [12]}, [36]}, [37]}, trained on extremely large-scale data, have demonstrated remarkable generalization ability. However, such models are by nature generalists and struggle to generate specific instances like personal toys or rare categories, e.g., “moongate”.
We aim to adapt these models to become specialists in new concepts.
| [29] | [[582, 586]] | https://openalex.org/W1797268635 |
f43d0376-9850-4251-bf5a-90940a9a5073 | Deep generative models Generative models aim to synthesize samples from a data distribution, given a set of training examples. These include GANs [1]}, [2]}, [3]}, VAEs [4]}, auto-regressive [5]}, [6]}, [7]}, flow-based [8]}, [9]}, and diffusion models [10]}, [11]}, [12]}. To improve controllability, these models can be conditioned on a class [3]}, [14]}, image [15]}, [16]}, [17]}, [18]}, or text prompt [19]}, [20]}, [21]}. Our work mainly relates to text-conditioned synthesis [22]}. While earlier works [23]}, [24]}, [19]}, [20]}, [27]}, [28]} were limited to a few classes [29]}, [30]}, recent text-to-image models [21]}, [32]}, [33]}, [34]}, [12]}, [36]}, [37]}, trained on extremely large-scale data, have demonstrated remarkable generalization ability. However, such models are by nature generalists and struggle to generate specific instances like personal toys or rare categories, e.g., “moongate”.
We aim to adapt these models to become specialists in new concepts.
| [30] | [[589, 593]] | https://openalex.org/W1861492603 |
d5eee6d9-15f5-4910-851d-6b1fb5f25121 | Adapting text-to-image models. Similar to our goals, two concurrent works, DreamBooth [1]} and Textual Inversion [2]}, adopt transfer learning to text-to-image diffusion models [3]}, [4]} via either fine-tuning all the parameters [1]} or introducing and optimizing a word vector [2]} for the new concept. Our work differs in several aspects. First, our work tackles a challenging setting: compositional fine-tuning of multiple concepts, where concurrent works struggle. Second, we only fine-tune a subset of cross-attention layer parameters, which significantly reduces the fine-tuning time. We find these design choices lead to better results, validated by automatic metrics and human preference studies.
<FIGURE> | [3] | [[179, 182]] | https://openalex.org/W4281485151 |
2a256fd0-eb6e-425f-be56-d3967d25fae1 | Adapting text-to-image models. Similar to our goals, two concurrent works, DreamBooth [1]} and Textual Inversion [2]}, adopt transfer learning to text-to-image diffusion models [3]}, [4]} via either fine-tuning all the parameters [1]} or introducing and optimizing a word vector [2]} for the new concept. Our work differs in several aspects. First, our work tackles a challenging setting: compositional fine-tuning of multiple concepts, where concurrent works struggle. Second, we only fine-tune a subset of cross-attention layer parameters, which significantly reduces the fine-tuning time. We find these design choices lead to better results, validated by automatic metrics and human preference studies.
<FIGURE> | [4] | [[185, 188]] | https://openalex.org/W4226317937 |
d84017fe-b3ab-4ece-9753-13e52cf7d664 | Regularization dataset.
Fine-tuning on the target concept and text caption pair can lead to the issue of language drift [1]}, [2]}. For example, training on “moongate” will lead to the model forgetting the association of “moon” and “gate” with their previously trained visual concepts, as shown in Figure REF . Similarly, training on a personalized concept of V\(^{*}\) tortoise plushy can leak, causing all examples with plushy to produce the specific target images. To prevent this, we select a set of 200 regularization images from the LAION-400M [3]} dataset with corresponding captions that have a high similarity with the target text prompt, above threshold \(0.85\) in CLIP [4]} text encoder feature space.
<FIGURE> | [4] | [[685, 688]] | https://openalex.org/W3135367836 |
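A hedged sketch of the regularization-set selection described above: rank candidate captions by CLIP text-feature similarity to the target prompt and keep those above the 0.85 threshold. It uses the Hugging Face `transformers` CLIP text encoder as a stand-in; the model name and helper are illustrative, not the authors' pipeline.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def select_regularization_captions(target_prompt, candidate_captions, thresh=0.85, k=200):
    """Keep up to k candidate captions whose CLIP text features have cosine
    similarity >= thresh with the target prompt."""
    with torch.no_grad():
        inputs = tokenizer([target_prompt] + list(candidate_captions),
                           padding=True, truncation=True, return_tensors="pt")
        feats = model.get_text_features(**inputs)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        sims = feats[1:] @ feats[0]          # similarity of each candidate to the target
    order = torch.argsort(sims, descending=True)
    return [candidate_captions[int(i)] for i in order if float(sims[i]) >= thresh][:k]
```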
91955f1e-60b4-4d57-8f7b-07c055745a5b | Baselines.
We compare our method with the two concurrent works, DreamBooth [1]} (third-party implementation [2]}) and Textual Inversion [3]}. DreamBooth fine-tunes all the parameters in the diffusion model, keeping the text transformer frozen, and uses generated images as the regularization dataset. Each target concept is represented by a unique identifier, followed by its category name, i.e., “[V] category”, where [V] is a rarely occurring token in the text token space and not optimized during fine-tuning. Textual Inversion optimizes a new V\(^*\) token for each new concept. We also compare with the competitive baseline of Custom Diffusion (w/ fine-tune all), where we fine-tune all the parameters in the U-Net [4]} diffusion model, along with the V\(^*\) token embedding in our method. We provide implementation details for all baselines in the supplement.
| [3] | [[138, 141]] | https://openalex.org/W4289785095 |
8e2554d3-a09f-4f13-9436-3bb0fb00ca14 | Quantitative evaluation.
We evaluate on 20 text prompts and 50 samples per prompt for each dataset, resulting in a total of 1000 generated images. We use DDPM sampling with 50 steps and a classifier-free guidance scale of 6 across all methods. As shown in Figure REF , our method outperforms the concurrent methods [1]}, [2]}. Also, Custom Diffusion works on par with our proposed baseline of fine-tuning all the weights in the diffusion model, while being more computationally and time efficient. Table REF also shows the KID of generated images by each fine-tuned model on a reference dataset, with captions similar to the fine-tuned concept. As we observe, our method has lower KID than most baselines, which suggests less overfitting to the target concept.
| [1] | [[317, 320]] | https://openalex.org/W4293342478 |
01f4d80a-4d64-4fc7-855c-9c217aabb329 |
We propose a new FL algorithm, named FEDL, under the sole assumption of strongly convex and smooth loss functions. The crux of FEDL is a new local surrogate function, which is designed for each UE to solve its local problem approximately up to a local accuracy level \(\theta \) , and is characterized by a hyper-learning rate \(\eta \) . Using primal convergence analysis, we show the linear convergence rate of FEDL by controlling \(\eta \) and \(\theta \) , which also provides the trade-off between the number of local computation rounds and global communication rounds. We then implement FEDL, using both strongly convex and non-convex loss functions, in PyTorch to verify its performance with several federated datasets. The experimental results show that FEDL outperforms the vanilla FedAvg [1]} in terms of training loss, convergence rate and test accuracy.
We propose a resource allocation problem for FEDL over wireless networks to capture the trade-off between the wall clock training time of FEDL and UE energy consumption by using the Pareto efficiency model. To handle the non-convexity of this problem, we exploit its special structure to decompose it into three sub-problems. The first two sub-problems relate to UE resource allocation over wireless networks, which are transformed to be convex and solved separately; then their solutions are used to obtain the solution to the third sub-problem, which gives the optimal \(\eta \) and \(\theta \) of FEDL. We derive their closed-form solutions, and characterize the impact of the Pareto-efficient controlling knob to the optimal: (i) computation and communication training time, (ii) UE resource allocation, and (iii) hyper-learning rate and local accuracy. We also provide extensive numerical results to examine the impact of UE heterogeneity and Pareto curve of UE energy cost and wall clock training time.
| [1] | [[784, 787]] | https://openalex.org/W2541884796 |
338fd3f1-9bb6-4861-bb9a-698471d973cd | The first approach is based on the de facto algorithm, SGD, with a fixed number of local iterations on each device [1]}. Despite its feasibility, these studies still have limitations, such as lacking a convergence analysis. The work in [2]}, on the other hand, used GD and additional assumptions on Lipschitz local functions and bounded gradient divergence to prove convergence of the algorithm.
| [2] | [[226, 229]] | https://openalex.org/W2963318081 |
336a89c0-829d-4b93-a828-bc7fb693b33f |
The convergence of FEDL can always be obtained by setting sufficiently small values of both \(\eta \) and \(\theta \in (0, 1)\) such that \(\Theta \in (0,1)\) . While the denominator of (REF ) is greater than 2, its numerator can be rewritten as \(2 \eta (A - B)\) , where \(A = 2(\theta -1)^2- (\theta +1)\theta (3\eta +2)\rho ^2\) and \(B = (\theta +1)\eta \rho ^2\) . Since \(\lim _{\theta \rightarrow 0} A = 2\) and \(\lim _{\theta ,\eta \rightarrow 0} B = 0\) , there exists small values of \(\theta \) and \(\eta \) such that \(A-B > 0\) , thus \(\Theta > 0\) . On the other hand, we have \(\lim _{\eta \rightarrow 0} \Theta = 0\) ; thus, there exists a small value of \(\eta \) such that \(\Theta < 1\) . We note that \(\Theta \in (0,1)\) is only the sufficient condition, but not the necessary condition, for the convergence of FEDL. Thus, there exist possible hyper-parameter settings such that FEDL converges but \(\Theta \notin (0,1)\) .
There is a convergence trade-off between the number of local and global rounds characterized by \(\theta \) : small \(\theta \) makes large \(K_l\) , yet small \(K_g\) , according to (REF ) and (REF ), respectively. This trade-off was also observed by authors of [1]}, though their technique (i.e., primal-dual optimization) is different from ours.
While \(\theta \) affects both local and global convergence, \(\eta \) only affects the global convergence rate of FEDL. If \(\eta \) is small, then \(\Theta \) is also small, thus inducing large \(K_g\) . However, if \(\eta \) is large enough, \(\Theta \) may not be in \((0,1)\) , which leads to the divergence of FEDL. We call \(\eta \) the hyper-learning rate for the global problem (REF ).
The condition number \(\rho \) also affects the FEDL convergence: if \(\rho \) is large (i.e., a poorly conditioned problem (REF )), both \(\eta \) and \(\theta \) should be sufficiently small in order for \(\Theta \in (0,1)\) (i.e., a slow convergence rate). This observation is well aligned with traditional optimization convergence analysis [2]}.
In this work, the theory of FEDL is applicable to (i) full data passing using GD, (ii) the participation of all UEs, and (iii) strongly convex and smooth loss functions. However, using mini-batch is a common practice in machine learning to reduce the computation load at the UEs. On the other hand, choosing a subset of participating UEs in each global iteration is a practical approach to reduce the straggler effect, in which the run-time of each iteration is limited by the “slowest” UEs (the straggler) because heterogeneous UEs compute and communicate at different speeds. Finally, non-convex loss functions capture several essential machine learning tasks using neural networks. In Section , we will experimentally show that FEDL works well with (i) mini-batch, (ii) subset of UEs samplings, and (iii) non-convex loss functions.
| [1] | [[1223, 1226]] | https://openalex.org/W2962741697 |
8ae2a4a5-8741-43a5-b43a-7aba5f1cbb11 | Minimizing both UEs' energy consumption and the FL time are conflicting objectives. For example, the UEs can save energy by setting the lowest frequency level all the time, but this will certainly increase the training time. Therefore, to strike the balance between energy cost and training time, the weight \(\kappa \) (Joules/second), used in the objective as an amount of additional energy cost that FEDL is willing to bear for one unit of training time to be reduced, captures the Pareto-optimal tradeoff between the UEs' energy cost and the FL time. For example, when most of the UEs are plugged in, then UE energy is not the main concern, thus \(\kappa \) can be large. According to optimization theory, \(1/\kappa \) also plays the role of a Lagrange multiplier for a “hard constraint” on UE energy [1]}.
| [1] | [
[
800,
803
]
] | https://openalex.org/W2296319761 |
69005716-9bca-4d2b-8713-babbecac7364 | Most recently, there have been some other approaches particularly designed to solve state classification challenge [1]}, [2]}, [3]}, [4]}. These models are based on pretrained models like VGG[5]}, ResNet[6]}, Inception network [7]}. The current state recognition challenge requires to classify the states without using any pretrained model. That is why we have fine-tuned our parameters by thorough analysis and tried to find the best configuration for our case. Then, we have trained the model from scratch.
| [5] | [
[
191,
194
]
] | https://openalex.org/W1686810756 |
d8ffa33d-0fe6-4d1a-8dcc-ea33a81a3358 | Most recently, there have been some other approaches particularly designed to solve state classification challenge [1]}, [2]}, [3]}, [4]}. These models are based on pretrained models like VGG[5]}, ResNet[6]}, Inception network [7]}. The current state recognition challenge requires to classify the states without using any pretrained model. That is why we have fine-tuned our parameters by thorough analysis and tried to find the best configuration for our case. Then, we have trained the model from scratch.
| [7] | [
[
227,
230
]
] | https://openalex.org/W2097117768 |
7f07dc70-7608-4005-8a60-4aca7eefbd85 | Image generation in various forms has been a long quest in the vision community, not only for its practical use and aesthetic value, but also for assessing our model's ability to disentangle and exploit different aspects of image variability. Separating content and style in image generation dates back to the seminar work of Tenenbaum and Freeman [1]} more than twenty years ago. Since then, resorting to a hierarchical architecture has enabled us to generate images of more structured objects such as human faces [2]}, [3]}. More recently, adversarial training [4]} and improved model architecture design [5]} have brought the quality of computer-generated images to a level that is oftentimes indistinguishable from real images to our eyes. Besides generating photo-realistic images, we can also generate images of various artistic styles by learning from artwork examples [6]}, [7]}, [8]}.
| [3] | [
[
521,
524
]
] | https://openalex.org/W1959608418 |
6d15160c-6938-4a9c-9a84-1cd7d1acb506 | Generative Adversarial Networks. Generative Adversarial Networks [1]} have achieved impressive performance in lots of computer vision tasks [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}. StyleGAN [10]} proposed an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. StyleGAN and other state-of-the-art works [6]}, [12]} can generate high quality synthetic images which are almost indistinguishable from real images. [13]} significantly stabilizes training in limited data regimes by an adaptive discriminator augmentation mechanism. [13]} also proposes an artistic styles dataset and they show good image generation result on it by only using limited training data. But the learning process doesn't account for the differences between training examples which can't provide fine-grained style control. Unlike the unconditional (original) GAN, conditional GANs use conditional information for the generator and discriminator. GAN-based techniques have also been widely explored in the image-to-image translation task. For supervised training, Pix2Pix [4]} uses L1 loss with a cGAN framework to learn the mapping network from source domain images to target domain images. For unsupervised training, in order to get rid of the dependency on paired data, a cycle-consistency loss was proposed by CycleGAN [2]}. We show our framework is general in the sense that the components can be used in either image synthesis framework (StyleGAN) or image-to-image translation framework (CycleGAN). Unlike the unconditional (original) GAN, conditional GANs use conditional information for the generator and discriminator. Instead of naively concatenating conditional information to the input and feeding them to discriminators, [17]} proposed a novel, projection based method to incorporate the conditional information into GAN discriminators.
| [4] | [
[
152,
155
],
[
1112,
1115
]
] | https://openalex.org/W2963073614 |
fb5f9e57-ab10-4a7e-8ae1-add74f12f506 | Generative Adversarial Networks. Generative Adversarial Networks [1]} have achieved impressive performance in lots of computer vision tasks [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}. StyleGAN [10]} proposed an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. StyleGAN and other state-of-the-art works [6]}, [12]} can generate high quality synthetic images which are almost indistinguishable from real images. [13]} significantly stabilizes training in limited data regimes by an adaptive discriminator augmentation mechanism. [13]} also proposes an artistic styles dataset and they show good image generation result on it by only using limited training data. But the learning process doesn't account for the differences between training examples which can't provide fine-grained style control. Unlike the unconditional (original) GAN, conditional GANs use conditional information for the generator and discriminator. GAN-based techniques have also been widely explored in the image-to-image translation task. For supervised training, Pix2Pix [4]} uses L1 loss with a cGAN framework to learn the mapping network from source domain images to target domain images. For unsupervised training, in order to get rid of the dependency on paired data, a cycle-consistency loss was proposed by CycleGAN [2]}. We show our framework is general in the sense that the components can be used in either image synthesis framework (StyleGAN) or image-to-image translation framework (CycleGAN). Unlike the unconditional (original) GAN, conditional GANs use conditional information for the generator and discriminator. Instead of naively concatenating conditional information to the input and feeding them to discriminators, [17]} proposed a novel, projection based method to incorporate the conditional information into GAN discriminators.
| [17] | [
[
1775,
1779
]
] | https://openalex.org/W2962754210 |
2f3357f0-52bd-4baf-b3fa-145f0d00471e | Control for image generation. Many approaches [1]}, [2]}, [3]}, [4]}, [5]} exploit the inherent disentanglement properties of GAN latent space to control the generated images. InfoGAN [6]} maximizes the mutual information between the observation and a subset of the latent variables. InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset in a completely unsupervised manner. Some methods [7]}, [8]}, [9]} were proposed to control face image generation explicitly. However, they are only applicable to the controls parameterized by a 3D face model. [10]} uses contrastive learning to obtain GANs with an explicitly disentangled latent space. In the face domain, they can control over pose, age, identity, etc.. In the painted portrait domain, they can also control the style. But similar to [7]}, [8]}, there is a deterioration in FID when control is introduced [10]}. Compared with these methods, our method improves the quality of the generated images over the original non-control version without the need of 3D face rending framework and we embed original artwork examples into a continuous so we can interpolate between them (moving in the style space) at test time. There are some works designed for neural style transfer. [14]} aligns the mean and variance of the content features with those of the style features by adaptive instance normalization (AdaIN). They use an encoder to embed all style images in a continuous space and can achieve style transfer by interpolating between arbitrary styles. However this kind of methods can't generate high quality facial images by transferring the style from training artistic styles examples, because it's more likely to transfer the texture styles to the target image. Compared with it, our method incorporates the projection discriminator to make the generator output images which close to the distribution of real style images and provide the fine-grained style control.
| [10] | [
[
582,
586
],
[
895,
899
]
] | https://openalex.org/W3120777408 |
2df5e90f-897e-4a4e-8ccc-5e8083391a53 | Overview. We start by creating an embedding of the training images that reflects their continuously varying artistic style. Vectors in the style embedding space (that we call style vectors) are subsequently used in the image generator and the extended projection discriminator [1]} that we use to help training. To be more specific we introduce these components in the context of StyleGAN [2]}, [3]}. Our modules can be used with image translation frameworks as well such as CycleGAN (see supplementary materials).
| [2] | [
[
389,
392
]
] | https://openalex.org/W2962770929 |
e0208484-5a33-414a-b822-debe9e59dd65 | Overview. We start by creating an embedding of the training images that reflects their continuously varying artistic style. Vectors in the style embedding space (that we call style vectors) are subsequently used in the image generator and the extended projection discriminator [1]} that we use to help training. To be more specific we introduce these components in the context of StyleGAN [2]}, [3]}. Our modules can be used with image translation frameworks as well such as CycleGAN (see supplementary materials).
| [3] | [
[
395,
398
]
] | https://openalex.org/W3035574324 |
1e7cae02-25aa-449b-9dc4-8b037ec21e94 | We hope to embed original artwork examples into a continuous style space which reflects their slightly varying artistic style. The Gram matrix has been widely used to extract style information from images [1]}, [2]}, it consists of the correlations between different filter responses. We use VGG-19 CNN architecture pre-trained for the ImageNet [3]} classification task to extract Gram matrix features as described in the following. With an artwork image as input, we compute the the activations for each convolution layer of the network.
| [2] | [
[
211,
214
]
] | https://openalex.org/W2475287302 |
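As a rough illustration of this style-feature extraction, the sketch below computes Gram matrices from VGG-19 activations with PyTorch. The particular layer indices, the normalization, and the weights argument are assumptions for illustration, not the configuration used in the work above.

```python
import torch
import torchvision.models as models

# Load VGG-19 features; the exact weights argument depends on the torchvision version.
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlation matrix of one feature map (batch size 1 assumed)."""
    b, c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)   # normalized Gram matrix, shape (c, c)

@torch.no_grad()
def style_features(image: torch.Tensor, layer_ids=(1, 6, 11, 20, 29)):
    """Collect Gram matrices after a few (assumed) activation layers of VGG-19."""
    grams, x = [], image
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in layer_ids:
            grams.append(gram_matrix(x))
    return grams

# Example: a random tensor standing in for a real artwork image.
img = torch.rand(1, 3, 224, 224)
print([g.shape for g in style_features(img)])
```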
9cdcf903-2adb-4ef3-95c0-fae42cc11bb6 | One of the basic building block of a generator \(G\) is the adaptive instance normalization (AdaIN) module [1]} defined as:
\(\mathbf {AdaIN}(\mathbf {f}, \gamma , \beta ) = \gamma (\frac{\mathbf {f}-\mu (\mathbf {f})}{\sigma (\mathbf {f})})+\beta \)
| [1] | [
[
108,
111
]
] | https://openalex.org/W2603777577 |
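A minimal PyTorch sketch of this operation (ours, for illustration; the per-channel statistics follow the definition above, and the small epsilon is an added numerical-stability assumption) could look as follows:

```python
import torch

def adain(f: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization: normalize each channel of f per sample,
    then rescale by gamma and shift by beta (both broadcast over H and W)."""
    mu = f.mean(dim=(2, 3), keepdim=True)          # per-sample, per-channel mean
    sigma = f.std(dim=(2, 3), keepdim=True) + eps  # per-sample, per-channel std
    return gamma * (f - mu) / sigma + beta

# Example: a style vector mapped to per-channel (gamma, beta) for a 64-channel map.
f = torch.randn(4, 64, 32, 32)
gamma = torch.rand(4, 64, 1, 1)
beta = torch.randn(4, 64, 1, 1)
print(adain(f, gamma, beta).shape)  # torch.Size([4, 64, 32, 32])
```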
9af9fba3-f809-48fa-91a2-8428403b54bc |
Metfaces [1]} with 1,336 high-quality \(1024 \times 1024\) artistic faces collected from the Metropolitan Museum of Art.
Selfie2anime with 3,400 anime images for training, with an image size of \(256 \times 256\) . For this dataset we follow the same experimental setting as in UGATIT [2]}.
We also collected our own extremely challenging game-figure dataset with only 78 images. All images are collected from the Internet and resized to \(512 \times 512\) .
| [1] | [
[
10,
13
]
] | https://openalex.org/W3104876213 |
5640c5a5-e2a7-4f32-91d4-112763c39d68 | This example generalizes, recovering a result of [1]} about theta functions of monomial curves.
| [1] | [
[
49,
52
]
] | https://openalex.org/W1983011652 |
c065b82d-d30d-4af8-af09-19b367a33b13 | Conversely, suppose that \(\Lambda _C=0\) . Then it must be that \(\Lambda _{\widetilde{C}}=0\) as well. Since \(\widetilde{C}\) is a smooth curve, this implies that it is a union of projective lines, hence every component of \(C\) is rational.
Hence, \(\operatorname{Jac}(\widetilde{C})=0\) and then [1]} shows that \(\operatorname{Jac}(C^{\prime })\) is a torus. But it is also unipotent because \(\Lambda _{C^{\prime }}=0\) . Thus, it must be trivial, and [1]} again proves that this is possible only if \(C\) is tree-like and every component has all unibranch singularities.
| [1] | [
[
305,
308
],
[
464,
467
]
] | https://openalex.org/W4234505739 |
195658c1-2392-42c4-97b8-8a5ae7e6573a | Values for the hyperparameters were obtained from preliminary experiments by evaluating the model performance on the development sets.
Concretely, all neural and hybrid models used the same \(D_m = 300\) -dimensional word embeddings, the hidden-size of the LSTM was set to \(D_h = 100\) , the hidden-layer size of the attention module was set to \(D_a = 100\) , and the size of the low-dimensional projection of the sparse features was set to \(D_l=50\) .
We used Adam [1]} as our optimisation method with a learning rate of \(0.001\) , a mini-batch size of \(1,000\) , and iterated over the training data for five epochs.
As a regularizer we used dropout [2]} with probability \(0.5\) applied to the mention representation and sparse feature representation.
The context window size was set to \(C=10\) , and if the length of a context extended beyond the sentence length, we used a padding symbol in place of a word.
After training, we picked the best model on the development set as our final model and report their performance on the test sets.
Our model implementation was done in Python using the TensorFlow [3]} machine learning library.
| [1] | [
[
469,
472
]
] | https://openalex.org/W2964121744 |
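For orientation, a hedged TensorFlow/Keras sketch of how such settings could be wired up is given below. Only the hyper-parameters quoted above (dimensions, optimizer, dropout, batching, epochs) are taken from the text; the model body is a deliberately simplified placeholder, not the actual mention/attention architecture.

```python
import tensorflow as tf

D_M, D_H, D_A, D_L = 300, 100, 100, 50   # embedding, LSTM, attention, sparse-projection sizes

# Placeholder model: a bidirectional LSTM over the context with dropout, standing in
# for the full mention/attention architecture, which is not reproduced here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, D_M)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(D_H)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(D_A, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # output head is a placeholder
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")
# Training would then use mini-batches of 1,000 examples for five epochs:
# model.fit(x_train, y_train, batch_size=1000, epochs=5)
```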
d858f7a0-b3d8-4031-b9ed-6f4ac0313553 | In this work, we focus on the form of collision term for dense gases and consider only spinless particles, whcih is free from the complexity of handling spinor calculations. The Wigner function for spin-0 particles is defined as
[1]},
\(W(x,p)=4\pi \int \frac{d^4y}{(2\pi \hbar )^5} e^{-\frac{i}{\hbar }p\cdot y}\left\langle :\phi ^\dagger \left(x_1\right)\phi \left(x_2\right):\right\rangle \; , \)
| [1] | [
[
229,
232
]
] | https://openalex.org/W659068132 |
e375f6b5-3f16-42b9-8f6e-e8481564e051 | In ANN, deep learning uses a cascade of multiple layers for feature extraction [1]}. Convolutional neural networks (CNNs) are specific deep learning architecture that enhances state-of-the-art performance for various image tasks, such as classification, object detection, and pattern recognition [2]}. A CNN consists of an input layer, an output layer, and multiple hidden layers. The hidden layers of a CNN typically consist of convolutional layers, pooling layers, and fully connected layers. Convolutional layers apply a convolution operation to the input, followed by a non-linear activation function.
| [1] | [
[
79,
82
]
] | https://openalex.org/W2919115771 |
d29958db-3bf8-4135-9121-9c6776cde8b8 | In this report, the Hadamard method is used instead of convolution to extract the features in the images. The CNNs using the Hadamard method are implemented to show the effect of reduced energy consumption. In single-layer and three-layer CNNs, the performance of the Hadamard method is compared with convolution for MNIST [1]} and CIFAR10 [2]} datasets. In this paper, Section explains the Hadamard method with WHT. The CNNs architecture with the Hadamard method is given in Section . Section discusses the simulation results of CNNs with the Hadamard method and convolution on MNIST and CIFAR10 datasets including energy saving for single channel and multi-channel features with various kernel sizes. Conclusions and future work are given in Section .
| [2] | [
[
340,
343
]
] | https://openalex.org/W3118608800 |
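As a rough sketch of the idea (ours; the patch size and usage are illustrative assumptions, not the exact pipeline of the report), a 2D Walsh-Hadamard transform of an image patch can replace the multiply-accumulate pattern of convolution because the Hadamard matrix contains only ±1 entries:

```python
import numpy as np
from scipy.linalg import hadamard

def wht2d(patch: np.ndarray) -> np.ndarray:
    """2D Walsh-Hadamard transform of a square patch whose side is a power of two."""
    n = patch.shape[0]
    H = hadamard(n)                 # entries are +1/-1, so only additions/subtractions
    return H @ patch @ H.T / n      # scaled so the transform is its own inverse

# Example: transform a random 4x4 patch and reconstruct it.
patch = np.random.rand(4, 4)
coeffs = wht2d(patch)
recon = wht2d(coeffs)
print(np.allclose(recon, patch))    # True
```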
32fc8ae3-e551-467a-919b-ea8979b9dbfb | Neutron stars are on the one side a challenge requiring to understand their properties i.e mass, radius, tidal deformability, structure etc. On the other hand, they can provide information about the dense, strongly interacting matter in a region of the phase diagram that is inaccessible to terrestrial experiments [1]}, [2]}, [3]}. Since the Tolman-Oppenheimer-Volkoff (TOV) equations [4]}, [5]} provide a direct relation between the equation of state (EoS) of the compact star matter and the mass-radius (M-R) relation of the compact star, these data can help to select those effective models, used to describe the strongly interacting matter, whose predictions are consistent with compact star observables. For example, the EoS must support the existence of a two-solar-mass neutron star with star radii in the permitted radius window of 11-12.5km, cf. [6]}, [7]}.
| [3] | [
[
327,
330
]
] | https://openalex.org/W4246383199 |
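To illustrate how an EoS is mapped to a mass-radius point through the TOV equations, here is a compact sketch in geometrized units (G = c = 1) with a toy polytropic EoS; every numerical value is an arbitrary illustration, not a realistic nuclear EoS, and the simple Euler integration is only for demonstration.

```python
import numpy as np

# Toy polytropic EoS in geometrized units (G = c = 1): p = K * eps**Gamma.
K, Gamma = 100.0, 2.0
def eps_of_p(p):                       # invert the polytrope for the energy density
    return (max(p, 0.0) / K) ** (1.0 / Gamma)

def tov_rhs(r, p, m):
    """Right-hand sides dp/dr and dm/dr of the TOV equations."""
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return dpdr, dmdr

def integrate_star(p_c, dr=1e-3):
    """Integrate outward from the center until the pressure (nearly) vanishes."""
    r, p, m = dr, p_c, 0.0
    while p > 1e-6 * p_c:              # stop close to the surface
        dpdr, dmdr = tov_rhs(r, p, m)
        p, m, r = p + dr * dpdr, m + dr * dmdr, r + dr   # simple Euler step
    return r, m                        # radius and gravitational mass

R, M = integrate_star(p_c=1e-3)
print(f"toy star: R = {R:.2f}, M = {M:.3f} (dimensionless units)")
```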
ec0a69f3-20dd-4815-8326-c9214524d199 | The atomic neural networks are able to describe the complex relation between the atomic energies and the local chemical environments of the atoms. These environments are described by vectors of many-body atom-centered symmetry functions (ACSF)[1]} \(\mathbf {G}_n^\alpha \) serving as structural fingerprints of the local geometry inside a cutoff sphere of radius \(R_\mathrm {c}\) . ACSFs represent a general transformation from the Cartesian coordinates \(\lbrace \mathbf {R}\rbrace \) to a translationally, rotationally, and permutationally invariant structural description based on interatomic distances and angles. Moreover, for all atoms of a given element, the ACSF vectors have the same dimensionality to ensure the applicability of the trained atomic neural networks to large-scale simulations of systems containing different numbers of atoms. As the ACSFs depend only on the elements and positions of the atoms, HDNNPs are able to describe the making and breaking of bonds. The parameters defining the spatial shapes of the radial and angular ACSFs can be adjusted to optimize the performance as described in the Supplementary Material. More detailed information about HDNNPs, ACSFs, their properties and their construction are provided in several reviews.[2]}, [3]}, [4]}, [5]}
| [5] | [
[
1286,
1289
]
] | https://openalex.org/W3141927472 |
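To give a concrete feel for such a fingerprint, the following sketch (our own illustration) evaluates a standard Behler-type radial symmetry function with the usual cosine cutoff; the eta and R_s values and the neighbor list are arbitrary and are not the parameters used in the study.

```python
import numpy as np

def cutoff(r, r_c):
    """Cosine cutoff function: smoothly decays to zero at the cutoff radius r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_acsf(r_ij, eta, r_s, r_c):
    """Behler-type radial symmetry function G2 for one central atom:
    sum over neighbor distances r_ij of Gaussians damped by the cutoff."""
    r_ij = np.asarray(r_ij, dtype=float)
    return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * cutoff(r_ij, r_c))

# Example: neighbor distances (in Bohr) of one atom, with illustrative parameters.
neighbors = [2.1, 2.3, 3.8, 5.0, 9.7, 11.2]   # the last one lies outside the cutoff
print(radial_acsf(neighbors, eta=0.5, r_s=0.0, r_c=10.5))
```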
a9b72e8a-43cd-4e24-a2f3-953d93f8a7da | For the generation of the reference data the local hybrid exchange-correlation functional PBE0r[1]}, [2]} including D3 dispersion corrections[3]}, [4]} was used in collinar spin-polarized DFT calculations. PBE0r considers only on-site Hartree-Fock exchange terms yielding an accurate description of the partially filled Mn d shell with a computational effort comparable to generalized gradient approximation functionals. The Car-Parrinello Projector Augmented-Wave (CP-PAW) code (version from September 28, 2016)[5]}, [6]} and the DFT-D3 code (version from June 14, 2016)[3]}, [4]} were employed using the same setup as in our previous studies.[9]}, [10]}
| [9] | [
[
644,
647
]
] | https://openalex.org/W3039309865 |
14415ab5-b7c7-45d2-85d3-9bf022a1de9d | The HDNNP and HDNNS were constructed using the RuNNer code (versions from October 19, 2020 and December 4, 2018, respectively).[1]}, [2]}, [3]} The architecture of the atomic neural networks is 180-25-20-15-1 for all elements in the HDNNP and 180-20-15-10-1 for Mn in the HDNNS, which is the only spin-polarized atom in the system. The parameters of the 180 radial and angular ACSFs per element with cutoff radius \(R_\mathrm {c}=10.5\,a_0\) are compiled in the Supplementary Material along with the description of a generally applicable scheme to adjust the parameter \(\eta \) of the ACSFs to the element-specific nearest-neighbor distances. \(a_0\) is the Bohr radius.
| [1] | [
[
127,
130
]
] | https://openalex.org/W2055526416 |
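For illustration only, an atomic neural network with the quoted 180-25-20-15-1 layout could be sketched as below (PyTorch; the activation choice is an assumption, since RuNNer's actual activation functions are not specified here):

```python
import torch
import torch.nn as nn

# One atomic network of the HDNNP: maps a 180-dimensional ACSF vector to one atomic energy.
atomic_net = nn.Sequential(
    nn.Linear(180, 25), nn.Tanh(),
    nn.Linear(25, 20), nn.Tanh(),
    nn.Linear(20, 15), nn.Tanh(),
    nn.Linear(15, 1),                 # atomic energy contribution
)

acsf_vectors = torch.rand(32, 180)                # fingerprints of 32 atoms of one element
total_energy = atomic_net(acsf_vectors).sum()     # HDNNP energy = sum of atomic energies
print(total_energy.item())
```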
054d3f22-4f3c-4694-b5dd-e6850ec23d5e | The HDNNP and HDNNS were constructed using the RuNNer code (versions from October 19, 2020 and December 4, 2018, respectively).[1]}, [2]}, [3]} The architecture of the atomic neural networks is 180-25-20-15-1 for all elements in the HDNNP and 180-20-15-10-1 for Mn in the HDNNS, which is the only spin-polarized atom in the system. The parameters of the 180 radial and angular ACSFs per element with cutoff radius \(R_\mathrm {c}=10.5\,a_0\) are compiled in the Supplementary Material along with the description of a generally applicable scheme to adjust the parameter \(\eta \) of the ACSFs to the element-specific nearest-neighbor distances. \(a_0\) is the Bohr radius.
| [2] | [
[
133,
136
]
] | https://openalex.org/W2746244909 |
08090bfa-aa57-46e7-b510-7e44c81e3689 | To assess the impact of the interface on the properties of the liquid, the system has to be sufficiently large to ensure the presence of a bulk-like region in the center of the liquid phase. This bulk-like region is not only important for comparing interfacial properties to those of the bulk, but also to obtain converged data for the interfacial properties. Figures REF (a) to (d) show the averaged atomic distributions in all four systems. The central region of the liquid phase is very similar for all different surfaces and shows only small fluctuations, which are much larger in the vicinity of the surfaces. The density of H\(_2\) O in the central 5 \(\mathrm {Å}\) slice of the liquid phase is 0.946 kg l\(^{-1}\) (\(c_{\mathrm {H}_2\mathrm {O}}=52.5\,\mathrm {mol\,l}^{-1}\) ). This density agrees very well with the density of bulk water simulations based on the HDNNP fitted to the PBE0r-D3 functional yielding 0.947 kg l\(^{-1}\) . Further, the properties of the hydrogen bond network are very similar in the centers of the liquid in all simulations (Supplementary Material). Consequently, the simulation cells, which all have a water region with a diameter of at least \(50\,\mathrm {Å}\) , are large enough to yield bulk properties in the center, which is in excellent agreement with previous studies on other solid-water interfaces.[1]}, [2]} An underestimation of the density compared to the experimental value of 0.997 kg l\(^{-1}\) at 298 K and 1 bar[3]} is common in DFT calculations and was also observed in a previous study of water based on the revPBE0-D3[4]}, [5]} DFT functional yielding 0.94 kg l\(^{-1}\) .[6]}
| [6] | [
[
1636,
1639
]
] | https://openalex.org/W2900475422 |
2cc9b9f9-b9e9-408b-b4ce-864c025eeff1 | At the \(\lbrace 100\rbrace _\mathrm {Li}\) interface the dominant PT mechanism is the hydroxylation of the solid surface (Figure REF (a)), which has been called “surface PT” in earlier work.[1]}, [2]}, [3]} An O\(^{2-}\) ion from the solid accepts one proton from an H\(_2\) O molecule such that two OH\(^-\) ions are formed (Figure REF (a)). The backward reactions most often happen between the same atoms (Figure REF (a)) since the OH\(^-\) ions are not long-living at the \(\lbrace 100\rbrace _\mathrm {Li}\) interface. PTs between H\(_2\) O molecules and OH\(^-\) ions in the first liquid layer, i.e., “adlayer PT”,[1]} happen rarely. Due to the effectively zero-dimensional proton transport by these PT reactions, the long-range transport is only given by diffusion and not by PT reactions following a Grotthus-like mechanism. In the liquid phase we can observe rare events of the autoionization of water, i.e., the formation of OH\(^-\) and H\(_3\) O\(^+\) ions from two H\(_2\) O molecules (very small green peaks between \(-15\,\mathrm {Å}<z<15\,\mathrm {Å}\) in Figure REF (a)). These PT processes in the bulk can also be observed in some of the other simulations which are not shown in Figure REF . The OH\(^-\) and H\(_3\) O\(^+\) ions always recombine very quickly. This behavior is expected as a pH value of 7 in bulk water would mean that only a concentration of \(10^{-7}\,\mathrm {mol\,l}^{-1}\) H\(_3\) O\(^+\) ions are present. Consequently, there would be around 0.1 H\(_3\) O\(^+\) ions in one of the \(5\cdot 10^4\) structures of a simulation trajectory including about \(10^3\) H\(_2\) O molecules each.
| [1] | [
[
193,
196
],
[
629,
632
]
] | https://openalex.org/W2600764488 |
e8e2548f-4ed6-4947-bf24-f0bb87923735 | We observe that when \(r=1\) , we obtain the linear case studied in [1]}, [2]}, [3]}. Instead values of \(r\) greater than one are useful both from an analytical and a physical point of view, as the power-type nonlinearity in the pairwise force function resembles a fractional derivative (see for instance [4]}, [5]}), and the well-posedness of the model is achieved in this setting (see [6]}, [4]}). Additionally, it could be easily generalized to the following more common nonlinearities used in [4]}:
\(f(\xi ,\eta ) = \frac{|\eta |^{p-2} \eta }{|\xi |^{2+\alpha p}},\qquad p\ge 2,\quad \alpha \in (0,1).\)
| [2] | [
[
74,
77
]
] | https://openalex.org/W1993499036 |
fde67cb0-bc96-4e87-bcf7-ce624b1def27 | On the other hand, the time integration of the model can be done by using explicit forward and backward difference techniques (see [1]}, [2]}, [3]}). The Störmer-Verlet method consists in an explicit central second-order finite difference scheme widely used in elastodynamics and in the context of wave propagation (see for example [4]}, [5]}, [6]}, [7]}, [8]}). It is a robust and symplectic scheme simple to implement which preserves geometric properties of the flow, such as the energy of the system, but it requires a restriction on the step size.
| [5] | [
[
338,
341
]
] | https://openalex.org/W3089129046 |
748867be-6c67-460c-92f6-621fc45b1189 | On the other hand, the time integration of the model can be done by using explicit forward and backward difference techniques (see [1]}, [2]}, [3]}). The Störmer-Verlet method consists in an explicit central second-order finite difference scheme widely used in elastodynamics and in the context of wave propagation (see for example [4]}, [5]}, [6]}, [7]}, [8]}). It is a robust and symplectic scheme simple to implement which preserves geometric properties of the flow, such as the energy of the system, but it requires a restriction on the step size.
| [6] | [
[
344,
347
]
] | https://openalex.org/W2962982010 |
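For concreteness, a minimal sketch (ours) of the central-difference Störmer-Verlet update for an undamped scalar oscillator is given below; the step-size restriction mentioned above corresponds to \(\omega \Delta t < 2\) for this problem. The specific frequency and step size are illustrative assumptions.

```python
import numpy as np

def stormer_verlet(accel, u0, v0, dt, n_steps):
    """Explicit central-difference (Stormer-Verlet) integration of u'' = accel(u)."""
    u = np.empty(n_steps + 1)
    u[0] = u0
    u[1] = u0 + dt * v0 + 0.5 * dt**2 * accel(u0)   # Taylor start for the first step
    for n in range(1, n_steps):
        u[n + 1] = 2.0 * u[n] - u[n - 1] + dt**2 * accel(u[n])
    return u

# Undamped oscillator u'' + omega^2 u = 0 with u(0) = 1, u'(0) = 0.
omega, dt = 2.0 * np.pi, 0.01
u = stormer_verlet(lambda x: -omega**2 * x, u0=1.0, v0=0.0, dt=dt, n_steps=1000)
t = dt * np.arange(u.size)
print(np.max(np.abs(u - np.cos(omega * t))))   # small error; the scheme diverges if omega*dt >= 2
```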
206bbce9-cd46-4ca4-9f19-3e75a63ba82a | Lemma 1 (see [1]})
For every real \(0\le \mu \le s\) , there exists a positive constant \(L\) such that
\(\left\Vert u-P_N u \right\Vert _{H_p^{\mu }(V)} \le L N^{\mu -s}\left\Vert u \right\Vert _{H^s_p(V)}, \quad \text{for every $u\in H_p^{s}(V)$}.\)
| [1] | [
[
13,
16
]
] | https://openalex.org/W2031567600 |
d74a8c1f-dd70-4b10-a536-a24e0862a387 | To achieve a computationally
efficient implementationRemark: For the sake of a fast prototyping of algorithms, the proposed
time-stepping scheme is implemented in the high-level programming
tool MATLAB., the matrix polynomial \(\mathbf {Q}\) is factorized such that only
terms linear or quadratic in \(\mathbf {A}\) need to be evaluated as
suggested in Ref. [1]} for the solution of parabolic
differential equations. Therefore, the solution of Eq. (REF )
is obtained by successively solving equations that have been set up
for individual terms. For structural dynamics, the equation of a term
is partitioned into two sets of equations. After decoupling, a system
of equations similar to that of Newmark's method [2]}
is obtained. Note that the matrix \(\mathbf {A}\) —see Eq. (REF )—is
not explicitly constructed and no inversion of the mass matrix is
required. Moreover, if the stiffness matrix \(\mathbf {K}\) , the damping
matrix \(\mathbf {C}\) , and the mass matrix \(\mathbf {M}\) are sparse,
the algorithm requires only sparse matrix operations. Instead of directly
solving Eq. (REF ), it is helpful to either add
or subtract \(\mathbf {Q}\mathbf {z}_{n-1}\) from both sides of the equation
depending on the order \(M\) of the Padé approximation. Hence, for
odd orders of \(M\) we will solve
\(\mathbf {Q}(\mathbf {z}_{n}+\mathbf {z}_{n-1})=(\mathbf {P}+\mathbf {Q})\mathbf {z}_{n-1}+\sum \limits _{k=0}^{p_{\mathrm {f}}}\mathbf {C}_{k}\tilde{\mathbf {F}}_{\mathrm {m}n}^{(k)}\,,\)
| [2] | [
[
714,
717
]
] | https://openalex.org/W2550093246 |
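The structure of the per-step solve for odd orders can be sketched as follows. This is an illustrative Python/SciPy sketch of the equation quoted above only: the matrices Q, P, the force matrices C_k and the sampled load terms are assumed to be given, and no claim is made about the factorization or data structures of the reference MATLAB implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def step_odd_order(Q, P, C, F_tilde, z_prev, lu=None):
    """One step of the scheme: solve Q (z_n + z_{n-1}) = (P + Q) z_{n-1} + sum_k C_k F_k."""
    if lu is None:
        lu = spla.splu(Q.tocsc())          # factorize the sparse matrix once, reuse every step
    rhs = (P + Q) @ z_prev + sum(Ck @ Fk for Ck, Fk in zip(C, F_tilde))
    z_sum = lu.solve(rhs)                  # this is z_n + z_{n-1}
    return z_sum - z_prev, lu              # recover z_n

# Tiny random example standing in for the assembled system matrices.
n = 6
Q = sp.random(n, n, density=0.5, format="csc") + sp.eye(n) * n
P = sp.random(n, n, density=0.5, format="csr")
C = [sp.eye(n)]
F = [np.ones(n)]
z = np.zeros(n)
z, lu = step_odd_order(Q, P, C, F, z)
print(z)
```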
122bed9d-4ada-43a4-b502-a6ed4b3946ec | The situation is obviously different when studying high-order accurate
time-stepping methods such as the one proposed in this article. Here,
larger time increments are possible due to the increased accuracy
of the algorithm. Hence, it is in principle possible to apply similar
refinement strategies in both space and time. The spatial h-refinement
corresponds to a decrease in the time step size \(\Delta t\) in the
time domain, while an increase in the polynomial order of the shape
functions is equivalent to an elevation of the order of the time integrator.
Thus, to achieve a highly accurate solution with the least numerical
costs a holistic hp-refinement strategy might be developed
that is not only limited to the spatial discretization but also exploited
for the temporal one. This, however, is out of the scope of the current
contribution, where we want to introduce the algorithm and discuss
its fundamental properties. Therefore, three important quantities,
i.e., stability, amplitude error (AE), and period elongation (PE),
will be briefly discussed in the remainder of this section. To this
end, we consider the SDOF system described by case #1 for which the
analytical solution is known: \(u(t)\,{=}\,\cos (\omega _{\mathrm {n}}t)\) .
The simulation is run for 10,000 periods of vibration to study the
effect of long-term numerical analysis. For a complete analysis, different
IVPs with different ICs, damping, and loading functions would have
to be considered. However, the defining numerical properties of the
time integration scheme can be investigated by only solving the problem
stated above [1]}. For a general comparison, the
results obtained using Newmark's method (constant average acceleration),
the HHT-\(\alpha \) method (\(\alpha \,{=}\,-0.05;\) default setting in
ABAQUS/standard), the generalized-\(\alpha \) method (\(\rho _{\infty }\,{=}\,0.8\) ),
and Bathe's method (\(\gamma \,{=}\,(2-\sqrt{2})\Delta t\) ; splitting
ratio) are included in the discussion.
| [1] | [
[
1612,
1615
]
] | https://openalex.org/W1573186872 |
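As an illustration of how such properties can be measured, the sketch below integrates the same undamped SDOF problem with Newmark's constant-average-acceleration scheme and checks that the amplitude is preserved while the accumulated phase drift reveals the period elongation. It is our own toy check over 100 cycles (rather than 10,000), not the study's post-processing code.

```python
import numpy as np

def newmark_sdof(omega, dt, n_steps, beta=0.25, gamma=0.5, u0=1.0, v0=0.0):
    """Newmark (constant average acceleration) for the undamped SDOF u'' + omega^2 u = 0."""
    k, m = omega**2, 1.0
    u, v, a = (np.empty(n_steps + 1) for _ in range(3))
    u[0], v[0] = u0, v0
    a[0] = -k / m * u0
    k_eff = k + m / (beta * dt**2)
    for n in range(n_steps):
        rhs = m * (u[n] / (beta * dt**2) + v[n] / (beta * dt) + (0.5 / beta - 1.0) * a[n])
        u[n + 1] = rhs / k_eff
        a[n + 1] = (u[n + 1] - u[n]) / (beta * dt**2) - v[n] / (beta * dt) - (0.5 / beta - 1.0) * a[n]
        v[n + 1] = v[n] + dt * ((1.0 - gamma) * a[n] + gamma * a[n + 1])
    return u

omega = 2.0 * np.pi                     # natural period T = 1
dt, n_cycles = 0.05, 100
u = newmark_sdof(omega, dt, int(n_cycles / dt))
t = dt * np.arange(u.size)
print("amplitude over the last cycle:", np.abs(u[-int(1 / dt):]).max())        # ~1, no decay
print("phase drift after 100 cycles:", np.abs(u[-20:] - np.cos(omega * t[-20:])).max())
```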
ec886f4e-93bb-47bf-9cec-b9366f887c69 | In general we assume a graph \(\mathcal {G} = (\mathcal {V}, \mathcal {E})\) with the set of nodes \(\mathcal {V}\) , \(N = |\mathcal {V}|\) and the set of directed edges \(\mathcal {E} \subseteq \lbrace (i, j) \subseteq \mathcal {V} \rbrace \) .
Moreover, we assume an original node feature matrix \(X \in \operatorname{\mathbb {R}}^{N \times d_{\text{in}}}\) and edge features \(e_{i, j} \in \operatorname{\mathbb {R}}\) for each edge \((i,j)\) .
To leverage this structural information many authors have used Recurrent Neural Networks (RNN)[1]}, [2]}, [3]}, Transformers[4]}, [5]}, [6]}, [7]} or Graph Neural Networks (GNN)[8]}, [9]}, [10]}.
| [3] | [
[
559,
562
]
] | https://openalex.org/W2970706905 |
8eb0b2ce-a79c-4c9e-8fc5-34d6190c80cf | In general we assume a graph \(\mathcal {G} = (\mathcal {V}, \mathcal {E})\) with the set of nodes \(\mathcal {V}\) , \(N = |\mathcal {V}|\) and the set of directed edges \(\mathcal {E} \subseteq \lbrace (i, j) \subseteq \mathcal {V} \rbrace \) .
Moreover, we assume an original node feature matrix \(X \in \operatorname{\mathbb {R}}^{N \times d_{\text{in}}}\) and edge features \(e_{i, j} \in \operatorname{\mathbb {R}}\) for each edge \((i,j)\) .
To leverage this structural information many authors have used Recurrent Neural Networks (RNN)[1]}, [2]}, [3]}, Transformers[4]}, [5]}, [6]}, [7]} or Graph Neural Networks (GNN)[8]}, [9]}, [10]}.
| [5] | [
[
583,
586
]
] | https://openalex.org/W2912555327 |
2e04cd49-082b-4b09-9aa4-1c399374fe4c | CVRP For the CVRP we use the recent benchmark dataset of Uchoa et al.[1]} and select all instance up to 300 customer nodes. We define four groups of instances with 100-149 nodes as n100, 150-199 nodes as n150, 200-249 as n200 and 250-299 as n250. We compare against the same meta-heuristics mentioned above and additionally to GLS and TS provided by the OR-Tools (ORT) library[2]}. Furthermore, we compare to the recent state-of-the-art ML approaches POMO[3]} and DACT[4]} which outperformed all other ML methods mentioned in section REF in their experiments and provide open source code, which is why we consider them to be sufficient for a suitable comparison.
| [3] | [
[
455,
458
]
] | https://openalex.org/W3098134815 |
b9ba7bd6-e1d8-466c-bb44-a1b3f61f5390 | CVRP For the CVRP we use the recent benchmark dataset of Uchoa et al.[1]} and select all instance up to 300 customer nodes. We define four groups of instances with 100-149 nodes as n100, 150-199 nodes as n150, 200-249 as n200 and 250-299 as n250. We compare against the same meta-heuristics mentioned above and additionally to GLS and TS provided by the OR-Tools (ORT) library[2]}. Furthermore, we compare to the recent state-of-the-art ML approaches POMO[3]} and DACT[4]} which outperformed all other ML methods mentioned in section REF in their experiments and provide open source code, which is why we consider them to be sufficient for a suitable comparison.
| [4] | [
[
468,
471
]
] | https://openalex.org/W3203997865 |
05061427-94c2-4892-a1ab-8d22f7b04fcf | For spatiotemporal fields \(\underline{{\rm E}}(\underline{r},t)\) and \(\underline{{\rm B}}(\underline{r},t)\) , the
(real)
local instantaneous AM density with reference to a location \(\underline{r}_0\) is [1]}, [2]}
\(\underline{{\rm M}} (\underline{r}, \underline{r}_0, t) \stackrel{\Delta }{=} (\underline{r} - \underline{r}_0) \times \underline{{\rm P}} = \mu _0 \epsilon _0 (\underline{r} - \underline{r}_0) \times \underline{{\rm S}}\)
| [1] | [
[
210,
213
]
] | https://openalex.org/W3039083618 |
19cc9c1c-c7c5-44cd-8376-8cb2e201df77 | For each plane-wave component \(\lbrace \underline{{\cal E}}_i,\underline{{\cal H}}_i,\underline{k}_i\rbrace \) , a TE/TM decomposition is performed with respect to its plane of incidence \(o k_i z\) defining \(\phi _i=0\) [1]}, [2]}.
The incident plus reflected field is aggregated across the angular spectrum by integration across \(\phi _i\) , \(\theta _i\) and the uniformly distributed polarization angle \(-\psi _i\) in the locally transverse plane.
For example, for \(\alpha =x\) and \(\beta =y\) , substituting [2]} into (REF ) yields
\(&~& \hspace{-22.76228pt} E_x(z) H^*_y(z)=\frac{{\rm j}}{\pi }\int ^{\pi /2}_0 \cos \theta _1 \sin (2 k_1 z \cos \theta _1) \sin \theta _1 {\rm d}\theta _1\nonumber \\&~& \times \int ^{2\pi }_0 \left( {\cal E}_{1\theta } {\cal H}^*_{2\phi } \cos ^2 \phi _1 - {\cal E}_{1\phi } {\cal H}^*_{2\theta } \sin ^2 \phi _1 \right) {\rm d}\phi _1\)
| [2] | [
[
231,
234
],
[
524,
527
]
] | https://openalex.org/W1981596389 |
d9effd9f-27ce-40a4-9593-7851c5d4d809 | because the azimuthal dependence of their kernels are of the form \(\sin \phi \) , \(\cos \phi \) or \(\sin (2\phi )\) , or because \(\langle {\cal E}_{1\phi }{\cal E}^*_{2\theta }\rangle \propto \langle \sin \psi _1 \cos \psi _2 \rangle = 0\) .
From (REF ) and (REF ), with the aid of [1]}, it follows that \(\langle \underline{\underline{T}}\rangle \) is diagonal, isotropic, and homogeneous (i.e., nondispersive), viz.,
\(\langle \underline{\underline{T}}(kz) \rangle &=& -\frac{\langle U_{em,z} \rangle }{2} \underline{\underline{I}}_t + \frac{\langle U_{em,z} \rangle - \langle U_{em,t} \rangle }{2} \underline{1}_z \underline{1}_z \nonumber \\&=& - \frac{\epsilon _0 \langle |{\cal E}_0|^2 \rangle }{3} \underline{\underline{I}}\)
| [1] | [
[
287,
290
]
] | https://openalex.org/W2163961553 |
6d1a6556-eef6-4321-a184-8a5090f368e0 | For deterministic fields in vacuum, the conservation of LM density states that [1]}, [2]}
\(\underline{\nabla } \cdot \underline{\underline{T}} - {\rm j}\omega \underline{P} = \underline{0}.\)
| [2] | [
[
85,
88
]
] | https://openalex.org/W1505881492 |
260cfb7f-521d-4d6f-b803-9c9e7df16aa1 | In contrast, the ellipsoid mapping approach recovers all the correct long-time limits by construction, as shown with green and red lines for the global and local constructions, respectively. Because these methods preserve their respective distributions and obey Liouville's theorem, it is possible to obtain converged results with a low number of trajectories by using the time-averaging procedure[1]}
\(\langle A(0)B(t)\rangle _\mathrm {cc}&= \frac{1}{\tau _\mathrm {max}}\int _0^{\tau _\mathrm {max}} \mathrm {d}\tau \,\langle A(\tau )B(t+\tau )\rangle _\mathrm {cc}\)
| [1] | [
[
397,
400
]
] | https://openalex.org/W1548618691 |
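A small sketch of this averaging over trajectory time origins, with discrete sampling replacing the integral, might look as follows (our illustration; the signal is synthetic and the estimator is the usual overlapping-origins average):

```python
import numpy as np

def time_averaged_corr(A, B, n_lags):
    """<A(0)B(t)> estimated by averaging over all admissible time origins tau
    of a sampled trajectory, for lags t = 0 .. n_lags-1."""
    T = len(A)
    corr = np.empty(n_lags)
    for t in range(n_lags):
        corr[t] = np.mean(A[: T - t] * B[t: T])   # average over origins tau
    return corr

# Example: a noisy oscillating observable correlated with itself.
rng = np.random.default_rng(0)
x = np.cos(0.1 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)
print(time_averaged_corr(x, x, n_lags=5))
```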
269f6b66-17e0-4c32-a2f4-0a83db91efa6 | This author has attempted to follow the lead exhibited in two excellent publications, namely a paper by H. Wiseman [1]} and an article by A. Shimony [2]}. Wiseman's paper points out how Bell's thinking evolved over the years, and he analyzes two of his papers in depth, namely [3]} and [4]}. Along the way he creates a complete anthology of the main concepts. Shimony discusses the many concepts of locality (among other things). But the main point here is that they both start with clear mathematical definitions. This is reflected in the approach taken in this paper.
| [3] | [
[
277,
280
]
] | https://openalex.org/W2135830616 |
0013c806-9bae-40e0-ba32-1d09e9cbba8e | This author has attempted to follow the lead exhibited in two excellent publications, namely a paper by H. Wiseman [1]} and an article by A. Shimony [2]}. Wiseman's paper points out how Bell's thinking evolved over the years, and he analyzes two of his papers in depth, namely [3]} and [4]}. Along the way he creates a complete anthology of the main concepts. Shimony discusses the many concepts of locality (among other things). But the main point here is that they both start with clear mathematical definitions. This is reflected in the approach taken in this paper.
| [4] | [
[
286,
289
]
] | https://openalex.org/W1595578028 |
39b58ea0-9be6-49e6-8c07-a078c2f61944 | First, an advantage of trees is the possibility to implement a handling strategy for missing values (NA). Several approaches have been introduced (some of them specifically designed for classification trees), for a short overview and comparison see [1]} and references therein. In this work the approach based on so-called surrogate splits is employed [2]}, [3]}. These surrogate splits are used in case the (primary) splitting variable is not available in the observation which is sent down the tree. In that case (one of) the surrogate variables is used in order to send the observation further down the tree until it reaches a terminal node. In contrast, regression models are not able to issue a prediction for a case containing missing values in one or more predictors. Non-complete cases have either be deleted from the data or the missing values have to be imputed.
| [2] | [
[
352,
355
]
] | https://openalex.org/W3085162807 |
8b66f0dd-3893-4a9b-8692-cc9b620774a1 | To evaluate the classification performance across all possible classification thresholds \(\tau \) , the respective Receiving Operator Characteristic (ROC, see, for example, [1]}, [2]}) curves are inspected. An ROC curve is obtained by plotting the true positive rate (sensitivity) versus the true negative rate (specificity) of the classifier as a function of \(\tau \in [0,1]\) .
The area under the curve (AUC) has a value between 0.5 and 1, where values close to 1 indicate high classification performance.
| [1] | [
[
174,
177
]
] | https://openalex.org/W1554944419 |
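For reference, ROC curves and the AUC can be obtained with scikit-learn as in the short sketch below (our example with synthetic scores, not the data of the study):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                       # binary labels
scores = y_true * 0.6 + rng.normal(0.0, 0.4, size=500)      # noisy classifier scores

fpr, tpr, thresholds = roc_curve(y_true, scores)            # one point per threshold tau
print("AUC:", roc_auc_score(y_true, scores))                # well above 0.5 for these scores
# Sensitivity = tpr; specificity = 1 - fpr at each threshold.
```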
122dad5d-f188-46ea-9557-a9d135c54582 | Baseline REF shows that achieving the social welfare benchmark \(S^*\) requires two ideal conditions.
Condition C1 requires prior knowledge of the uncertain environment.
A similar condition also appears in the benchmark definition of stochastic MAB [1]}.
Condition C2 relaxes the fact that the agents are selfish.
This is the new condition for the IOL framework.
We measure the gap between \(S^{*}\) and the social performance achieved by our mechanism based on the cumulative regret:
\(\begin{aligned}\mathtt {Reg}(T)\triangleq \sum \limits _{t=1}^{T}\Big (S^{*}-\mathbb {E}\left[ S\left(\mathbf {x}^{t},\mathbf {a}^{t};\mathbf {r}^{t},\mathbf {c}^{t}\right) \right]\Big ),\end{aligned}\)
| [1] | [
[
253,
256
]
] | https://openalex.org/W2049934117 |
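Numerically, the cumulative regret defined above can be tracked as in the following sketch (our illustration; the benchmark S* and the realized welfare values are placeholders, with a welfare sequence chosen so that the per-round regret shrinks like 1/sqrt(t)):

```python
import numpy as np

def cumulative_regret(S_star, realized_welfare):
    """Reg(T) = sum_{t=1}^{T} (S* - E[S(x^t, a^t; r^t, c^t)]), one term per round."""
    return np.cumsum(S_star - np.asarray(realized_welfare))

# Example: realized welfare approaches the benchmark, so the per-round regret shrinks
# and the cumulative regret grows sublinearly.
S_star = 1.0
welfare = 1.0 - 1.0 / np.sqrt(np.arange(1, 1001))
print(cumulative_regret(S_star, welfare)[-1])   # roughly 2 * sqrt(T)
```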