Fields: id (string, 9-16 chars), title (string, 1-382 chars), abstract (string, 6-6.09k chars), categories (string, 5-125 chars)
1807.02125
Scalable Gaussian Processes with Grid-Structured Eigenfunctions (GP-GRIEF)
We introduce a kernel approximation strategy that enables computation of the Gaussian process log marginal likelihood and all hyperparameter derivatives in $\mathcal{O}(p)$ time. Our GRIEF kernel consists of $p$ eigenfunctions found using a Nystrom approximation from a dense Cartesian product grid of inducing points. By exploiting algebraic properties of Kronecker and Khatri-Rao tensor products, computational complexity of the training procedure can be practically independent of the number of inducing points. This allows us to use arbitrarily many inducing points to achieve a globally accurate kernel approximation, even in high-dimensional problems. The fast likelihood evaluation enables type-I or II Bayesian inference on large-scale datasets. We benchmark our algorithms on real-world problems with up to two-million training points and $10^{33}$ inducing points.
stat.ML cs.LG
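As an illustration of the Kronecker-structure argument in the abstract above, the following minimal Python/NumPy sketch shows how a matrix-vector product with a Kronecker-product kernel over a Cartesian grid of inducing points can be computed one grid dimension at a time, without ever forming the full matrix. The grid sizes, random matrices and function name are illustrative assumptions, not taken from the paper, which additionally exploits Khatri-Rao products and the Nystrom construction.

import numpy as np

def kron_matvec(Ks, x):
    # Compute (K_1 kron ... kron K_d) @ x without forming the full Kronecker product.
    # Ks: list of (m_i, m_i) arrays; x: vector of length prod(m_i).
    # Cost is O(M * sum_i m_i) with M = prod(m_i), versus O(M^2) for a dense matvec.
    dims = [K.shape[0] for K in Ks]
    z = x.reshape(dims)
    for i, K in enumerate(Ks):
        # Apply K along the i-th grid dimension only.
        z = np.moveaxis(np.tensordot(K, z, axes=([1], [i])), 0, i)
    return z.reshape(-1)

# Sanity check against the explicit Kronecker product on a tiny 3 x 4 x 5 grid.
rng = np.random.default_rng(0)
Ks = [rng.standard_normal((m, m)) for m in (3, 4, 5)]
x = rng.standard_normal(3 * 4 * 5)
dense = np.kron(np.kron(Ks[0], Ks[1]), Ks[2])
assert np.allclose(kron_matvec(Ks, x), dense @ x)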
1807.02126
Cognitive chimera states in human brain networks
The human brain is a complex dynamical system that gives rise to cognition through spatiotemporal patterns of coherent and incoherent activity between brain regions. As different regions dynamically interact to perform cognitive tasks, variable patterns of partial synchrony can be observed, forming chimera states. We propose that the emergence of such states plays a fundamental role in the cognitive organization of the brain, and present a novel cognitively-informed, chimera-based framework to explore how large-scale brain architecture affects brain dynamics and function. Using personalized brain network models, we systematically study how regional brain stimulation produces different patterns of synchronization across predefined cognitive systems. We then analyze these emergent patterns within our novel framework to understand the impact of subject-specific and region-specific structural variability on brain dynamics. Our results suggest a classification of cognitive systems into four groups with differing levels of subject and regional variability that reflect their different functional roles.
q-bio.NC math.DS physics.data-an
1807.02127
Mixtures of blue phase liquid crystal with simple liquids: elastic emulsions and cubic fluid cylinders
We investigate numerically the behaviour of a phase-separating mixture of a blue phase I liquid crystal with an isotropic fluid. The resulting morphology is primarily controlled by an inverse capillary number, $\chi$, setting the balance between interfacial and elastic forces. When $\chi$ and the concentration of the isotropic component are both low, the blue phase disclination lattice templates a cubic array of fluid cylinders. For larger $\chi$, the isotropic phase arranges primarily into liquid emulsion droplets which coarsen very slowly, rewiring the blue phase disclination lines into an amorphous elastic network. Our blue phase/simple fluid composites can be externally manipulated: an electric field can trigger a morphological transition between cubic fluid cylinder phases with different topologies.
cond-mat.soft
1807.02128
Adaptive Path-Integral Autoencoder: Representation Learning and Planning for Dynamical Systems
We present a representation learning algorithm that learns a low-dimensional latent dynamical system from high-dimensional \textit{sequential} raw data, e.g., video. The framework builds upon recent advances in amortized inference methods that use both an inference network and a refinement procedure to output samples from a variational distribution given an observation sequence, and takes advantage of the duality between control and inference to approximately solve the intractable inference problem using the path integral control approach. The learned dynamical model can be used to predict and plan future states; we also present an efficient planning method that exploits the learned low-dimensional latent dynamics. Numerical experiments show that the proposed path-integral control based variational inference method leads to tighter lower bounds in statistical model learning of sequential data. The supplementary video: https://youtu.be/xCp35crUoLQ
cs.LG cs.RO stat.ML
1807.02129
Operads and Maurer-Cartan spaces
This thesis is divided into two parts. The first one is composed of recollections on operad theory, model categories, simplicial homotopy theory, rational homotopy theory, Maurer-Cartan spaces, and deformation theory. The second part deals with the theory of convolution algebras and some of their applications, as explained below. Suppose we are given a type of algebras, a type of coalgebras, and a relationship between those types of algebraic structures (encoded by an operad, a cooperad, and a twisting morphism respectively). Then, it is possible to endow the space of linear maps from a coalgebra C to an algebra A with a natural structure of Lie algebra up to homotopy. We call the resulting homotopy Lie algebra the convolution algebra of A and C. We study the theory of convolution algebras and their compatibility with the tools of homotopical algebra: infinity morphisms and the homotopy transfer theorem. After doing that, we apply this theory to various domains, such as derived deformation theory and rational homotopy theory. In the first case, we use the tools we developed to construct a universal Lie algebra representing the space of Maurer-Cartan elements, a fundamental object of deformation theory. In the second case, we generalize a result of Berglund on rational models for mapping spaces between pointed topological spaces. In the last chapter of this thesis, we give a new approach to two important theorems in deformation theory: the Goldman-Millson theorem and the Dolgushev-Rogers theorem.
math.AT math.QA
1807.02130
Learning to pinpoint effective operators at the LHC: a study of the $t\bar{t}b\bar{b}$ signature
In the context of the Standard Model effective field theory (SMEFT), we study the LHC sensitivity to four fermion operators involving heavy quarks by employing cross section measurements in the $t\bar{t}b\bar{b}$ final state. Starting from the measurement of total rates, we progressively exploit kinematical information and machine learning techniques to optimize the projected sensitivity at the end of Run III. Indeed, in final states with high multiplicity containing inter-correlated kinematical information, multi-variate methods provide a robust way of isolating the regions of phase space where the SMEFT contribution is enhanced. We also show that training for multiple output classes allows for the discrimination between operators mediating the production of tops in different helicity states. Our projected sensitivities not only constrain a host of new directions in the SMEFT parameter space but also improve on existing limits demonstrating that, on one hand, $t\bar{t}b\bar{b}$ production is an indispensable component in a future global fit for top quark interactions in the SMEFT, and on the other, multi-class machine learning algorithms can be a valuable tool for interpreting LHC data in this framework.
hep-ph
1807.02131
3D Human Action Recognition with Siamese-LSTM Based Deep Metric Learning
This paper proposes a new 3D Human Action Recognition system as a two-phase system: (1) a Deep Metric Learning Module which learns a similarity metric between two 3D joint sequences using Siamese-LSTM networks; (2) a Multiclass Classification Module that uses the output of the first module to produce the final recognition output. This model has several advantages: the first module is trained with a larger set of data because it uses many combinations of sequence pairs. Our deep metric learning module can also be trained independently of the datasets, which makes our system modular and generalizable. We tested the proposed system on standard and newly introduced datasets, and the initial results are promising. We will continue developing this system by adding more sophisticated LSTM blocks and by cross-training between different datasets.
cs.CV
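The following PyTorch sketch illustrates the kind of two-branch architecture the abstract above describes: a shared-weight LSTM encoder applied to a pair of 3D joint sequences, trained with a contrastive loss so that same-action pairs map close together in the learned metric space. All layer sizes, the joint dimension (25 joints x 3 coordinates) and the contrastive margin are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseLSTM(nn.Module):
    """Shared-weight LSTM encoder for pairs of 3D joint sequences (sizes are illustrative)."""

    def __init__(self, joint_dim=75, hidden=128, embed=64):
        super().__init__()
        self.lstm = nn.LSTM(joint_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, embed)

    def encode(self, seq):            # seq: (batch, time, joint_dim)
        _, (h, _) = self.lstm(seq)    # h: (1, batch, hidden), last hidden state
        return self.proj(h[-1])       # (batch, embed)

    def forward(self, a, b):
        return self.encode(a), self.encode(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """same = 1 for pairs from the same action class, 0 otherwise."""
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Toy usage: 8 pairs of 40-frame sequences, 25 joints x 3 coordinates per frame.
model = SiameseLSTM()
a, b = torch.randn(8, 40, 75), torch.randn(8, 40, 75)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(a, b), same)
loss.backward()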
1807.02132
Gradual Liquid Type Inference
Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference---a novel combination of liquid inference and gradual refinement types---addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e. the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell.
cs.PL
1807.02133
Prospects for axion searches with Advanced LIGO through binary mergers
The observation of gravitational waves from a binary neutron star merger by LIGO/VIRGO and the associated electromagnetic counterpart provides a high precision test of orbital dynamics, and therefore a new and sensitive probe of extra forces and new radiative degrees of freedom. Axions are one particularly well-motivated class of extensions to the Standard Model leading to new forces and sources of radiation, which we focus on in this paper. Using an effective field theory (EFT) approach, we calculate the first post-Newtonian corrections to the orbital dynamics, radiated power, and gravitational waveform for binary neutron star mergers in the presence of an axion. This result is applicable to many theories which add an extra massive scalar degree of freedom to General Relativity. We then perform a detailed forecast of the potential for Advanced LIGO to constrain the free parameters of the EFT, and map these to the mass $m_a$ and decay constant $f_a$ of the axion. At design sensitivity, we find that Advanced LIGO can potentially exclude axions with $m_a \lesssim 10^{-11} \ {\rm eV}$ and $f_a \sim (10^{14} - 10^{17}) \ {\rm GeV}$. There are a variety of complementary observational probes over this region of parameter space, including the orbital decay of binary pulsars, black hole superradiance, and laboratory searches. We comment on the synergies between these various observables.
hep-ph astro-ph.CO gr-qc hep-th
1807.02134
The Impact Of World War I On Relativity: Part III, The Aftermath
Neither the world nor science came to an end when the gunfire stopped on 11 November 1918 (close to 11 AM in some time zone), but neither would ever be the same again. Part I of this inquiry (Observatory 138, 46-58, April 2018) looked at the development of general relativity under the rubric of Gerald Holton's "Only Einstein, only there, only then." Part II (Observatory 138, 98-116, June 2018) addressed the activities, relativistic, classical, and otherwise, of many (mostly) physicists who were interacting with Einstein, working on relativistic gravity, or, sometimes, against it, and leaving tracks that can still be followed. Part III considers some of what happened to Einstein, his theory of gravity, and related science after the war and, perhaps, because of it. A subset of the items will probably be familiar -- the 1919 eclipse expedition and the founding of the International Astronomical Union the same year, Einstein's 1921 Nobel Prize (for the discovery of the law of the photoelectric effect). Others perhaps less so, including a flood of books about GR (pro and con), with the end of paper rationing surely playing a role, AE's 1922 trip to Paris, and the gory details, swings and roundabouts of gravitational radiation/waves and the cosmological constant. It is left as an exercise for the reader to decide which items are primarily scientific and which primarily political. The long-range issues of "is general relativity the right theory of gravity?" and "do we have better wars?" come at the end. And I am going to start in a slightly improbable place.
physics.hist-ph
1807.02135
Face Recognition Using Map Discriminant on YCbCr Color Space
This paper presents face recognition using a maximum a posteriori (MAP) discriminant on the YCbCr color space. The YCbCr color space is considered in order to incorporate the skin information of the face image into the recognition process. The proposed method is employed to improve the recognition rate and equal error rate (EER) of grayscale-based face recognition. In this case, the face feature vector, consisting of a small set of dominant frequency elements extracted by non-blocking DCT, is used as a dimensional reduction of the raw face images. The matching between the query face features and the trained face features is performed using the maximum a posteriori (MAP) discriminant. Experimental results on data from four face databases containing 2268 images in 196 classes show that YCbCr-based face recognition provides a better recognition rate and a lower EER than grayscale-based face recognition, improving the first-rank result of the grayscale-based method by about 4%. However, it requires three times more computation time than the grayscale-based method.
cs.CV
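To make the feature-extraction step of the abstract above concrete, here is a small Python sketch of an RGB-to-YCbCr conversion (BT.601 coefficients) followed by a non-blocking 2D DCT of each channel, from which only a low-frequency block is kept as the feature vector. The 8x8 coefficient block, image size and channel concatenation are hypothetical choices; the MAP discriminant classifier itself is not shown.

import numpy as np
from scipy.fft import dct

def rgb_to_ycbcr(img):
    """ITU-R BT.601 full-range RGB -> YCbCr; img is a float array with values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def dct_features(channel, k=8):
    """Non-blocking 2D DCT of the whole channel; keep only the k x k low-frequency block."""
    coeffs = dct(dct(channel, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

# Feature vector for a synthetic 128x128 RGB face image: concatenate Y, Cb, Cr features.
img = np.random.default_rng(0).uniform(size=(128, 128, 3))
ycc = rgb_to_ycbcr(img)
features = np.concatenate([dct_features(ycc[..., c]) for c in range(3)])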
1807.02136
Detecting Visual Relationships Using Box Attention
We propose a new model for detecting visual relationships, such as "person riding motorcycle" or "bottle on table". This task is an important step towards comprehensive structured image understanding, going beyond detecting individual objects. Our main novelty is a Box Attention mechanism that allows us to model pairwise interactions between objects using standard object detection pipelines. The resulting model is conceptually clean, expressive and relies on well-justified training and prediction procedures. Moreover, unlike previously proposed approaches, our model does not introduce any additional complex components or hyperparameters on top of those already required by the underlying detection model. We conduct an experimental evaluation on three challenging datasets, V-COCO, Visual Relationships and Open Images, demonstrating strong quantitative and qualitative results.
cs.CV
1807.02137
Multigrid Algorithm Based on Hybrid Smoothers for Variational and Selective Segmentation Models
Automatic segmentation of an image to identify all meaningful parts is one of the most challenging as well as useful tasks in a number of application areas. This is widely studied. Selective segmentation, less studied, aims to use limited user specified information to extract one or more interesting objects (instead of all objects). Constructing a fast solver remains a challenge for both classes of model. However our primary concern is on selective segmentation. In this work, we develop an effective multigrid algorithm, based on a new non-standard smoother to deal with non-smooth coefficients, to solve the underlying partial differential equations (PDEs) of a class of variational segmentation models in the level set formulation. For such models, non-smoothness (or jumps) is typical as segmentation is only possible if edges (jumps) are present. In comparison with previous multigrid methods which were shown to produce an acceptable {\it mean} smoothing rate for related models, the new algorithm can ensure a small and {\it global} smoothing rate that is a sufficient condition for convergence. Our rate analysis is by Local Fourier Analysis and, with it, we design the corresponding iterative solver, improving on an ineffective line smoother. Numerical tests show that the new algorithm outperforms multigrid methods based on competing smoothers.
math.NA
1807.02138
The weak Lefschetz property of equigenerated monomial ideals
We determine a sharp lower bound for the Hilbert function in degree $d$ of a monomial algebra failing the weak Lefschetz property over a polynomial ring with $n$ variables and generated in degree $d$, for any $d\geq 2$ and $n\geq 3$. We consider artinian ideals in the polynomial ring with $n$ variables generated by homogeneous polynomials of degree $d$ invariant under an action of the cyclic group $\mathbb{Z}/d\mathbb{Z}$, for any $n\geq 3$ and any $d\geq 2$. We give a complete classification of such ideals in terms of the weak Lefschetz property depending on the action.
math.AC
1807.02139
Fundamentally fastest optical processes at the surface of a topological insulator
We predict that a single oscillation of a strong optical pulse can significantly populate the surface conduction band of a three-dimensional topological insulator, Bi2Se3. Both linearly- and circularly-polarized pulses generate chiral textures of interference fringes of population in the surface Brillouin zone. These fringes constitute a self-referenced electron hologram carrying information on the topology of the surface Bloch bands, in particular, on the effect of the warping term of the low-energy Hamiltonian. These electron-interference phenomena are in sharp contrast to graphene, where there are no chiral textures for a linearly-polarized pulse and no interference fringes for a circularly-polarized pulse. These predicted reciprocal-space electron-population textures can be measured experimentally by time-resolved angle-resolved photoelectron spectroscopy (TR-ARPES) to gain direct access to the non-Abelian Berry curvature at topological insulator surfaces.
cond-mat.mes-hall
1807.02140
Distances between zeroes and critical points for random polynomials with i.i.d. zeroes
Consider a random polynomial $Q_n$ of degree $n+1$ whose zeroes are i.i.d. random variables $\xi_0,\xi_1,\ldots,\xi_n$ in the complex plane. We study the pairing between the zeroes of $Q_n$ and its critical points, i.e. the zeroes of its derivative $Q_n'$. In the asymptotic regime when $n\to\infty$, with high probability there is a critical point of $Q_n$ which is very close to $\xi_0$. We localize the position of this critical point by proving that the difference between $\xi_0$ and the critical point has approximately complex Gaussian distribution with mean $1/(nf(\xi_0))$ and variance of order $\log n \cdot n^{-3}$. Here, $f(z)= \mathbb E[1/(z-\xi_k)]$ is the Cauchy-Stieltjes transform of the $\xi_k$'s. We also state some conjectures on critical points of polynomials with dependent zeroes, for example the Weyl polynomials and characteristic polynomials of random matrices.
math.PR
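A quick Monte Carlo check of the pairing described in the abstract above can be run in a few lines of NumPy: sample i.i.d. zeroes uniformly on the unit disk (for which the Cauchy-Stieltjes transform is f(z) = conj(z) inside the disk), locate the critical point nearest xi_0, and compare the residual after subtracting the predicted mean 1/(n f(xi_0)) with the predicted scale sqrt(log n) * n^(-3/2). The degree, the trial count and the cut on |xi_0| (a convenience to stay away from f(xi_0) ~ 0) are demo choices, and constants in the variance are ignored.

import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 2000
residuals = []
for _ in range(trials):
    # i.i.d. zeroes xi_0, ..., xi_n sampled uniformly on the unit disk.
    xi = np.sqrt(rng.uniform(size=n + 1)) * np.exp(2j * np.pi * rng.uniform(size=n + 1))
    if abs(xi[0]) < 0.5:
        continue  # demo convenience: avoid the region where f(xi_0) is close to 0
    # Critical points of Q_n(z) = prod_k (z - xi_k) are the roots of Q_n'.
    crit = np.roots(np.polyder(np.poly(xi)))
    nearest = crit[np.argmin(np.abs(crit - xi[0]))]
    # For the uniform unit disk, the Cauchy-Stieltjes transform is f(z) = conj(z) for |z| < 1.
    predicted_mean = 1.0 / (n * np.conj(xi[0]))
    residuals.append((xi[0] - nearest) - predicted_mean)

residuals = np.array(residuals)
print("empirical std of residual:", residuals.std())
print("predicted order sqrt(log n)/n^1.5:", np.sqrt(np.log(n)) * n ** -1.5)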
1807.02141
Slow magnetoacoustic gravity waves in an equilibrium stratified solar atmosphere: cut-off periods through the transition region
Assuming the thin flux tube approximation, we introduce an analytical model that includes a non-isothermal temperature profile, a varying magnetic field, and a non-uniform stratified medium in hydrostatic equilibrium under a constant gravitational acceleration. This allows the study of slow magnetoacoustic cut-off periods across the solar transition region, from the base of the solar chromosphere to the lower corona. The temperature profile used approximates the VAC solar atmospheric model. The periods obtained are consistent with observations. Similar to the acoustic cut-off periods, the resulting magnetoacoustic gravity ones follow the sharp temperature profile, but shifted towards larger heights; in other words, at a given height the magnetoacoustic cut-off period is significantly lower than the corresponding acoustic one. Along a given length of an inclined thin magnetic tube, the greater its inclination, the softer the temperature gradient it crosses. Changes in the magnetic field intensity do not significantly modify the periods at the coronal level but modulate the values below the transition region, giving periods between $\sim [2\,- 6]\,$min. Within the limitations of our model, we show that monochromatic oscillations of the solar atmosphere are the atmospheric response at its natural frequency to random or impulsive perturbations, and not a consequence of the forcing from the photosphere.
astro-ph.SR
1807.02142
Metallicity gradients in the globular cluster systems of early-type galaxies: In-situ and accreted components?
Massive early-type galaxies typically have two subpopulations of globular clusters (GCs) which often reveal radial colour (metallicity) gradients. Collating gradients from the literature, we show that the gradients in the metal-rich and metal-poor GC subpopulations are the same, within measurement uncertainties, in a given galaxy. Furthermore, these GC gradients are similar in strength to the {\it stellar} metallicity gradient of the host galaxy. At the very largest radii (e.g. greater than 8 galaxy effective radii) there is some evidence that the GC gradients become flat with near constant mean metallicity. Using stellar metallicity gradients as a proxy, we probe the assembly histories of massive early-type galaxies with hydrodynamical simulations from the Magneticum suite of models. In particular, we measure the stellar metallicity gradient for the in-situ and accreted components over a similar radial range as those observed for GC subpopulations. We find that the in-situ and accreted stellar metallicity gradients are similar but have a larger scatter than the metal-rich and metal-poor GC subpopulations gradients in a given galaxy. We conclude that although metal-rich GCs are predominately formed during the in-situ phase and metal-poor GCs during the accretion phase of massive galaxy formation, they do not have a strict one-to-one connection.
astro-ph.GA astro-ph.CO
1807.02143
Spatiotemporal KSVD Dictionary Learning for Online Multi-target Tracking
In this paper, we present a new spatiotemporal discriminative KSVD dictionary learning algorithm (STKSVD) for learning target appearance in online multi-target tracking. Different from other classification/recognition tasks (e.g. face or image recognition), learning a target's appearance in online multi-target tracking is affected by factors such as posture/articulation changes, partial occlusion by the background scene or other targets, and background changes (the human detection bounding box covers human parts and part of the scene). However, we observe that these variations occur gradually relative to spatial and temporal dynamics. We characterize the spatial and temporal information between a target's samples through a new STKSVD appearance learning algorithm that jointly learns discriminative sparse codes and linear classifier parameters while minimizing the reconstruction error in a single optimization system. Our appearance learning algorithm and tracking framework employ two different methods of calculating the appearance similarity score in each stage of a two-stage association: a linear classifier in the first stage, and minimum residual errors in the second stage. Results on the 2DMOT2015 dataset with its public Aggregated Channel Features (ACF) human detections, used for all comparisons, show that our method outperforms the existing related learning methods.
cs.CV
1807.02144
Ergodic invariant measures on the space of geodesic currents
Let $S$ be a compact, connected, oriented surface, possibly with boundary, of negative Euler characteristic. In this article we extend Lindenstrauss-Mirzakhani's and Hamenst\"adt's classification of locally finite mapping class group invariant ergodic measures on the space of measured laminations $\mathcal{M}\mathcal{L}(S)$ to the space of geodesic currents $\mathcal{C}(S)$, and we discuss the homogeneous case. Moreover, we extend Lindenstrauss-Mirzakhani's classification of orbit closures to $\mathcal{C}(S)$. Our argument relies on their results and on the decomposition of a current into a sum of three currents with isotopically disjoint supports: a measured lamination without closed leaves, a simple multi-curve and a current that binds its hull.
math.GT math.DS
1807.02145
Nature of the $\Omega$(2012) through its strong decays
We extend our previous analysis of the mass of the recently discovered $\Omega(2012)$ state by investigating its strong decays and calculating its width employing the method of light cone QCD sum rules. Considering two possibilities for the quantum numbers of the $\Omega(2012)$ state, namely the $1P$ orbital excitation with $J^P=\frac{3}{2}^-$ and the $2S$ radial excitation with $J^P=\frac{3}{2}^+$, we obtain the strong coupling constants defining the $\Omega(1P/2S)\rightarrow\Xi K$ decays. The results for the coupling constants are then used to calculate the decay width corresponding to each possibility. Comparing the total widths obtained in this work with the experimental value, and taking into account the results of our previous mass prediction for the $\Omega(2012)$ state, we conclude that this state is the $1P$ orbital excitation of the ground state $\Omega$ baryon, with quantum numbers $J^P=\frac{3}{2}^-$.
hep-ph hep-ex hep-lat
1807.02146
Absence of Criticality in the Phase Transitions of Open Floquet Systems
We address the nature of phase transitions in periodically driven systems coupled to a bath. The latter enables a synchronized non-equilibrium Floquet steady state at finite entropy, which we analyse for rapid drives within a non-equilibrium RG approach. While the infinitely rapidly driven limit exhibits a second order phase transition, here we reveal that fluctuations turn the transition first order when the driving frequency is finite. This can be traced back to a universal mechanism, which crucially hinges on the competition of degenerate, near critical modes associated to higher Floquet Brillouin zones. The critical exponents of the infinitely rapidly driven system -- including a new, independent one -- can yet be probed experimentally upon smoothly tuning towards that limit.
cond-mat.stat-mech cond-mat.quant-gas hep-th
1807.02147
Theory for strained graphene beyond the Cauchy-Born rule
The low-energy electronic properties of strained graphene are usually obtained by transforming the bond vectors according to the Cauchy-Born rule. In this work, we derive a new effective Dirac Hamiltonian by assuming a more general transformation rule for the bond vectors under uniform strain, which takes into account the strain-induced relative displacement between the two sublattices of graphene. Our analytical results show that the consideration of such relative displacement yields a qualitatively different Fermi velocity with respect to previous reports. Furthermore, from the derived Hamiltonian, we analyze effects of this relative displacement on the local density of states and the optical conductivity, as well as the implications on the scanning tunneling spectroscopy, including external magnetic field, and optical transmittance experiments of strained graphene.
cond-mat.mes-hall
1807.02148
The twin paradox: the role of acceleration
The twin paradox, which arises from the idea that two twins may age differently because of their relative motion, has been studied and explained ever since it was first described in 1906, the year after special relativity was invented. The question can be asked: "Is there anything more to say?" It seems evident that acceleration has a role to play; however, this role has largely been brushed aside since it is not required in calculating, in a preferred reference frame, the relative age difference of the twins. Indeed, if one tries to calculate the age difference from the point of view of the twin that undergoes the acceleration, then the role of the acceleration is crucial and cannot be dismissed. In the resolution of the twin paradox, the role of the acceleration has been denigrated to the extent that it has been treated as a red herring. This is a mistake and shows a clear misunderstanding of the twin paradox.
gr-qc physics.class-ph
1807.02149
Large gaps of CUE and GUE
In this article, we study the largest gaps between the eigenvalues of the classical random matrix ensembles CUE and GUE, and derive the rescaling limit of the $k$-th largest gap, which is given by the Gumbel distribution.
math.PR
1807.02150
Scalable Recommender Systems through Recursive Evidence Chains
Recommender systems can be formulated as a matrix completion problem, predicting ratings from user and item parameter vectors. Optimizing these parameters by subsampling data becomes difficult as the number of users and items grows. We develop a novel approach to generate all latent variables on demand from the ratings matrix itself and a fixed pool of parameters. We estimate missing ratings using chains of evidence that link them to a small set of prototypical users and items. Our model automatically addresses the cold-start and online learning problems by combining information across both users and items. We investigate the scaling behavior of this model, and demonstrate competitive results with respect to current matrix factorization techniques in terms of accuracy and convergence speed.
cs.IR cs.LG stat.ML
1807.02151
On Ergodic Capacity and Optimal Number of Tiers in UAV-Assisted Communication Systems
In this paper, we consider unmanned aerial vehicle (UAV) assisted communication systems where a number of UAVs are utilized as multi-tier relays between a number of users and a base-transceiver station (BTS). We model the wireless propagation channel between the users and the BTS as a Rayleigh product channel, which is a product of a series of independent and identically distributed (i.i.d.) Rayleigh multi-input multi-output (MIMO) channels. We take a special interest in optimizing the number of tiers in such UAV-assisted systems for a given total number of UAVs so as to maximize the ergodic capacity. To achieve this goal, in the first part we derive a closed-form lower bound on the ergodic capacity, which is shown to be asymptotically tight as the signal-to-noise ratio (SNR) increases. With the derived bound, in the second part we analyze the optimal number of UAV tiers, and propose a low-complexity procedure that significantly reduces the search size and yields near-optimal performance. Moreover, asymptotic properties of both the ergodic capacity of the Rayleigh product channel and the optimal number of tiers are extensively analyzed.
cs.IT math.IT
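A simple Monte Carlo estimate of the ergodic capacity of a Rayleigh product channel, the quantity optimized in the abstract above, can be written as follows. The assumption of equal square antenna arrays at every hop, the per-hop normalization, and the mapping of n_tiers relay tiers to n_tiers + 1 hops are simplifications for illustration; the paper works with a closed-form lower bound rather than simulation.

import numpy as np

def ergodic_capacity_product(n_ant, n_tiers, snr_db, trials=2000, rng=None):
    """Monte Carlo ergodic capacity (bit/s/Hz) of a Rayleigh product channel.

    The end-to-end channel is H = H_L ... H_1, a product of n_tiers + 1 i.i.d.
    complex Gaussian n_ant x n_ant matrices: a simplified stand-in for the
    user -> UAV tiers -> BTS cascade.
    """
    rng = rng or np.random.default_rng(0)
    snr = 10.0 ** (snr_db / 10.0)
    caps = []
    for _ in range(trials):
        H = np.eye(n_ant, dtype=complex)
        for _ in range(n_tiers + 1):
            Hi = (rng.standard_normal((n_ant, n_ant))
                  + 1j * rng.standard_normal((n_ant, n_ant))) / np.sqrt(2.0)
            H = Hi @ H
        G = H @ H.conj().T
        caps.append(np.log2(np.linalg.det(np.eye(n_ant) + (snr / n_ant) * G).real))
    return float(np.mean(caps))

for tiers in (1, 2, 3, 4):
    print(tiers, "tiers:", round(ergodic_capacity_product(4, tiers, snr_db=20), 2), "bit/s/Hz")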
1807.02152
Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images
Objective: To develop an automatic image normalization algorithm for intensity correction of images from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) acquired by different MRI scanners with various imaging parameters, using only image information. Methods: DCE-MR images of 460 subjects with breast cancer acquired by different scanners were used in this study. Each subject had one T1-weighted pre-contrast image and three T1-weighted post-contrast images available. Our normalization algorithm operated under the assumption that the same type of tissue in different patients should be represented by the same voxel value. We used four tissue/material types as the anchors for the normalization: 1) air, 2) fat tissue, 3) dense tissue, and 4) heart. The algorithm proceeded in the following two steps: First, a state-of-the-art deep learning-based algorithm was applied to perform tissue segmentation accurately and efficiently. Then, based on the segmentation results, a subject-specific piecewise linear mapping function was applied between the anchor points to normalize the same type of tissue in different patients into the same intensity ranges. We evaluated the algorithm with 300 subjects used for training and the rest used for testing. Results: The application of our algorithm to images with different scanning parameters resulted in highly improved consistency in pixel values and extracted radiomics features. Conclusion: The proposed image normalization strategy based on tissue segmentation can perform intensity correction fully automatically, without the knowledge of the scanner parameters. Significance: We have thoroughly tested our algorithm and showed that it successfully normalizes the intensity of DCE-MR images. We made our software publicly available for others to apply in their analyses.
cs.CV
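The second step of the algorithm in the abstract above, a subject-specific piecewise linear mapping between tissue anchor intensities, can be sketched in NumPy as below. The anchor tissues (air, fat, dense tissue, heart) follow the abstract, but the numeric anchor values, the common target scale, and the handling of intensities outside the anchor range are hypothetical choices; the segmentation step that produces the per-subject anchors is assumed to have been done already.

import numpy as np

def normalize_intensities(image, anchors_in, anchors_out):
    """Piecewise-linear intensity mapping between tissue anchor points.

    anchors_in : measured mean intensities of the anchor tissues in this subject
                 (e.g. air, fat, dense tissue, heart), in increasing order.
    anchors_out: common target intensities chosen for those tissues.
    Values outside the anchor range are linearly extrapolated from the end segments.
    """
    image = np.asarray(image, dtype=float)
    out = np.interp(image, anchors_in, anchors_out)
    # np.interp clamps outside the anchor range; extrapolate the two end segments instead.
    lo = (anchors_out[1] - anchors_out[0]) / (anchors_in[1] - anchors_in[0])
    hi = (anchors_out[-1] - anchors_out[-2]) / (anchors_in[-1] - anchors_in[-2])
    out = np.where(image < anchors_in[0], anchors_out[0] + lo * (image - anchors_in[0]), out)
    out = np.where(image > anchors_in[-1], anchors_out[-1] + hi * (image - anchors_in[-1]), out)
    return out

# Hypothetical per-subject anchors from the segmentation step, mapped to a fixed target scale.
subject_anchors = [5.0, 180.0, 420.0, 900.0]   # air, fat, dense tissue, heart (this subject)
target_anchors  = [0.0, 0.25, 0.60, 1.00]      # common scale shared by all subjects
test_image = np.random.default_rng(0).uniform(0.0, 1000.0, size=(64, 64))
normalized = normalize_intensities(test_image, subject_anchors, target_anchors)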
1807.02153
Measurement of D0 -> omega eta Branching Fraction with CLEO-c Data
Using CLEO-c data, we confirm the observation of D0 -> omega eta by BESIII. In the Dalitz plot of D0 -> Kshort eta pi0, we find a background in the Kshort(-> pi pi) pi0 projection with m(pi pi pi0) equal to the omega(782) mass. In a direct search for D0 -> omega eta we find a clear signal and measure BF(D0 -> omega eta) = (1.78 +/- 0.19 +/- 0.14) x 10^-3, in good agreement with BESIII.
hep-ex
1807.02154
Betti numbers of toric ideals of graphs: A case study
We compute the graded Betti numbers for the toric ideal of a family of graphs constructed by adjoining a cycle to a complete bipartite graph. The key observation is that this family admits an initial ideal which has linear quotients. As a corollary, we compute the Hilbert series and $h$-vector for all the toric ideals of graphs in this family.
math.AC math.CO
1807.02155
Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain's navigational system
It is true that the "best" neural network is not necessarily the one with the most "brain-like" behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines. Translating our understanding of intelligence to machines is a fundamental problem in robotics. Propelled by new advancements in Neuroscience, we developed a spiking neural network (SNN) that draws from mounting experimental evidence that a number of individual neurons are associated with spatial navigation. By following the brain's structure, our model assumes no initial all-to-all connectivity, which could inhibit its translation to neuromorphic hardware, and learns an uncharted territory by mapping its identified components into a limited number of neural representations, through spike-timing dependent plasticity (STDP). In our ongoing effort to employ a bioinspired SNN-controlled robot in real-world spatial mapping applications, we demonstrate here how an SNN may robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input.
q-bio.NC cs.AI
1807.02156
A Geometric Interpretation of the Intertwining Number
We exhibit a connection between two statistics on set partitions, the intertwining number and the depth-index. In particular, results link the intertwining number to the algebraic geometry of Borel orbits. Furthermore, by studying the generating polynomials of our statistics, we determine the $q=-1$ specialization of a $q$-analogue of the Bell numbers. Finally, by using Renner's $H$-polynomial of an algebraic monoid, we introduce and study a $t$-analog of $q$-Stirling numbers.
math.CO
1807.02157
Fluctuations and noise-limited sensing near the exceptional point of $\mathcal{PT}$-symmetric resonator systems
We theoretically explore the role of mesoscopic fluctuations and noise on the spectral and temporal properties of systems of $\mathcal{PT}$-symmetric coupled gain-loss resonators operating near the exceptional point, where eigenvalues and eigenvectors coalesce. We show that the inevitable detuning in the frequencies of the uncoupled resonators leads to an unavoidable modification of the conditions for reaching the exceptional point, while, as this point is approached in ensembles of resonator pairs, statistical averaging significantly smears the spectral features. We also discuss how these fluctuations affect the sensitivity of sensors based on coupled $\mathcal{PT}$-symmetric resonators. Finally, we show that temporal fluctuations in the detuning and gain of these sensors lead to a quadratic growth of the optical power in time, thus implying that maintaining operation at the exceptional point over a long period can be rather challenging. Our theoretical analysis clarifies issues central to the realization of $\mathcal{PT}$-symmetric devices, and should facilitate future experimental work in the field.
cond-mat.mes-hall physics.optics
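A two-mode toy model makes the detuning effect discussed in the abstract above easy to see: for a gain-loss dimer tuned to its exceptional point, a small frequency detuning delta reopens the eigenvalue splitting, and it does so like sqrt(delta) rather than linearly. The Hamiltonian below and the numerical parameter values are a generic textbook-style sketch under those assumptions, not the specific model of the paper.

import numpy as np

def eigenfrequencies(omega0, gain, kappa, delta):
    """Eigenvalues of a two-mode gain-loss dimer near its exceptional point.

    H = [[omega0 + delta/2 + i*gain, kappa],
         [kappa,                     omega0 - delta/2 - i*gain]]
    For delta = 0 the eigenvalues coalesce at kappa = gain (the exceptional point);
    a detuning delta between the two resonators modifies that condition.
    """
    H = np.array([[omega0 + delta / 2 + 1j * gain, kappa],
                  [kappa, omega0 - delta / 2 - 1j * gain]])
    return np.linalg.eigvals(H)

omega0, gain, kappa = 1.0, 0.1, 0.1   # coupling tuned to the nominal exceptional point
for delta in (0.0, 1e-3, 1e-2):
    ev = eigenfrequencies(omega0, gain, kappa, delta)
    # The splitting grows like ~2*sqrt(gain*delta), i.e. much faster than linearly in delta.
    print(f"delta = {delta:.0e}   eigenvalue splitting = {abs(ev[0] - ev[1]):.3e}")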
1807.02158
Some results on affine Deligne-Lusztig varieties
The study of affine Deligne-Lusztig varieties originally arose from arithmetic geometry, but many problems on affine Deligne-Lusztig varieties are purely Lie-theoretic in nature. This survey deals with recent progress on several important problems on affine Deligne-Lusztig varieties. The emphasis is on the Lie-theoretic aspect, while some connections and applications to arithmetic geometry will also be mentioned.
math.AG math.NT math.RT
1807.02159
Entangled-path interferometer simpler, faster than LISA
Entangled light can provide a seminal improvement in resolution sensitivity even without achieving the Heisenberg limit in a single channel. In this paper, based on back-of-the-envelope calculations, I demonstrate an alternative path to a space-based long-arm interferometer. Its advantage with respect to LISA is that it does not require complex satellites with many active components to achieve similar resolution.
quant-ph astro-ph.IM
1807.02160
Affine-Goldstone/quartet-metric gravity: emergent vs. existent
As a group-theoretic foundation of gravity, we consider an affine-Goldstone nonlinear model based upon the nonlinear realization of the global affine symmetry spontaneously broken at the Planck scale to the Poincare symmetry. It is shown that below this scale the model justifies and elaborates an earlier introduced effective field theory of the quartet-metric gravity incorporating the gravitational dark substances emerging in addition to the tensor graviton. The prospects for subsequently going beyond the nonlinear model above the Planck scale are indicated.
gr-qc hep-ph
1807.02161
Minimizing Sensitivity to Model Misspecification
We propose a framework for estimation and inference when the model may be misspecified. We rely on a local asymptotic approach where the degree of misspecification is indexed by the sample size. We construct estimators whose mean squared error is minimax in a neighborhood of the reference model, based on one-step adjustments. In addition, we provide confidence intervals that contain the true parameter under local misspecification. As a tool to interpret the degree of misspecification, we map it to the local power of a specification test of the reference model. Our approach allows for systematic sensitivity analysis when the parameter of interest may be partially or irregularly identified. As illustrations, we study three applications: an empirical analysis of the impact of conditional cash transfers in Mexico where misspecification stems from the presence of stigma effects of the program, a cross-sectional binary choice model where the error distribution is misspecified, and a dynamic panel data binary choice model where the number of time periods is small and the distribution of individual effects is misspecified.
econ.EM stat.ME
1807.02162
Feature Assisted bi-directional LSTM Model for Protein-Protein Interaction Identification from Biomedical Texts
Knowledge about protein-protein interactions is essential in understanding biological processes such as metabolic pathways, DNA replication, and transcription. However, a majority of the existing Protein-Protein Interaction (PPI) systems are dependent primarily on the scientific literature, which is not yet accessible as a structured database. Thus, efficient information extraction systems are required for identifying PPI information from the large collection of biomedical texts. Most of the existing systems model the PPI extraction task as a classification problem and are tailored to handcrafted feature sets, including domain-dependent features. In this paper, we present a novel method based on a deep bidirectional long short-term memory (B-LSTM) technique that exploits word sequences and dependency-path-related information to identify PPI information from text. This model leverages joint modeling of proteins and relations in a single unified framework, which we name the Shortest Dependency Path B-LSTM (sdpLSTM) model. We perform experiments on two popular benchmark PPI datasets, namely AiMed & BioInfer. The evaluation shows F1-scores of 86.45% and 77.35% on AiMed and BioInfer, respectively. Comparisons with the existing systems show that our proposed approach attains state-of-the-art performance.
cs.IR cs.CL
1807.02163
Testing logarithmic corrections on $R^2$-exponential gravity by observational data
This paper is devoted to the analysis of a class of $F(R)$ gravity where additional logarithmic corrections are assumed. The gravitational action includes an exponential term and an $R^2$ inflationary term, both with logarithmic corrections. This model can unify an early-time inflationary era and the late-time acceleration of the universe expansion. The model is analysed in depth and confronted with recent observational data coming from the largest Pantheon Type Ia supernovae sample, the latest measurements of the Hubble parameter $H(z)$, manifestations of Baryon Acoustic Oscillations, and Cosmic Microwave Background radiation. The viability of the model is studied and the corresponding constraints on the free parameters are obtained, leading to a statistical analysis in comparison with the $\Lambda$CDM model. The inflationary era is also analysed within this model, together with its compatibility with the latest observational data for the spectral index of primordial curvature perturbations and the tensor-to-scalar ratio. Finally, possible corrections to Newton's law and constraints due to primordial nucleosynthesis are analysed.
gr-qc astro-ph.CO
1807.02164
V-CNN: When Convolutional Neural Network encounters Data Visualization
In recent years, deep learning has driven a deep technical revolution in almost every field and attracted great attention from industry and academia. In particular, the convolutional neural network (CNN), one representative model of deep learning, has achieved great success in computer vision and natural language processing. However, simply or blindly applying CNNs to other fields results in poorer training performance or makes it quite difficult to adjust the model parameters. In this poster, we propose a general methodology named V-CNN that introduces data visualization for CNN. V-CNN introduces a data visualization model prior to CNN modeling to make sure that the processed data fits the characteristics of images as well as CNN modeling. We apply V-CNN to the network intrusion detection problem based on a well-known practical dataset: AWID. Simulation results confirm that V-CNN significantly outperforms other studies and that the recall rate of each intrusion category is more than 99.8%.
cs.HC cs.LG stat.ML
1807.02165
On the determination of nonlinear terms appearing in semilinear hyperbolic equations
We consider the inverse problem of determining a general nonlinear term appearing in a semilinear hyperbolic equation on a Riemannian manifold with boundary $(M,g)$ of dimension $n=2,3$. We prove results of unique recovery of the nonlinear term $F(t,x,u)$, appearing in the equation $\partial_t^2u-\Delta_gu+F(t,x,u)=0$ on $(0,T)\times M$ with $T>0$, from some partial knowledge of the solutions $u$ on the boundary of the time-space cylindrical manifold $(0,T)\times M$ or on the lateral boundary $(0,T)\times\partial M$. We determine the expression $F(t,x,u)$ both on the boundary $x\in\partial M$ and inside the manifold $x\in M$.
math.AP
1807.02166
Oscillation modes of hybrid stars within the relativistic Cowling approximation
The first direct detection of gravitational waves has opened a new window to study the Universe and will probably start a new era: gravitational wave astronomy. Gravitational waves emitted by compact objects like neutron stars could provide significant information about their structure, composition and evolution. In this paper we calculate, using the relativistic Cowling approximation, the oscillations of compact stars, focusing on hybrid stars with and without a mixed phase in their cores. We study the existence of a possible hadron-quark phase transition in the central regions of neutron stars and the changes it produces in the gravitational mode frequencies emitted by these stars. We pay particular attention to the $g$-modes, which are extremely important as they could signal the existence of pure quark matter inside neutron stars. Our results show a relationship between the frequency of the $g$-modes and the constant speed of sound parametrization for the quark matter phase. We also show that the inclusion of color superconductivity produces an increase in the oscillation frequencies. We propose that observations of $g$-modes with frequencies $f_{\rm g}$ between $1$ kHz and $1.5$ kHz should be interpreted as evidence of a sharp hadron-quark phase transition in the core of a compact object.
astro-ph.HE
1807.02167
A $Z'$-model and the magnetism of a dark fermion candidate
Our contribution sets out to investigate the phenomenology of a gauge model based on an $SU_{L}(2) \times U_{R}(1)_{Q} \times U(1)_{Q'}$ symmetry group. The model can accommodate, through its symmetry-breaking pattern, a candidate for a heavy $Z'$-boson at the TeV scale. The extended Higgs sector introduces a heavy scalar whose mass lies in the region $1.2-3.7 \, \mbox{TeV}$. The fermion sector includes an exotic Dark Matter candidate that mixes with the right-handed neutrino component in the Higgs sector, so that the whole field content ensures the cancellation of the $U(1)$ chiral anomaly. The masses are fixed according to the particular way the symmetry breaking takes place. In view of the possible symmetry breakdown pattern, we study the phenomenological implications in a high-energy scenario. We work out the magnetic dipole moment (MDM) of the exotic fermion and the transition MDM due to its mixing with the right-handed neutrino.
hep-ph hep-th
1807.02168
The halon: a quasiparticle featuring critical charge fractionalization
The halon is a special critical state of an impurity in a quantum-critical environment. The hallmark of the halon physics is that a well-defined integer charge gets fractionalized into two parts: a microscopic core with half-integer charge and a critically large halo carrying a complementary charge of $\pm 1/2$. The halon phenomenon emerges when the impurity--environment interaction is fine-tuned to the vicinity of a boundary quantum critical point (BQCP), at which the energies of two quasiparticle states with adjacent integer charges approach each other. The universality class of such BQCP is captured by a model of pseudo-spin-$1/2$ impurity coupled to the quantum-critical environment, in such a way that the rotational symmetry in the pseudo-spin $xy$-plane is respected, with a small local "magnetic" field along the pseudo-spin $z$-axis playing the role of control parameter driving the system away from the BQCP. On the approach to BQCP, the half-integer projection of the pseudo-spin on its $z$-axis gets delocalized into a halo of critically divergent radius, capturing the essence of the phenomenon of charge fractionalization. With large-scale Monte Carlo simulations, we confirm the existence of halons---and quantify their universal features---in O(2) and O(3) quantum critical systems.
cond-mat.quant-gas quant-ph
1807.02169
Quantum master equations for entangled qubit environments
We study the Markovian dynamics of a collection of n quantum systems coupled to an irreversible environmental channel consisting of a stream of n entangled qubits. Within the framework of repeated quantum interactions, we derive the master equation that describes the dynamics of the composite quantum system. We investigate the evolution of the joint system for two-qubit environments and find that (1) the presence of antidiagonal coherences (in the local basis) in the environment is a necessary condition for entangling two remote systems, and (2) maximally entangled two-qubit baths are an exceptional point without a unique steady state. For the general case of n-qubit environments, we show that coherences in maximally entangled baths (when expressed in the local energy basis) do not affect the system evolution in the weak coupling regime.
quant-ph
1807.02170
First Point-Spread Function and X-Ray Phase-Contrast Imaging Results With an 88-mm Diameter Single Crystal
In this study, we report initial demonstrations of the use of single crystals in indirect x-ray imaging with a benchtop implementation of propagation-based (PB) x-ray phase contrast imaging. Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point-spread function (PSF) with the 50-{\mu}m thick single crystal scintillators than with the reference polycrystalline phosphor/scintillator. Fiber-optic plate depth-of-focus and Al reflective-coating aspects are also elucidated. Guided by the results from the 25-mm diameter crystal samples, we report additionally the first results with a unique 88-mm diameter single crystal bonded to a fiber optic plate and coupled to the large format CCD. Both PSF and x-ray phase contrast imaging data are quantified and presented.
physics.ins-det physics.acc-ph
1807.02171
Analysis of continuous and discrete Wigner approximations for spin dynamics
We compare the continuous and discrete truncated Wigner approximations of various spin models' dynamics to exact analytical and numerical solutions. We account for all components of spin-spin correlations on equal footing, facilitated by a recently introduced geometric correlation matrix visualization technique [R. Mukherjee {\em et al.}, Phys. Rev. A {\bf 97}, 043606 (2018)]. We find that at modestly short times, the dominant error in both approximations is to substantially suppress spin correlations along one direction.
quant-ph cond-mat.quant-gas
1807.02172
Curved detectors developments and characterization: application to astronomical instruments
Many astronomical optical systems have the disadvantage of generating curved focal planes, requiring flattening optical elements to project the corrected image on flat detectors. The use of these designs in combination with a classical flat sensor implies an overall degradation of throughput and system performance to obtain the proper corrected image. With the recent development of curved sensors this can be avoided. This new technology has been gathering more and more attention from a very broad community, as the potential applications are numerous: from low-cost commercial to high-impact scientific systems, mass-market and on-board cameras, defense and security, and astronomy. We describe here the first concave curved CMOS detector developed within a collaboration between CNRS-LAM and CEA-LETI. This fully functional 20 Mpix detector (CMOSIS CMV20000) has been curved down to a radius of Rc = 150 mm over a size of 24x32 mm^2. We present here the methodology adopted for its characterization and describe in detail all the results obtained. We also discuss the main components of noise, such as the readout noise, the fixed pattern noise and the dark current. Finally we provide a comparison with the flat version of the same sensor in order to establish the impact of the curving process on the main characteristics of the sensor.
astro-ph.IM
1807.02173
A survey on subjecting electronic product code and non-ID objects to IP identification
Over the last decade, both research on the Internet of Things (IoT) and real-world IoT applications have grown exponentially. The IoT provides us with smarter cities, intelligent homes, and generally more comfortable lives. However, the introduction of these devices has led to several new challenges that must be addressed. One of the critical challenges in interacting with IoT devices is addressing billions of devices (things) around the world, including computers, tablets, smartphones, wearable devices, sensors, embedded computers, and so on. This article provides a survey on subjecting Electronic Product Code and non-ID objects to IP identification for IoT devices, including the advantages and disadvantages of each approach. Different metrics are proposed and used for evaluating these methods. In particular, the main methods are evaluated in terms of: (i) computational overhead, (ii) scalability, (iii) adaptability, (iv) implementation cost, and (v) whether they are applicable to already ID-based objects; the comparison is presented in tabular format. Finally, the article argues that this field of research will remain active, and that any new technique should perform favorably on the five evaluation criteria mentioned above.
cs.NI
1807.02174
Locally $p$-admissible measures on $\mathbb{R}$
In this note we show that locally $p$-admissible measures on $\mathbb{R}$ necessarily come from local Muckenhoupt $A_p$ weights. In the proof we employ the corresponding characterization of global $p$-admissible measures on $\mathbb{R}$ in terms of global $A_p$ weights due to Bj\"orn, Buckley and Keith, together with tools from analysis in metric spaces, more specifically preservation of the doubling condition and Poincar\'e inequalities under flattening, due to Durand-Cartagena and Li. As a consequence, the class of locally $p$-admissible weights on $\mathbb{R}$ is invariant under addition and satisfies the lattice property. We also show that measures that are $p$-admissible on an interval can be partially extended by periodical reflections to global $p$-admissible measures. Surprisingly, the $p$-admissibility has to hold on a larger interval than the reflected one, and an example shows that this is necessary.
math.MG math.FA
1807.02175
Adaptive Paired-Comparison Method for Subjective Video Quality Assessment on Mobile Devices
To effectively evaluate subjective visual quality in weakly-controlled environments, we propose an Adaptive Paired Comparison method based on particle filtering. As our approach requires each sample to be rated only once, the test time compared to regular paired comparison can be reduced. The method works with non-experts and improves reliability compared to MOS and DS-MOS methods.
stat.AP eess.IV
1807.02176
A stochastic Levenberg-Marquardt method using random models with complexity results
Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes have also been applied to problems where the objective value is subject to noise: this setting is of particular interest in fields such as data assimilation, where efficient methods that can adapt to noise are needed to account for the intrinsic uncertainty in the input data. In this paper, we describe a stochastic Levenberg-Marquardt algorithm that handles noisy objective function values and random models, provided sufficient accuracy is achieved in probability. Our method relies on a specific scaling of the regularization parameter, that allows us to leverage existing results for trust-region algorithms. Moreover, we exploit the structure of our objective through the use of a family of stationarity criteria tailored to least-squares problems. Provided the probability of accurate function estimates and models is sufficiently large, we bound the expected number of iterations needed to reach an approximate stationary point, which generalizes results based on using deterministic models or noiseless function values. We illustrate the link between our approach and several applications related to inverse problems and machine learning.
math.OC
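To fix ideas, here is a toy Python sketch of a Levenberg-Marquardt loop in which the residuals are evaluated with noise and a step is accepted only if the (noisy) decrease is a sufficient fraction of the model-predicted decrease, with the regularization parameter relaxed on success and inflated on rejection. The acceptance threshold, the update factors 1/2 and 4, and the toy exponential-fit problem are illustrative assumptions; they are not the specific regularization scaling or probabilistic accuracy conditions analyzed in the paper.

import numpy as np

def lm_step(J, r, gamma):
    """Levenberg-Marquardt step: minimize ||J s + r||^2 + gamma * ||s||^2 over s."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + gamma * np.eye(n), -J.T @ r)

def stochastic_lm(residual, jacobian, x0, gamma0=1.0, eta=1e-4, iters=50, rng=None):
    """Toy stochastic LM loop: accept a step only if the noisy decrease is sufficient."""
    rng = rng or np.random.default_rng(0)
    x, gamma = np.array(x0, dtype=float), gamma0
    for _ in range(iters):
        r, J = residual(x, rng), jacobian(x, rng)          # noisy model of the objective
        s = lm_step(J, r, gamma)
        predicted = 0.5 * (r @ r - np.linalg.norm(J @ s + r) ** 2)
        r_new = residual(x + s, rng)
        actual = 0.5 * (r @ r - r_new @ r_new)
        if predicted > 0 and actual >= eta * predicted:    # sufficient decrease: accept
            x, gamma = x + s, max(gamma / 2, 1e-8)
        else:                                              # reject and regularize more
            gamma *= 4
    return x

# Toy least-squares problem: fit y = exp(a*t) with noisy residual evaluations; true a = 0.7.
t = np.linspace(0.0, 1.0, 40)
y = np.exp(0.7 * t)
residual = lambda x, rng: np.exp(x[0] * t) - y + 1e-3 * rng.standard_normal(t.size)
jacobian = lambda x, rng: (t * np.exp(x[0] * t))[:, None]
print(stochastic_lm(residual, jacobian, [0.0]))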
1807.02177
Higher Structures in Homotopy Type Theory
The intended model of the homotopy type theories used in Univalent Foundations is the infinity-category of homotopy types, also known as infinity-groupoids. The problem of higher structures is that of constructing the homotopy types needed for mathematics, especially those that aren't sets. The current repertoire of constructions, including the usual type formers and higher inductive types, suffices for many but not all of these. We discuss the problematic cases, typically those involving an infinite hierarchy of coherence data such as semi-simplicial types, as well as the problem of developing the meta-theory of homotopy type theories in Univalent Foundations. We also discuss some proposed solutions.
math.LO
1807.02178
A Cheeger-M\"uller theorem for manifolds with wedge singularities
We study the spectrum and heat kernel of the Hodge Laplacian with coefficients in a flat bundle on a closed manifold degenerating to a manifold with wedge singularities. Provided the Hodge Laplacians in the fibers of the wedge have an appropriate spectral gap, we give uniform constructions of the resolvent and heat kernel on suitable manifolds with corners. When the wedge manifold and the base of the wedge are odd dimensional, this is used to obtain a Cheeger-M\"uller theorem relating analytic torsion with the Reidemeister torsion of the natural compactification by a manifold with boundary.
math.DG math.AP math.SP
1807.02179
Interpolating factorizations for acyclic Donaldson--Thomas invariants
We prove a family of factorization formulas for the combinatorial Donaldson--Thomas invariant for an acyclic quiver. A quantum dilogarithm identity due to Reineke, later interpreted by Rimanyi by counting codimensions of quiver loci, gives two extremal cases of our formulation in the Dynkin case. We establish our interpolating factorizations explicitly with a dimension counting argument by defining certain stratifications of the space of representations for the quiver and calculating Betti numbers in the corresponding equivariant cohomology algebras.
math.RT math.AG math.AT
1807.02180
Engineered Chiral Skyrmion and Skyrmionium States by the Gradient of Curvature
Curvilinear nanomagnets can support magnetic skyrmions stabilized at a local curvature without any intrinsic chiral interactions. Here, we propose a new mechanism to stabilize chiral N\'{e}el skyrmion states relying on the \textit{gradient} of curvature. We illustrate our approach with an example of a magnetic thin film with perpendicular magnetic anisotropy shaped as a circular indentation. We show that in addition to the topologically trivial ground state, there are two skyrmion states with winding numbers $\pm 1$ and a skyrmionium state with a winding number $0$. These chiral states are formed due to the pinning of a chiral magnetic domain wall at a bend of the nanoindentation due to spatial inhomogeneity of the curvature-induced Dzyaloshinskii--Moriya interaction. The latter emerges due to the gradient of the local curvature at a bend. While the chirality of the skyrmion is determined by the sign of the local curvature, its radius can be varied in a broad range by engineering the position of the bend with respect to the center of the nanoindentation. We propose a general method, which enables one to reduce a magnetic problem for any surface of revolution to the common planar problem by means of proper modification of constants of anisotropy and Dzyaloshinskii--Moriya interaction.
cond-mat.mes-hall cond-mat.str-el
1807.02181
The Conformal Anomaly and a new Exact RG
For scalar field theory, a new generalization of the Exact RG to curved space is proposed, in which the conformal anomaly is explicitly present. Vacuum terms require regularization beyond that present in the canonical formulation of the Exact RG, which can be accomplished by adding certain free fields, each at a non-critical fixed-point. Taking the Legendre transform, the sole effect of the regulator fields is to remove a divergent vacuum term and they do not explicitly appear in the effective average action. As an illustration, both the integrated conformal anomaly and Polyakov action are recovered for the Gaussian theory in d=2.
hep-th cond-mat.stat-mech gr-qc
1807.02182
Detector setup of the VIP2 Underground Experiment at LNGS
The VIP2 experiment tests the Pauli Exclusion Principle with high sensitivity, by searching for Pauli-forbidden atomic transitions from the 2p to the 1s shell in copper at about 8 keV. The transition energy of Pauli-forbidden K X-rays is shifted by about 300 eV with respect to the normal allowed Kα line. This energy difference can be resolved using Silicon Drift Detectors. The data for this experiment are taken in the Gran Sasso underground laboratory (LNGS), which provides shielding from cosmic radiation. An overview of the detection system of the VIP2 experiment will be given. This includes the Silicon Drift Detectors used as X-ray detectors, which provide an energy resolution of around 150 eV at 6 keV and timing information for active shielding. Furthermore, their low maintenance requirement makes them excellent X-ray detectors for use in an underground laboratory. The VIP2 setup will be discussed, which consists of a high-current target system and a passive as well as an active shielding system using plastic scintillators read out by Silicon Photomultipliers.
physics.ins-det quant-ph
1807.02183
Application of the Complex Kohn Variational Method to Attosecond Spectroscopy
The complex Kohn variational method is extended to compute light-driven electronic transitions between continuum wavefunctions in atomic and molecular systems. This development enables the study of multiphoton processes in the perturbative regime for arbitrary light polarization. As a proof of principle, we apply the method to compute the photoelectron spectrum arising from the pump-probe two-photon ionization of helium induced by a sequence of extreme ultraviolet and infrared-light pulses. We compare several two-photon ionization pump-probe spectra, resonant with the $(2s2p)\,{}^{1}P^{o}_{1}$ Feshbach resonance, with independent simulations based on the atomic B-spline close-coupling STOCK code, and find good agreement between the two approaches. This new finite-pulse perturbative approach is a step towards the ab initio study of weak-field attosecond processes in poly-electronic molecules.
physics.atom-ph
1807.02184
Slytherin: Dynamic, Network-assisted Prioritization of Tail Packets in Datacenter Networks
Datacenter applications demand both low latency and high throughput; while interactive applications (e.g., Web Search) demand low tail latency for their short messages due to their partition-aggregate software architecture, many data-intensive applications (e.g., Map-Reduce) require high throughput for long flows as they move vast amounts of data across the network. Recent proposals improve latency of short flows and throughput of long flows by addressing the shortcomings of existing packet scheduling and congestion control algorithms, respectively. We make the key observation that long tails in the Flow Completion Times (FCT) of short flows result from packets that suffer congestion at more than one switch along their paths in the network. Our proposal, Slytherin, specifically targets packets that suffered from congestion at multiple points and prioritizes them in the network. Slytherin leverages the ECN mechanism, which is widely used in existing datacenters, to identify such tail packets and dynamically prioritizes them using existing priority queues. As compared to existing state-of-the-art packet scheduling proposals, Slytherin achieves 18.6% lower 99th percentile flow completion times for short flows without any loss of throughput. Further, Slytherin drastically reduces 99th percentile queue length in switches by a factor of about 2 on average.
cs.NI
1807.02185
A new nonlocal nonlinear Schroedinger equation and its soliton solutions
A new integrable nonlocal nonlinear Schroedinger (NLS) equation with clear physical motivations is proposed. This equation is obtained from a special reduction of the Manakov system, and it describes Manakov solutions whose two components are related by a parity symmetry. Since the Manakov system governs wave propagation in a wide variety of physical systems, this new nonlocal equation has clear physical meanings. Solitons and multi-solitons in this nonlocal equation are also investigated in the framework of Riemann-Hilbert formulations. Surprisingly, symmetry relations of discrete scattering data for this equation are found to be very complicated, where constraints between eigenvectors in the scattering data depend on the number and locations of the underlying discrete eigenvalues in a very complex manner. As a consequence, general $N$-solitons are difficult to obtain in the Riemann-Hilbert framework. However, one- and two-solitons are derived, and their dynamics investigated. It is found that two-solitons are generally not a nonlinear superposition of one-solitons, and they exhibit interesting dynamics such as meandering and sudden position shifts. As a generalization, other integrable and physically meaningful nonlocal equations are also proposed, which include NLS equations of reverse-time and reverse-space-time types as well as nonlocal Manakov equations of reverse-space, reverse-time and reverse-space-time types.
nlin.SI
1807.02186
Holographic Complexity and Volume
The previously proposed "Complexity=Volume" or CV-duality is probed and developed in several directions. We show that the apparent lack of universality for large and small black holes is removed if the volume is measured in units of the maximal time from the horizon to the "final slice" (times Planck area). This also works for spinning black holes. We make use of the conserved "volume current", associated with a foliation of spacetime by maximal volume surfaces, whose flux measures their volume. This flux picture suggests that there is a transfer of the complexity from the UV to the IR in holographic CFTs, which is reminiscent of thermalization behavior deduced using holography. It also naturally gives a second law for the complexity when applied at a black hole horizon. We further establish a result supporting the conjecture that a boundary foliation determines a bulk maximal foliation without gaps, establish a global inequality on maximal volumes that can be used to deduce the monotonicity of the complexification rate on a boost-invariant background, and probe CV duality in the settings of multiple quenches, spinning black holes, and Rindler-AdS.
hep-th gr-qc
1807.02187
Encoding Motion Primitives for Autonomous Vehicles using Virtual Velocity Constraints and Neural Network Scheduling
Within the context of trajectory planning for autonomous vehicles this paper proposes methods for efficient encoding of motion primitives in neural networks on top of model-based and gradient-free reinforcement learning. We distinguish between 5 core aspects: system model, network architecture, training algorithm, training tasks selection and hardware/software implementation. For the system model, a kinematic (3-states-2-controls) and a dynamic (16-states-2-controls) vehicle model are compared. For the network architecture, 3 feedforward structures are compared, including weighted skip connections. For the training algorithm, virtual velocity constraints and network scheduling are proposed. For the training tasks, different feature vector selections are discussed. For the implementation, aspects of gradient-free learning using 1 GPU and the corresponding handling of perturbation noise are discussed. The effects of the proposed methods are illustrated in experiments encoding up to 14625 motion primitives. The capabilities of tiny neural networks with as few as 10 scalar parameters, when scheduled on vehicle velocity, are emphasized.
cs.RO cs.LG
1807.02188
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness
We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that the implicit generative modeling of random noise with the same loss function used during posterior maximization improves a model's understanding of the data manifold, furthering adversarial robustness. We evaluate our approach's efficacy and provide a simplistic visualization tool for understanding adversarial data, using Principal Component Analysis. Our analysis reveals that adversarial robustness, in general, manifests in models with higher variance along the high-ranked principal components. We show that models learnt with our approach perform remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art adversarial training extends the robustness of a model, even beyond what it is adversarially trained for, in both white-box and black-box attack scenarios.
cs.LG cs.CV stat.ML
1807.02189
Functional Object-Oriented Network: Construction & Expansion
We build upon the functional object-oriented network (FOON), a structured knowledge representation which is constructed from observations of human activities and manipulations. A FOON can be used for representing object-motion affordances. Knowledge retrieval through graph search allows us to obtain novel manipulation sequences using knowledge spanning across many video sources, which is the novelty of our approach. However, we are limited to the sources collected. To further improve the performance of knowledge retrieval, as a follow-up to our previous work, we discuss generalizing knowledge so that it can be applied to objects which are similar to those already in FOON without manually annotating new sources of knowledge. We discuss two means of generalization: 1) expanding our network through the use of object similarity to create new functional units from those we already have, and 2) compressing the functional units by object categories rather than specific objects. We discuss experiments which compare the performance of our knowledge retrieval algorithm with both expansion and compression by categories.
cs.RO cs.AI
1807.02190
Strong Numerical Methods of Orders 2.0, 2.5, and 3.0 for Ito Stochastic Differential Equations Based on the Unified Stochastic Taylor Expansions and Multiple Fourier-Legendre Series
The article is devoted to the construction of explicit one-step numerical methods with strong orders of convergence 2.0, 2.5, and 3.0 for Ito stochastic differential equations with multidimensional non-commutative noise. We consider numerical methods based on the unified Taylor-Ito and Taylor-Stratonovich expansions. For the numerical modeling of iterated Ito and Stratonovich stochastic integrals of multiplicities 1 to 6 we apply the method of multiple Fourier-Legendre series converging in the sense of the norm in the Hilbert space $L_2([t, T]^k),$ $k=1,\ldots,6$. The article is addressed to engineers who use numerical modeling in stochastic control and for solving the non-linear filtering problem. The article may also be of interest to mathematicians working in the field of high-order strong numerical methods for Ito stochastic differential equations.
math.PR
1807.02191
An MCMC Approach to Empirical Bayes Inference and Bayesian Sensitivity Analysis via Empirical Processes
Consider a Bayesian situation in which we observe $Y \sim p_{\theta}$, where $\theta \in \Theta$, and we have a family $\{ \nu_h, \, h \in \mathcal{H} \}$ of potential prior distributions on $\Theta$. Let $g$ be a real-valued function of $\theta$, and let $I_g(h)$ be the posterior expectation of $g(\theta)$ when the prior is $\nu_h$. We are interested in two problems: (i) selecting a particular value of $h$, and (ii) estimating the family of posterior expectations $\{ I_g(h), \, h \in \mathcal{H} \}$. Let $m_y(h)$ be the marginal likelihood of the hyperparameter $h$: $m_y(h) = \int p_{\theta}(y) \, \nu_h(d\theta)$. The empirical Bayes estimate of $h$ is, by definition, the value of $h$ that maximizes $m_y(h)$. It turns out that it is typically possible to use Markov chain Monte Carlo to form point estimates for $m_y(h)$ and $I_g(h)$ for each individual $h$ in a continuum, and also confidence intervals for $m_y(h)$ and $I_g(h)$ that are valid pointwise. However, we are interested in forming estimates, with confidence statements, of the entire families of integrals $\{ m_y(h), \, h \in \mathcal{H} \}$ and $\{ I_g(h), \, h \in \mathcal{H} \}$: we need estimates of the first family in order to carry out empirical Bayes inference, and we need estimates of the second family in order to do Bayesian sensitivity analysis. We establish strong consistency and functional central limit theorems for estimates of these families by using tools from empirical process theory. We give two applications, one to Latent Dirichlet Allocation, which is used in topic modelling, and the other is to a model for Bayesian variable selection in linear regression.
stat.ME
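As a minimal illustration of the marginal-likelihood computation described in the abstract above, the following sketch estimates $m_y(h)$ by plain Monte Carlo on a grid of hyperparameters and takes the maximizer as the empirical Bayes estimate. It is not the paper's MCMC/empirical-process methodology; the conjugate normal model, the prior family, and all variable names are assumptions introduced only for this example.

# Illustrative sketch (not the paper's algorithm): Monte Carlo estimation of the
# marginal likelihood m_y(h) = E_{theta ~ nu_h}[p_theta(y)] over a grid of h,
# followed by the empirical Bayes choice argmax_h m_y(h).
# Assumed toy model: y_i ~ N(theta, 1) i.i.d., prior nu_h = N(0, h) with h > 0.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=20)       # synthetic data

def log_lik(theta, y):
    # log p_theta(y) for i.i.d. N(theta, 1) observations (up to an additive constant)
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)

def log_marginal_likelihood(h, y, n_draws=20000):
    # crude Monte Carlo estimate of log m_y(h) using draws from the prior nu_h
    theta = rng.normal(0.0, np.sqrt(h), size=n_draws)
    ll = log_lik(theta, y)
    return np.log(np.exp(ll - ll.max()).mean()) + ll.max()   # stabilised log-mean-exp

h_grid = np.linspace(0.05, 5.0, 60)
log_m = np.array([log_marginal_likelihood(h, y) for h in h_grid])
h_eb = h_grid[np.argmax(log_m)]
print("empirical Bayes hyperparameter estimate:", round(h_eb, 3))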
1807.02192
A Survey of Knowledge Representation in Service Robotics
Within the realm of service robotics, researchers have placed a great amount of effort into learning, understanding, and representing motions as manipulations for task execution by robots. The task of robot learning and problem-solving is very broad, as it integrates a variety of tasks such as object detection, activity recognition, task/motion planning, localization, knowledge representation and retrieval, and the intertwining of perception/vision and machine learning techniques. In this paper, we solely focus on knowledge representations and notably how knowledge is typically gathered, represented, and reproduced to solve problems as done by researchers in the past decades. In accordance with the definition of knowledge representations, we discuss the key distinction between such representations and useful learning models that have extensively been introduced and studied in recent years, such as machine learning, deep learning, probabilistic modelling, and semantic graphical structures. Along with an overview of such tools, we discuss the problems which have existed in robot learning and how they have been built and used as solutions, technologies or developments (if any) which have contributed to solving them. Finally, we discuss key principles that should be considered when designing an effective knowledge representation.
cs.RO cs.AI
1807.02193
Mesonic excited states of magnetic monopoles in quantum spin ice
Spin ice magnetic monopoles are fractionalized emergent excitations in a class of frustrated magnets called spin ices. The classical spin ice model has an extensive number of ground state spin configurations, whereas magnetic monopoles can be thought of as the endpoints of string operators applied to these ground states. Introducing quantum fluctuations into the model induces monopoles with dynamics, which would normally lift the degeneracy of the two-monopole energy level at the linear order of the perturbation. Contrary to this expectation, we find that quantum fluctuations in the form of a locally transverse field term partially preserve the extensive degeneracy of the monopole pairs up to the much weaker splitting of the background monopole-free spin ice configurations. Each of these approximately degenerate excited states, termed mesons, can be represented as a bound monopole-antimonopole pair delocalized in a classical spin ice background.
cond-mat.str-el
1807.02194
An algorithm for enumerating difference sets
The DifSets package for GAP implements an algorithm for enumerating all difference sets in a group up to equivalence and provides access to a library of results. The algorithm functions by finding difference sums, which are potential images of difference sets in quotient groups of the original group, and searching their preimages. In this way, the search space can be dramatically decreased, and searches of groups of relatively large order (such as order 64 or order 96) can be completed.
math.CO
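For readers unfamiliar with the objects enumerated by the package described above, the sketch below brute-force checks the defining property of a difference set in a cyclic group. It is independent of the GAP DifSets package and its algorithm; the function name and the cyclic-group restriction are assumptions made only for illustration.

# Minimal sketch: brute-force check of the difference-set property in Z_v.
# D is a (v, k, lambda) difference set if every nonzero element of Z_v arises
# exactly lambda times as a difference d1 - d2 (mod v) with d1 != d2 in D.
from collections import Counter

def is_difference_set(D, v):
    counts = Counter((d1 - d2) % v for d1 in D for d2 in D if d1 != d2)
    lams = set(counts[g] for g in range(1, v))
    return len(lams) == 1, (lams.pop() if len(lams) == 1 else None)

# Classical example: the (7, 3, 1) difference set {1, 2, 4} in Z_7.
ok, lam = is_difference_set({1, 2, 4}, 7)
print(ok, lam)   # expected: True 1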
1807.02195
Polynomials in Base $x$ and the Prime-Irreducible Affinity
Arthur Cohn's irreducibility criterion for polynomials with integer coefficients and its generalization connect primes to irreducibles, and integral bases to the variable $x$. As we follow this link, we find that these polynomials are ready to spill two of their secrets: (i) There exists a unique "base-$x$" representation of such polynomials that makes the ring $\mathbb{Z}[x]$ into an ordered domain; and (ii) There is a 1-1 correspondence between positive rational primes $p$ and certain infinite sets of irreducible polynomials $f(x)$ that attain the value $p$ at sufficiently large $x$, each generated in finitely many steps from the $p$th cyclotomic polynomial. The base-$x$ representation provides practical conversion methods among numeric bases (not to mention a polynomial factorization algorithm), while the prime-irreducible correspondence puts a new angle on the Bouniakowsky Conjecture, a generalization of Dirichlet's Theorem on Primes in Arithmetic Progressions.
math.NT
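The numeric side of the base-$x$ idea mentioned above can be illustrated with a few lines of code: the base-$b$ digits of an integer are the coefficients of a polynomial evaluated at $x=b$, so base conversion amounts to re-expanding and re-evaluating that polynomial. The paper's base-$x$ representation of general polynomials in $\mathbb{Z}[x]$ is richer than this; the sketch below, with its helper names, only shows the elementary integer case.

# Illustrative sketch: digits of n in base b = coefficients of a polynomial f
# with 0 <= coeff < b and f(b) = n; conversion between bases re-uses f.
def to_base(n, b):
    # digits of n in base b, least significant first (coefficients of f)
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(r)
    return digits or [0]

def from_digits(digits, b):
    # Horner evaluation of the digit polynomial at x = b
    value = 0
    for d in reversed(digits):
        value = value * b + d
    return value

coeffs = to_base(2019, 7)                 # 2019 = f(7) for the digit polynomial f
print(coeffs, from_digits(coeffs, 7))     # round trip back to 2019
print(from_digits(coeffs, 10))            # the same digit string read in base 10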
1807.02196
Pairing symmetry and spontaneous vortex-antivortex lattice in superconducting twisted-bilayer graphene: Bogoliubov-de Gennes approach
We study the superconducting pairing symmetry in twisted bilayer graphene by solving the Bogoliubov-de Gennes equation for all electrons in moir\'{e} supercells. With increasing pairing potential, the system evolves from the mixed nontopological $d+id$ and $p+ip$ phase to the $s+p+d$ phase via a first-order phase transition. In the time-reversal-symmetry-breaking $d+id$ and $p+ip$ phase, vortex and antivortex lattices accompanied by a spontaneous supercurrent are induced by the twist. The superconducting order parameter is nonuniform in the moir\'{e} unit cell. Nevertheless, the superconducting gap in the local density of states is identical across the unit cell. The twist-induced vortices and the nontopological nature of the mixed $d+id$ and $p+ip$ phase are not captured by existing effective models. Our results suggest the importance of long-range pairing interactions for effective models.
cond-mat.str-el cond-mat.supr-con
1807.02197
Assessment of the impact of two-dimensional wall deformations' shape on high-speed boundary layer disturbances
Previous experimental and numerical studies showed that two-dimensional roughness elements can stabilize disturbances inside a hypersonic boundary layer, and eventually delay the transition onset. The objective of this paper is to evaluate the response of disturbances propagating inside a high-speed boundary layer to various two-dimensional surface deformations of different shapes. We perform an assessment of the impact of various two-dimensional surface non-uniformities, such as backward or forward steps, combinations of backward and forward steps, wavy surfaces, surface dips, and surface humps. Disturbances inside a Mach 5.92 flat-plate boundary layer are excited using periodic wall blowing and suction at an upstream location. The numerical tools consist of a high-accurate numerical algorithm solving for the unsteady, compressible form of the Navier-Stokes equations in curvilinear coordinates. Results show that all types of surface non-uniformities are able to reduce the amplitude of boundary layer disturbances to a certain degree. The amount of disturbance energy reduction is related to the type of pressure gradients that are posed by the deformation (adverse or favorable). A possible cause (among others) of the disturbance energy reduction inside the boundary layer is presumed to be the result of a partial deviation of the kinetic energy to the external flow, along the discontinuity that is generated by the wall deformation.
physics.flu-dyn
1807.02198
The Radius of Metric Subregularity
There is a basic paradigm, called here the radius of well-posedness, which quantifies the "distance" from a given well-posed problem to the set of ill-posed problems of the same kind. In variational analysis, well-posedness is often understood as a regularity property, which is usually employed to measure the effect of perturbations and approximations of a problem on its solutions. In this paper we focus on evaluating the radius of the property of metric subregularity which, in contrast to its siblings, metric regularity, strong regularity and strong subregularity, exhibits a more complicated behavior under various perturbations. We consider three kinds of perturbations: by Lipschitz continuous functions, by semismooth functions, and by smooth functions, obtaining different expressions/bounds for the radius of subregularity, which involve generalized derivatives of set-valued mappings. We also obtain different expressions when using either Frobenius or Euclidean norm to measure the radius. As an application, we evaluate the radius of subregularity of a general constraint system. Examples illustrate the theoretical findings.
math.OC
1807.02199
Measurements of Degree-Scale B-mode Polarization with the BICEP/Keck Experiments at South Pole
The BICEP and Keck Array experiments are a suite of small-aperture refracting telescopes observing the microwave sky from the South Pole. They target the degree-scale B-mode polarization signal imprinted in the Cosmic Microwave Background (CMB) by primordial gravitational waves. Such a measurement would shed light on the physics of the very early universe. While BICEP2 observed for the first time a B-mode signal at 150 GHz, higher frequencies from the Planck satellite showed that it could be entirely due to the polarized emission from Galactic dust, though uncertainty remained high. Keck Array has been observing the same region of the sky for several years, with an increased detector count, producing the deepest polarized CMB maps to date. New detectors at 95 GHz were installed in 2014, and at 220 GHz in 2015. These observations enable a better constraint of galactic foreground emissions, as presented here. In 2015, BICEP2 was replaced by BICEP3, a 10 times higher throughput telescope observing at 95 GHz, while Keck Array is now focusing on higher frequencies. In the near future, BICEP Array will replace Keck Array, and will allow unprecedented sensitivity to the gravitational wave signal. High resolution observations from the South Pole Telescope (SPT) will also be used to remove the lensing contribution to B-modes.
astro-ph.CO
1807.02200
Natural Language Processing for Music Knowledge Discovery
Today, a massive amount of musical knowledge is stored in written form, with testimonies dated as far back as several centuries ago. In this work, we present different Natural Language Processing (NLP) approaches to harness the potential of these text collections for automatic music knowledge discovery, covering different phases in a prototypical NLP pipeline, namely corpus compilation, text-mining, information extraction, knowledge graph generation and sentiment analysis. Each of these approaches is presented alongside different use cases (i.e., flamenco, Renaissance and popular music) where large collections of documents are processed, and conclusions stemming from data-driven analyses are presented and discussed.
cs.CL
1807.02201
Monte Carlo Methods for Insurance Risk Computation
In this paper we consider the problem of computing tail probabilities of the distribution of a random sum of positive random variables. We assume that the individual variables follow a reproducible natural exponential family (NEF) distribution, and that the random number has a NEF counting distribution with a cubic variance function. This specific modelling is supported by data on the aggregated claim distribution of an insurance company. Large tail probabilities are important as they reflect the risk of large losses; however, analytic or numerical expressions are not available. We propose several simulation algorithms which are based on an asymptotic analysis of the distribution of the counting variable and on the reproducibility property of the claim distribution. The aggregated sum is simulated efficiently by importance sampling using an exponential change of measure. We conclude with numerical experiments of these algorithms.
math.PR
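To make the "exponential change of measure" step above concrete, here is a simplified importance-sampling sketch for a tail probability of a sum of exponential claims with a fixed number of terms. The paper treats random sums with NEF claim and counting distributions; the fixed-n setting, the exponential claims, and the tilt choice below are assumptions used only to illustrate the tilting idea.

# Simplified sketch of importance sampling via exponential tilting, here for
# P(S > t) with S = X_1 + ... + X_n and X_i ~ Exp(rate=lam) i.i.d.
import numpy as np

rng = np.random.default_rng(1)
lam, n, t, n_sim = 1.0, 10, 30.0, 100_000

def tail_prob_tilted(theta):
    # Under the tilted measure, each claim is Exp(rate = lam - theta), 0 < theta < lam.
    X = rng.exponential(scale=1.0 / (lam - theta), size=(n_sim, n))
    S = X.sum(axis=1)
    # Likelihood ratio dP/dP_theta = (lam/(lam-theta))^n * exp(-theta * S)
    lr = (lam / (lam - theta)) ** n * np.exp(-theta * S)
    w = lr * (S > t)
    return w.mean(), w.std(ddof=1) / np.sqrt(n_sim)

# A common tilt: choose theta so that the tilted mean of S equals the threshold t.
theta_star = lam - n / t
est, se = tail_prob_tilted(theta_star)
print(f"P(S > {t}) ~ {est:.3e}  (std. error {se:.1e})")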
1807.02202
The price of debiasing automatic metrics in natural language evaluation
For evaluating generation systems, automatic metrics such as BLEU cost nothing to run but have been shown to correlate poorly with human judgment, leading to systematic bias against certain model improvements. On the other hand, averaging human judgments, the unbiased gold standard, is often too expensive. In this paper, we use control variates to combine automatic metrics with human evaluation to obtain an unbiased estimator with lower cost than human evaluation alone. In practice, however, we obtain only a 7-13% cost reduction on evaluating summarization and open-response question answering systems. We then prove that our estimator is optimal: there is no unbiased estimator with lower cost. Our theory further highlights the two fundamental bottlenecks---the automatic metric and the prompt shown to human evaluators---both of which need to be improved to obtain greater cost savings.
cs.CL
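The control-variates combination described above can be sketched generically: a small set of expensive human judgments is debiased-and-variance-reduced using a cheap automatic metric whose mean over the full pool of outputs is easy to compute. The synthetic data, the variable names, and the use of the plug-in optimal coefficient below are assumptions for illustration, not the paper's exact estimator or experimental setup.

# Generic control-variates sketch: combine a small sample of human judgments y
# with a cheap automatic metric m (pool mean known) to reduce estimator variance.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pool of system outputs: metric scores available for everything.
N_pool, n_label = 50_000, 300
quality = rng.beta(2, 2, size=N_pool)                 # latent true quality
metric = quality + rng.normal(0, 0.15, size=N_pool)   # noisy automatic metric
mu_metric = metric.mean()                             # cheap: computed on the whole pool

# Expensive human judgments on a small random subset.
idx = rng.choice(N_pool, size=n_label, replace=False)
y = quality[idx] + rng.normal(0, 0.10, size=n_label)  # noisy human judgment
m = metric[idx]

c_opt = np.cov(y, m)[0, 1] / np.var(m, ddof=1)        # plug-in control-variate coefficient
plain = y.mean()
cv = y.mean() - c_opt * (m.mean() - mu_metric)        # control-variates estimate of mean judgment
print(f"plain mean: {plain:.4f}   control-variates mean: {cv:.4f}")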
1807.02203
Colloquium: Quantum skyrmionics
Skyrmions are topological solitons that emerge in many physical contexts. In magnetism, they appear as textures of the spin-density field stabilized by different competing interactions and characterized by a topological charge that counts the number of times the order parameter wraps the sphere. They can behave as classical objects when the spin texture varies slowly on the scale of the microscopic lattice of the magnet. However, the fast development of experimental tools to create and stabilize skyrmions in thin magnetic films has led to a rich variety of textures, sometimes of atomistic sizes. In this article, we discuss, in a pedagogical manner, how to introduce quantum interference in the translational dynamics of skyrmion textures, starting from the micromagnetic equations of motion for a classical soliton. We study how the nontrivial topology of the spin texture manifests in the semiclassical regime, when the microscopic lattice potential is treated quantum-mechanically, but the external driving forces are taken as smooth classical perturbations. We highlight close relations to the fields of noncommutative quantum mechanics, Chern-Simons theories, and the quantum Hall effect.
cond-mat.mes-hall
1807.02204
Neutrino masses, cosmological inflation and dark matter in a $U(1)_{B-L}$ model with type II seesaw mechanism
In this work we implement the type II seesaw mechanism into the framework of the $U(1)_{B-L}$ gauge model. As the main gain, the right-handed neutrinos of the model are freed to play the role of the dark matter of the universe. As a side effect, the model realizes Higgs inflation without problems with loss of unitarity.
hep-ph
1807.02205
OSDF: An Intent-based Software Defined Network Programming Framework
Software Defined Networking (SDN) offers flexibility to program a network based on a set of network requirements. Programming the networks using SDN is not completely straightforward because a programmer must deal with low level details. To solve the problem, researchers proposed a set of network programming languages that provide a set of high level abstractions to hide low level hardware details. Most of the proposed languages provide abstractions related to packet processing and flows, and still require a programmer to specify low-level match-action fields to configure and monitor a network. Recently, in an attempt to raise the level at which programmers work, researchers have begun to investigate Intent-based, descriptive northbound interfaces. The work is still in early stages, and further investigation is required before intent-based systems will be adopted by enterprise networks. To help achieve the goal of moving to an intent-based design, we propose an SDN-based network programming framework, the Open Software Defined Framework (OSDF). OSDF provides a high level Application Programming Interface (API) that can be used by managers and network administrators to express network requirements for applications and policies for multiple domains. OSDF also provides a set of high level network operation services that handle common network configuration, monitoring, and Quality of Service (QoS) provisioning. OSDF is equipped with a policy conflict management module to help a network administrator detect and resolve policy conflicts. The paper shows how OSDF can be used and explains application-based policies. Finally, the paper reports the results of both testbed measurements and simulations that are used to evaluate the framework from multiple perspectives, including functionality and performance.
cs.NI
1807.02206
On forcing projective generic absoluteness from strong cardinals
W.H. Woodin showed that if $\kappa_1 < \cdots < \kappa_n$ are strong cardinals then two-step ${\bf\Sigma}^1_{n+3}$ generic absoluteness holds after collapsing $2^{2^{\kappa_n}}$ to be countable. We show that this number can be reduced to $2^{\kappa_n}$, and to $\kappa_n^+$ in the case $n = 1$, but cannot be further reduced to $\kappa_n$.
math.LO
1807.02207
Weakly remarkable cardinals, Erd\H{o}s cardinals, and the generic Vop\v{e}nka principle
We consider a weak version of Schindler's remarkable cardinals that may fail to be $\Sigma_2$-reflecting. We show that the $\Sigma_2$-reflecting weakly remarkable cardinals are exactly the remarkable cardinals, and we show that the existence of a non-$\Sigma_2$-reflecting weakly remarkable cardinal has higher consistency strength: it is equiconsistent with the existence of an $\omega$-Erd\H{o}s cardinal. We give an application involving gVP, the generic Vop\v{e}nka principle defined by Bagaria, Gitman, and Schindler. Namely, we show that gVP + "Ord is not $\Delta_2$-Mahlo" and $\text{gVP}({\bf\Pi}_1)$ + "there is no proper class of remarkable cardinals" are both equiconsistent with the existence of a proper class of $\omega$-Erd\H{o}s cardinals, extending results of Bagaria, Gitman, Hamkins, and Schindler.
math.LO
1807.02208
Generic Vop\v{e}nka cardinals and models of ZF with few $\aleph_1$-Suslin sets
We define a generic Vop\v{e}nka cardinal to be an inaccessible cardinal $\kappa$ such that for every first-order language $\mathcal{L}$ of cardinality less than $\kappa$ and every set $\mathscr{B}$ of $\mathcal{L}$-structures, if $|\mathscr{B}| = \kappa$ and every structure in $\mathscr{B}$ has cardinality less than $\kappa$, then an elementary embedding between two structures in $\mathscr{B}$ exists in some generic extension of $V$. We investigate connections between generic Vop\v{e}nka cardinals in models of ZFC and the number and complexity of $\aleph_1$-Suslin sets of reals in models of ZF. In particular, we show that ZFC + (there is a generic Vop\v{e}nka cardinal) is equiconsistent with ZF + $(2^{\aleph_1} \not\leq |S_{\aleph_1}|)$ where $S_{\aleph_1}$ is the pointclass of all $\aleph_1$-Suslin sets of reals, and also with ZF + $(S_{\aleph_1} = {\bf\Sigma}^1_2)$ + $(\Theta = \aleph_2)$ where $\Theta$ is the least ordinal that is not a surjective image of the reals.
math.LO
1807.02209
Strategic enhancement of anomalous Nernst effect in Co2MnAl1-xSix Heusler compounds
The anomalous Nernst effect (ANE), a thermoelectric phenomenon in magnetic materials, has potential for novel energy harvesting applications because its orthogonal relationship between the temperature gradient and the electric field enables us to utilize the large area of non-flat heat sources. In this study, the required thermopower of ANE for practical energy harvesting applications is evaluated in simulations of electric power obtainable from ANE using a low-temperature heat source. A strategy for finding magnetic materials having large ANEs is proposed, which suggests that a large ANE originates from the constructive relationship between large Seebeck, anomalous Hall, and transverse Peltier effects. This strategy leads us to investigate the electric and thermoelectric properties in Co2MnAl1-xSix. As a result, it is found that Co2MnAl0.63Si0.37 has the largest ANE thermopower ever reported, 6.2 uV/K. A first principles calculation of the anomalous Hall conductivity and transverse Peltier coefficient in B2 and L21-ordered Co2MnAl0.63Si0.37 shows close agreement with the experimental results, indicating that the observed large ANE arises from the intrinsic Berry phase curvature of Co2MnAl1-xSix. This study can be a guide for developing materials for new thermoelectric applications using ANE.
cond-mat.mtrl-sci
1807.02210
Molecular Gas and Dark Neutral Medium in the Outskirts of Chamaeleon
Context: More gas is inferred to be present in molecular cloud complexes than can be accounted for by HI and CO emission, a phenomenon known as dark neutral medium (DNM) or, for the molecules, CO-dark gas. Aims: To see whether molecular gas can be detected in Chamaeleon along sightlines where gas column densities in the DNM were inferred but CO emission was not detected. Methods: We took 3 mm absorption profiles of HCO+ and other molecules toward quasars across Chamaeleon, 1 of which had detectable CO emission. We derived N(H2) assuming N(HCO+)/N(H2) = 3x10^{-9}. Results: With the possible exception of 1 weak continuum target, HCO+ absorption was detected in all directions, C2H in 8 and HCN in 4 directions. The sightlines divide into 2 groups according to their DNM content, with 1 group of 8 directions having N(DNM) >~ 2x10^{20} cm^-2 and another group of 5 directions having N(DNM) < 0.5x10^{20} cm^-2. The groups have comparable <N(HI)> in Chamaeleon, 6-7x10^{20} cm^-2, and <N(H)>/E(B-V) ~ 6-7x10^{21} cm^-2/mag. They differ in having quite different <E(B-V)>, 0.33 vs 0.18 mag, <N(DNM)>, 3.3 vs 0.14x10^{20} cm^-2, and <2N(H2)> = 5.6 vs 0.8x10^{20} cm^-2. Gas at more positive velocities is enriched in molecules and DNM. Conclusions: Overall the quantity of H2 inferred from HCO+ fully accounts for the previously-inferred DNM along the studied sightlines. H2 is concentrated in the high-DNM group, where the molecular fraction is 46% vs. 13% otherwise and 38% overall. Thus, neutral gas in the outskirts of the complex is mostly atomic but the DNM is mostly molecular. Saturation of the HI emission may occur along 3 of the 4 sightlines having the largest DNM column densities, but there is no substantial reservoir of 'dark' atomic or molecular gas that remains undetected as part of the inventory of dark neutral medium.
astro-ph.GA
1807.02211
Scalar resonance in a top partner model
The phenomenology entailed by a scalar resonance in a top partner model is analysed here in an $SO(5)$ Composite Higgs formalism. The production of heavy scalar resonances and their decay modes are explored over a benchmark resonance mass range. The production of single and double partner final states has been scanned along the partner mass scale. Such production is driven by QCD, by the SM gauge and Higgs interactions, and by the intermediation of the scalar resonance. Non-zero contributions are induced as long as extra fermion-resonance effects are included. Finally, we have excluded regions of the parameter space underlying our framework by imposing the recent LHC searches for vector-like quark production in $pp$-collisions at 13 TeV. A substantial reduction of the allowed regions occurs when extra fermion-resonance effects are accounted for, leading us to test the parametric dependence involved in light of the new matter interactions.
hep-ph
1807.02212
On the exact variance of Tsallis entropy in a random pure state
Tsallis entropy is a useful one-parameter generalization of the standard von Neumann entropy in information theory. We study the variance of Tsallis entropy of bipartite quantum systems in a random pure state. The main result is an exact variance formula of Tsallis entropy that involves finite sums of some terminating hypergeometric functions. In the special cases of quadratic entropy and small subsystem dimensions, the main result is further simplified to explicit variance expressions. As a byproduct, we find an independent proof of the recently proved variance formula of von Neumann entropy based on the derived moment relation to the Tsallis entropy.
math-ph math.MP
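The quantity studied above is easy to cross-check numerically: sample random bipartite pure states, form the reduced state, and evaluate the Tsallis entropy $S_q = (1 - \mathrm{Tr}\,\rho^q)/(q-1)$. The paper derives the variance analytically; the sketch below, including the induced-measure sampling scheme and the chosen dimensions, is only an assumed Monte Carlo illustration.

# Numerical sketch: Tsallis entropy of the reduced state of a random bipartite
# pure state, with its mean and variance estimated by simulation.
import numpy as np

rng = np.random.default_rng(3)

def tsallis_entropy_samples(m, n, q, n_samples=20_000):
    vals = np.empty(n_samples)
    for i in range(n_samples):
        # random pure state on C^m x C^n from a normalized complex Gaussian matrix
        G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        G /= np.linalg.norm(G)
        lam = np.linalg.svd(G, compute_uv=False) ** 2   # eigenvalues of the reduced state
        vals[i] = (1.0 - np.sum(lam ** q)) / (q - 1.0)  # Tsallis entropy, q != 1
    return vals

s = tsallis_entropy_samples(m=3, n=4, q=2.0)
print("mean:", s.mean(), "variance:", s.var(ddof=1))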
1807.02213
The consistency strength of the perfect set property for universally Baire sets of reals
We show that the statement "every universally Baire set of reals has the perfect set property" is equiconsistent modulo ZFC with the existence of a cardinal that we call a virtually Shelah cardinal. These cardinals resemble Shelah cardinals but are much weaker: if $0^\sharp$ exists then every Silver indiscernible is virtually Shelah in $L$. We also show that the statement $\text{uB} = {\bf\Delta}^1_2$, where $\text{uB}$ is the pointclass of all universally Baire sets of reals, is equiconsistent modulo ZFC with the existence of a $\Sigma_2$-reflecting virtually Shelah cardinal.
math.LO
1807.02214
Low-Frequency Raman Spectroscopy of Few-Layer 2H-SnS2
We investigated interlayer phonon modes of mechanically exfoliated few-layer 2H-SnS2 samples by using room-temperature low-frequency micro-Raman spectroscopy. Raman measurements were performed using laser wavelengths of 441.6, 514.4, 532 and 632.8 nm with power below 100 uW and inside a vacuum chamber to avoid photo-oxidation. The intralayer Eg and A1g modes are observed at ~206 cm-1 and ~314 cm-1, respectively, but the Eg mode is much weaker for all excitation energies. The A1g mode exhibits strong resonant enhancement for the 532 nm (2.33 eV) laser. In the low-frequency region, interlayer shear and breathing vibrational modes are observed. These modes show a characteristic dependence on the number of layers. The strengths of the interlayer interactions are estimated by fitting the interlayer mode frequencies using the linear chain model and are found to be 1.64 x10^19 N.m-3 and 5.03 x10^19 N.m-3 for the shear and breathing modes, respectively.
cond-mat.mtrl-sci
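For reference, a common form of the linear chain model used for such fits is reproduced below, assuming $N$ identical layers of areal mass density $\mu$ coupled by a force constant per unit area $k$ (the authors' exact convention and normalization may differ):

% Standard linear-chain-model dispersion for N coupled identical layers
% (assumed form; stated here only as background for the fit described above).
\[
  \omega_j \;=\; 2\sqrt{\frac{k}{\mu}}\,\left|\sin\!\left(\frac{j\pi}{2N}\right)\right| ,
  \qquad j = 0, 1, \dots, N-1 ,
\]
% The highest branch approaches 2*sqrt(k/mu) in the bulk limit, and the Raman
% shift follows from the angular frequency via nu_tilde_j = omega_j / (2*pi*c).

Fitting the measured layer-number dependence of the shear and breathing modes to this expression, with the appropriate $\mu$ for a monolayer, is what yields force constants of the kind quoted in the abstract above.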
1807.02215
Empirical distributions of the robustified $t$-test statistics
Based on the median and median absolute deviation estimators, and the Hodges-Lehmann and Shamos estimators, robustified analogues of the conventional $t$-test statistic are proposed. The asymptotic distributions of these statistics have recently been provided. However, when the sample size is small, it is not appropriate to use the asymptotic distribution of the robustified $t$-test statistics for statistical inference, including hypothesis testing, confidence intervals, p-values, etc. In this article, through extensive Monte Carlo simulations, we obtain the empirical distributions of the robustified $t$-test statistics and their quantile values. These quantile values can then be used for making statistical inferences.
stat.AP
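The simulation recipe described above can be sketched for one median/MAD analogue of the $t$ statistic: simulate many small normal samples under the null, compute the statistic, and read off empirical quantiles. The particular standardization below (and the chosen constants) is an assumption for illustration; the paper's exact statistics, including the Hodges-Lehmann/Shamos versions, may be defined differently.

# Illustrative sketch: a median/MAD analogue of the t statistic and Monte Carlo
# approximation of its small-sample null distribution and quantiles.
import numpy as np

rng = np.random.default_rng(4)

def robust_t(x, mu0=0.0):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # consistent for the normal sd
    return np.sqrt(len(x)) * (med - mu0) / mad

def null_quantiles(n, probs=(0.025, 0.975), n_sim=50_000):
    stats = np.array([robust_t(rng.standard_normal(n)) for _ in range(n_sim)])
    return np.quantile(stats, probs)

print("n = 10, empirical 2.5%/97.5% quantiles:", null_quantiles(10))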
1807.02216
On some novel features of the Kerr-Newman-NUT Spacetime
In this work we have presented a special class of Kerr-Newman-NUT black hole, having its horizon located precisely at $r=2M$, for $Q^{2}=l^{2}-a^{2}$, where $M$, $l$, $a$ and $Q$ are respectively mass, NUT, rotation and electric charge parameters of the black hole. Clearly this choice radically alters the causal structure as there exists no Cauchy horizon indicating spacelike nature of the singularity when it exists. On the other hand, there is no curvature singularity for $l^2 > a^2$, however it may have conical singularities. Furthermore there is no upper bound on specific rotation parameter $a/M$, which could exceed unity without risking destruction of the horizon. To bring out various discerning features of this special member of the Kerr-Newman-NUT family, we study timelike and null geodesics in the equatorial as well as off the equatorial plane, energy extraction through super-radiance and Penrose process, thermodynamical properties and also the quasi-periodic oscillations. It turns out that the black hole under study radiates less energy through the super-radiant modes and Penrose process than the other black holes in this family.
gr-qc hep-th
1807.02217
TTV-determined Masses for Warm Jupiters and their Close Planetary Companions
Although the formation and the properties of hot Jupiters (with orbital periods $P<10\,$d) have attracted a great deal of attention, the origins of warm Jupiters ($10<P<100\,$d) are less well-studied. Using a transit timing analysis, we present the orbital parameters of five planetary systems containing warm Jupiters: Kepler-30, Kepler-117, Kepler-302, Kepler-487 and Kepler-418. Three of them, Kepler-30 c ($M_p=549.4{\pm{5.6}}M_{\oplus}$), Kepler-117 c ($M_p=702{\pm{63}}M_{\oplus}$) and Kepler-302 c ($M_p=933\pm 527M_{\oplus}$), are confirmed to be real warm Jupiters based on their mass. Insights drawn from the radius-temperature relationship lead to the inference that hot Jupiters and warm Jupiters can be roughly separated by ${T_{\rm eff,c}} =1123.7\pm3.3$ K. Also, ${T_{\rm eff,c}}$ provides a good separation between Jupiters with companion fraction consistent with zero ($T_{\rm eff}>T_{\rm eff,c}$) and those with companion fraction significantly different from zero ($T_{\rm eff}<T_{\rm eff,c}$).
astro-ph.EP
1807.02218
Sampling basis in reproducing kernel Banach spaces
We present necessary and sufficient conditions for a Kramer-type sampling theorem to hold over semi-inner product reproducing kernel Banach spaces. Under some sampling-type hypotheses on a sequence of functions in these Banach spaces, it follows that such a sequence must be an $X_d$-Riesz basis and a sampling basis for the space. These results generalize some already known sampling theorems over reproducing kernel Hilbert spaces.
math.FA
1807.02219
Analysis of Probabilistic and Parametric Reduced Order Models
Stochastic models share many characteristics with generic parametric models. In some ways they can be regarded as a special case. But for stochastic models there is a notion of weak distribution or generalised random variable, and the same arguments can be used to analyse parametric models. Such models in vector spaces are connected to a linear map, and in infinite-dimensional spaces are a true generalisation. Reproducing kernel Hilbert spaces and affine/linear representations in terms of tensor products are directly related to this linear operator. This linear map leads to a generalised correlation operator, and representations are connected with factorisations of the correlation operator. To make this point of view as simple as possible, the fitting counterpart in the stochastic domain is an algebra of random variables with a distinguished linear functional, the state, which is interpreted as expectation. The connections of factorisations of the generalised correlation to the spectral decomposition, as well as to the associated Karhunen-Lo\`eve or proper orthogonal decomposition, will be sketched. The purpose of this short note is to show the common theoretical background and to pull some loose ends together.
math.NA
1807.02220
Arithmetic actions on cyclotomic function fields
We derive the group structure for cyclotomic function fields obtained by applying the Carlitz action for extensions of an initial constant field. The tame and wild structures are isolated to describe the Galois action on differentials. We show that the associated invariant rings are not polynomial.
math.NT
1807.02221
The Data Science of Hollywood: Using Emotional Arcs of Movies to Drive Business Model Innovation in Entertainment Industries
Much of business literature addresses the issues of consumer-centric design: how can businesses design customized services and products which accurately reflect consumer preferences? This paper uses data science natural language processing methodology to explore whether and to what extent emotions shape consumer preferences for media and entertainment content. Using a unique filtered dataset of 6,174 movie scripts, we generate a mapping of screen content to capture the emotional trajectory of each motion picture. We then combine the obtained mappings into clusters which represent groupings of consumer emotional journeys. These clusters are used to predict overall success parameters of the movies including box office revenues, viewer satisfaction levels (captured by IMDb ratings), awards, as well as the number of viewers' and critics' reviews. We find that, like books, all movie stories are dominated by 6 basic shapes. The highest box office revenues are associated with the Man in a Hole shape, which is characterized by an emotional fall followed by an emotional rise. This shape results in financially successful movies irrespective of genre and production budget. Yet, Man in a Hole succeeds not because it produces the most "liked" movies but because it generates the most "talked about" movies. Interestingly, a carefully chosen combination of production budget and genre may produce a financially successful movie with any emotional shape. Implications of this analysis for generating on-demand content and for driving business model innovation in entertainment industries are discussed.
cs.CL cs.CY
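The generic pipeline behind "emotional arcs" of the kind described above can be sketched as: segment a script, score each segment with some sentiment function, resample the scores into a fixed-length arc, and cluster the arcs. The sentiment scorer, the toy scripts, and the clustering choice below are all placeholders, not the paper's actual lexicon, data, or method.

# Sketch of a generic emotional-arc pipeline (placeholders throughout).
import numpy as np
from sklearn.cluster import KMeans

def sentiment_score(text):
    # Placeholder sentiment scorer; a real pipeline would use a lexicon or model.
    positive, negative = {"love", "joy", "win"}, {"fear", "loss", "fight"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def emotional_arc(script, n_points=50, n_segments=200):
    words = script.split()
    seg_len = max(1, len(words) // n_segments)
    segments = [" ".join(words[i:i + seg_len]) for i in range(0, len(words), seg_len)]
    raw = np.array([sentiment_score(s) for s in segments], dtype=float)
    # resample to a common length so arcs of different scripts are comparable
    return np.interp(np.linspace(0, len(raw) - 1, n_points), np.arange(len(raw)), raw)

scripts = ["joy love win " * 100 + "fear loss fight " * 100,   # rise-then-fall toy script
           "fear loss fight " * 100 + "joy love win " * 100]   # fall-then-rise toy script
arcs = np.vstack([emotional_arc(s) for s in scripts])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(arcs)
print(labels)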
1807.02222
Digital Geometry, a Survey
This paper provides an overview of modern digital geometry and topology through mathematical principles, algorithms, and measurements. It also covers recent developments in the applications of digital geometry and topology, including image processing, computer vision, and data science. Recent research has strongly shown that digital geometry has made considerable contributions to modeling and algorithms in image segmentation, algorithmic analysis, and big data analytics.
cs.DM cs.GR
1807.02223
Existence theory for non-separable mean field games in Sobolev spaces
The mean field games system is a coupled pair of nonlinear partial differential equations arising in differential game theory, as a limit as the number of agents tends to infinity. We prove existence and uniqueness of classical solutions for time-dependent mean field games with Sobolev data. Many works in the literature assume additive separability of the Hamiltonian, as well as further structure such as convexity and monotonicity of the resulting components. Problems arising in practice, however, may not have this separable structure; we therefore consider the non-separable problem. For our existence and uniqueness results, we introduce new smallness constraints which simultaneously consider the size of the time horizon, the size of the data, and the strength of the coupling in the system.
math.AP
1807.02224
Cooperative Adaptive Cruise Control for a Platoon of Connected and Autonomous Vehicles Considering Dynamic Information Flow Topology
Vehicle-to-vehicle communications can be unreliable as interference causes communication failures. Thereby, the information flow topology for a platoon of Connected Autonomous Vehicles (CAVs) can vary dynamically. This limits existing Cooperative Adaptive Cruise Control (CACC) strategies, as most of them assume a fixed information flow topology (IFT). To address this problem, we introduce a CACC design that considers a dynamic information flow topology (CACC-DIFT) for CAV platoons. An adaptive Proportional-Derivative (PD) controller under a two-predecessor-following IFT is proposed to reduce the negative effects when communication failures occur. The PD controller parameters are determined to ensure the string stability of the platoon. Further, the designed controller also factors in the performance of individual vehicles. Hence, when a communication failure occurs, the system switches to a certain type of CACC instead of degenerating to adaptive cruise control, which improves the control performance considerably. The effectiveness of the proposed CACC-DIFT is validated through numerical experiments based on NGSIM field data. Results indicate that the proposed CACC-DIFT design outperforms a CACC with a predetermined information flow topology.
cs.SY
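A bare-bones sketch of the control idea described above is a PD spacing controller that uses two predecessors' states when the V2V links are available and falls back to the immediate predecessor when a link fails. The gains, spacing policy, weighting, and switching rule below are placeholders; the paper's controller is adaptive and its parameters are chosen to guarantee string stability.

# Minimal sketch (not the paper's design): PD spacing control for follower i
# under a two-predecessor-following topology, with fallback on link failure.
def pd_acceleration(i, pos, vel, links_ok, d_des=20.0, kp=0.45, kd=0.25):
    """Commanded acceleration for vehicle i (i >= 1), given position/velocity lists."""
    def spacing_error(j, gap_mult):
        # spacing and speed error w.r.t. predecessor j, desired gap scaled by cars ahead
        return (pos[j] - pos[i] - gap_mult * d_des), (vel[j] - vel[i])

    e1, de1 = spacing_error(i - 1, 1)                 # immediate predecessor
    if i >= 2 and links_ok.get((i, i - 2), False):    # two-predecessor-following mode
        e2, de2 = spacing_error(i - 2, 2)
        e, de = 0.5 * (e1 + e2), 0.5 * (de1 + de2)
    else:                                             # degraded mode: one predecessor only
        e, de = e1, de1
    return kp * e + kd * de

# Toy usage: three-vehicle platoon where the link from vehicle 2 back to vehicle 0 is down.
pos, vel = [100.0, 78.0, 55.0], [20.0, 19.5, 19.0]
links = {(2, 0): False}
print(pd_acceleration(1, pos, vel, links), pd_acceleration(2, pos, vel, links))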