Dataset schema (each record below gives ID, TITLE, ABSTRACT, and a Labels line with the six binary topic flags):
ID: int64, values 1 to 21k
TITLE: string, lengths 7 to 239
ABSTRACT: string, lengths 7 to 2.76k
Computer Science: int64, 0 or 1
Physics: int64, 0 or 1
Mathematics: int64, 0 or 1
Statistics: int64, 0 or 1
Quantitative Biology: int64, 0 or 1
Quantitative Finance: int64, 0 or 1
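The schema above describes a multi-label topic-classification table. As a minimal illustrative sketch (not part of the original dataset release), the snippet below shows how such a table could be loaded and the six 0/1 topic flags of each record decoded; the filename arxiv_topics.csv and the use of pandas are assumptions for illustration only.

```python
# Minimal sketch, assuming the records are exported as a CSV file with the
# columns listed in the schema above; "arxiv_topics.csv" is a hypothetical name.
import pandas as pd

LABEL_COLUMNS = [
    "Computer Science", "Physics", "Mathematics",
    "Statistics", "Quantitative Biology", "Quantitative Finance",
]

df = pd.read_csv("arxiv_topics.csv")

# Decode the six binary flags of each record into a readable list of topics.
df["Labels"] = df[LABEL_COLUMNS].apply(
    lambda row: [name for name, flag in row.items() if flag == 1],
    axis=1,
)

# Example query: records tagged with both Computer Science and Statistics.
cs_stats = df[(df["Computer Science"] == 1) & (df["Statistics"] == 1)]
print(cs_stats[["ID", "TITLE", "Labels"]].head())
```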
20,301
The interplay between Steinberg algebras and partial skew rings
We study the interplay between Steinberg algebras and partial skew rings: For a partial action of a group in a Hausdorff, locally compact, totally disconnected topological space, we realize the associated partial skew group ring as a Steinberg algebra (over the transformation groupoid attached to the partial action). We then apply this realization to characterize diagonal preserving isomorphisms of partial skew group rings, over commutative algebras, in terms of continuous orbit equivalence of the associated partial actions. Finally, we show that any Steinberg algebra, associated to a Hausdorff ample groupoid, can be seen as a partial skew inverse semigroup ring.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,302
Type-II Dirac Photons
The Dirac equation for relativistic electron waves is the parent model for Weyl and Majorana fermions as well as topological insulators. Simulation of Dirac physics in three-dimensional photonic crystals, though fundamentally important for topological phenomena at optical frequencies, encounters the challenge of synthesis of both Kramers double degeneracy and parity inversion. Here we show how type-II Dirac points---exotic Dirac relativistic waves yet to be discovered---are robustly realized through the nonsymmorphic screw symmetry. The emergent type-II Dirac points carry nontrivial topology and are the mother states of type-II Weyl points. The proposed all-dielectric architecture enables robust cavity states at photonic-crystal---air interfaces and anomalous refraction, with very low energy dissipation.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,303
Improving Network Robustness against Adversarial Attacks with Compact Convolution
Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to misclassify the sample. In this paper, we focus on neutralizing adversarial attacks by compact feature learning. In particular, we show that learning features in a closed and bounded space improves the robustness of the network. We explore the effect of L2-Softmax Loss, which enforces compactness in the learned features and thus results in enhanced robustness to adversarial perturbations. Additionally, we propose compact convolution, a novel method of convolution that, when incorporated into conventional CNNs, improves their robustness. Compact convolution ensures feature compactness at every layer such that the features are bounded and close to each other. Extensive experiments show that Compact Convolutional Networks (CCNs) neutralize multiple types of attacks, and perform better than existing methods in defending against adversarial attacks, without incurring any additional training overhead compared to CNNs.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,304
From sudden quench to adiabatic dynamics in the attractive Hubbard model
We study the crossover between the sudden quench limit and the adiabatic dynamics of superconducting states in the attractive Hubbard model. We focus on the dynamics induced by the change of the attractive interaction during a finite ramp time which is varied in order to track the evolution of the dynamical phase diagram from the sudden quench to the equilibrium limit. Two different dynamical regimes are realized for quenches towards weak and strong coupling interactions. At weak coupling the dynamics depends only on the energy injected into the system, whereas a dynamics retaining memory of the initial state takes place at strong coupling. We show that this is related to a sharp transition between a weak and a strong coupling quench dynamical regime, which defines the boundaries beyond which a dynamics independent from the initial state is recovered. Comparing the dynamics in the superconducting and non-superconducting phases we argue that this is due to the lack of an adiabatic connection to the equilibrium ground state for non-equilibrium superconducting states in the strong coupling quench regime.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,305
Strongly exchange-coupled and surface-state-modulated magnetization dynamics in Bi2Se3/YIG heterostructures
We report strong interfacial exchange coupling in Bi2Se3/yttrium iron garnet (YIG) bilayers manifested as large in-plane interfacial magnetic anisotropy (IMA) and enhancement of damping probed by ferromagnetic resonance (FMR). The IMA and spin mixing conductance reached a maximum when Bi2Se3 was around 6 quintuple-layer (QL) thick. The unconventional Bi2Se3 thickness dependence of the IMA and spin mixing conductance is correlated with the evolution of the surface band structure of Bi2Se3, indicating that topological surface states play an important role in the magnetization dynamics of YIG. Temperature-dependent FMR of Bi2Se3/YIG revealed signatures of a magnetic proximity effect with $T_c$ as high as 180 K, and an effective field parallel to the YIG magnetization direction at low temperature. Our study sheds light on the effects of topological insulators on magnetization dynamics, essential for the development of TI-based spintronic devices.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,306
Survivable Probability of SDN-enabled Cloud Networking with Random Physical Link Failure
Software-driven cloud networking is a new paradigm in orchestrating physical resources (CPU, network bandwidth, energy, storage) allocated to network functions, services, and applications, which is commonly modeled as a cross-layer network. This model carries a physical network representing the physical infrastructure, a logical network showing demands, and logical-to-physical node/link mappings. In such networks, a single failure in the physical network may trigger cascading failures in the logical network and disable network services and connectivity. In this paper, we propose an evaluation metric, survivable probability, to evaluate the reliability of such networks under random physical link failure(s). We propose the concept of base protecting spanning tree and prove the necessary and sufficient conditions for its existence and relation to survivability. We then develop mathematical programming formulations for reliable cross-layer network routing design with the maximal reliable probability. Computation results demonstrate the viability of our approach.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,307
Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers
PID control architectures are widely used in industrial applications. Despite their low number of open parameters, tuning multiple, coupled PID controllers can become tedious in practice. In this paper, we extend PILCO, a model-based policy search framework, to automatically tune multivariate PID controllers purely based on data observed on an otherwise unknown system. The system's state is extended appropriately to frame the PID policy as a static state feedback policy. This renders PID tuning possible as the solution of a finite horizon optimal control problem without further a priori knowledge. The framework is applied to the task of balancing an inverted pendulum on a seven degree-of-freedom robotic arm, thereby demonstrating its capabilities of fast and data-efficient policy learning, even on complex real world problems.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,308
Engineering a Simplified 0-Bit Consistent Weighted Sampling
The Min-Hashing approach to sketching has become an important tool in data analysis, information retrieval, and classification. To apply it to real-valued datasets, the ICWS algorithm has become a seminal approach that is widely used, and provides state-of-the-art performance for this problem space. However, ICWS suffers from a computational burden as the sketch size K increases. We develop a new Simplified approach to the ICWS algorithm that enables us to obtain over 20x speedups compared to the standard algorithm. The veracity of our approach is demonstrated empirically on multiple datasets and scenarios, showing that our new Simplified CWS obtains the same quality of results while being an order of magnitude faster.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,309
Quantifying macroeconomic expectations in stock markets using Google Trends
Among other macroeconomic indicators, the monthly release of U.S. unemployment rate figures in the Employment Situation report by the U.S. Bureau of Labor Statistics gets a lot of media attention and strongly affects the stock markets. I investigate whether a profitable investment strategy can be constructed by predicting the likely changes in U.S. unemployment before the official news release using Google query volumes for related search terms. I find that massive new data sources of human interaction with the Internet not only improve U.S. unemployment rate predictability, but can also enhance market timing of trading strategies when considered jointly with macroeconomic data. My results illustrate the potential of combining extensive behavioural data sets with economic data to anticipate investor expectations and stock market moves.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=1
20,310
Effects of Incomplete Ionization on Beta-Ga2O3 Power Devices: Unintentional Donor with Energy 110 meV
Understanding the origin of unintentional doping in Ga2O3 is key to increasing breakdown voltages of Ga2O3 based power devices. Therefore, transport and capacitance spectroscopy studies have been performed to better understand the origin of unintentional doping in Ga2O3. Previously unobserved unintentional donors in commercially available (-201) Ga2O3 substrates have been electrically characterized via temperature dependent Hall effect measurements up to 1000 K and found to have a donor energy of 110 meV. The existence of the unintentional donor is confirmed by temperature dependent admittance spectroscopy, with an activation energy of 131 meV determined via that technique, in agreement with Hall effect measurements. With the concentration of this donor determined to be in the mid to high 10^16 cm^-3 range, elimination of this donor from the drift layer of Ga2O3 power electronics devices will be key to pushing the limits of device performance. Indeed, analytical assessment of the specific on-resistance (Ronsp) and breakdown voltage of Schottky diodes containing the 110 meV donor indicates that incomplete ionization increases Ronsp and decreases breakdown voltage as compared to Ga2O3 Schottky diodes containing only the shallow donor. The reduced performance due to incomplete ionization occurs in addition to the usual tradeoff between Ronsp and breakdown voltage. To achieve 10 kV operation in Ga2O3 Schottky diode devices, analysis indicates that the concentration of 110 meV donors must be reduced below 5x10^14 cm^-3 to limit the increase in Ronsp to one percent.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,311
A single coordinate framework for optic flow and binocular disparity
Optic flow is two dimensional, but no special qualities are attached to one or other of these dimensions. For binocular disparity, on the other hand, the terms 'horizontal' and 'vertical' disparities are commonly used. This is odd, since binocular disparity and optic flow describe essentially the same thing. The difference is that, generally, people tend to fixate relatively close to the direction of heading as they move, meaning that fixation is close to the optic flow epipole, whereas, for binocular vision, fixation is close to the head-centric midline, i.e. approximately 90 degrees from the binocular epipole. For fixating animals, some separations of flow may lead to simple algorithms for the judgement of surface structure and the control of action. We consider the following canonical flow patterns that sum to produce overall flow: (i) 'towards' flow, the component of translational flow produced by approaching (or retreating from) the fixated object, which produces pure radial flow on the retina; (ii) 'sideways' flow, the remaining component of translational flow, which is produced by translation of the optic centre orthogonal to the cyclopean line of sight and (iii) 'vergence' flow, rotational flow produced by a counter-rotation of the eye in order to maintain fixation. A general flow pattern could also include (iv) 'cyclovergence' flow, produced by rotation of one eye relative to the other about the line of sight. We consider some practical advantages of dividing up flow in this way when an observer fixates as they move. As in some previous treatments, we suggest that there are certain tasks for which it is sensible to consider 'towards' flow as one component and 'sideways' + 'vergence' flow as another.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=1, Quantitative Finance=0
20,312
Gain control with A-type potassium current: IA as a switch between divisive and subtractive inhibition
Neurons process information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron's output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. We use simulations and mathematical analysis of a neuron model to find the specific conditions for which inhibitory inputs have subtractive or divisive effects. We identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. If IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=1, Quantitative Finance=0
20,313
Cyclic Datatypes modulo Bisimulation based on Second-Order Algebraic Theories
Cyclic data structures, such as cyclic lists, in functional programming are tricky to handle because of their cyclicity. This paper presents an investigation of categorical, algebraic, and computational foundations of cyclic datatypes. Our framework of cyclic datatypes is based on second-order algebraic theories of Fiore et al., which give a uniform setting for syntax, types, and computation rules for describing and reasoning about cyclic datatypes. We extract the "fold" computation rules from the categorical semantics based on iteration categories of Bloom and Esik. Thereby, the rules are correct by construction. We prove strong normalisation using the General Schema criterion for second-order computation rules. Rather than the fixed point law, we particularly choose Bekic law for computation, which is a key to obtaining strong normalisation. We also prove the property of "Church-Rosser modulo bisimulation" for the computation rules. Combining these results, we have a remarkable decidability result of the equational theory of cyclic data and fold.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,314
High-order harmonic generation from highly-excited states in acetylene
High-order harmonic generation (HHG) from aligned acetylene molecules interacting with mid infra-red (IR), linearly polarized laser pulses is studied theoretically using a mixed quantum-classical approach in which the electrons are described using time-dependent density functional theory while the ions are treated classically. We find that for molecules aligned perpendicular to the laser polarization axis, HHG arises from the highest-occupied molecular orbital (HOMO) while for molecules aligned along the laser polarization axis, HHG is dominated by the HOMO-1. In the parallel orientation we observe a double plateau with an inner plateau that is produced by ionization from and recombination back to an autoionizing state. Two pieces of evidence support this idea. Firstly, by choosing a suitably tuned vacuum ultraviolet pump pulse that directly excites the autoionizing state we observe a dramatic enhancement of all harmonics in the inner plateau. Secondly, in certain circumstances, the position of the inner plateau cut-off does not agree with the classical three-step model. We show that this discrepancy can be understood in terms of a minimum in the dipole recombination matrix element from the continuum to the autoionizing state. As far as we are aware, this represents the first observation of harmonic enhancement over a wide range of frequencies arising from autoionizing states in molecules.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,315
Cellulyzer - Automated analysis and interactive visualization/simulation of select cellular processes
Here we report on a set of programs developed at the ZMBH Bio-Imaging Facility for tracking real-life images of cellular processes. These programs perform 1) automated tracking; 2) quantitative and comparative track analyses of different images in different groups; 3) different interactive visualization schemes; and 4) interactive realistic simulation of different cellular processes for validation and optimal problem-specific adjustment of image acquisition parameters (tradeoff between speed, resolution, and quality with feedback from the very final results). The collection of programs is primarily developed for the common bio-image analysis software ImageJ (as a single Java Plugin). Some programs are also available in other languages (C++ and Javascript) and may be run simply with a web-browser; even on a low-end Tablet or Smartphone. The programs are available at this https URL
Labels: Computer Science=1, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,316
Vprop: Variational Inference using RMSprop
Many computationally-efficient methods for Bayesian deep learning rely on continuous optimization algorithms, but the implementation of these methods requires significant changes to existing code-bases. In this paper, we propose Vprop, a method for Gaussian variational inference that can be implemented with two minor changes to the off-the-shelf RMSprop optimizer. Vprop also reduces the memory requirements of Black-Box Variational Inference by half. We derive Vprop using the conjugate-computation variational inference method, and establish its connections to Newton's method, natural-gradient methods, and extended Kalman filters. Overall, this paper presents Vprop as a principled, computationally-efficient, and easy-to-implement method for Bayesian deep learning.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,317
Hidden symmetries in $N$-layer dielectric stacks
The optical properties of a multilayer system of dielectric media with arbitrary $N$ layers is investigated. Each layer is one of two dielectric media, with thickness one-quarter the wavelength of light in that medium, corresponding to a central frequency. Using the transfer matrix method, the transmittance $T$ is calculated for all possible $2^N$ sequences for small $N$. Unexpectedly, it is found that instead of $2^N$ different values of $T$ at the central frequency ($T_0$), there are either $(N/2+1)$ or $(N+1)$ discrete values of $T_0$ for even or odd $N$, respectively. We explain the high degeneracy in the $T_0$ values by defining new symmetry operations that do not change $T_0$. Analytical formulae were derived for the $T_0$ values and their degeneracy as functions of $N$ and an integer parameter for each sequence we call "charge". Additionally, the bandwidth of the transmission spectra at $f_0$ is investigated, revealing some asymptotic behavior at large $N$.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,318
Solvability regions of affinely parameterized quadratic equations
Quadratic systems of equations appear in several applications. The results in this paper are motivated by quadratic systems of equations that describe equilibrium behavior of physical infrastructure networks like the power and gas grids. The quadratic systems in infrastructure networks are parameterized: the parameters can represent uncertainty (estimation error in resistance/inductance of a power transmission line, for example) or controllable decision variables (power outputs of generators, for example). It is then of interest to understand conditions on the parameters under which the quadratic system is guaranteed to have a solution within a specified set (for example, bounds on voltages and flows in a power grid). Given nominal values of the parameters at which the quadratic system has a solution and the Jacobian of the quadratic system at the solution is non-singular, we develop a general framework to construct convex regions around the nominal value such that the system is guaranteed to have a solution within a given distance of the nominal solution. We show that several results from recent literature can be recovered as special cases of our framework, and demonstrate our approach on several benchmark power systems.
Labels: Computer Science=1, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,319
Mid-price estimation for European corporate bonds: a particle filtering approach
In most illiquid markets, there is no obvious proxy for the market price of an asset. The European corporate bond market is an archetypal example of such an illiquid market where mid-prices can only be estimated with a statistical model. In this OTC market, dealers / market makers only have access, indeed, to partial information about the market. In real-time, they know the price associated with their trades on the dealer-to-dealer (D2D) and dealer-to-client (D2C) markets, they know the result of the requests for quotes (RFQ) they answered, and they have access to composite prices (e.g., Bloomberg CBBT). This paper presents a Bayesian method for estimating the mid-price of corporate bonds by using the real-time information available to a dealer. This method relies on recent ideas coming from the particle filtering (PF) / sequential Monte-Carlo (SMC) literature.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=1
20,320
Matching RGB Images to CAD Models for Object Pose Estimation
We propose a novel method for 3D object pose estimation in RGB images, which does not require pose annotations of objects in images in the training stage. We tackle the pose estimation problem by learning how to establish correspondences between RGB images and rendered depth images of CAD models. During training, our approach only requires textureless CAD models and aligned RGB-D frames of a subset of object instances, without explicitly requiring pose annotations for the RGB images. We employ a deep quadruplet convolutional neural network for joint learning of suitable keypoints and their associated descriptors in pairs of rendered depth images which can be matched across modalities with aligned RGB-D views. During testing, keypoints are extracted from a query RGB image and matched to keypoints extracted from rendered depth images, followed by establishing 2D-3D correspondences. The object's pose is then estimated using the RANSAC and PnP algorithms. We conduct experiments on the recently introduced Pix3D dataset and demonstrate the efficacy of our proposed approach in object pose estimation as well as generalization to object instances not seen during training.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,321
A Model for Paired-Multinomial Data and Its Application to Analysis of Data on a Taxonomic Tree
In human microbiome studies, sequencing reads data are often summarized as counts of bacterial taxa at various taxonomic levels specified by a taxonomic tree. This paper considers the problem of analyzing two repeated measurements of microbiome data from the same subjects. Such data are often collected to assess the change of microbial composition after a certain treatment, or the difference in microbial compositions across body sites. Existing models for such count data are limited in modeling the covariance structure of the counts and in handling paired multinomial count data. A new probability distribution is proposed for paired-multinomial count data, which allows flexible covariance structure and can be used to model repeatedly measured multivariate count data. Based on this distribution, a test statistic is developed for testing the difference in compositions based on paired multinomial count data. The proposed test can be applied to the count data observed on a taxonomic tree in order to test differences in microbiome compositions and to identify the subtrees with different subcompositions. Simulation results indicate that the proposed test has correct type 1 error rates and increased power compared to some commonly used methods. An analysis of an upper respiratory tract microbiome data set is used to illustrate the proposed methods.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,322
Adhesion-induced Discontinuous Transitions and Classifying Social Networks
Transition points mark qualitative changes in the macroscopic properties of large complex systems. Explosive transitions, exhibiting properties of both continuous and discontinuous phase transitions, have recently been uncovered in network growth processes. Real networks not only grow but often also restructure, yet common network restructuring processes, such as small world rewiring, do not exhibit phase transitions. Here, we uncover a class of intrinsically discontinuous transitions emerging in network restructuring processes controlled by \emph{adhesion} -- the preference of a chosen link to remain connected to its end node. Deriving a master equation for the temporal network evolution and working out an analytic solution, we identify genuinely discontinuous transitions in non-growing networks, separating qualitatively distinct phases with monotonic and with peaked degree distributions. Intriguingly, our analysis of heuristic data indicates a separation between the same two forms of degree distributions distinguishing abstract from face-to-face social networks.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,323
Numerical simulation of polynomial-speed convergence phenomenon
We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. Then we apply our results to two 1D microscopic heat conduction models. The mixing rate of these two models is expected to be polynomial but is very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,324
Efficient determination of optimised multi-arm multi-stage experimental designs with control of generalised error-rates
Primarily motivated by the drug development process, several publications have now presented methodology for the design of multi-arm multi-stage experiments with normally distributed outcome variables of known variance. Here, we extend these past considerations to allow the design of what we refer to as an abcd multi-arm multi-stage experiment. We provide a proof of how strong control of the a-generalised type-I familywise error-rate can be ensured. We then describe how to attain the power to reject at least b out of c false hypotheses, which is related to controlling the b-generalised type-II familywise error-rate. Following this, we detail how a design can be optimised for a scenario in which rejection of any d null hypotheses brings about termination of the experiment. We achieve this by proposing a highly computationally efficient approach for evaluating the performance of a candidate design. Finally, using a real clinical trial as a motivating example, we explore the effect of the design's control parameters on the statistical operating characteristics.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,325
Application of the Waveform Relaxation Technique to the Co-Simulation of Power Converter Controller and Electrical Circuit Models
In this paper we present the co-simulation of a PID class power converter controller and an electrical circuit by means of the waveform relaxation technique. The simulation of the controller model is characterized by a fixed-time stepping scheme reflecting its digital implementation, whereas a circuit simulation usually employs an adaptive time stepping scheme in order to account for a wide range of time constants within the circuit model. In order to maintain the characteristic of both models as well as to facilitate model replacement, we treat them separately by means of input/output relations and propose an application of a waveform relaxation algorithm. Furthermore, the maximum and minimum number of iterations of the proposed algorithm are mathematically analyzed. The concept of controller/circuit coupling is illustrated by an example of the co-simulation of a PI power converter controller and a model of the main dipole circuit of the Large Hadron Collider.
Labels: Computer Science=1, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,326
Some Investigations about the Properties of Maximum Likelihood Estimations Based on Lower Record Values for a Sub-Family of the Exponential Family
In this paper, a sub-family of the exponential family is considered. Maximum likelihood estimations (MLE) of the parameter of this family, of the probability density function, and of the cumulative density function are obtained, both from a random sample and from lower record values. Mean Square Error (MSE) is used as a criterion for determining which estimator is better in different situations. Additionally, several theorems on the relations between the MLE based on lower record values and the MLE based on a random sample are proved. Some interesting asymptotic properties of these estimations are also shown in the course of these theorems.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,327
Some Insights on Synthesizing Optimal Linear Quadratic Controller Using Krotov's Sufficiency Conditions
This paper revisits the problem of optimal control law design for linear systems using the global optimal control framework introduced by Vadim Krotov. Krotov's approach is based on the idea of total decomposition of the original optimal control problem (OCP) with respect to time, by an $ad$ $hoc$ choice of the so-called Krotov's function or solving function, thereby providing sufficient conditions for the existence of global solution based on another optimization problem, which is completely equivalent to the original OCP. It is well known that the solution of this equivalent optimization problem is obtained using an iterative method. In this paper, we propose suitable Krotov's functions for linear quadratic OCP and subsequently, show that by imposing convexity condition on this equivalent optimization problem, there is no need to compute an iterative solution. We also give some key insights into the solution procedure of the linear quadratic OCP using the proposed methodology in contrast to the celebrated Calculus of Variations (CoV) and Hamilton-Jacobi-Bellman (HJB) equation based approach.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,328
Schumann resonance transients and the search for gravitational waves
Schumann resonance transients which propagate around the globe can potentially generate a correlated background in widely separated gravitational wave detectors. We show that, due to the distribution of lightning hotspots around the globe, these transients have characteristic time lags, and this feature can be useful to further suppress such a background, especially in searches for the stochastic gravitational-wave background. A brief review of the corresponding literature on Schumann resonances and lightning is also given.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,329
Learning to Fly by Crashing
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts: however, high capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation. But the gap between simulation and real world remains large especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles including humans. For supplementary video see: this https URL
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,330
Estimating Graphlet Statistics via Lifting
Exploratory analysis over network data is often limited by our ability to efficiently calculate graph statistics, which can provide a model-free understanding of macroscopic properties of a network. This work introduces a framework for estimating the graphlet count - the number of occurrences of a small subgraph motif (e.g. a wedge or a triangle) in the network. For massive graphs, where accessing the whole graph is not possible, the only viable algorithms are those which act locally by making a limited number of vertex neighborhood queries. We introduce a Monte Carlo sampling technique for graphlet counts, called lifting, which can simultaneously sample all graphlets of size up to $k$ vertices. We outline three variants of lifted graphlet counts: the ordered, unordered, and shotgun estimators. We prove that our graphlet count updates are unbiased for the true graphlet count, have low correlation between samples, and have a controlled variance. We compare the experimental performance of lifted graphlet counts to the state-of-the-art graphlet sampling procedures: Waddling and the pairwise subgraph random walk.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,331
Calibration for the (Computationally-Identifiable) Masses
As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern. We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of the class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group. We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,332
The distribution of old stars around the Milky Way's central black hole I: Star counts
(abridged) In this paper we revisit the problem of inferring the innermost structure of the Milky Way's nuclear star cluster via star counts, to clarify whether it displays a core or a cusp around the central black hole. Through image stacking and improved PSF fitting we push the completeness limit about one magnitude deeper than in previous, comparable work. Contrary to previous work, we analyse the stellar density in well-defined magnitude ranges in order to be able to constrain stellar masses and ages. The RC and brighter giant stars display a core-like surface density profile within a projected radius R<0.3 pc of the central black hole, in agreement with previous studies, but show a cusp-like surface density distribution at larger R. The surface density of the fainter stars can be described well by a single power-law at R<2 pc. The cusp-like profile of the faint stars persists even if we take into account the possible contamination of stars in this brightness range by young pre-main sequence stars. The data are inconsistent with a core-profile for the faint stars. Finally, we show that a 3D Nuker law provides a very good description of the cluster structure. We conclude that the observed stellar density at the Galactic Centre, as it can be inferred with current instruments, is consistent with the existence of a stellar cusp around the Milky Way's central black hole, Sgr A*. This cusp is well developed inside the influence radius of about 3 pc of Sgr A* and can be described by a single three-dimensional power-law with an exponent gamma=1.23+-0.05. The apparent lack of RC stars and brighter giants at projected distances of R < 0.3 pc (R<8") of the massive black hole may indicate that some mechanism has altered their distribution or intrinsic luminosity.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,333
On infinite order differential operators in fractional viscoelasticity
In this paper we discuss some general properties of viscoelastic models defined in terms of constitutive equations involving infinitely many derivatives (of integer and fractional order). In particular, we consider as a working example the recently developed Bessel models of linear viscoelasticity that, for short times, behave like fractional Maxwell bodies of order $1/2$.
Labels: Computer Science=0, Physics=1, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,334
Gap structure of FeSe determined by field-angle-resolved specific heat measurements
Quasiparticle excitations in FeSe were studied by means of specific heat ($C$) measurements on a high-quality single crystal under rotating magnetic fields. The field dependence of $C$ shows three-stage behavior with different slopes, indicating the existence of three gaps ($\Delta_1$, $\Delta_2$, and $\Delta_3$). In the low-temperature and low-field region, the azimuthal-angle ($\phi$) dependence of $C$ shows a four-fold symmetric oscillation with sign change. On the other hand, the polar-angle ($\theta$) dependence manifests as an anisotropy-inverted two-fold symmetry with unusual shoulder behavior. Combining the angle-resolved results and the theoretical calculation, the smaller gap $\Delta_1$ is proved to have two vertical-line nodes or gap minima along the $k_z$ direction, and is determined to reside on the electron-type $\varepsilon$ band. $\Delta_2$ is found to be related to the electron-type $\delta$ band, and is isotropic in the $ab$-plane but largely anisotropic out of the plane. $\Delta_3$ residing on the hole-type $\alpha$ band shows a small out-of-plane anisotropy with a strong Pauli-paramagnetic effect.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,335
Recurrent Neural Networks for anomaly detection in the Post-Mortem time series of LHC superconducting magnets
This paper presents a model based on the Deep Learning algorithms LSTM and GRU for facilitating anomaly detection in Large Hadron Collider superconducting magnets. We used high resolution data available in the Post Mortem database to train a set of models and chose the best possible set of their hyper-parameters. Using a Deep Learning approach allowed us to examine a vast body of data and extract the fragments which require further examination by experts and are regarded as anomalies. The presented method does not require tedious manual threshold setting and operator attention at the stage of the system setup. Instead, an automatic approach is proposed, which, according to our experiments, achieves an accuracy of 99%. This is reached for the largest dataset of 302 MB and the following network architecture: single layer LSTM, 128 cells, 20 epochs of training, look_back=16, look_ahead=128, grid=100 and the Adam optimizer. All the experiments were run on an Nvidia Tesla K80 GPU.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,336
Electro-Oxidation of Ni42 Steel: A Highly Active Bifunctional Electrocatalyst
Janus type water-splitting catalysts have attracted the highest attention as a tool of choice for solar-to-fuel conversion. AISI Ni 42 steel was, upon harsh anodization, converted into a bifunctional electrocatalyst. The oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) are catalyzed highly efficiently and steadfastly at pH 7, 13, 14, 14.6 (OER) and at pH 0, 1, 13, 14, 14.6 (HER), respectively. The current density taken from long-term OER measurements in pH 7 buffer solution upon the electro-activated steel at 491 mV overpotential was around 4 times higher (4 mA/cm2) in comparison with recently developed OER electrocatalysts. The very strong voltage-current behavior of the catalyst shown in OER polarization experiments at both pH 7 and pH 13 was even superior to that known for IrO2-RuO2. No degradation of the catalyst was detected even when conditions close to standard industrial operations were applied to the catalyst. A stable Ni-, Fe-oxide based passivating layer sufficiently protected the bare metal from further oxidation. Quantitative charge-to-oxygen (OER) and charge-to-hydrogen (HER) conversion was confirmed. High resolution XPS spectra showed that most likely gamma-NiO(OH) and FeO(OH) are the catalytically active OER species and NiO is the catalytically active HER species.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,337
Shape-constrained partial identification of a population mean under unknown probabilities of sample selection
A prevailing challenge in the biomedical and social sciences is to estimate a population mean from a sample obtained with unknown selection probabilities. Using a well-known ratio estimator, Aronow and Lee (2013) proposed a method for partial identification of the mean by allowing the unknown selection probabilities to vary arbitrarily between two fixed extreme values. In this paper, we show how to leverage auxiliary shape constraints on the population outcome distribution, such as symmetry or log-concavity, to obtain tighter bounds on the population mean. We use this method to estimate the performance of Aymara students---an ethnic minority in the north of Chile---in a national educational standardized test. We implement this method in the new statistical software package scbounds for R.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,338
Warped metrics for location-scale models
This paper argues that a class of Riemannian metrics, called warped metrics, plays a fundamental role in statistical problems involving location-scale models. The paper reports three new results: i) the Rao-Fisher metric of any location-scale model is a warped metric, provided that this model satisfies a natural invariance condition, ii) the analytic expression of the sectional curvature of this metric, iii) the exact analytic solution of the geodesic equation of this metric. The paper applies these new results to several examples of interest, where it shows that warped metrics turn location-scale models into complete Riemannian manifolds of negative sectional curvature. This is a very suitable situation for developing algorithms which solve problems of classification and on-line estimation. Thus, by revealing the connection between warped metrics and location-scale models, the present paper paves the way to the introduction of new efficient statistical algorithms.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,339
Detecting Changes in Time Series Data using Volatility Filters
This work develops techniques for the sequential detection and location estimation of transient changes in the volatility (standard deviation) of time series data. In particular, we introduce a class of change detection algorithms based on the windowed volatility filter. The first method detects changes by employing a convex combination of two such filters with differing window sizes, such that the adaptively updated convex weight parameter is then used as an indicator for the detection of instantaneous power changes. Moreover, the proposed adaptive filtering based method is readily extended to the multivariate case by using recent advances in distributed adaptive filters, thereby using cooperation between the data channels for more effective detection of change points. Furthermore, this work also develops a novel change point location estimator based on the differenced output of the volatility filter. Finally, the performance of the proposed methods were evaluated on both synthetic and real world data.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,340
A minimax and asymptotically optimal algorithm for stochastic bandits
We propose the kl-UCB++ algorithm for regret minimization in stochastic bandit models with exponential families of distributions. We prove that it is simultaneously asymptotically optimal (in the sense of Lai and Robbins' lower bound) and minimax optimal. This is the first algorithm proved to enjoy these two properties at the same time. This work thus merges two different lines of research with simple and clear proofs.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,341
Eigenfunctions of Periodic Differential Operators Analytic in a Strip
Ordinary differential operators with periodic coefficients analytic in a strip act on a Hardy-Hilbert space of analytic functions with inner product defined by integration over a period on the boundary of the strip. Simple examples show that eigenfunctions may form a complete set for a narrow strip, but completeness may be lost for a wide strip. Completeness of the eigenfunctions in the Hardy-Hilbert space is established for regular second order operators with matrix-valued coefficients when the leading coefficient satisfies a positive real part condition throughout the strip.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,342
Infinite ergodic index of the Ehrenfest wind-tree model
The set of all possible configurations of the Ehrenfest wind-tree model endowed with the Hausdorff topology is a compact metric space. For a typical configuration we show that the wind-tree dynamics has infinite ergodic index in almost every direction. In particular some ergodic theorems can be applied to show that if we start with a large number of initially parallel particles their directions decorrelate as the dynamics evolve answering the question posed by the Ehrenfests.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,343
Arrays of strongly-coupled atoms in a one-dimensional waveguide
We study the cooperative optical coupling between regularly spaced atoms in a one-dimensional waveguide using decompositions to subradiant and superradiant collective excitation eigenmodes, direct numerical solutions, and analytical transfer-matrix methods. We illustrate how the spectrum of transmitted light through the waveguide including the emergence of narrow Fano resonances can be understood by the resonance features of the eigenmodes. We describe a method based on superradiant and subradiant modes to engineer the optical response of the waveguide and to store light. The stopping of light is obtained by transferring an atomic excitation to a subradiant collective mode with the zero radiative resonance linewidth by controlling the level shift of an atom in the waveguide. Moreover, we obtain an exact analytic solution for the transmitted light through the waveguide for the case of a regular lattice of atoms and provide a simple description how the light transmission may present large resonance shifts when the lattice spacing is close, but not exactly equal, to half of the wavelength of the light. Experimental imperfections such as fluctuations of the positions of the atoms and loss of light from the waveguide are easily quantified in the numerical simulations, which produce the natural result that the optical response of the atomic array tends toward the response of a gas with random atomic positions.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,344
MEG-Derived Functional Tractography, Results for Normal and Concussed Cohorts
Measures of neuroelectric activity from each of 18 automatically identified white matter tracts were extracted from resting MEG recordings from a normative cohort (n=588) and a chronic TBI (traumatic brain injury) cohort (n=63), 60 of whose TBIs were mild. Activity in the TBI cohort was significantly reduced compared with the norms for ten of the tracts, p < 10^-6 for each. Significantly reduced activity (p < 10^-3) was seen in more than one tract in seven mTBI individuals and one member of the normative cohort.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=1, Quantitative Finance=0
20,345
Coincidence of magnetic and valence quantum critical points in CeRhIn5 under pressure
We present accurate electrical resistivity measurements along the two principal crystallographic axes of the pressure-induced heavy-fermion superconductor CeRhIn5 up to 5.63 GPa. For both directions, a valence crossover line is identified in the p-T plane and the extrapolation of this line to zero temperature coincides with the collapse of the magnetic ordering temperature. Furthermore, it is found that the p-T phase diagram of CeRhIn5 in the valence crossover region is very similar to that of CeCu2Si2. These results point to the essential role of Ce-4f electron delocalization in both destroying magnetic order and realizing superconductivity in CeRhIn5 under pressure.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,346
Sequential Inverse Approximation of a Regularized Sample Covariance Matrix
One of the goals in scaling sequential machine learning methods pertains to dealing with high-dimensional data spaces. A key related challenge is that many methods heavily depend on obtaining the inverse covariance matrix of the data. It is well known that covariance matrix estimation is problematic when the number of observations is relatively small compared to the number of variables. A common way to tackle this problem is through the use of a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix, with the aim of minimizing the mean-squared error. We derived sequential update rules to approximate the inverse shrinkage estimator of the covariance matrix. The approach paves the way for improved large-scale machine learning methods that involve sequential updates.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,347
Sesqui-arrays, a generalisation of triple arrays
A triple array is a rectangular array containing letters, each letter occurring equally often with no repeats in rows or columns, such that the number of letters common to two rows, two columns, or a row and a column are (possibly different) non-zero constants. Deleting the condition on the letters common to a row and a column gives a double array. We propose the term \emph{sesqui-array} for such an array when only the condition on pairs of columns is deleted. Thus all triple arrays are sesqui-arrays. In this paper we give three constructions for sesqui-arrays. The first gives $(n+1)\times n^2$ arrays on $n(n+1)$ letters for $n\geq 2$. (Such an array for $n=2$ was found by Bagchi.) This construction uses Latin squares. The second uses the \emph{Sylvester graph}, a subgraph of the Hoffman--Singleton graph, to build a good block design for $36$ treatments in $42$ blocks of size~$6$, and then uses this in a $7\times 36$ sesqui-array for $42$ letters. We also give a construction for $K\times(K-1)(K-2)/2$ sesqui-arrays on $K(K-1)/2$ letters. This construction uses biplanes. It starts with a block of a biplane and produces an array which satisfies the requirements for a sesqui-array except possibly that of having no repeated letters in a row or column. We show that this condition holds if and only if the \emph{Hussain chains} for the selected block contain no $4$-cycles. A sufficient condition for the construction to give a triple array is that each Hussain chain is a union of $3$-cycles; but this condition is not necessary, and we give a few further examples. We also discuss the question of which of these arrays provide good designs for experiments.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,348
Investigating how well contextual features are captured by bi-directional recurrent neural network models
Learning algorithms for natural language processing (NLP) tasks traditionally rely on manually defined relevant contextual features. On the other hand, neural network models using only a distributional representation of words have been successfully applied to several NLP tasks. Such models learn features automatically and avoid explicit feature engineering. Across several domains, neural models become a natural choice specifically when limited characteristics of data are known. However, this flexibility comes at the cost of interpretability. In this paper, we define three different methods to investigate the ability of bi-directional recurrent neural networks (RNNs) to capture contextual features. In particular, we analyze RNNs for sequence tagging tasks. We perform a comprehensive analysis on general as well as biomedical domain datasets. Our experiments focus on important contextual words as features, which can easily be extended to analyze various other feature types. We also investigate positional effects of context words and show how the developed methods can be used for error analysis.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,349
A Hybridizable Discontinuous Galerkin solver for the Grad-Shafranov equation
In axisymmetric fusion reactors, the equilibrium magnetic configuration can be expressed in terms of the solution to a semi-linear elliptic equation known as the Grad-Shafranov equation, the solution of which determines the poloidal component of the magnetic field. When the geometry of the confinement region is known, the problem becomes an interior Dirichlet boundary value problem. We propose a high order solver based on the Hybridizable Discontinuous Galerkin method. The resulting algorithm (1) provides high order of convergence for the flux function and its gradient, (2) incorporates a novel method for handling piecewise smooth geometries by extension from polygonal meshes, (3) can handle geometries with non-smooth boundaries and x-points, and (4) deals with the semi-linearity through an accelerated two-grid fixed-point iteration. The effectiveness of the algorithm is verified with computations for cases where analytic solutions are known on configurations similar to those of actual devices (ITER with single null and double null divertor, NSTX, ASDEX upgrade, and Field Reversed Configurations).
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,350
Sketching Linear Classifiers over Data Streams
We introduce a new sub-linear space sketch---the Weight-Median Sketch---for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently-occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
20,351
Tamed to compatible when b^(2+) = 1 and b^1 = 2
Weiyi Zhang noticed recently a gap in the proof of the main theorem of the authors' article "Tamed to compatible: Symplectic forms via moduli space integration" [T] for the case when the symplectic 4-manifold in question has first Betti number 2 (and necessarily self-dual second Betti number 1). This note explains how to fill this gap.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
20,352
The Mass Transference Principle: Ten Years On
In this article we discuss the Mass Transference Principle due to Beresnevich and Velani and survey several generalisations and variants, both deterministic and random. Using a Hausdorff measure analogue of the inhomogeneous Khintchine-Groshev Theorem, proved recently via an extension of the Mass Transference Principle to systems of linear forms, we give an alternative proof of a general inhomogeneous Jarn\'{\i}k-Besicovitch Theorem which was originally proved by Levesley. We additionally show that without monotonicity Levesley's theorem no longer holds in general. Thereafter, we discuss recent advances by Wang, Wu and Xu towards mass transference principles where one transitions from $\limsup$ sets defined by balls to $\limsup$ sets defined by rectangles (rather than from "balls to balls" as is the case in the original Mass Transference Principle). Furthermore, we consider mass transference principles for transitioning from rectangles to rectangles and extend known results using a slicing technique. We end this article with a brief survey of random analogues of the Mass Transference Principle.
0
0
1
0
0
0
20,353
Ground-state properties of Ca$_2$ from narrow line two-color photoassociation
By two-color photoassociation of $^{40}$Ca, four weakly bound vibrational levels in the Ca$_2$ X ground-state potential were measured, using highly spin-forbidden transitions to intermediate states of the coupled system $^3\Pi_{u}$ and $^3\Sigma^+_{u}$ near the ${^3P_1}$+${^1S_0}$ asymptote. From the observed binding energies, including that of the least bound state, the long-range dispersion coefficients $\mathrm{C}_6, \mathrm{C}_8,\mathrm{C}_{10}$ and a precise value for the s-wave scattering length of 308.5(50)~$a_0$ were derived. From mass scaling we also calculated the corresponding scattering lengths for the other natural isotopes. From the Autler-Townes splitting of the spectra, the molecular Rabi frequency has been determined as a function of the laser intensity for one bound-bound transition. The observed value of the Rabi frequency is in good agreement with transition moments calculated from the derived potentials, assuming a dipole moment independent of internuclear separation for the atomic pair model.
0
1
0
0
0
0
20,354
On the semisimplicity of the cyclotomic quiver Hecke algebra of type C
We provide criteria for the cyclotomic quiver Hecke algebras of type C to be semisimple. In the semisimple case, we construct the irreducible modules.
0
0
1
0
0
0
20,355
On the origin of the shallow and "replica" bands in FeSe monolayer superconductors
We compare the electronic structures of single-layer FeSe films on a SrTiO$_3$ substrate (FeSe/STO) and of K$_x$Fe$_{2-y}$Se$_{2}$ superconductors, obtained from extensive LDA and LDA+DMFT calculations, with the results of ARPES experiments. It is demonstrated that correlation effects on the Fe-3d states are in principle sufficient to explain the formation of the shallow electron-like bands at the M(X)-point. However, in FeSe/STO these effects alone are apparently insufficient for the simultaneous elimination of the hole-like Fermi surface around the $\Gamma$-point, which is not observed in ARPES experiments. Detailed comparison of the ARPES-detected and calculated quasiparticle bands shows reasonable agreement between theory and experiment. Analysis of the bands with respect to their origin and orbital composition shows that, for the FeSe/STO system, the experimentally observed "replica" quasiparticle band at the M-point (usually attributed to forward-scattering interactions with optical phonons in the SrTiO$_3$ substrate) can be reasonably understood simply as the LDA-calculated Fe-3d$_{xy}$ band renormalized by electronic correlations. The only manifestation of the substrate is the lifting of the degeneracy between the Fe-3d$_{xz}$ and Fe-3d$_{yz}$ bands in the vicinity of the M-point. For the case of K$_x$Fe$_{2-y}$Se$_{2}$, most bands observed in ARPES can also be understood as correlation-renormalized LDA-calculated Fe-3d bands, with overall semi-quantitative agreement with the LDA+DMFT calculations.
0
1
0
0
0
0
20,356
A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers
Regularization techniques are widely employed in optimization-based approaches for solving ill-posed inverse problems in data analysis and scientific computing. These methods are based on augmenting the objective with a penalty function, which is specified based on prior domain-specific expertise to induce a desired structure in the solution. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available. Previous work under the title of `dictionary learning' or `sparse coding' may be viewed as learning a regularization function that can be computed via linear programming. We describe generalizations of these methods to learn regularizers that can be computed and optimized via semidefinite programming. Our framework for learning such semidefinite regularizers is based on obtaining structured factorizations of data matrices, and our algorithmic approach for computing these factorizations combines recent techniques for rank minimization problems along with an operator analog of Sinkhorn scaling. Under suitable conditions on the input data, our algorithm provides a locally linearly convergent method for identifying the correct regularizer that promotes the type of structure contained in the data. Our analysis is based on the stability properties of Operator Sinkhorn scaling and their relation to geometric aspects of determinantal varieties (in particular tangent spaces with respect to these varieties). The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems.
1
0
1
1
0
0
20,357
A Bootstrap Method for Goodness of Fit and Model Selection with a Single Observed Network
Network models are applied in numerous domains where data can be represented as a system of interactions among pairs of actors. While both statistical and mechanistic network models are increasingly capable of capturing various dependencies amongst these actors, these dependencies imply a lack of independence in the observed data. This poses statistical challenges for analyzing such data, especially when there is only a single observed network, and often leads to intractable likelihoods regardless of the modeling paradigm, which limits the application of existing statistical methods for networks. We explore a subsampling bootstrap procedure to serve as the basis for goodness of fit and model selection with a single observed network that circumvents the intractability of such likelihoods. Our approach is based on flexible resampling distributions formed from the single observed network, allowing for finer and higher dimensional comparisons than simply point estimates of quantities of interest. We include worked examples for model selection, with simulation, and assessment of goodness of fit, with duplication-divergence model fits for yeast (S.cerevisiae) protein-protein interaction data from the literature. The proposed procedure produces a flexible resampling distribution that can be based on any statistics of one's choosing and can be employed regardless of choice of model.
0
0
0
1
0
0
20,358
Nonlinear Acceleration of Stochastic Algorithms
Extrapolation methods use the last few iterates of an optimization algorithm to produce a better estimate of the optimum. They were shown to achieve optimal convergence rates in a deterministic setting using simple gradient iterates. Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm. We first derive convergence bounds for arbitrary, potentially biased perturbations, then produce asymptotic bounds using the ratio between the variance of the noise and the accuracy of the current point. Finally, we apply this acceleration technique to stochastic algorithms such as SGD, SAGA, SVRG and Katyusha in different settings, and show significant performance gains.
0
0
1
0
0
0
20,359
Motion of Massive Particles in Rindler Space and the Problem of Fall at the Centre
We study the motion of a massive particle in Rindler space and obtain the geodesics of motion. The orbits in Rindler space are found to be quite different from those of the Schwarzschild case. The paths are not of the perihelion-precession type. Further, we set up the non-relativistic Schrödinger equation for the particle in the quantum mechanical scenario in the presence of a background constant gravitational field and investigate the problem of the fall of the particle at the center. This problem is also treated classically. Unlike the conventional scenario, here the fall occurs at the surface of a sphere of unit radius.
0
1
0
0
0
0
20,360
Measuring Sample Quality with Kernels
Approximate Markov chain Monte Carlo (MCMC) offers the promise of more rapid sampling at the cost of more biased inference. Since standard MCMC diagnostics fail to detect these biases, researchers have developed computable Stein discrepancy measures that provably determine the convergence of a sample to its target distribution. This approach was recently combined with the theory of reproducing kernels to define a closed-form kernel Stein discrepancy (KSD) computable by summing kernel evaluations across pairs of sample points. We develop a theory of weak convergence for KSDs based on Stein's method, demonstrate that commonly used KSDs fail to detect non-convergence even for Gaussian targets, and show that kernels with slowly decaying tails provably determine convergence for a large class of target distributions. The resulting convergence-determining KSDs are suitable for comparing biased, exact, and deterministic sample sequences and simpler to compute and parallelize than alternative Stein discrepancies. We use our tools to compare biased samplers, select sampler hyperparameters, and improve upon existing KSD approaches to one-sample hypothesis testing and sample quality improvement.
1
0
0
1
0
0
20,361
Diversity of preferences can increase collective welfare in sequential exploration problems
In search engines, online marketplaces and other human-computer interfaces, large collectives of individuals sequentially interact with numerous alternatives of varying quality. In these contexts, trial and error (exploration) is crucial for uncovering novel high-quality items or solutions, but entails a high cost for individual users. Self-interested decision makers are often better off imitating the choices of individuals who have already incurred the costs of exploration. Although imitation makes sense at the individual level, it deprives the group of additional information that could have been gleaned by individual explorers. In this paper we show that in such problems, preference diversity can function as a welfare-enhancing mechanism. It leads to a consistent increase in the quality of the consumed alternatives that outweighs the increased cost of search for the users.
1
0
0
0
0
0
20,362
Epidemic spread in interconnected directed networks
In the real world, many complex systems interact with other systems. In addition, the intra- and inter-system processes by which information about infectious diseases spreads and by which infections are transmitted are often not random, but directed. Hence, in this paper, we build an epidemic model based on an interconnected directed network, which can be considered a generalization of undirected networks and bipartite networks. Using the mean-field approach, we establish the Susceptible-Infectious-Susceptible model on this network. We theoretically analyze the model and obtain the basic reproduction number, which is also a generalization of the critical number corresponding to undirected or bipartite networks. We prove the global stability of the disease-free and endemic equilibria, with the basic reproduction number acting as a forward bifurcation parameter. We also give a condition for epidemic prevalence on only a single subnetwork. Furthermore, we carry out numerical simulations and find that independence between each node's in- and out-degrees greatly reduces the impact of the network's topological structure on disease spread.
0
1
0
0
0
0
20,363
Controllability and maximum matchings of complex networks
Previously, the controllability problem of a linear time-invariant dynamical system was mapped to the maximum matching (MM) problem on the bipartite representation of the underlying directed graph, and the sizes of MMs on random bipartite graphs were calculated analytically with the cavity method in the zero-temperature limit. Here we present an alternative theory to estimate MM sizes based on core percolation theory and the perfect matching of cores. Our theory is much simpler and more easily interpreted, and can estimate MM sizes on random graphs with or without symmetry between the out- and in-degree distributions. Our result helps to illuminate the fundamental connection between the controllability problem and the underlying structure of complex systems.
1
0
0
0
0
0
20,364
Dimers, crystals and quantum Kostka numbers
We relate the counting of honeycomb dimer configurations on the cylinder to the counting of certain vertices in Kirillov-Reshetikhin crystal graphs. We show that these dimer configurations yield the quantum Kostka numbers of the small quantum cohomology ring of the Grassmannian, i.e. the expansion coefficients when multiplying a Schubert class repeatedly with different Chern classes. This allows one to derive sum rules for Gromov-Witten invariants.
0
0
1
0
0
0
20,365
Generalized Task-Parameterized Skill Learning
Programming by demonstration has recently gained much attention due to its user-friendly and natural way to transfer human skills to robots. In order to facilitate the learning of multiple demonstrations and meanwhile generalize to new situations, a task-parameterized Gaussian mixture model (TP-GMM) has been recently developed. This model has achieved reliable performance in areas such as human-robot collaboration and dual-arm manipulation. However, the crucial task frames and associated parameters in this learning framework are often set by the human teacher, which renders three problems that have not been addressed yet: (i) task frames are treated equally, without considering their individual importance, (ii) task parameters are defined without taking into account additional task constraints, such as robot joint limits and motion smoothness, and (iii) a fixed number of task frames are pre-defined regardless of whether some of them may be redundant or even irrelevant for the task at hand. In this paper, we generalize the task-parameterized learning by addressing the aforementioned problems. Moreover, we provide a novel learning perspective which allows the robot to refine and adapt previously learned skills in a low dimensional space. Several examples are studied in both simulated and real robotic systems, showing the applicability of our approach.
1
0
0
0
0
0
20,366
Nonparametric Bayesian volatility learning under microstructure noise
Aiming at financial applications, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we model the volatility function a priori as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method on the EUR/USD exchange rate dataset.
0
0
0
0
0
1
20,367
Multi-Speaker DOA Estimation Using Deep Convolutional Networks Trained with Noise Signals
Supervised learning based methods for source localization, being data driven, can be adapted to different acoustic conditions via training and have been shown to be robust to adverse acoustic environments. In this paper, a convolutional neural network (CNN) based supervised learning method for estimating the direction-of-arrival (DOA) of multiple speakers is proposed. Multi-speaker DOA estimation is formulated as a multi-class multi-label classification problem, where the assignment of each DOA label to the input feature is treated as a separate binary classification problem. The phase component of the short-time Fourier transform (STFT) coefficients of the received microphone signals is directly fed into the CNN, and the features for DOA estimation are learnt during training. Utilizing the assumption of disjoint speaker activity in the STFT domain, a novel method is proposed to train the CNN with synthesized noise signals. Through experimental evaluation with both simulated and measured acoustic impulse responses, the ability of the proposed DOA estimation approach to adapt to unseen acoustic conditions and its robustness to unseen noise types is demonstrated. Through additional empirical investigation, it is also shown that with an array of M microphones our proposed framework yields the best localization performance with M-1 convolution layers. The ability of the proposed method to accurately localize speakers in a dynamic acoustic scenario with a varying number of sources is also shown.
1
0
0
0
0
0
20,368
Follow-up of eROSITA and Euclid Galaxy Clusters with XMM-Newton
A revolution in galaxy cluster science is only a few years away. The survey machines eROSITA and Euclid will provide cluster samples of never-before-seen statistical quality. XMM-Newton will be the key instrument to exploit these rich datasets in terms of detailed follow-up of the cluster hot gas content, systematically characterizing sub-samples as well as exotic new objects.
0
1
0
0
0
0
20,369
Effect of viscosity ratio on the self-sustained instabilities in planar immiscible jets
Previous studies have shown that intermediate surface tension has a counterintuitive destabilizing effect on 2-phase planar jets. Here, the transition process in confined 2D jets of two fluids with varying viscosity ratio is investigated using DNS. Neutral curves for persistent oscillations are found by recording the norm of the velocity residuals in DNS for over 1000 nondimensional time units, or until the signal has reached a constant level in a logarithmic scale - either a converged steady state, or a "statistically steady" oscillatory state. Oscillatory final states are found for all viscosity ratios (0.1-10). For uniform viscosity (m=1), the first bifurcation is through a surface tension-driven global instability. For low viscosity of the outer fluid, there is a mode competition between a steady asymmetric Coanda-type attachment mode and the surface tension-induced mode. At moderate surface tension, the Coanda-type attachment dominates and eventually triggers time-dependent convective bursts. At high surface tension, the surface tension-dominated mode dominates. For high viscosity of the outer fluid, persistent oscillations appear due to a strong convective instability. Finally, the m=1 jet remains unstable far from the inlet when the shear profile is nearly constant. Comparing this to a parallel Couette flow (without inflection points), we show that in both flows, a hidden interfacial mode brought out by surface tension becomes temporally and absolutely unstable in an intermediate Weber and Reynolds regime. An energy analysis of the Couette setup shows that surface tension, although dissipative, induces a velocity field near the interface which extracts energy from the flow through a viscous mechanism. This study highlights the rich dynamics of immiscible planar uniform-density jets, where several self-sustained and convective mechanisms compete depending on the exact parameters.
0
1
0
0
0
0
20,370
On the Effects of Batch and Weight Normalization in Generative Adversarial Networks
Generative adversarial networks (GANs) are highly effective unsupervised learning frameworks that can generate very sharp data, even for data such as images with complex, highly multimodal distributions. However, GANs are known to be very hard to train, suffering from problems such as mode collapse and disturbing visual artifacts. Batch normalization (BN) techniques have been introduced to address these training difficulties. Though BN accelerates the training in the beginning, our experiments show that the use of BN can be unstable and negatively impact the quality of the trained model. The evaluation of BN and numerous other recent schemes for improving GAN training is hindered by the lack of an effective objective quality measure for GAN models. To address these issues, we first introduce a weight normalization (WN) approach for GAN training that significantly improves the stability, efficiency and the quality of the generated samples. To allow a methodical evaluation, we introduce squared Euclidean reconstruction error on a test set as a new objective measure, to assess training performance in terms of speed, stability, and quality of generated samples. Our experiments with a standard DCGAN architecture on commonly used datasets (CelebA, LSUN bedroom, and CIFAR-10) indicate that training using WN is generally superior to BN for GANs, achieving 10% lower mean squared loss for reconstruction and significantly better qualitative results than BN. We further demonstrate the stability of WN on a 21-layer ResNet trained with the CelebA data set. The code for this paper is available at this https URL
1
0
0
1
0
0
20,371
A noise-immune cavity-assisted non-destructive detection for an optical lattice clock in the quantum regime
We present and implement a non-destructive detection scheme for the transition probability readout of an optical lattice clock. The scheme relies on a differential heterodyne measurement of the dispersive properties of lattice-trapped atoms enhanced by a high-finesse cavity. By design, this scheme offers first-order rejection of technical noise sources, an enhanced signal-to-noise ratio, and a homogeneous atom-cavity coupling. We theoretically show that this scheme is optimal with respect to the photon shot-noise limit. We experimentally realize this detection scheme in an operational strontium optical lattice clock. The resolution is on the order of a few atoms, with a photon scattering rate low enough to keep the atoms trapped after detection. This scheme opens the door to various interrogation protocols that reduce the frequency instability, including atom recycling, zero-dead-time clocks with a fast repetition rate, and sub-quantum-projection-noise frequency stability.
0
1
0
0
0
0
20,372
Remarks on planar edge-chromatic critical graphs
The only open case of Vizing's conjecture that every planar graph with $\Delta\geq 6$ is a class 1 graph is $\Delta = 6$. We give a short proof of the following statement: there is no 6-critical plane graph $G$, such that every vertex of $G$ is incident to at most three 3-faces. A stronger statement without restriction to critical graphs is stated in \cite{Wang_Xu_2013}. However, the proof given there works only for critical graphs. Furthermore, we show that every 5-critical plane graph has a 3-face which is adjacent to a $k$-face $(k\in \{3,4\})$. For $\Delta = 5$ our result gives insights into the structure of planar $5$-critical graphs, and the result for $\Delta=6$ gives support for the truth of Vizing's planar graph conjecture.
0
0
1
0
0
0
20,373
The LOFAR window on star-forming galaxies and AGN - curved radio SEDs and IR-radio correlation at $0 < z < 2.5$
We present a study of the low-frequency radio properties of star forming (SF) galaxies and active galactic nuclei (AGN) up to redshift $z=2.5$. The new spectral window probed by the Low Frequency Array (LOFAR) allows us to reconstruct the radio continuum emission from 150 MHz to 1.4 GHz to an unprecedented depth for a radio-selected sample of $1542$ galaxies in $\sim 7~ \rm{deg}^2$ of the LOFAR Boötes field. Using the extensive multi-wavelength dataset available in Boötes and detailed modelling of the FIR to UV spectral energy distribution (SED), we are able to separate the star-formation (N=758) and the AGN (N=784) dominated populations. We study the shape of the radio SEDs and their evolution across cosmic time and find significant differences in the spectral curvature between the SF galaxy and AGN populations. While the radio spectra of SF galaxies exhibit a weak but statistically significant flattening, AGN SEDs show a clear trend to become steeper towards lower frequencies. No evolution of the spectral curvature as a function of redshift is found for SF galaxies or AGN. We investigate the redshift evolution of the infrared-radio correlation (IRC) for SF galaxies and find that the ratio of total infrared to 1.4 GHz radio luminosities decreases with increasing redshift: $ q_{\rm 1.4GHz} = (2.45 \pm 0.04) \times (1+z)^{-0.15 \pm 0.03} $. Similarly, $q_{\rm 150MHz}$ shows a redshift evolution following $ q_{\rm 150MHz} = (1.72 \pm 0.04) \times (1+z)^{-0.22 \pm 0.05}$. Calibration of the 150 MHz radio luminosity as a star formation rate tracer suggests that a single power-law extrapolation from $q_{\rm 1.4GHz}$ is not an accurate approximation at all redshifts.
0
1
0
0
0
0
20,374
Interplay between the Inverse Scattering Method and Fokas's Unified Transform with an Application
It is known that the initial-boundary value problem for certain integrable partial differential equations (PDEs) on the half-line with integrable boundary conditions can be mapped to a special case of the Inverse Scattering Method (ISM) on the full-line. This can also be established within the so-called Unified Transform (UT) for initial-boundary value problems with linearizable boundary conditions. In this paper, we show a converse to this statement within the Ablowitz-Kaup-Newell-Segur (AKNS) scheme: the ISM on the full-line can be mapped to an initial-boundary value problem with linearizable boundary conditions. To achieve this, we need a matrix version of the UT that was introduced by the author to study integrable PDEs on star-graphs. As an application of the result, we show that the new, nonlocal reduction of the AKNS scheme introduced by Ablowitz and Musslimani to obtain the nonlocal Nonlinear Schrödinger (NLS) equation can be recast as an old, local reduction, thus putting the nonlocal NLS and the NLS equations on equal footing from the point of view of the reduction group theory of Mikhailov.
0
1
1
0
0
0
20,375
GNC of the SphereX Robot for Extreme Environment Exploration on Mars
Wheeled ground robots are limited from exploring extreme environments such as caves, lava tubes and skylights. Small robots that can utilize unconventional mobility through hopping, flying or rolling can overcome these limitations. Multiple robots operating as a team offer significant benefits over a single large robot, as they are not prone to single-point failure, enable distributed command and control and enable execution of tasks in parallel. These robots can complement large rovers and landers, helping to explore inaccessible sites, obtain samples and plan future exploration missions. Our robots, the SphereX, are 3 kg in mass, spherical and contain computers equivalent to current smartphones. They contain an array of guidance, navigation and control sensors and electronics. SphereX contains room for a 1-kg science payload, including for sample return. Our work in this field has recognized the need for miniaturized chemical mobility systems that provide power and propulsion. Our research explored the use of miniature rockets, including solid rockets, bi-propellants such as RP1/hydrogen-peroxide, and polyurethane/ammonium-perchlorate. These propulsion options provide maximum flight times of 10 minutes on Mars. Flying, and especially hovering, consumes significant fuel; hence, we have been developing our robots to perform ballistic hops that enable them to travel efficiently over long distances. Techniques are being developed to enable mid-course correction during a ballistic hop. Using multiple cameras, it is possible to reconstitute an image scene from motion blur. Hence our approach is to enable photo mapping as the robots travel on a ballistic hop. The same images would also be used for navigation and path planning. Using our proposed design approach, we are developing low-cost methods for surface exploration of planetary bodies using a network of small robots.
1
1
0
0
0
0
20,376
Stable Signatures for Dynamic Graphs and Dynamic Metric Spaces via Zigzag Persistence
When studying flocking/swarming behaviors in animals one is interested in quantifying and comparing the dynamics of the clustering induced by the coalescence and disbanding of animals in different groups. In a similar vein, studying the dynamics of social networks leads to the problem of characterizing groups/communities as they form and disperse throughout time. Motivated by this, we study the problem of obtaining persistent homology based summaries of time-dependent data. Given a finite dynamic graph (DG), we first construct a zigzag persistence module arising from linearizing the dynamic transitive graph naturally induced from the input DG. Based on standard results, we then obtain a persistence diagram or barcode from this zigzag persistence module. We prove that these barcodes are stable under perturbations in the input DG under a suitable distance between DGs that we identify. More precisely, our stability theorem can be interpreted as providing a lower bound for the distance between DGs. Since it relies on barcodes, and their bottleneck distance, this lower bound can be computed in polynomial time from the DG inputs. Since DGs can arise by applying the Rips functor (with a fixed threshold) to dynamic metric spaces, we are also able to derive related stable invariants for this richer class of dynamic objects. Along the way, we propose a summarization of dynamic graphs that captures their time-dependent clustering features, which we call formigrams. These set-valued functions generalize the notion of dendrogram, a prevalent tool for hierarchical clustering. In order to elucidate the relationship between our distance between two DGs and the bottleneck distance between their associated barcodes, we exploit recent advances in the stability of zigzag persistence due to Botnan and Lesnick, and to Bjerkevik.
0
0
1
0
0
0
20,377
A Data-driven Approach Towards Human-robot Collaborative Problem Solving in a Shared Space
We are developing a system for human-robot communication that enables people to communicate with robots in a natural way and is focused on solving problems in a shared space. Our strategy for developing this system is fundamentally data-driven: we use data from multiple input sources and train key components with various machine learning techniques. We developed a web application that collects data on how two humans communicate to accomplish a task, as well as a mobile laboratory that is instrumented to collect data on how two humans communicate to accomplish a task in a physically shared space. The data from these systems will be used to train and fine-tune the second stage of our system, in which the robot will be simulated through software. A physical robot will be used in the final stage of our project. We describe these instruments, a test suite and performance metrics designed to evaluate and automate the data-gathering process, and we evaluate an initial data set.
1
0
0
0
0
0
20,378
Discursive Landscapes and Unsupervised Topic Modeling in IR: A Validation of Text-As-Data Approaches through a New Corpus of UN Security Council Speeches on Afghanistan
The recent turn towards quantitative text-as-data approaches in IR brought new ways to study the discursive landscape of world politics. Here seen as complementary to qualitative approaches, quantitative assessments have the advantage of being able to order and make comprehensible vast amounts of text. However, the validity of unsupervised methods applied to the types of text available in large quantities needs to be established before they can speak to other studies relying on text and discourse as data. In this paper, we introduce a new text corpus of United Nations Security Council (UNSC) speeches on Afghanistan between 2001 and 2017; we study this corpus through unsupervised topic modeling (LDA) with the central aim to validate the topic categories that the LDA identifies; and we discuss the added value, and complementarity, of quantitative text-as-data approaches. We set up two tests using mixed-method approaches. Firstly, we evaluate the identified topics by assessing whether they conform with previous qualitative work on the development of the situation in Afghanistan. Secondly, we use network analysis to study the underlying social structures of what we will call 'speaker-topic relations' to see whether they correspond to known divisions and coalitions in the UNSC. In both cases we find that the unsupervised LDA indeed provides valid and valuable outputs. In addition, the mixed-method approaches themselves reveal interesting patterns deserving future qualitative research. Amongst these are the coalitions and dynamics around the 'women and human rights' topic as part of the UNSC debates on Afghanistan.
1
0
0
0
0
0
20,379
Lattice Gas with Molecular Dynamics Collision Operator
We introduce a lattice gas implementation that is based on coarse-graining a Molecular Dynamics (MD) simulation. Such a lattice gas is similar to standard lattice gases, but its collision operator is informed by an underlying MD simulation. This can be considered an optimal lattice gas implementation because it allows for the representation of any system that can be simulated with MD. We show here that equilibrium behavior of the popular lattice Boltzmann algorithm is consistent with this optimal lattice gas. This comparison allows us to make a more accurate identification of the expressions for temperature and pressure in lattice Boltzmann simulations which turn out to be related not only to the physical temperature and pressure but also to the lattice discretization. We show that for any spatial discretization we need to choose a particular temporal discretization to recover the lattice Boltzmann equilibrium.
0
1
0
0
0
0
20,380
From dynamical systems with time-varying delay to circle maps and Koopmanism
In the present paper we investigate the influence of the retarded access by a time-varying delay on the dynamics of delay systems. We show that there are two universality classes of delays, which lead to fundamental differences in dynamical quantities such as the Lyapunov spectrum. Therefore we introduce an operator theoretic framework, where the solution operator of the delay system is decomposed into the Koopman operator describing the delay access and an operator similar to the solution operator known from systems with constant delay. The Koopman operator corresponds to an iterated map, called access map, which is defined by the iteration of the delayed argument of the delay equation. The dynamics of this one-dimensional iterated map determines the universality classes of the infinite-dimensional state dynamics governed by the delay differential equation. In this way, we connect the theory of time-delay systems with the theory of circle maps and the framework of the Koopman operator. In the present paper we extend our previous work [Otto, Müller, and Radons, Phys. Rev. Lett. 118, 044104 (2017)], by elaborating the mathematical details and presenting further results also on the Lyapunov vectors.
0
1
1
0
0
0
20,381
Engineering Frequency-dependent Superfluidity in Bose-Fermi Mixtures
Unconventional superconductivity and superfluidity are among the most exciting and fascinating quantum states in condensed matter physics. Usually these states are characterized by a non-trivial spatial symmetry of the pairing order parameter, such as in $^{3}He$ and high-$T_{c}$ cuprates. Besides spatial dependence, the order parameter can have an unconventional frequency dependence, which is also allowed by Fermi-Dirac statistics. For instance, odd-frequency pairing is an exciting paradigm when discussing exotic superfluidity or superconductivity and is yet to be realized in experiments. In this paper we propose a symmetry-based method of controlling the frequency dependence of the pairing order parameter via manipulating the inversion symmetry of the system. First, a toy model is introduced to illustrate that the frequency dependence of the order parameter can be adjusted by controlling the inversion symmetry of the system. Second, taking advantage of the recent rapid developments of shaken optical lattices in ultracold gases, we propose a Bose-Fermi mixture to realize such frequency-dependent superfluids. The key idea is introducing a frequency-dependent attraction between fermions mediated by Bogoliubov phonons with an asymmetric dispersion. Our proposal should pave the way for exploring frequency-dependent superconductors or superfluids with cold atoms.
0
1
0
0
0
0
20,382
In-Silico Proportional-Integral Moment Control of Stochastic Reaction Networks with Applications to Gene Expression (with Dimerization)
The problem of controlling the mean and the variance of a species of interest in a simple gene expression is addressed. It is shown that the protein mean level can be globally and robustly tracked to any desired value using a simple PI controller that satisfies certain sufficient conditions. Controlling both the mean and variance however requires an additional control input, e.g. the mRNA degradation rate, and local robust tracking of mean and variance is proved to be achievable using multivariable PI control, provided that the reference point satisfies necessary conditions imposed by the system. Even more importantly, it is shown that there exist PI controllers that locally, robustly and simultaneously stabilize all the equilibrium points inside the admissible region. The results are then extended to the mean control of a gene expression with protein dimerization. It is shown that the moment closure problem can be circumvented without invoking any moment closure technique. Local stabilization and convergence of the average dimer population to any desired reference value is ensured using a pure integral control law. Explicit bounds on the controller gain are provided and shown to be valid for any reference value. As a byproduct, an explicit upper-bound of the variance of the monomer species, acting on the system as unknown input due to the moment openness, is obtained. The results are illustrated by simulation.
1
0
0
0
1
0
20,383
Big enterprise registration data imputation: Supporting spatiotemporal analysis of industries in China
Big, fine-grained enterprise registration data that includes time and location information enables us to quantitatively analyze, visualize, and understand the patterns of industries at multiple scales across time and space. However, data quality issues like incompleteness and ambiguity hinder such analysis and application. These issues become more challenging when the volume of data is immense and constantly growing. High Performance Computing (HPC) frameworks can tackle big data computational issues, but few studies have systematically investigated imputation methods for enterprise registration data in this type of computing environment. In this paper, we propose a big data imputation workflow based on Apache Spark as well as a bare-metal computing cluster, to impute enterprise registration data. We integrated external data sources, employed Natural Language Processing (NLP), and compared several machine-learning methods to address incompleteness and ambiguity problems found in enterprise registration data. Experimental results illustrate the feasibility, efficiency, and scalability of the proposed HPC-based imputation framework, which also provides a reference for processing other big georeferenced text data. Using these imputation results, we visualize and briefly discuss the spatiotemporal distribution of industries in China, demonstrating the potential applications of such data when quality issues are resolved.
1
0
0
0
0
0
20,384
Fast Simulation of Vehicles with Non-deformable Tracks
This paper presents a novel technique that allows for both computationally fast and sufficiently plausible simulation of vehicles with non-deformable tracks. The method is based on an effect we have called Contact Surface Motion. A comparison with several other methods for simulation of tracked vehicle dynamics is presented with the aim to evaluate methods that are available off-the-shelf or with minimum effort in general-purpose robotics simulators. The proposed method is implemented as a plugin for the open-source physics-based simulator Gazebo using the Open Dynamics Engine.
1
0
0
0
0
0
20,385
Persistent Hidden States and Nonlinear Transformation for Long Short-Term Memory
Recurrent neural networks (RNNs) have been drawing much attention with great success in many applications like speech recognition and neural machine translation. Long short-term memory (LSTM) is one of the most popular RNN units in deep learning applications. LSTM transforms the input and the previous hidden states to the next states with the affine transformation, multiplication operations and a nonlinear activation function, which makes a good data representation for a given task. The affine transformation includes rotation and reflection, which change the semantic or syntactic information of dimensions in the hidden states. However, considering that a model interprets the output sequence of LSTM over the whole input sequence, the dimensions of the states need to keep the same type of semantic or syntactic information regardless of the location in the sequence. In this paper, we propose a simple variant of the LSTM unit, persistent recurrent unit (PRU), where each dimension of hidden states keeps persistent information across time, so that the space keeps the same meaning over the whole sequence. In addition, to improve the nonlinear transformation power, we add a feedforward layer in the PRU structure. In the experiment, we evaluate our proposed methods with three different tasks, and the results confirm that our methods have better performance than the conventional LSTM.
0
0
0
1
0
0
20,386
Satellite Image-based Localization via Learned Embeddings
We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability to localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.
1
0
0
0
0
0
20,387
Optimality of codes with respect to error probability in Gaussian noise
We consider geometrical optimization problems related to optimizing the error probability in the presence of Gaussian noise. One famous question in the field is the "weak simplex conjecture". We discuss possible approaches to it, and state related conjectures about the Gaussian measure, in particular the conjecture about minimizing the Gaussian measure of a simplex. We also consider antipodal codes, apply the Šidák inequality and establish some theoretical and some numerical results about their optimality.
1
0
1
0
0
0
20,388
Improved TDNNs using Deep Kernels and Frequency Dependent Grid-RNNs
Time delay neural networks (TDNNs) are an effective acoustic model for large vocabulary speech recognition. The strength of the model can be attributed to its ability to effectively model long temporal contexts. However, current TDNN models are relatively shallow, which limits the modelling capability. This paper proposes a method of increasing the network depth by deepening the kernel used in the TDNN temporal convolutions. The best performing kernel consists of three fully connected layers with a residual (ResNet) connection from the output of the first to the output of the third. The addition of spectro-temporal processing as the input to the TDNN in the form of a convolutional neural network (CNN) and a newly designed Grid-RNN was investigated. The Grid-RNN strongly outperforms a CNN if different sets of parameters for different frequency bands are used and can be further enhanced by using a bi-directional Grid-RNN. Experiments using the multi-genre broadcast (MGB3) English data (275h) show that deep kernel TDNNs reduce the word error rate (WER) by 6% relative and, when combined with the frequency dependent Grid-RNN, give a relative WER reduction of 9%.
0
0
0
1
0
0
20,389
A Structured Self-attentive Sentence Embedding
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification, and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.
1
0
0
0
0
0
20,390
When Neurons Fail
We view a neural network as a distributed system whose neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase. We give tight bounds on the number of neurons that can fail without harming the result of a computation. To determine our bounds, we leverage the fact that neural activation functions are Lipschitz-continuous. Our bound is on a quantity we call the \textit{Forward Error Propagation}, which captures how much error is propagated by a neural network when a given number of components is failing. Computing this quantity only requires looking at the topology of the network, while experimentally assessing the robustness of a network requires the costly experiment of looking at all possible inputs and testing all possible configurations of the network corresponding to different failure situations, facing a discouraging combinatorial explosion. We distinguish the case of neurons that can fail and stop their activity (crashed neurons) from the case of neurons that can fail by transmitting arbitrary values (Byzantine neurons). Interestingly, as we show in the paper, our bound can easily be extended to the case where synapses can fail. We show how our bound can be leveraged to quantify the effect of memory cost reduction on the accuracy of a neural network, and to estimate the amount of information any neuron needs from its preceding layer, thereby enabling a boosting scheme that prevents neurons from waiting for unnecessary signals. We finally discuss the trade-off between neural network robustness and learning cost.
0
0
0
1
0
0
20,391
Heavy Traffic Limit for a Tandem Queue with Identical Service Times
We consider a two-node tandem queueing network in which the upstream queue is M/G/1 and each job reuses its upstream service requirement when moving to the downstream queue. Both servers employ the first-in-first-out policy. We investigate the amount of work in the second queue at certain embedded arrival time points, namely when the upstream queue has just emptied. We focus on the case of infinite-variance service times and obtain a heavy traffic process limit for the embedded Markov chain.
0
0
1
0
0
0
20,392
Autonomous Sweet Pepper Harvesting for Protected Cropping Systems
In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of the sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.
1
0
0
0
0
0
20,393
AMI SZ observation of galaxy-cluster merger CIZA J2242+5301: perpendicular flows of gas and dark matter
AMI observations towards CIZA J2242+5301, in comparison with observations of weak gravitational lensing and X-ray emission from the literature, are used to investigate the behaviour of non-baryonic dark matter (NBDM) and gas during the merger. Analysis of the Sunyaev-Zel'dovich (SZ) signal indicates the presence of high pressure gas elongated perpendicularly to the X-ray and weak-lensing morphologies which, given the merger-axis constraints in the literature, implies that high pressure gas is pushed out into a linear structure during core passing. Simulations in the literature closely matching the inferred merger scenario show the formation of gas density and temperature structures perpendicular to the merger axis. These SZ observations are challenging for modified gravity theories in which NBDM is not the dominant contributor to galaxy-cluster gravity.
0
1
0
0
0
0
20,394
Development of a computer-aided design software for dental splint in orthognathic surgery
In orthognathic surgery, dental splints are important and necessary to help the surgeon reposition the maxilla or mandible. However, the traditional manual design of dental splints is difficult and time-consuming. Research on computer-aided design software for dental splints is rarely reported. Our purpose is to develop novel dedicated software, named EasySplint, to design dental splints conveniently and efficiently. The design can be divided into two steps: the generation of an initial splint base and the Boolean operation between it and the maxilla-mandibular model. The initial splint base is formed by ruled surfaces reconstructed using manually picked points. Then, a method to accomplish the Boolean operation based on the distance fields of the two meshes is proposed. Interference elimination can be conducted on the basis of the marching cubes algorithm and Boolean operations. The accuracy of the dental splint can be guaranteed since the original mesh is utilized to form the result surface. Using EasySplint, a dental splint can be designed in about 10 minutes and saved as a stereolithography (STL) file for 3D printing in clinical applications. Three phantom experiments were conducted and the efficiency of our method was demonstrated.
1
1
0
0
0
0
20,395
Energy-transport systems for optical lattices: derivation, analysis, simulation
Energy-transport equations for the transport of fermions in optical lattices are formally derived from a Boltzmann transport equation with a periodic lattice potential in the diffusive limit. The limit model possesses a formal gradient-flow structure like in the case of the energy-transport equations for semiconductors. At the zeroth-order high temperature limit, the energy-transport equations reduce to the whole-space logarithmic diffusion equation which has some unphysical properties. Therefore, the first-order expansion is derived and analyzed. The existence of weak solutions to the time-discretized system for the particle and energy densities with periodic boundary conditions is proved. The difficulties are the nonstandard degeneracy and the quadratic gradient term. The main tool of the proof is a result on the strong convergence of the gradients of the approximate solutions. Numerical simulations in one space dimension show that the particle density converges to a constant steady state if the initial energy density is sufficiently large, otherwise the particle density converges to a nonconstant steady state.
0
0
1
0
0
0
20,396
First principles study of structural, magnetic and electronic properties of CrAs
We report ab initio density functional calculations of the structural and magnetic properties, and the electronic structure of CrAs. To simulate the observed pressure-driven experimental results, we perform our analysis for different volumes of the unit cell, showing that the structural, magnetic and electronic properties strongly depend on the size of the cell. We find that the calculated quantities are in good agreement with the experimental data, and we review our results in terms of the observed superconductivity.
0
1
0
0
0
0
20,397
Subspace Robust Wasserstein distances
Making sense of Wasserstein distances between discrete measures in high-dimensional settings remains a challenge. Recent work has advocated a two-step approach to improve robustness and facilitate the computation of optimal transport, using for instance projections on random real lines, or a preliminary quantization to reduce the number of points. We propose in this work a new robust variant of the Wasserstein distance. This quantity captures the maximal possible distance that can be realized between these two measures after they have been projected orthogonally onto a k-dimensional subspace. We show that this distance inherits several favorable properties of OT, and that computing it can be cast as a convex problem involving the top k eigenvalues of the second order moment matrix of the displacements induced by a transport plan. We provide algorithms to approximate the computation of this saddle point using entropic regularization, and illustrate the interest of this approach empirically.
1
0
0
1
0
0
20,398
Wind accretion onto compact objects
X-ray emission associated with accretion onto compact objects displays significant levels of photometric and spectroscopic time variability. When the accretor orbits a supergiant star, it captures a fraction of the supersonic radiatively driven wind, which forms shocks in its vicinity. The amplitude and stability of this gravitational beaming of the flow condition the mass accretion rate that is ultimately responsible for the X-ray luminosity of those Supergiant X-ray Binaries. Whether this low angular momentum inflow can form a disc-like structure susceptible to well-known instabilities remains an open question. Using state-of-the-art numerical setups, we characterized the structure of a Bondi-Hoyle-Lyttleton flow onto a compact object, from the shock down to the vicinity of the accretor, typically five orders of magnitude smaller. The evolution of the mass accretion rate and of the bow shock which forms around the accretor (transverse structure, opening angle, stability, temperature profile) with the Mach number of the incoming flow is described in detail. The robustness of those simulations, based on the High Performance Computing MPI-AMRVAC code, is supported by the topology of the inner sonic surface, in agreement with theoretical expectations. We developed a synthetic model of mass transfer in Supergiant X-ray Binaries which couples the launching of the wind according to the stellar parameters, the orbital evolution of the streamlines in a modified Roche potential, and the accretion process. We show that the shape of the permanent flow is entirely determined by the mass ratio, the filling factor, the Eddington factor and the alpha force multiplier. Provided scales such as the orbital period are known, we can trace the observables back to evaluate the mass accretion rates, the accretion mechanism (stream- or wind-dominated) and the shearing of the inflow.
0
1
0
0
0
0
20,399
Riemannian Stein Variational Gradient Descent for Bayesian Inference
We develop Riemannian Stein Variational Gradient Descent (RSVGD), a Bayesian inference method that generalizes Stein Variational Gradient Descent (SVGD) to Riemann manifolds. The benefits are two-fold: (i) for inference tasks in Euclidean spaces, RSVGD has the advantage over SVGD of utilizing information geometry, and (ii) for inference tasks on Riemann manifolds, RSVGD brings the unique advantages of SVGD to the Riemannian world. To appropriately transfer to Riemann manifolds, we conceive novel and non-trivial techniques for RSVGD, which are required by the intrinsically different characteristics of general Riemann manifolds from Euclidean spaces. We also discover Riemannian Stein's Identity and Riemannian Kernelized Stein Discrepancy. Experimental results show the advantages over SVGD of exploring distribution geometry and the advantages of particle-efficiency, iteration-effectiveness and approximation flexibility over other inference methods on Riemann manifolds.
0
0
0
1
0
0
20,400
Suspension-thermal noise in spring-antispring systems for future gravitational-wave detectors
Spring-antispring systems have been investigated as possible low-frequency seismic isolation in high-precision optical experiments. These systems provide the possibility to tune the fundamental resonance frequency to, in principle, arbitrarily low values, and at the same time maintain a compact design of the isolation system. It was argued though that thermal noise in spring-antispring systems would not be as small as one may naively expect from lowering the fundamental resonance frequency. In this paper, we present a detailed calculation of the suspension thermal noise for a specific spring-antispring system, namely the Roberts linkage. We find a concise expression of the suspension thermal noise spectrum, which assumes a form very similar to the well-known expression for a simple pendulum. It is found that while the Roberts linkage can provide strong seismic isolation due to a very low fundamental resonance frequency, its thermal noise is instead determined by the dimensions of the system. We argue that this is true for all horizontal mechanical isolation systems with spring-antispring dynamics. This imposes strict requirements on mechanical spring-antispring systems for the seismic isolation in potential future low-frequency gravitational-wave detectors, as we discuss for the four main concepts: atom-interferometric, superconducting, torsion-bars, and conventional laser interferometer.
0
1
0
0
0
0