ID (int64, 1-21k) | TITLE (string, 7-239 chars) | ABSTRACT (string, 7-2.76k chars) | Computer Science (int64, 0-1) | Physics (int64, 0-1) | Mathematics (int64, 0-1) | Statistics (int64, 0-1) | Quantitative Biology (int64, 0-1) | Quantitative Finance (int64, 0-1)
---|---|---|---|---|---|---|---|---|
20,701 | Evidence for universality in the initial planetesimal mass function | Planetesimals may form from the gravitational collapse of dense particle
clumps initiated by the streaming instability. We use simulations of
aerodynamically coupled gas-particle mixtures to investigate whether the
properties of planetesimals formed in this way depend upon the sizes of the
particles that participate in the instability. Based on three high resolution
simulations that span a range of dimensionless stopping time $6 \times 10^{-3}
\leq \tau \leq 2$, no statistically significant differences in the initial
planetesimal mass function are found. The mass functions are fit by a
power-law, ${\rm d}N / {\rm d}M_p \propto M_p^{-p}$, with $p=1.5-1.7$ and
errors of $\Delta p \approx 0.1$. Comparing the particle density fields prior
to collapse, we find that the high wavenumber power spectra are similarly
indistinguishable, though the large-scale geometry of structures induced via
the streaming instability is significantly different between all three cases.
We interpret the results as evidence for a near-universal slope to the mass
function, arising from the small-scale structure of streaming-induced
turbulence.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,702 | On covering systems of integers | A covering system of the integers is a finite collection of modular residue
classes $\{a_m \bmod{m}\}_{m \in S}$ whose union is all integers. Given a
finite set $S$ of moduli, it is often difficult to tell whether there is a
choice of residues modulo elements of $S$ covering the integers. Hough has
shown that if the smallest modulus in $S$ is at least $10^{16}$, then there is
none. However, the question of whether there is a covering of the integers with
all odd moduli remains open. We consider multiplicative restrictions on the set
of moduli to generalize Hough's negative solution to the minimum modulus
problem. In particular, we find that every covering system of the integers has
a modulus divisible by a prime number less than or equal to $19$. Hough and
Nielsen have shown that every covering system has a modulus divisible by either
$2$ or $3$.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,703 | Asymptotically safe cosmology - a status report | Asymptotic Safety, based on a non-Gaussian fixed point of the gravitational
renormalization group flow, provides an elegant mechanism for completing the
gravitational force at sub-Planckian scales. At high energies the fixed point
controls the scaling of couplings such that unphysical divergences are absent
while the emergence of classical low-energy physics is linked to a crossover
between two renormalization group fixed points. These features make Asymptotic
Safety an attractive framework for cosmological model building. The resulting
scenarios may naturally give rise to a quantum gravity driven inflationary
phase in the very early universe and an almost scale-free fluctuation spectrum.
Moreover, effective descriptions arising from a renormalization group
improvement permit a direct comparison to cosmological observations, e.g.,
Planck data.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,704 | Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction | Future predictions on sequence data (e.g., videos or audios) require the
algorithms to capture non-Markovian and compositional properties of high-level
semantics. Context-free grammars are natural choices to capture such
properties, but traditional grammar parsers (e.g., Earley parser) only take
symbolic sentences as inputs. In this paper, we generalize the Earley parser to
parse sequence data which is neither segmented nor labeled. This generalized
Earley parser integrates a grammar parser with a classifier to find the optimal
segmentation and labels, and makes top-down future predictions. Experiments
show that our method significantly outperforms other approaches for future
human activity prediction.
| 0 | 0 | 0 | 1 | 0 | 0 |
20,705 | COCrIP: Compliant OmniCrawler In-pipeline Robot | This paper presents a modular in-pipeline climbing robot with a novel
compliant foldable OmniCrawler mechanism. The circular cross-section of the
OmniCrawler module enables a holonomic motion to facilitate the alignment of
the robot in the direction of bends. Additionally, the crawler mechanism
provides a fair amount of traction, even on slippery surfaces. These advantages
of crawler modules have been further supplemented by incorporating active
compliance in the module itself which helps to negotiate sharp bends in small
diameter pipes. The robot has a series of 3 such compliant foldable modules
interconnected by the links via passive joints. For the desirable pipe diameter
and curvature of the bends, the spring stiffness value for each passive joint
is determined by formulating a constrained optimization problem using the
quasi-static model of the robot. Moreover, a minimum friction coefficient value
between the module-pipe surface which can be vertically climbed by the robot
without slipping is estimated. The numerical simulation results have further
been validated by experiments on a real robot prototype.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,706 | An Inversion-Based Learning Approach for Improving Impromptu Trajectory Tracking of Robots with Non-Minimum Phase Dynamics | This paper presents a learning-based approach for impromptu trajectory
tracking for non-minimum phase systems, i.e., systems with unstable inverse
dynamics. Inversion-based feedforward approaches are commonly used for
improving tracking performance; however, these approaches are not directly
applicable to non-minimum phase systems due to their inherent instability. In
order to resolve the instability issue, existing methods have assumed that the
system model is known and used pre-actuation or inverse approximation
techniques. In this work, we propose an approach for learning a stable,
approximate inverse of a non-minimum phase baseline system directly from its
input-output data. Through theoretical discussions, simulations, and
experiments on two different platforms, we show the stability of our proposed
approach and its effectiveness for high-accuracy, impromptu tracking. Our
approach also shows that including more information in the training, as is
commonly assumed to be useful, does not lead to better performance but may
trigger instability and impact the effectiveness of the overall approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,707 | Decay Estimates for 1-D Parabolic PDEs with Boundary Disturbances | In this work decay estimates are derived for the solutions of 1-D linear
parabolic PDEs with disturbances at both boundaries and distributed
disturbances. The decay estimates are given in the L2 and H1 norms of the
solution and discontinuous disturbances are allowed. Although an eigenfunction
expansion for the solution is exploited for the proof of the decay estimates,
the estimates do not require knowledge of the eigenvalues and the
eigenfunctions of the corresponding Sturm-Liouville operator. Examples show
that the obtained results can be applied for the stability analysis of
parabolic PDEs with nonlocal terms.
| 1 | 0 | 1 | 0 | 0 | 0 |
20,708 | GEANT4 Simulation of Nuclear Interaction Induced Soft Errors in Digital Nanoscale Electronics: Interrelation Between Proton and Heavy Ion Impacts | A simple and self-consistent approach has been proposed for simulation of the
proton-induced soft error rate based on the heavy ion induced single event
upset cross-section data and vice versa. The approach relies on the GEANT4
assisted Monte Carlo simulation of the secondary particle LET spectra produced
by nuclear interactions. The method has been validated with the relevant
in-flight soft error rate data for space protons and heavy ions. An approximate
analytical relation is proposed and validated for a fast recalculation between
the two types of experimental data.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,709 | Analysis and Measurement of the Transfer Matrix of a 9-cell 1.3-GHz Superconducting Cavity | Superconducting linacs are capable of producing intense, stable, high-quality
electron beams that have found widespread applications in science and industry.
The 9-cell 1.3-GHz superconducting standing-wave accelerating RF cavity
originally developed for $e^+/e^-$ linear-collider applications [B. Aune {\em
et al.}, Phys. Rev. ST Accel. Beams {\bf 3}, 092001 (2000)] has been broadly
employed in various superconducting-linac designs. In this paper we discuss the
transfer matrix of such a cavity and present its measurement performed at the
Fermilab Accelerator Science and Technology (FAST) facility. The experimental
results are found to be in agreement with analytical calculations and numerical
simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,710 | Size-aware Sharding For Improving Tail Latencies in In-memory Key-value Stores | This paper introduces the concept of size-aware sharding to improve tail
latencies for in-memory key-value stores, and describes its implementation in
the Minos key-value store. Tail latencies are crucial in distributed
applications with high fan-out ratios, because overall response time is
determined by the slowest response. Size-aware sharding distributes requests
for keys to cores according to the size of the item associated with the key. In
particular, requests for small and large items are sent to disjoint subsets of
cores. Size-aware sharding improves tail latencies by avoiding head-of-line
blocking, in which a request for a small item gets queued behind a request for
a large item. Alternative size-unaware approaches to sharding, such as
keyhash-based sharding, request dispatching and stealing do not avoid
head-of-line blocking, and therefore exhibit worse tail latencies. The
challenge in implementing size-aware sharding is to maintain high throughput by
avoiding the cost of software dispatching and by achieving load balancing
between different cores. Minos uses hardware dispatch for all requests for
small items, which form the very large majority of all requests. It achieves
load balancing by adapting the number of cores handling requests for small and
large items to their relative presence in the workload. We compare Minos to
three state-of-the-art designs of in-memory KV stores. Compared to its closest
competitor, Minos achieves a 99th percentile latency that is up to two orders
of magnitude lower. Put differently, for a given value for the 99th percentile
latency equal to 10 times the mean service time, Minos achieves a throughput
that is up to 7.4 times higher.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,711 | Abell 2744 may be a supercluster aligned along the sightline | To explain the unusual richness and compactness of the Abell 2744, we propose
a hypothesis that it may be a rich supercluster aligned along the sightline,
and present a supporting evidence obtained numerically from the MultiDark
Planck 2 simulations with a linear box size of $1\,h^{-1}$Gpc. Applying the
friends-of-friends (FoF) algorithm with a linkage length of $0.33$ to a sample
of the cluster-size halos from the simulations, we identify the superclusters
and investigate how many superclusters have filamentary branches that would
appear to be similar to the Abell 2744 if the filamentary axis is aligned with
the sightline. Generating randomly a unit vector as a sightline at the position
of the core member of each supercluster and projecting the positions of the
members onto the plane perpendicular to the direction of the sightline, we
measure two dimensional distances ($R_{2d}$) of the member halos from the core
for each supercluster. Defining an Abell 2744-like supercluster as the one
having a filamentary branch composed of eight or more members with $R_{2d}\le
1\,$Mpc and masses comparable to those of the observed Abell 2744
substructures, we find one Abell 2744-like supercluster at $z=0.3$ and two at
$z=0$. Repeating the same analysis but with the data from the Big MultiDark
Planck simulations performed on a larger box of linear size of
$2.5\,h^{-1}$Gpc, we find that the number of the Abell 2744-like superclusters
at $z=0$ increases up to eighteen, among which three are found more massive
than $5\times 10^{15}\,M_{\odot}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,712 | Classification of Pressure Gradient of Human Common Carotid Artery and Ascending Aorta on the Basis of Age and Gender | The current work is done to see which artery has more chance of having
cardiovascular diseases by measuring value of pressure gradient in the common
carotid artery (CCA) and ascending aorta according to age and gender. Pressure
gradient is determined in the CCA and ascending aorta of presumed healthy
volunteers, having age between 10 and 60 years. A real 2D model of both aorta
and common carotid artery is constructed for different age groups using
computational fluid dynamics (CFD). The pressure gradients of both arteries are
calculated and compared for different age groups and genders. It is found that,
as the diameter of the common carotid artery and ascending aorta increases with
advancing age, the pressure gradient decreases. The pressure gradient of the
aorta is found to be lower than that of the common carotid artery for both age and gender.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,713 | Second-order and local characteristics of network intensity functions | The last decade has witnessed an increase of interest in the spatial analysis
of structured point patterns over networks whose analysis is challenging
because of geometrical complexities and unique methodological problems. In this
context, it is essential to incorporate the network specificity into the
analysis as the locations of events are restricted to areas covered by line
segments. Relying on concepts originating from graph theory, we extend the
notions of first-order network intensity functions to second-order and local
network intensity functions. We consider two types of local indicators of
network association functions which can be understood as adaptations of the
primary ideas of local analysis on the plane. We develop the node-wise and
cross-hierarchical type of local functions. A real dataset on urban
disturbances is also presented.
| 0 | 0 | 0 | 1 | 0 | 0 |
20,714 | Modeling of hysteresis loop and its applications in ferroelectric materials | In order to understand the physical hysteresis loops clearly, we constructed
a novel model, which is combined with the electric field, the temperature, and
the stress as one synthetically parameter. This model revealed the shape of
hysteresis loop was determined by few variables in ferroelectric materials: the
saturation of polarization, the coercive field, the electric susceptibility and
the equivalent field. Comparison with experimental results revealed the model
can retrace polarization versus electric field and temperature. As a
applications of this model, the calculate formula of energy storage efficiency,
the electrocaloric effect, and the P(E,T) function have also been included in
this article.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,715 | Online Factorization and Partition of Complex Networks From Random Walks | Finding the reduced-dimensional structure is critical to understanding
complex networks. Existing approaches such as spectral clustering are
applicable only when the full network is explicitly observed. In this paper, we
focus on the online factorization and partition of implicit large-scale
networks based on observations from an associated random walk. We formulate
this into a nonconvex stochastic factorization problem and propose an efficient
and scalable stochastic generalized Hebbian algorithm. The algorithm is able to
process dependent state-transition data dynamically generated by the underlying
network and learn a low-dimensional representation for each vertex. By applying
a diffusion approximation analysis, we show that the continuous-time limiting
process of the stochastic algorithm converges globally to the "principal
components" of the Markov chain and achieves a nearly optimal sample
complexity. Once given the learned low-dimensional representations, we further
apply clustering techniques to recover the network partition. We show that when
the associated Markov process is lumpable, one can recover the partition
exactly with high probability. We apply the proposed approach to model the
traffic flow of Manhattan as city-wide random walks. By using our algorithm to
analyze the taxi trip data, we discover a latent partition of the Manhattan
city that closely matches the traffic dynamics.
| 1 | 0 | 1 | 1 | 0 | 0 |
20,716 | Powerful genome-wide design and robust statistical inference in two-sample summary-data Mendelian randomization | Two-sample summary-data Mendelian randomization (MR) has become a popular
research design to estimate the causal effect of risk exposures. With the
sample size of GWAS continuing to increase, it is now possible to utilize
genetic instruments that are only weakly associated with the exposure. To
maximize the statistical power of MR, we propose a genome-wide design where
more than a thousand genetic instruments are used. For the statistical
analysis, we use an empirical partially Bayes approach where instruments are
weighted according to their strength, thus weak instruments bring less
variation to the estimator. The estimator is highly efficient with many weak
genetic instruments and is robust to balanced and/or sparse pleiotropy. We
apply our method to estimate the causal effect of body mass index (BMI) and
major blood lipids on cardiovascular disease outcomes and obtain substantially
shorter confidence intervals. Some new and statistically significant findings
are: the estimated causal odds ratio of BMI on ischemic stroke is 1.19 (95% CI:
1.07--1.32, p-value < 0.001); the estimated causal odds ratio of high-density
lipoprotein cholesterol (HDL-C) on coronary artery disease (CAD) is 0.78 (95%
CI 0.73--0.84, p-value < 0.001). However, the estimated effect of HDL-C becomes
substantially smaller and statistically non-significant when we only use the
strong instruments. By employing a genome-wide design and robust statistical
methods, the statistical power of MR studies can be greatly improved. Our
empirical results suggest that, even though the relationship between HDL-C and
CAD appears to be highly heterogeneous, it may be too soon to completely
dismiss the HDL hypothesis.
| 0 | 0 | 0 | 1 | 0 | 0 |
20,717 | A Benchmark on Reliability of Complex Discrete Systems: Emergency Power Supply of a Nuclear Power Plant | This paper contains two parts: the description of a real electrical system,
with many redundancies, reconfigurations and repairs, then the description of a
reliability model of this system, based on the BDMP (Boolean logic Driven
Markov Processes) formalism and partial results of a reliability and
availability calculation made from this model.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,718 | Riemannian Gaussian distributions on the space of positive-definite quaternion matrices | Recently, Riemannian Gaussian distributions were defined on spaces of
positive-definite real and complex matrices. The present paper extends this
definition to the space of positive-definite quaternion matrices. In order to
do so, it develops the Riemannian geometry of the space of positive-definite
quaternion matrices, which is shown to be a Riemannian symmetric space of
non-positive curvature. The paper gives original formulae for the Riemannian
metric of this space, its geodesics, and distance function. Then, it develops
the theory of Riemannian Gaussian distributions, including the exact expression
of their probability density, their sampling algorithm and statistical
inference.
| 0 | 0 | 1 | 1 | 0 | 0 |
20,719 | Analysis of the power flow in Low Voltage DC grids | Power flow in a low voltage direct current grid (LVDC) is a non-linear
problem, just as in its ac counterpart. This paper demonstrates that, unlike in ac
grids, convergence and uniqueness of the solution can be guaranteed in this
type of grids. The result is not a linearization nor an approximation, but an
analysis of the set of non-linear algebraic equations, which is valid for any
LVDC grid regardless of its size, topology or load condition. Computer simulations
corroborate the theoretical analysis.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,720 | Multi-scale bilinear restriction estimates for general phases | We prove (adjoint) bilinear restriction estimates for general phases at
different scales in the full non-endpoint mixed norm range, and give bounds
with a sharp and explicit dependence on the phases. These estimates have
applications to high-low frequency interactions for solutions to partial
differential equations, as well as to the linear restriction problem for
surfaces with degenerate curvature. As a consequence, we obtain new bilinear
restriction estimates for elliptic phases and wave/Klein-Gordon interactions in
the full bilinear range, and give a refined Strichartz inequality for the
Klein-Gordon equation. In addition, we extend these bilinear estimates to hold
in adapted function spaces by using a transference type principle which holds
for vector valued waves.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,721 | Program algebra for Turing-machine programs | This note presents an algebraic theory of instruction sequences with
instructions for Turing tapes as basic instructions, the behaviours produced by
the instruction sequences concerned under execution, and the interaction
between such behaviours and the Turing tapes provided by an execution
environment. This theory provides a setting for investigating issues relating
to computability and computational complexity that is more general than the
closely related Turing-machine models of computation. The theory is essentially
an instantiation of a parameterized algebraic theory which is the basis of a
line of research in which issues relating to a wide variety of subjects from
computer science have been rigorously investigated thinking in terms of
instruction sequences.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,722 | CTD: Fast, Accurate, and Interpretable Method for Static and Dynamic Tensor Decompositions | How can we find patterns and anomalies in a tensor, or multi-dimensional
array, in an efficient and directly interpretable way? How can we do this in an
online environment, where a new tensor arrives each time step? Finding patterns
and anomalies in a tensor is a crucial problem with many applications,
including building safety monitoring, patient health monitoring, cyber
security, terrorist detection, and fake user detection in social networks.
Standard PARAFAC and Tucker decomposition results are not directly
interpretable. Although a few sampling-based methods have previously been
proposed towards better interpretability, they need to be made faster, more
memory efficient, and more accurate.
In this paper, we propose CTD, a fast, accurate, and directly interpretable
tensor decomposition method based on sampling. CTD-S, the static version of
CTD, provably guarantees an accuracy that is 17 ~ 83x higher than
that of the state-of-the-art method. Also, CTD-S is made 5 ~ 86x faster, and 7
~ 12x more memory-efficient than the state-of-the-art method by removing
redundancy. CTD-D, the dynamic version of CTD, is the first interpretable
dynamic tensor decomposition method ever proposed. Also, it is made 2 ~ 3x
faster than the already fast CTD-S by exploiting factors at the previous time step and
by reordering operations. With CTD, we demonstrate how the results can be
effectively interpreted in the online distributed denial of service (DDoS)
attack detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,723 | Memory Augmented Control Networks | Planning problems in partially observable environments cannot be solved
directly with convolutional networks and require some form of memory. But, even
memory networks with sophisticated addressing schemes are unable to learn
intelligent reasoning satisfactorily due to the complexity of simultaneously
learning to access memory and plan. To mitigate these challenges we introduce
the Memory Augmented Control Network (MACN). The proposed network architecture
consists of three main parts. The first part uses convolutions to extract
features and the second part uses a neural network-based planning module to
pre-plan in the environment. The third part uses a network controller that
learns to store those specific instances of past information that are necessary
for planning. The performance of the network is evaluated in discrete grid
world environments for path planning in the presence of simple and complex
obstacles. We show that our network learns to plan and can generalize to new
environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,724 | Community Recovery in a Preferential Attachment Graph | A message passing algorithm is derived for recovering communities within a
graph generated by a variation of the Barabási-Albert preferential
attachment model. The estimator is assumed to know the arrival times, or order
of attachment, of the vertices. The derivation of the algorithm is based on
belief propagation under an independence assumption. Two precursors to the
message passing algorithm are analyzed: the first is a degree thresholding (DT)
algorithm and the second is an algorithm based on the arrival times of the
children (C) of a given vertex, where the children of a given vertex are the
vertices that attached to it. Comparison of the performance of the algorithms
shows it is beneficial to know the arrival times, not just the number, of the
children. The probability of correct classification of a vertex is
asymptotically determined by the fraction of vertices arriving before it. Two
extensions of Algorithm C are given: the first is based on joint likelihood of
the children of a fixed set of vertices; it can sometimes be used to seed the
message passing algorithm. The second is the message passing algorithm.
Simulation results are given.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,725 | The descriptive look at the size of subsets of groups | We explore the Borel complexity of some basic families of subsets of a
countable group (large, small, thin, sparse and other) defined by the size of
their elements. Applying the obtained results to the Stone-Čech
compactification $\beta G$ of $G$, we prove, in particular, that the closure of
the minimal ideal of $\beta G$ is of type $F_{\sigma\delta}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,726 | The Impact of Antenna Height Difference on the Performance of Downlink Cellular Networks | Capable of significantly reducing cell size and enhancing spatial reuse,
network densification is shown to be one of the most dominant approaches to
expand network capacity. Due to the scarcity of available spectrum resources,
nevertheless, the over-deployment of network infrastructures, e.g., cellular
base stations (BSs), would strengthen the inter-cell interference as well, thus
in turn deteriorating the system performance. On this account, we investigate
the performance of downlink cellular networks in terms of user coverage
probability (CP) and network spatial throughput (ST), aiming to shed light on
the limitation of network densification. Notably, it is shown that both CP and
ST would be degraded and even diminish to be zero when BS density is
sufficiently large, provided that practical antenna height difference (AHD)
between BSs and users is involved to characterize pathloss. Moreover, the
results also reveal that the increase of network ST is at the expense of the
degradation of CP. Therefore, to balance the tradeoff between user and network
performance, we further study the critical density, under which ST could be
maximized under the CP constraint. Through a special case study, it follows
that the critical density is inversely proportional to the square of AHD. The
results in this work could provide a helpful guideline towards the application of
network densification in the next-generation wireless networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,727 | Spin-wave propagation in cubic anisotropic materials | The information carrier of modern technologies is the electron charge whose
transport inevitably generates Joule heating. Spin-waves, the collective
precessional motion of electron spins, do not involve moving charges and thus
avoid Joule heating. In this respect, magnonic devices in which the information
is carried by spin-waves attract interest for low-power computing. However,
the implementation of magnonic devices for practical use suffers from low spin-wave
signal and on/off ratio. Here we demonstrate that cubic anisotropic materials
can enhance spin-wave signals by improving spin-wave amplitude as well as group
velocity and attenuation length. Furthermore, cubic anisotropic material shows
an enhanced on/off ratio through a laterally localized edge mode, which closely
mimics the gate-controlled conducting channel in traditional field-effect
transistors. These attractive features of cubic anisotropic materials will
invigorate magnonics research towards wave-based functional devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,728 | Relaxing Exclusive Control in Boolean Games | In the typical framework for boolean games (BG) each player can change the
truth value of some propositional atoms, while attempting to make her goal
true. In standard BG goals are propositional formulas, whereas in iterated BG
goals are formulas of Linear Temporal Logic. Both notions of BG are
characterised by the fact that agents have exclusive control over their set of
atoms, meaning that no two agents can control the same atom. In the present
contribution we drop the exclusivity assumption and explore structures where an
atom can be controlled by multiple agents. We introduce Concurrent Game
Structures with Shared Propositional Control (CGS-SPC) and show that they
account for several classes of repeated games, including iterated boolean games,
influence games, and aggregation games. Our main result shows that, as far as
verification is concerned, CGS-SPC can be reduced to concurrent game structures
with exclusive control. This result provides a polynomial reduction for the
model checking problem of specifications in Alternating-time Temporal Logic on
CGS-SPC.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,729 | Representing de Rham cohomology classes on an open Riemann surface by holomorphic forms | Let $X$ be a connected open Riemann surface. Let $Y$ be an Oka domain in the
smooth locus of an analytic subvariety of $\mathbb C^n$, $n\geq 1$, such that
the convex hull of $Y$ is all of $\mathbb C^n$. Let $\mathscr O_*(X, Y)$ be the
space of nondegenerate holomorphic maps $X\to Y$. Take a holomorphic $1$-form
$\theta$ on $X$, not identically zero, and let $\pi:\mathscr O_*(X,Y) \to
H^1(X,\mathbb C^n)$ send a map $g$ to the cohomology class of $g\theta$. Our
main theorem states that $\pi$ is a Serre fibration. This result subsumes the
1971 theorem of Kusunoki and Sainouchi that both the periods and the divisor of
a holomorphic form on $X$ can be prescribed arbitrarily. It also subsumes two
parametric h-principles in minimal surface theory proved by Forstneric and
Larusson in 2016.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,730 | Unidirectional control of optically induced spin waves | Unidirectional control of optically induced spin waves in a rare-earth iron
garnet crystal is demonstrated. We observed the interference of two spin-wave
packets with different initial phases generated by circularly polarized light
pulses. This interference results in unidirectional propagation if the
spin-wave sources are spaced apart at 1/4 of the wavelength of the spin waves
and the initial phase difference is set to pi/2. The propagating direction of
the spin wave is switched by the polarization helicity of the light pulses.
Moreover, in a numerical simulation applying more than two spin-wave sources
with a suitable polarization and spot shape, arbitrary manipulation of the spin
wave by the phased array method was replicated.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,731 | Value Prediction Network | This paper proposes a novel deep reinforcement learning (RL) architecture,
called Value Prediction Network (VPN), which integrates model-free and
model-based RL methods into a single neural network. In contrast to typical
model-based RL methods, VPN learns a dynamics model whose abstract states are
trained to make option-conditional predictions of future values (discounted sum
of rewards) rather than of future observations. Our experimental results show
that VPN has several advantages over both model-free and model-based baselines
in a stochastic environment where careful planning is required but building an
accurate observation-prediction model is difficult. Furthermore, VPN
outperforms Deep Q-Network (DQN) on several Atari games even with
short-lookahead planning, demonstrating its potential as a new way of learning
a good state representation.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,732 | Scalable Generalized Linear Bandits: Online Computation and Hashing | Generalized Linear Bandits (GLBs), a natural extension of the stochastic
linear bandits, have been popular and successful in recent years. However,
existing GLBs scale poorly with the number of rounds and the number of arms,
limiting their utility in practice. This paper proposes new, scalable solutions
to the GLB problem in two respects. First, unlike existing GLBs, whose
per-time-step space and time complexity grow at least linearly with time $t$,
we propose a new algorithm that performs online computations to enjoy a
constant space and time complexity. At its heart is a novel Generalized Linear
extension of the Online-to-confidence-set Conversion (GLOC method) that takes
\emph{any} online learning algorithm and turns it into a GLB algorithm. As a
special case, we apply GLOC to the online Newton step algorithm, which results
in a low-regret GLB algorithm with much lower time and memory complexity than
prior work. Second, for the case where the number $N$ of arms is very large, we
propose new algorithms in which each next arm is selected via an inner product
search. Such methods can be implemented via hashing algorithms (i.e.,
"hash-amenable") and result in a time complexity sublinear in $N$. While a
Thompson sampling extension of GLOC is hash-amenable, its regret bound for
$d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound
scales with $d$. Towards closing this gap, we propose a new hash-amenable
algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast
approximate hash-key computation (inner product) with a better accuracy than
the state-of-the-art, which can be of independent interest. We conclude the
paper with preliminary experimental results confirming the merits of our
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,733 | Dome of magnetic order inside the nematic phase of sulfur-substituted FeSe under pressure | The pressure dependence of the structural, magnetic and superconducting
transitions and of the superconducting upper critical field was studied in
sulfur-substituted Fe(Se$_{1-x}$S$_{x}$). Resistance measurements were
performed on single crystals with three substitution levels ($x$=0.043, 0.096,
0.12) under hydrostatic pressures up to 1.8 GPa and in magnetic fields up to 9
T, and compared to data on pure FeSe. Our results illustrate the effects of
chemical and physical pressure on Fe(Se$_{1-x}$S$_{x}$). On increasing sulfur
content, magnetic order in the low-pressure range is strongly suppressed to a
small dome-like region in the phase diagrams. However, $T_s$ is much less
suppressed by sulfur substitution and $T_c$ of Fe(Se$_{1-x}$S$_{x}$) exhibits
similar non-monotonic pressure dependence with a local maximum and a local
minimum present in the low pressure range for all $x$. The local maximum in
$T_c$ coincides with the emergence of the magnetic order above $T_c$. At this
pressure the slope of the upper critical field decreases abruptly. The minimum
of $T_c$ correlates with a broad maximum of the upper critical field slope
normalized by $T_c$.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,734 | ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge | This paper describes Luminoso's participation in SemEval 2017 Task 2,
"Multilingual and Cross-lingual Semantic Word Similarity", with a system based
on ConceptNet. ConceptNet is an open, multilingual knowledge graph that focuses
on general knowledge that relates the meanings of words and phrases. Our
submission to SemEval was an update of previous work that builds high-quality,
multilingual word embeddings from a combination of ConceptNet and
distributional semantics. Our system took first place in both subtasks. It
ranked first in 4 out of 5 of the separate languages, and also ranked first in
all 10 of the cross-lingual language pairs.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,735 | The symmetrized topological complexity of the circle | We determine the symmetrized topological complexity of the circle, using
primarily just general topology.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,736 | Intertwining operators among twisted modules associated to not-necessarily-commuting automorphisms | We introduce intertwining operators among twisted modules or twisted
intertwining operators associated to not-necessarily-commuting automorphisms of
a vertex operator algebra. Let $V$ be a vertex operator algebra and let
$g_{1}$, $g_{2}$ and $g_{3}$ be automorphisms of $V$. We prove that for
$g_{1}$-, $g_{2}$- and $g_{3}$-twisted $V$-modules $W_{1}$, $W_{2}$ and
$W_{3}$, respectively, such that the vertex operator map for $W_{3}$ is
injective, if there exists a twisted intertwining operator of type
${W_{3}\choose W_{1}W_{2}}$ such that the images of its component operators
span $W_{3}$, then $g_{3}=g_{1}g_{2}$. We also construct what we call the
skew-symmetry and contragredient isomorphisms between spaces of twisted
intertwining operators among twisted modules of suitable types. The proofs of
these results involve careful analysis of the analytic extensions corresponding
to the actions of the not-necessarily-commuting automorphisms of the vertex
operator algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,737 | Enabling Visual Design Verification Analytics - From Prototype Visualizations to an Analytics Tool using the Unity Game Engine | The ever-increasing architectural complexity in contemporary ASIC projects
turns Design Verification (DV) into a highly advanced endeavor. Pressing needs
for short time-to-market have made automation a key solution in DV. However,
recurring execution of large regression suites inevitably leads to challenging
amounts of test results. Following the design science paradigm, we present an
action research study to introduce visual analytics in a commercial ASIC
project. We develop a cityscape visualization tool using the game engine Unity.
Initial evaluations are promising, suggesting that the tool offers a novel
approach to identify error-prone parts of the design, as well as coverage
holes.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,738 | Self-regulation promotes cooperation in social networks | Cooperative behavior in real social dilemmas is often perceived as a
phenomenon emerging from norms and punishment. To overcome this paradigm, we
highlight the interplay between the influence of social networks on
individuals, and the activation of spontaneous self-regulating mechanisms,
which may lead them to behave cooperatively, while interacting with others and
taking conflicting decisions over time. By extending Evolutionary game theory
over networks, we prove that cooperation partially or fully emerges whenever
self-regulating mechanisms are sufficiently stronger than social pressure.
Interestingly, even few cooperative individuals act as catalyzing agents for
the cooperation of others, thus activating a recruiting mechanism, eventually
driving the whole population to cooperate.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,739 | Magnetic properties of nanoparticles compacts with controlled broadening of the particle size distribution | Binary random compacts with different proportions of small (volume V) and
large (volume 2V) bare maghemite nanoparticles (NPs) are used to investigate
the effect of controllably broadening the particle size distribution on the
magnetic properties of magnetic NP assemblies with strong dipolar interaction.
A series of eight random mixtures of highly uniform 9.0 and 11.5 nm diameter
maghemite particles prepared by thermal decomposition are studied. In spite of
severely broadened size distributions in the mixed samples, well defined
superspin glass transition temperatures are observed across the series, their
values increasing linearly with the weight fraction of large particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,740 | A Generative Model for Exploring Structure Regularities in Attributed Networks | Many real-world networks known as attributed networks contain two types of
information: topology information and node attributes. It is a challenging task
to use these two types of information to explore structural
regularities. In this paper, by characterizing potential relationship between
link communities and node attributes, a principled statistical model named
PSB_PG that generates link topology and node attributes is proposed. This model
for generating links is based on the stochastic blockmodels following a Poisson
distribution. Therefore, it is capable of detecting a wide range of network
structures including community structures, bipartite structures and other
mixture structures. The model for generating node attributes assumes that node
attributes are high dimensional and sparse and also follow a Poisson
distribution. This makes the model uniform, and the model parameters can be
directly estimated by the expectation-maximization (EM) algorithm. Experimental
results on artificial networks and real networks containing various structures
have shown that the proposed model PSB_PG is not only competitive with the
state-of-the-art models, but also provides good semantic interpretation for
each community via the learned relationship between the community and its
related attributes.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,741 | On the maximal halfspace depth of permutation-invariant distributions on the simplex | We compute the maximal halfspace depth for a class of permutation-invariant
distributions on the probability simplex. The derivations are based on
stochastic ordering results that so far were only shown to be relevant for the
Behrens-Fisher problem.
| 0 | 0 | 1 | 1 | 0 | 0 |
20,742 | Theoretical Evaluation of Li et al.'s Approach for Improving a Binary Watermark-Based Scheme in Remote Sensing Data Communications | This letter concerns a principal weakness of the article published by Li et
al. in 2014. It seems that the mentioned work has a terrible conceptual mistake
in presenting its theoretical approach. In fact, the work tried to
design a new attack, and an effective solution to it, for a basic watermarking
algorithm by Zhu et al. published in 2013; however, we show that, in practice,
Li et al.'s approach is not correct for achieving this aim. To disprove the
incorrect approach, we apply only a numerical example as a counterexample to
Li et al.'s approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,743 | Bioinformatics and Medicine in the Era of Deep Learning | Many of the current scientific advances in the life sciences have their
origin in the intensive use of data for knowledge discovery. In no area is this
so clear as in bioinformatics, led by technological breakthroughs in data
acquisition technologies. It has been argued that bioinformatics could quickly
become the field of research generating the largest data repositories, beating
other data-intensive areas such as high-energy physics or astroinformatics.
Over the last decade, deep learning has become a disruptive advance in machine
learning, giving new life to the long-standing connectionist paradigm in
artificial intelligence. Deep learning methods are ideally suited to
large-scale data and, therefore, they should be ideally suited to knowledge
discovery in bioinformatics and biomedicine at large. In this brief paper, we
review key aspects of the application of deep learning in bioinformatics and
medicine, drawing from the themes covered by the contributions to an ESANN 2018
special session devoted to this topic.
| 0 | 0 | 0 | 1 | 1 | 0 |
20,744 | A Weakly Supervised Approach to Train Temporal Relation Classifiers and Acquire Regular Event Pairs Simultaneously | Capabilities of detecting temporal relations between two events can benefit
many applications. Most existing temporal relation classifiers were trained
in a supervised manner. Instead, we explore the observation that regular event
pairs show a consistent temporal relation despite their various contexts,
and these rich contexts can be used to train a contextual temporal relation
classifier, which can further recognize new temporal relation contexts and
identify new regular event pairs. We focus on detecting after and before
temporal relations and design a weakly supervised learning approach that
extracts thousands of regular event pairs and learns a contextual temporal
relation classifier simultaneously. Evaluation shows that the acquired regular
event pairs are of high quality and contain rich commonsense knowledge and
domain specific knowledge. In addition, the weakly supervised trained temporal
relation classifier achieves comparable performance with the state-of-the-art
supervised systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,745 | Privacy with Estimation Guarantees | We study the central problem in data privacy: how to share data with an
analyst while providing both privacy and utility guarantees to the user that
owns the data. In this setting, we present an estimation-theoretic analysis of
the privacy-utility trade-off (PUT). Here, an analyst is allowed to reconstruct
(in a mean-squared error sense) certain functions of the data (utility), while
other private functions should not be reconstructed with distortion below a
certain threshold (privacy). We demonstrate how $\chi^2$-information captures
the fundamental PUT in this case and provide bounds for the best PUT. We
propose a convex program to compute privacy-assuring mappings when the
functions to be disclosed and hidden are known a priori and the data
distribution is known. We derive lower bounds on the minimum mean-squared error
of estimating a target function from the disclosed data and evaluate the
robustness of our approach when an empirical distribution is used to compute
the privacy-assuring mappings instead of the true data distribution. We
illustrate the proposed approach through two numerical experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,746 | A line of CFTs: from generalized free fields to SYK | We point out that there is a simple variant of the SYK model, which we call
cSYK, that is $SL(2,R)$ invariant for all values of the coupling. The
modification consists of replacing the UV part of the SYK action with a
quadratic bilocal term. The corresponding bulk dual is a non-gravitational
theory in a rigid AdS$_2$ background. At weak coupling cSYK is a generalized
free field theory; at strong coupling, it approaches the infrared of SYK. The
existence of this line of fixed points explains the previously found connection
between the three-point function of bilinears in these two theories at large
$q$.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,747 | Miura transformations for discrete Painlevé equations coming from the affine E$_8$ Weyl group | We derive integrable equations starting from autonomous mappings with a
general form inspired by the additive systems associated to the affine Weyl
group E$_8^{(1)}$. By deautonomisation we obtain two hitherto unknown systems,
one of which turns out to be a linearisable one, and we show that both these
systems arise from the deautonomisation of a non-QRT mapping. In order to
unambiguously prove the integrability of these nonautonomous systems, we
introduce a series of Miura transformations which allows us to prove that one
of these systems is indeed a discrete Painlevé equation, related to the
affine Weyl group E$_7^{(1)}$, and to cast it in canonical form. A similar
sequence of Miura transformations allows us to effectively linearise the second
system we obtain. An interesting off-shoot of our calculations is that the
series of Miura transformations, when applied at the autonomous limit, allows
one to transform a non-QRT invariant into a QRT one.
| 0 | 1 | 1 | 0 | 0 | 0 |
20,748 | Virtual Molecular Dynamics | Molecular dynamics is based on solving Newton's equations for many-particle
systems that evolve along complex, highly fluctuating trajectories. The orbital
instability and short-time complexity of Newtonian orbits is in sharp contrast
to the more coherent behavior of collective modes such as density profiles. The
notion of virtual molecular dynamics is introduced here based on temporal
coarse-graining via Pade approximants and the Ito formula for stochastic
processes. It is demonstrated that this framework leads to significant
efficiency over traditional molecular dynamics and avoids the need to introduce
coarse-grained variables and phenomenological equations for their evolution. In
this framework, an all-atom trajectory is represented by a Markov chain of
virtual atomic states at a discrete sequence of timesteps, transitions between
which are determined by an integration of conventional molecular dynamics with
Pade approximants and a microstate energy annealing methodology. The latter is
achieved by conventional and MD NVE energy minimization schemes. This
multiscale framework is demonstrated for a pertussis toxin subunit undergoing a
structural transition, a T=1 capsid-like structure of HPV16 L1 protein, and two
coalescing argon droplets.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,749 | Regression with genuinely functional errors-in-covariates | Contamination of covariates by measurement error is a classical problem in
multivariate regression, where it is well known that failing to account for
this contamination can result in substantial bias in the parameter estimators.
The nature and degree of this effect on statistical inference is also
understood to crucially depend on the specific distributional properties of the
measurement error in question. When dealing with functional covariates,
measurement error has thus far been modelled as additive white noise over the
observation grid. Such a setting implicitly assumes that the error arises
purely at the discrete sampling stage, otherwise the model can only be viewed
in a weak (stochastic differential equation) sense, white noise not being a
second-order process. Departing from this simple distributional setting can
have serious consequences for inference, similar to the multivariate case, and
current methodology will break down. In this paper, we consider the case when
the additive measurement error is allowed to be a valid stochastic process. We
propose a novel estimator of the slope parameter in a functional linear model,
for scalar as well as functional responses, in the presence of this general
measurement error specification. The proposed estimator is inspired by the
multivariate regression calibration approach, but hinges on recent advances on
matrix completion methods for functional data in order to handle the nontrivial
(and unknown) error covariance structure. The asymptotic properties of the
proposed estimators are derived. We probe the performance of the proposed
estimator of slope using simulations and observe that it substantially improves
upon the spectral truncation estimator based on the erroneous observations,
i.e., ignoring measurement error. We also investigate the behaviour of the
estimators on a real dataset on hip and knee angle curves during a gait cycle.
| 0 | 0 | 0 | 1 | 0 | 0 |
20,750 | The distance between a naive cumulative estimator and its least concave majorant | We consider the process $\widehat\Lambda_n-\Lambda_n$, where $\Lambda_n$ is a
cadlag step estimator for the primitive $\Lambda$ of a nonincreasing function
$\lambda$ on $[0,1]$, and $\widehat\Lambda_n$ is the least concave majorant of
$\Lambda_n$. We extend the results in Kulikov and Lopuhaä (2006, 2008) to the
general setting considered in Durot (2007). Under this setting we prove that a
suitably scaled version of $\widehat\Lambda_n-\Lambda_n$ converges in
distribution to the corresponding process for two-sided Brownian motion with
parabolic drift and we establish a central limit theorem for the $L_p$-distance
between $\widehat\Lambda_n$ and $\Lambda_n$.
| 0 | 0 | 1 | 1 | 0 | 0 |
20,751 | Controllability of impulse controlled systems of heat equations coupled by constant matrices | This paper studies the approximate and null controllability for impulse
controlled systems of heat equations coupled by a pair $(A,B)$ of constant
matrices. We present a necessary and sufficient condition for the approximate
controllability, which is exactly Kalman's controllability rank condition of
$(A,B)$. We prove that when such a system is approximately controllable, the
approximate controllability over an interval $[0,T]$ can be realized by adding
controls at arbitrary $n$ different control instants
$0<\tau_1<\tau_2<\cdots<\tau_n<T$, provided that $\tau_n-\tau_1<d_A$, where
$d_A=\min\{\pi/|\mathrm{Im}\,\lambda| : \lambda\in \sigma(A)\}$. We also show that in
general, such systems are not null controllable.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,752 | Nearly Optimal Constructions of PIR and Batch Codes | In this work we study two families of codes with availability, namely private
information retrieval (PIR) codes and batch codes. While the former requires
that every information symbol has $k$ mutually disjoint recovering sets, the
latter asks this property for every multiset request of $k$ information
symbols. The main problem under this paradigm is to minimize the number of
redundancy symbols. We denote this value by $r_P(n,k), r_B(n,k)$, for PIR,
batch codes, respectively, where $n$ is the number of information symbols.
Previous results showed that for any constant $k$, $r_P(n,k) =
\Theta(\sqrt{n})$ and $r_B(n,k)=O(\sqrt{n}\log(n))$. In this work we study the
asymptotic behavior of these codes for non-constant $k$ and specifically for
$k=\Theta(n^\epsilon)$. We also study the largest value of $k$ such that the
rate of the codes approaches 1, and show that for all $\epsilon<1$,
$r_P(n,n^\epsilon) = o(n)$, while for batch codes, this property holds for all
$\epsilon< 0.5$.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,753 | Submillimeter Array CO(2-1) Imaging of the NGC 6946 Giant Molecular Clouds | We present a CO(2-1) mosaic map of the spiral galaxy NGC 6946 by combining
data from the Submillimeter Array and the IRAM 30 m telescope. We identify 390
giant molecular clouds (GMCs) from the nucleus to 4.5 kpc in the disk. GMCs in
the inner 1 kpc are generally more luminous and turbulent, some of which have
luminosities >10^6 K km/s pc^2 and velocity dispersions >10 km/s. Large-scale
bar-driven dynamics likely regulate GMC properties in the nuclear region.
As in the Milky Way and other disk galaxies, the GMC mass function of NGC 6946
has a shallower slope (index>-2) in the inner region, and a steeper slope
(index<-2) in the outer region. This difference in mass spectra may be
indicative of different cloud formation pathways: gravitational instabilities
might play a major role in the nuclear region, while cloud coalescence might be
dominant in the outer disk. Finally, the NGC 6946 clouds are similar to those
in M33 in terms of statistical properties, but they are generally less luminous
and turbulent than the M51 clouds.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,754 | Optical and Near-Infrared Spectra of sigma Orionis Isolated Planetary-mass Objects | We have obtained low-resolution optical (0.7-0.98 micron) and near-infrared
(1.11-1.34 micron and 0.8-2.5 micron) spectra of twelve isolated planetary-mass
candidates (J = 18.2-19.9 mag) of the 3-Myr sigma Orionis star cluster with a
view to determining the spectroscopic properties of very young, substellar
dwarfs and assembling a complete cluster mass function. We have classified our
targets by visual comparison with high- and low-gravity standards and by
measuring newly defined spectroscopic indices. We derived L0-L4.5 and M9-L2.5
using high- and low-gravity standards, respectively. Our targets reveal clear
signposts of youth, thus corroborating their cluster membership and planetary
masses (6-13 Mjup). These observations complete the sigma Orionis mass function
by spectroscopically confirming the planetary-mass domain to a confidence level
of $\sim$75 percent. The comparison of our spectra with BT-Settl solar
metallicity model atmospheres yields a temperature scale of 2350-1800 K and a
low surface gravity of log g ~ 4.0 [cm/s2], as would be expected for young
planetary-mass objects. We discuss the properties of the cluster least-massive
population as a function of spectral type. We have also obtained the first
optical spectrum of S Ori 70, a T dwarf in the direction of sigma Orionis. Our
data provide reference optical and near-infrared spectra of very young L dwarfs
and a mass function that may be used as templates for future studies of
low-mass substellar objects and exoplanets. The extrapolation of the sigma
Orionis mass function to the solar neighborhood may indicate that isolated
planetary-mass objects with temperatures of 200-300 K and masses in the
interval 6-13 Mjup may be as numerous as very low-mass stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,755 | The Blackbird Dataset: A large-scale dataset for UAV perception in aggressive flight | The Blackbird unmanned aerial vehicle (UAV) dataset is a large-scale,
aggressive indoor flight dataset collected using a custom-built quadrotor
platform for use in the evaluation of agile perception. Inspired by the
potential of future high-speed fully-autonomous drone racing, the Blackbird
dataset contains over 10 hours of flight data from 168 flights over 17 flight
trajectories and 5 environments at velocities up to $7.0\,\mathrm{m\,s^{-1}}$.
Each flight includes sensor data from 120 Hz stereo and downward-facing
photorealistic virtual cameras, a 100 Hz IMU, $\sim$190 Hz motor speed sensors,
and 360 Hz millimeter-accurate motion
capture ground truth. Camera images for each flight were photorealistically
rendered using FlightGoggles across a variety of environments to facilitate
experimentation with high-performance perception algorithms. The dataset is
available for download at this http URL
| 1 | 0 | 0 | 0 | 0 | 0 |
20,756 | An integration of fast alignment and maximum-likelihood methods for electron subtomogram averaging and classification | Motivation: Cellular Electron CryoTomography (CECT) is an emerging 3D imaging
technique that visualizes subcellular organization of single cells at
submolecular resolution and in near-native state. CECT captures large numbers
of macromolecular complexes of highly diverse structures and abundances.
However, the structural complexity and imaging limits complicate the systematic
de novo structural recovery and recognition of these macromolecular complexes.
Efficient and accurate reference-free subtomogram averaging and classification
represent the most critical tasks for such analysis. Existing subtomogram
alignment based methods are prone to the missing wedge effects and low
signal-to-noise ratio (SNR). Moreover, existing maximum-likelihood based
methods rely on integration operations, which are in principle computationally
infeasible for accurate calculation.
Results: Built on existing works, we propose an integrated method, Fast
Alignment Maximum Likelihood method (FAML), which uses fast subtomogram
alignment to sample sub-optimal rigid transformations. The transformations are
then used to approximate integrals for maximum-likelihood update of subtomogram
averages through expectation-maximization algorithm. Our tests on simulated and
experimental subtomograms showed that, compared to our previously developed
fast alignment method (FA), FAML is significantly more robust to noise and
missing wedge effects, with moderate increases in computation cost. Moreover, FAML
performs well with significantly fewer input subtomograms when the FA method
fails. Therefore, FAML can serve as a key component for improved construction
of initial structural models from macromolecules captured by CECT.
| 0 | 0 | 0 | 1 | 1 | 0 |
20,757 | Local and global existence of solutions to a strongly damped wave equation of the $p$-Laplacian type | This article focuses on a quasilinear wave equation of $p$-Laplacian type: $$
u_{tt} - \Delta_p u - \Delta u_t=0$$ in a bounded domain
$\Omega\subset\mathbb{R}^3$ with a sufficiently smooth boundary
$\Gamma=\partial\Omega$ subject to a generalized Robin boundary condition
featuring boundary damping and a nonlinear source term. The operator
$\Delta_p$, $2 < p < 3$, denotes the classical $p$-Laplacian. The nonlinear
boundary term $f (u)$ is a source feedback that is allowed to have a
supercritical exponent, in the sense that the associated Nemytskii operator is
not locally Lipschitz from $W^{1,p}(\Omega)$ into $L^2(\Gamma)$. Under suitable
assumptions on the parameters we provide a rigorous proof of existence of a
local weak solution which can be extended globally in time provided the source
term satisfies an appropriate growth condition.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,758 | Quasi-flat representations of uniform groups and quantum groups | Given a discrete group $\Gamma=<g_1,\ldots,g_M>$ and a number $K\in\mathbb
N$, a unitary representation $\rho:\Gamma\to U_K$ is called quasi-flat when the
eigenvalues of each $\rho(g_i)\in U_K$ are uniformly distributed among the
$K$-th roots of unity. The quasi-flat representations of $\Gamma$ form
altogether a parametric matrix model $\pi:\Gamma\to C(X,U_K)$.
We compute here the universal model space $X$ for various classes of discrete
groups, notably with results in the case where $\Gamma$ is metabelian. We are
particularly interested in the case where $X$ is a union of compact homogeneous
spaces, and where the induced representation $\tilde{\pi}:C^*(\Gamma)\to
C(X,U_K)$ is stationary in the sense that it commutes with the Haar
functionals. We present several positive and negative results on this subject.
We also discuss similar questions for the discrete quantum groups, proving a
stationarity result for the discrete dual of the twisted orthogonal group
$O_2^{-1}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,759 | Nonparametric Preference Completion | We consider the task of collaborative preference completion: given a pool of
items, a pool of users and a partially observed item-user rating matrix, the
goal is to recover the \emph{personalized ranking} of each user over all of the
items. Our approach is nonparametric: we assume that each item $i$ and each
user $u$ have unobserved features $x_i$ and $y_u$, and that the associated
rating is given by $g_u(f(x_i,y_u))$ where $f$ is Lipschitz and $g_u$ is a
monotonic transformation that depends on the user. We propose a $k$-nearest
neighbors-like algorithm and prove that it is consistent. To the best of our
knowledge, this is the first consistency result for the collaborative
preference completion problem in a nonparametric setting. Finally, we
demonstrate the performance of our algorithm with experiments on the Netflix
and Movielens datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
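The abstract above describes a k-nearest-neighbors-like estimator for preference completion. The sketch below is only a hedged illustration of that general idea, not the authors' algorithm: it scores a user's unrated items from the ratings of the most concordant other users, where concordance over co-rated pairs is insensitive to each user's monotonic transformation $g_u$. The function names and the toy matrix are made up for this example.

```python
import numpy as np

def knn_preference_completion(R, user, k=5):
    """Minimal user-based k-NN sketch: rank the items a user has not rated.

    R is an (items x users) rating matrix with np.nan for missing entries."""
    n_items, n_users = R.shape

    def concordance(u, v):
        # fraction of co-rated item pairs that both users order the same way
        both = np.where(~np.isnan(R[:, u]) & ~np.isnan(R[:, v]))[0]
        pairs = [(i, j) for a, i in enumerate(both) for j in both[a + 1:]]
        if not pairs:
            return 0.0
        agree = sum(np.sign(R[i, u] - R[j, u]) == np.sign(R[i, v] - R[j, v])
                    for i, j in pairs)
        return agree / len(pairs)

    sims = np.array([concordance(user, v) if v != user else -np.inf
                     for v in range(n_users)])
    neighbours = np.argsort(-sims)[:k]

    # score unrated items by the neighbours' average rating, then rank them
    scores = np.nanmean(R[:, neighbours], axis=1)
    unrated = np.where(np.isnan(R[:, user]))[0]
    return unrated[np.argsort(-scores[unrated])]

# toy usage: 4 items x 3 users, recommend for user 0
R = np.array([[5, 4, np.nan],
              [np.nan, 2, 1],
              [3, 5, 4],
              [np.nan, 1, 2]], dtype=float)
print(knn_preference_completion(R, user=0, k=2))
```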
20,760 | Real-Time Reconstruction of Counting Process through Queues | For the emerging Internet of Things (IoT), one of the most critical problems
is the real-time reconstruction of signals from a set of aged measurements.
During the reconstruction, distortion occurs between the observed signal and
the reconstructed signal due to sampling and transmission. In this paper, we
focus on minimizing the average distortion defined as the 1-norm of the
difference of the two signals under the scenario that a Poisson counting
process is reconstructed in real time on a remote monitor. Specifically, we
consider the reconstruction under a uniform sampling policy and two non-uniform
sampling policies, namely the threshold-based policy and the zero-wait policy.
For each of these policies, we derive the closed-form expression of the average
distortion by dividing the overall distortion area into polygons and analyzing
their structures. It turns out that the polygons are built up by sub-polygons
that account for distortions caused by sampling and transmission. The
closed-form expressions of the average distortion help us find the optimal
sampling parameters that achieve the minimum distortion. Simulation results are
provided to validate our conclusion.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,761 | Discrepancy-Based Algorithms for Non-Stationary Rested Bandits | We study the multi-armed bandit problem where the rewards are realizations of
general non-stationary stochastic processes, a setting that generalizes many
existing lines of work and analyses. In particular, we present a theoretical
analysis and derive regret guarantees for rested bandits in which the reward
distribution of each arm changes only when we pull that arm. Remarkably, our
regret bounds are logarithmic in the number of rounds under several natural
conditions. We introduce a new algorithm based on classical UCB ideas combined
with the notion of weighted discrepancy, a useful tool for measuring the
non-stationarity of a stochastic process. We show that the notion of
discrepancy can be used to design very general algorithms and a unified
framework for the analysis of multi-armed rested bandit problems with
non-stationary rewards. In particular, we show that we can recover the regret
guarantees of many specific instances of bandit problems with non-stationary
rewards that have been studied in the literature. We also provide experiments
demonstrating that our algorithms can enjoy a significant improvement in
practice compared to standard benchmarks.
| 1 | 0 | 0 | 0 | 0 | 0 |
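As a rough illustration of the rested setting, where an arm's reward distribution changes only when that arm is pulled, here is a minimal discounted-UCB baseline. It is not the discrepancy-based algorithm of the abstract; the discount factor gamma is an assumed stand-in for the weighted-discrepancy estimates, and all names are hypothetical.

```python
import math
import random

def discounted_ucb_rested(arms, horizon, gamma=0.95, c=2.0):
    """Illustrative recency-weighted UCB for a rested bandit.

    `arms` is a list of callables; arm i's pull count is passed to it so its
    reward distribution may drift only when that arm is pulled."""
    k = len(arms)
    pulls = [0] * k
    disc_sum = [0.0] * k       # discounted reward sum per arm
    disc_cnt = [0.0] * k       # discounted pull count per arm
    history = []
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1                            # pull each arm once first
        else:
            ucb = [disc_sum[j] / disc_cnt[j]
                   + c * math.sqrt(math.log(t) / disc_cnt[j]) for j in range(k)]
            i = max(range(k), key=lambda j: ucb[j])
        r = arms[i](pulls[i])
        pulls[i] += 1
        disc_sum[i] = gamma * disc_sum[i] + r    # discount only when arm i is pulled
        disc_cnt[i] = gamma * disc_cnt[i] + 1.0
        history.append((i, r))
    return history

# toy rested arms: arm 0 improves with use, arm 1 is static
arms = [lambda n: random.gauss(0.2 + 0.01 * n, 0.1),
        lambda n: random.gauss(0.5, 0.1)]
hist = discounted_ucb_rested(arms, horizon=500)
print(sum(r for _, r in hist))
```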
20,762 | Kondo lattice heavy fermion behavior in CeRh2Ga2 | The physical properties of an intermetallic compound CeRh2Ga2 have been
investigated by magnetic susceptibility \chi(T), isothermal magnetization M(H),
heat capacity C_p(T), electrical resistivity \rho(T), thermal conductivity
\kappa(T) and thermopower S(T) measurements. CeRh2Ga2 is found to crystallize
with CaBe2Ge2-type primitive tetragonal structure (space group P4/nmm). No
evidence of long range magnetic order is seen down to 1.8 K. The \chi(T) data
show paramagnetic behavior with an effective moment \mu_eff ~ 2.5 \mu_B/Ce
indicating Ce^3+ valence state of Ce ions. The \rho(T) data exhibit Kondo
lattice behavior with a metallic ground state. The low-T C_p(T) data yield an
enhanced Sommerfeld coefficient \gamma = 130(2) mJ/mol K^2 characterizing
CeRh2Ga2 as a moderate heavy fermion system. The high-T C_p(T) and \rho(T) show
an anomaly near 255 K, reflecting a phase transition. The \kappa(T) suggests
phonon dominated thermal transport with considerably higher values of Lorenz
number L(T) compared to the theoretical Sommerfeld value L_0.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,763 | Khovanov homology and periodic links | Based on the results of the second author, we define an equivariant version
of Lee and Bar-Natan homology for periodic links and show that there exists an
equivariant spectral sequence from the equivariant Khovanov homology to
equivariant Lee homology. As a result we obtain new obstructions for a link to
be periodic. These obstructions generalize previous results of Przytycki and of
the second author.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,764 | A Note on Spectral Clustering and SVD of Graph Data | Spectral clustering and Singular Value Decomposition (SVD) are both widely
used techniques for analyzing graph data. In this note, I will present their
connections using simple linear algebra, aiming to provide some in-depth
understanding for future research.
| 1 | 0 | 0 | 0 | 0 | 0 |
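One common way to see the connection the note refers to (not necessarily the note's own derivation) is that, for a symmetric normalized adjacency matrix S, the leading left singular vectors from its SVD agree up to sign with the trailing eigenvectors of the normalized Laplacian L = I - S used by spectral clustering, whenever the largest-magnitude eigenvalues of S are positive. A small numpy sketch with a made-up toy graph:

```python
import numpy as np

def spectral_vs_svd(A, k=2):
    """Embedding from the k bottom eigenvectors of the normalized Laplacian
    versus the k top left singular vectors of the normalized adjacency."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A @ D_inv_sqrt          # symmetrically normalized adjacency
    L = np.eye(len(A)) - S                   # normalized graph Laplacian

    _, V = np.linalg.eigh(L)                 # eigenvalues in ascending order
    spec = V[:, :k]                          # spectral-clustering embedding

    U, _, _ = np.linalg.svd(S)               # singular values in descending order
    svd_emb = U[:, :k]
    return spec, svd_emb

# two triangles joined by a single edge
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
spec, svd_emb = spectral_vs_svd(A)
# here the largest-magnitude eigenvalues of S are positive, so the two
# embeddings span the same subspace and match up to column signs:
print(np.round(np.abs(spec.T @ svd_emb), 2))
```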
20,765 | An explicit analysis of the entropic penalty in linear programming | Solving linear programs by using entropic penalization has recently attracted
new interest in the optimization community, since this strategy forms the basis
for the fastest-known algorithms for the optimal transport problem, with many
applications in modern large-scale machine learning. Crucial to these
applications has been an analysis of how quickly solutions to the penalized
program approach true optima to the original linear program. More than 20 years
ago, Cominetti and San Martín showed that this convergence is exponentially
fast; however, their proof is asymptotic and does not give any indication of
how accurately the entropic program approximates the original program for any
particular choice of the penalization parameter. We close this long-standing
gap in the literature regarding entropic penalization by giving a new proof of
the exponential convergence, valid for any linear program. Our proof is
non-asymptotic, yields explicit constants, and has the virtue of being
extremely simple. We provide matching lower bounds and show that the entropic
approach does not lead to a near-linear time approximation scheme for the
linear assignment problem.
| 0 | 0 | 0 | 1 | 0 | 0 |
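For a concrete feel of entropic penalization, the sketch below runs Sinkhorn scaling on a small entropy-regularized optimal-transport instance and shows the penalized cost approaching the LP optimum as the penalty parameter shrinks. This only illustrates the penalized program the abstract analyzes, not its proof technique; the parameter names and the random instance are assumptions.

```python
import numpy as np

def sinkhorn(C, r, c, eta, iters=2000):
    """Entropy-penalized optimal transport via Sinkhorn scaling.

    Approximately solves min_P <P, C> + eta * sum_ij P_ij (log P_ij - 1) over
    couplings with row marginals r and column marginals c; as eta -> 0 the
    transport cost approaches the optimum of the underlying linear program."""
    K = np.exp(-C / eta)
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, float(np.sum(P * C))

rng = np.random.default_rng(0)
C = rng.random((5, 5))
r = np.full(5, 0.2)
c = np.full(5, 0.2)
for eta in (1.0, 0.1, 0.01):
    _, cost = sinkhorn(C, r, c, eta)
    print(f"eta={eta:5.2f}  transport cost ~ {cost:.4f}")
```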
20,766 | Deconstructing the Tail at Scale Effect Across Network Protocols | Network latencies have become increasingly important for the performance of
web servers and cloud computing platforms. Identifying network-related tail
latencies and reasoning about their potential causes is especially important to
gauge application run-time in online data-intensive applications, where the
99th percentile latency of individual operations can significantly affect the
overall latency of requests.
This paper deconstructs the "tail at scale" effect across TCP-IP, UDP-IP, and
RDMA network protocols. Prior scholarly works have analyzed tail latencies
caused by extrinsic network parameters like network congestion and flow
fairness. Contrary to existing literature, we identify surprising rare tails in
TCP-IP round-trip measurements that are as enormous as 110x higher than the
median latency. Our experimental design eliminates network congestion as a
tail-inducing factor. Moreover, we observe similar extreme tails in UDP-IP
packet exchanges, ruling out additional TCP-IP protocol operations as the root
cause of tail latency. However, we are unable to reproduce similar tail
latencies in RDMA packet exchanges, which leads us to conclude that the TCP/UDP
protocol stack within the operating system kernel is likely the primary source
of extreme latency tails.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,767 | Deep Neural Generative Model of Functional MRI Images for Psychiatric Disorder Diagnosis | Accurate diagnosis of psychiatric disorders plays a critical role in
improving quality of life for patients and potentially supports the development
of new treatments. Many studies have been conducted on machine learning
techniques that search brain imaging data for specific biomarkers of disorders.
These studies have encountered the following dilemma: an end-to-end classifier
overfits to a small number of high-dimensional samples, whereas unsupervised
feature extraction risks extracting a signal of no interest. In addition, such
studies often provided only diagnoses for patients
without presenting the reasons for these diagnoses. This study proposed a deep
neural generative model of resting-state functional magnetic resonance imaging
(fMRI) data. The proposed model is conditioned by the assumption of the
subject's state and estimates the posterior probability of the subject's state
given the imaging data, using Bayes' rule. This study applied the proposed
model to diagnose schizophrenia and bipolar disorders. Diagnosis accuracy was
improved by a large margin over competitive approaches, namely a support vector
machine, logistic regression, and multilayer perceptron with or without
unsupervised feature-extractors in addition to a Gaussian mixture model. The
proposed model visualizes brain regions largely related to the disorders, thus
motivating further biological investigation.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,768 | Deep Learning for Patient-Specific Kidney Graft Survival Analysis | An accurate model of patient-specific kidney graft survival distributions can
help to improve shared-decision making in the treatment and care of patients.
In this paper, we propose a deep learning method that directly models the
survival function instead of estimating the hazard function to predict survival
times for graft patients based on the principle of multi-task learning. By
learning to jointly predict the time of the event and its rank within the Cox
partial log-likelihood framework, our deep learning approach outperforms, in
terms of survival time prediction quality and concordance index, other common
methods for survival analysis, including the Cox Proportional Hazards model and
a network trained on the Cox partial log-likelihood.
| 1 | 0 | 0 | 1 | 0 | 0 |
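For reference, a minimal (Breslow-style, tie-free) Cox partial log-likelihood against which a survival model's risk scores can be evaluated might look like the numpy sketch below. This is a generic formulation, not the paper's multi-task loss; the variable names and toy data are made up.

```python
import numpy as np

def cox_partial_log_likelihood(risk, time, event):
    """Breslow-style Cox partial log-likelihood, ignoring tied event times.

    risk  : predicted log-risk score per patient (higher = earlier event)
    time  : observed time (event or censoring) per patient
    event : 1 if the event (e.g. graft failure) was observed, 0 if censored"""
    order = np.argsort(-time)                     # descending time: risk sets are prefixes
    risk, event = risk[order], event[order]
    log_risk_set = np.logaddexp.accumulate(risk)  # log sum_{j: t_j >= t_i} exp(risk_j)
    return float(np.sum((risk - log_risk_set)[event == 1]))

# toy check: scores decreasing with survival time give a larger likelihood
time = np.array([2.0, 5.0, 7.0, 9.0])
event = np.array([1, 1, 0, 1])
good = np.array([2.0, 1.0, 0.0, -1.0])
bad = -good
print(cox_partial_log_likelihood(good, time, event) >
      cox_partial_log_likelihood(bad, time, event))   # True
```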
20,769 | When is a Network a Network? Multi-Order Graphical Model Selection in Pathways and Temporal Networks | We introduce a framework for the modeling of sequential data capturing
pathways of varying lengths observed in a network. Such data are important,
e.g., when studying click streams in information networks, travel patterns in
transportation systems, information cascades in social networks, biological
pathways or time-stamped social interactions. While it is common to apply graph
analytics and network analysis to such data, recent works have shown that
temporal correlations can invalidate the results of such methods. This raises a
fundamental question: when is a network abstraction of sequential data
justified? Addressing this open question, we propose a framework which combines
Markov chains of multiple, higher orders into a multi-layer graphical model
that captures temporal correlations in pathways at multiple length scales
simultaneously. We develop a model selection technique to infer the optimal
number of layers of such a model and show that it outperforms previously used
Markov order detection techniques. An application to eight real-world data sets
on pathways and temporal networks shows that it allows us to infer graphical
models which capture both topological and temporal characteristics of such
data. Our work highlights fallacies of network abstractions and provides a
principled answer to the open question of when they are justified. Generalizing
network representations to multi-order graphical models, it opens perspectives
for new data mining and knowledge discovery algorithms.
| 1 | 1 | 0 | 0 | 0 | 0 |
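As a much simpler cousin of the model selection problem described above, the sketch below fits k-th order Markov chains to a set of pathways and picks the order by AIC. This is only a standard order-selection baseline for illustration, not the multi-order graphical model or the selection technique of the abstract; the toy pathways are invented.

```python
from collections import Counter
from math import log

def fit_order(paths, max_order=3):
    """Fit k-th order Markov chains to pathways and pick the order by AIC."""
    states = sorted({s for p in paths for s in p})
    best = None
    for k in range(1, max_order + 1):
        trans = Counter()
        ctx = Counter()
        for p in paths:
            for i in range(len(p) - k):
                c, nxt = tuple(p[i:i + k]), p[i + k]
                trans[(c, nxt)] += 1
                ctx[c] += 1
        if not trans:
            break    # pathways too short to estimate this order
        ll = sum(n * log(n / ctx[c]) for (c, _), n in trans.items())
        n_params = len(ctx) * (len(states) - 1)  # free probabilities per observed context
        aic = 2 * n_params - 2 * ll
        if best is None or aic < best[0]:
            best = (aic, k)
    return best[1]

# toy pathways with a second-order dependency: after (a, b) always c, after (x, b) always d
paths = [list("abc"), list("xbd")] * 20
print(fit_order(paths))   # selects order 2
```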
20,770 | Outlier-robust moment-estimation via sum-of-squares | We develop efficient algorithms for estimating low-degree moments of unknown
distributions in the presence of adversarial outliers. The guarantees of our
algorithms improve, in many cases significantly, over the best previous ones,
obtained in recent works of Diakonikolas et al, Lai et al, and Charikar et al.
We also show that the guarantees of our algorithms match information-theoretic
lower-bounds for the class of distributions we consider. These improved
guarantees allow us to give improved algorithms for independent component
analysis and learning mixtures of Gaussians in the presence of outliers.
Our algorithms are based on a standard sum-of-squares relaxation of the
following conceptually-simple optimization problem: Among all distributions
whose moments are bounded in the same way as for the unknown distribution, find
the one that is closest in statistical distance to the empirical distribution
of the adversarially-corrupted sample.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,771 | Clustering High Dimensional Dynamic Data Streams | We present data streaming algorithms for the $k$-median problem in
high-dimensional dynamic geometric data streams, i.e. streams allowing both
insertions and deletions of points from a discrete Euclidean space $\{1, 2,
\ldots \Delta\}^d$. Our algorithms use $k \epsilon^{-2} poly(d \log \Delta)$
space/time and maintain with high probability a small weighted set of points (a
coreset) such that for every set of $k$ centers the cost of the coreset
$(1+\epsilon)$-approximates the cost of the streamed point set. We also provide
algorithms that guarantee only positive weights in the coreset with additional
logarithmic factors in the space and time complexities. We can use this
positively-weighted coreset to compute a $(1+\epsilon)$-approximation for the
$k$-median problem by any efficient offline $k$-median algorithm. All previous
algorithms for computing a $(1+\epsilon)$-approximation for the $k$-median
problem over dynamic data streams required space and time exponential in $d$.
Our algorithms can be generalized to metric spaces of bounded doubling
dimension.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,772 | Edge contact angle and modified Kelvin equation for condensation in open pores | We consider capillary condensation transitions occurring in open slits of
width $L$ and finite height $H$ immersed in a reservoir of vapour. In this case
the pressure at which condensation occurs is closer to saturation compared to
that occurring in an infinite slit ($H=\infty$) due to the presence of two
menisci which are pinned near the open ends. Using macroscopic arguments we
derive a modified Kelvin equation for the pressure, $p_{cc}(L;H)$, at which
condensation occurs and show that the two menisci are characterised by an edge
contact angle $\theta_e$ which is always larger than the equilibrium contact
angle $\theta$, only equal to it in the limit of macroscopic $H$. For walls
which are completely wet ($\theta=0$) the edge contact angle depends only on
the aspect ratio of the capillary and is well described by $\theta_e\approx
\sqrt{\pi L/2H}$ for large $H$. Similar results apply for condensation in
cylindrical pores of finite length. We have tested these predictions against
numerical results obtained using a microscopic density functional model where
the presence of an edge contact angle characterising the shape of the menisci
is clearly visible from the density profiles. Below the wetting temperature
$T_w$ we find very good agreement for slit pores of widths of just a few tens
of molecular diameters while above $T_w$ the modified Kelvin equation only
becomes accurate for much larger systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,773 | Thresholds for hanger slackening and cable shortening in the Melan equation for suspension bridges | The Melan equation for suspension bridges is derived by assuming small
displacements of the deck and inextensible hangers. We determine the thresholds
for the validity of the Melan equation when the hangers slacken, thereby
violating the inextensibility assumption. To this end, we preliminarily study
the possible shortening of the cables: it turns out that there is a striking
difference between even and odd vibrating modes since the former never shorten
the cables. These problems are studied both on beams and plates.
| 0 | 1 | 1 | 0 | 0 | 0 |
20,774 | Bendable Cuboid Robot Path Planning with Collision Avoidance using Generalized $L_p$ Norms | Optimal path planning problems for rigid and deformable (bendable) cuboid
robots are considered by providing an analytic safety constraint using
generalized $L_p$ norms. For regular cuboid robots, level sets of weighted
$L_p$ norms generate implicit approximations of their surfaces. For bendable
cuboid robots a weighted $L_p$ norm in polar coordinates implicitly
approximates the surface boundary through a specified level set. Obstacle
volumes, in the environment to navigate within, are presumed to be
approximately described as sub-level sets of weighted $L_p$ norms. Using these
approximate surface models, the optimal safe path planning problem is
reformulated as a two-stage optimization problem, where the safety constraint
depends on a point on the robot which is closest to the obstacle in the
obstacle's distance metric. A set of equality and inequality constraints is
derived to replace the closest-point problem, which then defines additional
analytic constraints on the original path planning problem. Combining all the
analytic constraints with logical AND operations leads to a general optimal
safe path planning problem. Numerically solving the problem involves conversion
to a nonlinear programming problem. Simulations for rigid and bendable cuboid
robots verify the proposed method.
| 1 | 0 | 0 | 0 | 0 | 0 |
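To illustrate the implicit-surface idea the abstract relies on, the sketch below evaluates a weighted L_p level set approximating a cuboid and uses it as a point-wise safety test. The margin, the exponent p = 8 and the helper names are assumptions for illustration, not part of the paper's formulation.

```python
import numpy as np

def weighted_lp_level(point, center, half_widths, p):
    """Implicit-surface value of a weighted L_p ball approximating a cuboid:
    < 1 inside, about 1 on the approximate surface, > 1 outside."""
    z = np.abs((np.asarray(point, dtype=float) - center) / half_widths)
    return float(np.sum(z ** p) ** (1.0 / p))

def is_safe(robot_point, obs_center, obs_half_widths, p=8, margin=1.1):
    """Hypothetical safety test: the point must lie outside the obstacle's
    sub-level set, with a small margin."""
    return weighted_lp_level(robot_point, obs_center, obs_half_widths, p) > margin

# an axis-aligned 2x1x1 obstacle at the origin, approximated with p = 8
obs_c = np.zeros(3)
obs_hw = np.array([1.0, 0.5, 0.5])
print(is_safe([2.0, 0.0, 0.0], obs_c, obs_hw))   # True: well outside
print(is_safe([0.9, 0.0, 0.0], obs_c, obs_hw))   # False: inside the approximation
```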
20,775 | Quotients of triangulated categories and Equivalences of Buchweitz, Orlov and Amiot--Guo--Keller | We give a sufficient condition for a Verdier quotient $\mathcal{T}/\mathcal{S}$ of a
triangulated category $\mathcal{T}$ by a thick subcategory $\mathcal{S}$ to be realized inside
$\mathcal{T}$ as an ideal quotient. As applications, we deduce three significant
results by Buchweitz, Orlov and Amiot--Guo--Keller.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,776 | Sorting sums of binary decision summands | A sum where each of the $N$ summands can be independently chosen from two
choices yields $2^N$ possible summation outcomes. There is an
$\mathcal{O}(K^2)$ algorithm that finds the $K$ smallest/largest of these sums
while avoiding enumeration of all $2^N$ sums.
| 1 | 0 | 0 | 0 | 0 | 0 |
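For illustration, the K smallest such sums can also be extracted with a best-first heap search over the sorted nonnegative differences between the two choices of each summand; this is a common alternative approach, not the $\mathcal{O}(K^2)$ algorithm the note refers to, but it likewise avoids enumerating all $2^N$ sums.

```python
import heapq

def k_smallest_sums(pairs, k):
    """k smallest sums obtainable by picking one element from each pair.

    Write each sum as (sum of minima) + (a subset sum of the nonnegative
    differences); subset sums are generated best-first with a heap."""
    base = sum(min(a, b) for a, b in pairs)
    deltas = sorted(abs(a - b) for a, b in pairs)
    results = [base]                       # empty subset = all-minimum choice
    if not deltas or k <= 1:
        return results[:k]
    heap = [(base + deltas[0], 0)]         # (sum, index of the largest delta used)
    while heap and len(results) < k:
        s, i = heapq.heappop(heap)
        results.append(s)
        if i + 1 < len(deltas):
            heapq.heappush(heap, (s + deltas[i + 1], i + 1))              # extend subset
            heapq.heappush(heap, (s - deltas[i] + deltas[i + 1], i + 1))  # swap last element
    return results

print(k_smallest_sums([(1, 4), (2, 3), (5, 5)], k=4))   # [8, 8, 9, 9]
```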
20,777 | A Pursuit of Temporal Accuracy in General Activity Detection | Detecting activities in untrimmed videos is an important but challenging
task. The performance of existing methods remains unsatisfactory, e.g., they
often have difficulty locating the beginning and end of a long complex
action. In this paper, we propose a generic framework that can accurately
detect a wide variety of activities from untrimmed videos. Our first
contribution is a novel proposal scheme that can efficiently generate
candidates with accurate temporal boundaries. The other contribution is a
cascaded classification pipeline that explicitly distinguishes between
relevance and completeness of a candidate instance. On two challenging temporal
activity detection datasets, THUMOS14 and ActivityNet, the proposed framework
significantly outperforms the existing state-of-the-art methods, demonstrating
superior accuracy and strong adaptivity in handling activities with various
temporal structures.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,778 | Adjusting systematic bias in high dimensional principal component scores | Principal component analysis continues to be a powerful tool in dimension
reduction of high dimensional data. We assume a variance-diverging model and
use the high-dimension, low-sample-size asymptotics to show that even though
the principal component directions are not consistent, the sample and
prediction principal component scores can be useful in revealing the population
structure. We further show that these scores are biased, and the bias is
asymptotically decomposed into rotation and scaling parts. We propose methods
of bias-adjustment that are shown to be consistent and work well in the finite
but high dimensional situations with small sample sizes. The potential
advantage of bias-adjustment is demonstrated in a classification setting.
| 0 | 0 | 1 | 1 | 0 | 0 |
20,779 | Dynamical Exploration of Amplitude Bistability in Engineered Quantum Systems | Nonlinear systems, whose outputs are not directly proportional to their
inputs, are well known to exhibit many interesting and important phenomena
which have profoundly changed our technological landscape over the last 50
years. Recently the ability to engineer quantum metamaterials through
hybridisation has made it possible to explore these nonlinear effects in systems with no
natural analogue. Here we investigate amplitude bistability, which is one of
the most fundamental nonlinear phenomena, in a hybrid system composed of a
superconducting resonator inductively coupled to an ensemble of
nitrogen-vacancy centres. One of the exciting properties of this spin system is
its extremely long spin life-time, more than ten orders of magnitude longer
than other relevant timescales of the hybrid system. This allows us to
dynamically explore this nonlinear regime of cavity quantum electrodynamics
(cQED) and demonstrate a critical slowing down of the cavity population on the
order of several tens of thousands of seconds - a timescale much longer than
observed so far for this effect. Our results provide the foundation for future
quantum technologies based on nonlinear phenomena.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,780 | Map Memorization and Forgetting in the IARA Autonomous Car | In this work, we present a novel strategy for correcting imperfections in
occupancy grid maps called map decay. The objective of map decay is to correct
invalid occupancy probabilities of map cells that are unobservable by sensors.
The strategy was inspired by an analogy between the memory architecture
believed to exist in the human brain and the maps maintained by an autonomous
vehicle. It consists in merging sensory information obtained during runtime
(online) with a priori data from a high-precision map constructed offline. In
map decay, cells observed by sensors are updated using traditional occupancy
grid mapping techniques and unobserved cells are adjusted so that their
occupancy probabilities tend to the values found in the offline map. This
strategy is grounded in the idea that the most precise information available
about an unobservable cell is the value found in the high-precision offline
map. Map decay was successfully tested and is still in use in the IARA
autonomous vehicle from Universidade Federal do Espírito Santo.
| 1 | 0 | 0 | 0 | 0 | 0 |
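A minimal numpy sketch of the map-decay idea as described above: observed cells receive an ordinary log-odds occupancy update, while unobserved cells relax toward the offline prior map. The decay rate and all names are assumptions for illustration, not values from the IARA system.

```python
import numpy as np

def map_decay_step(online, offline_prior, observed_mask, sensor_logodds, decay=0.05):
    """One update of an occupancy grid (values are log-odds of occupancy).

    Observed cells: standard occupancy-grid log-odds update with the sensor term.
    Unobserved cells: move a fraction `decay` of the way toward the offline
    high-precision prior map, so stale online evidence fades out over time."""
    updated = online.copy()
    updated[observed_mask] += sensor_logodds[observed_mask]
    updated[~observed_mask] += decay * (offline_prior[~observed_mask] - online[~observed_mask])
    return updated

# toy 1-D "grid": offline prior says cell 2 is occupied; sensor currently sees cells 0-1 as free
offline = np.array([0.0, 0.0, 2.0, 0.0])
online = np.array([0.0, 0.0, -1.5, 0.0])        # stale online belief disagrees with the prior
observed = np.array([True, True, False, False])
sensor = np.array([-0.8, -0.8, 0.0, 0.0])
for _ in range(3):
    online = map_decay_step(online, offline, observed, sensor)
print(np.round(online, 2))   # cell 2 drifts from -1.5 toward the prior value 2.0
```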
20,781 | Effective Subgroup Separability of Finitely Generated Nilpotent Groups | This paper studies effective separability for subgroups of finitely generated
nilpotent groups and more broadly effective subgroup separability of finitely
generated nilpotent groups. We provide upper and lower bounds that are
polynomial with respect to the logarithm of the word length for infinite index
subgroups of nilpotent groups. In the case of normal subgroups, we provide an
exact computation generalizing work of the second author. We introduce a
function that quantifies subgroup separability, and we provide polynomial upper
and lower bounds. We finish by demonstrating that our results extend to
virtually nilpotent groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,782 | External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising | Most of existing image denoising methods learn image priors from either
external data or the noisy image itself to remove noise. However, priors
learned from external data may not be adaptive to the image to be denoised,
while priors learned from the given noisy image may not be accurate due to the
interference of corrupted noise. Meanwhile, the noise in real-world noisy
images is very complex and hard to describe with simple distributions such as
a Gaussian, making real-world noisy image denoising a very
challenging problem. We propose to exploit the information in both external
data and the given noisy image, and develop an external prior guided internal
prior learning method for real-world noisy image denoising. We first learn
external priors from an independent set of clean natural images. With the aid
of learned external priors, we then learn internal priors from the given noisy
image to refine the prior model. The external and internal priors are
formulated as a set of orthogonal dictionaries to efficiently reconstruct the
desired image. Extensive experiments are performed on several real-world noisy
image datasets. The proposed method demonstrates highly competitive denoising
performance, outperforming state-of-the-art denoising methods including those
designed for real-world noisy images.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,783 | A Study of MAC Address Randomization in Mobile Devices and When it Fails | MAC address randomization is a privacy technique whereby mobile devices
rotate through random hardware addresses in order to prevent observers from
singling out their traffic or physical location from other nearby devices.
Adoption of this technology, however, has been sporadic and varied across
device manufacturers. In this paper, we present the first wide-scale study of
MAC address randomization in the wild, including a detailed breakdown of
different randomization techniques by operating system, manufacturer, and model
of device.
We then identify multiple flaws in these implementations which can be
exploited to defeat randomization as performed by existing devices. First, we
show that devices commonly make improper use of randomization by sending
wireless frames with the true, global address when they should be using a
randomized address. We move on to extend the passive identification techniques
of Vanhoef et al. to effectively defeat randomization in ~96% of Android
phones. Finally, we show a method that can be used to track 100% of devices
using randomization, regardless of manufacturer, by exploiting a previously
unknown flaw in the way existing wireless chipsets handle low-level control
frames.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,784 | Completion of the integrable coupling systems | In this paper, we propose a procedure to construct the completion of an
integrable system by adding a perturbation to the generalized matrix problem;
the procedure applies to continuous integrable couplings, discrete integrable
couplings and super integrable couplings. As examples, we construct the
completions of the Kaup-Newell (KN) integrable couplings, the
Wadati-Konno-Ichikawa (WKI) integrable couplings, the vector
Ablowitz-Kaup-Newell-Segur (vAKNS) integrable couplings, the Volterra
integrable couplings, Dirac-type integrable couplings and NLS-mKdV-type
integrable couplings.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,785 | On the wildness of cambrian lattices | In this note, we investigate the representation type of the cambrian lattices
and some other related lattices. The result is expressed as a very simple
trichotomy. When the rank of the underlying Coxeter group is at most 2, the
lattices are of finite representation type. When the Coxeter group is a
reducible group of type $A_1^3$, the lattices are of tame representation type.
In all the other cases they are of wild representation type.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,786 | Random Projections For Large-Scale Regression | Fitting linear regression models can be computationally very expensive in
large-scale data analysis tasks if the sample size and the number of variables
are very large. Random projections are extensively used as a dimension
reduction tool in machine learning and statistics. We discuss the applications
of random projections in linear regression problems, developed to decrease
computational costs, and give an overview of the theoretical guarantees of the
generalization error. It can be shown that the combination of random
projections with least squares regression leads to similar recovery as ridge
regression and principal component regression. We also discuss possible
improvements when averaging over multiple random projections, an approach that
lends itself easily to parallel implementation.
| 0 | 0 | 1 | 1 | 0 | 0 |
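A minimal numpy sketch of the scheme discussed above: project the design matrix with a Gaussian random projection, solve least squares in the reduced space, map the coefficients back, and average over several independent projections. Hyperparameters and names are illustrative assumptions; the averaged estimate behaves like a shrinkage (ridge-like) estimator, in line with the abstract.

```python
import numpy as np

def rp_least_squares(X, y, k, n_projections=10, rng=None):
    """Least squares after Gaussian random projection of the feature space,
    averaged over several independent projections."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    beta_hat = np.zeros(d)
    for _ in range(n_projections):
        P = rng.normal(size=(d, k)) / np.sqrt(k)      # random projection matrix
        coef, *_ = np.linalg.lstsq(X @ P, y, rcond=None)
        beta_hat += P @ coef                          # map back to the original space
    return beta_hat / n_projections

# toy high-dimensional regression with 5 active coefficients
rng = np.random.default_rng(0)
n, d = 200, 1000
X = rng.normal(size=(n, d))
beta = np.zeros(d); beta[:5] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)
beta_rp = rp_least_squares(X, y, k=100, n_projections=20, rng=1)
print(np.round(beta_rp[:6], 2))   # active coefficients recovered with shrinkage
```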
20,787 | Properties of linear groups with restricted unipotent elements | We consider linear groups which do not contain unipotent elements of infinite
order, which includes all linear groups in positive characteristic, and show
that this class of groups has good properties which resemble those held by
groups of non positive curvature and which do not hold for arbitrary
characteristic zero linear groups. In particular if such a linear group is
finitely generated then centralisers virtually split and all finitely generated
abelian subgroups are undistorted. If further the group is virtually torsion
free (which always holds in characteristic zero) then we have a strong property
on small subgroups: any subgroup either contains a non-abelian free group or is
finitely generated and virtually abelian, hence also undistorted. We present
applications, including that the mapping class group of a surface having genus
at least 3 has no faithful linear representation which is complex unitary or
over any field of positive characteristic.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,788 | Sparse Approximation of 3D Meshes using the Spectral Geometry of the Hamiltonian Operator | The discrete Laplace operator is ubiquitous in spectral shape analysis, since
its eigenfunctions are provably optimal in representing smooth functions
defined on the surface of the shape. Indeed, subspaces defined by its
eigenfunctions have been utilized for shape compression, treating the
coordinates as smooth functions defined on the given surface. However, surfaces
of shapes in nature often contain geometric structures for which the general
smoothness assumption may fail to hold. At the other end, some explicit mesh
compression algorithms utilize the order by which vertices that represent the
surface are traversed, a property which has been ignored in spectral
approaches. Here, we incorporate the order of vertices into an operator that
defines a novel spectral domain. We propose a method for representing 3D meshes
using the spectral geometry of the Hamiltonian operator, integrated within a
sparse approximation framework. We adapt the concept of a potential function
from quantum physics and incorporate vertex ordering information into the
potential, yielding a novel data-dependent operator. The potential function
modifies the spectral geometry of the Laplacian to focus on regions with finer
details of the given surface. By sparsely encoding the geometry of the shape
using the proposed data-dependent basis, we improve compression performance
compared to previous results that use the standard Laplacian basis and spectral
graph wavelets.
| 1 | 0 | 0 | 0 | 0 | 0 |
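A hedged sketch of the basic construction described above: build a Hamiltonian H = L + diag(V) from a mesh's graph Laplacian and a per-vertex potential, take its low-frequency eigenvectors, and project the vertex coordinates onto them. Here the potential is simply proportional to vertex order and the toy "mesh" is a path graph; the paper's actual potential design and sparse-approximation step are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hamiltonian_basis(W, potential, k=20):
    """First k eigenvectors of H = L + diag(V): graph Laplacian of the mesh's
    sparse symmetric weight matrix W plus a nonnegative per-vertex potential V."""
    d = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(d) - W
    H = (L + sp.diags(potential)).tocsc()
    # smallest eigenpairs via shift-invert just below the spectrum
    vals, vecs = spla.eigsh(H, k=k, sigma=-1e-6, which='LM')
    return vals, vecs

def project_coordinates(coords, basis):
    """Represent vertex coordinates in the chosen spectral basis (plain
    least-squares projection; a sparse-coding step would replace this)."""
    coeffs, *_ = np.linalg.lstsq(basis, coords, rcond=None)
    return coeffs

# toy usage on a path graph standing in for a mesh connectivity
n = 50
rows = np.arange(n - 1); cols = rows + 1
W = sp.coo_matrix((np.ones(n - 1), (rows, cols)), shape=(n, n))
W = (W + W.T).tocsr()
V = np.linspace(0.0, 1.0, n)          # potential growing with vertex order
vals, basis = hamiltonian_basis(W, V, k=5)
print(np.round(vals, 4))
```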
20,789 | Improved Convergence Rates for Distributed Resource Allocation | In this paper, we develop a class of decentralized algorithms for solving a
convex resource allocation problem in a network of $n$ agents, where the agent
objectives are decoupled while the resource constraints are coupled. The agents
communicate over a connected undirected graph, and they want to collaboratively
determine a solution to the overall network problem, while each agent only
communicates with its neighbors. We first study the connection between the
decentralized resource allocation problem and the decentralized consensus
optimization problem. Then, using a class of algorithms for solving consensus
optimization problems, we propose a novel class of decentralized schemes for
solving resource allocation problems in a distributed manner. Specifically, we
first propose an algorithm for solving the resource allocation problem with an
$o(1/k)$ convergence rate guarantee when the agents' objective functions are
generally convex (could be nondifferentiable) and per agent local convex
constraints are allowed. We then propose a gradient-based algorithm for solving
the resource allocation problem when per agent local constraints are absent and
show that such a scheme can achieve a geometric rate when the objective functions
are strongly convex and have Lipschitz continuous gradients. We have also
provided scalability/network dependency analysis. Based on these two
algorithms, we have further proposed a gradient projection-based algorithm
which can handle smooth objective and simple constraints more efficiently.
Numerical experiments demonstrate the viability and performance of all the
proposed algorithms.
| 1 | 0 | 1 | 0 | 0 | 0 |
20,790 | Sensory Metrics of Neuromechanical Trust | Today digital sources supply an unprecedented component of human sensorimotor
data, the consumption of which is correlated with poorly understood maladies
such as Internet Addiction Disorder and Internet Gaming Disorder. This paper
offers a mathematical understanding of human sensorimotor processing as
multiscale, continuous-time vibratory interaction. We quantify human
informational needs using the signal processing metrics of entropy, noise,
dimensionality, continuity, latency, and bandwidth. Using these metrics, we
define the trust humans experience as a primitive statistical algorithm
processing finely grained sensorimotor data from neuromechanical interaction.
This definition of neuromechanical trust implies that artificial sensorimotor
inputs and interactions that attract low-level attention through frequent
discontinuities and enhanced coherence will decalibrate a brain's
representation of its world over the long term by violating the implicit
statistical contract for which self-calibration evolved. This approach allows
us to model addiction in general as the result of homeostatic regulation gone
awry in novel environments and digital dependency as a sub-case in which the
decalibration caused by digital sensorimotor data spurs yet more consumption of
them. We predict that institutions can use these sensorimotor metrics to
quantify media richness to improve employee well-being; that dyads and
family-size groups will bond and heal best through low-latency, high-resolution
multisensory interaction such as shared meals and reciprocated touch; and that
individuals can improve sensory and sociosensory resolution through deliberate
sensory reintegration practices. We conclude that we humans are the victims of
our own success, our hands so skilled they fill the world with captivating
things, our eyes so innocent they follow eagerly.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,791 | Quantum algorithms for training Gaussian Processes | Gaussian processes (GPs) are important models in supervised machine learning.
Training in Gaussian processes refers to selecting the covariance functions and
the associated parameters in order to improve the outcome of predictions, the
core of which amounts to evaluating the logarithm of the marginal likelihood
(LML) of a given model. LML gives a concrete measure of the quality of
prediction that a GP model is expected to achieve. The classical computation of
LML typically carries a polynomial time overhead with respect to the input
size. We propose a quantum algorithm that computes the logarithm of the
determinant of a Hermitian matrix, which runs in logarithmic time for sparse
matrices. This is applied in conjunction with a variant of the quantum linear
system algorithm that allows for logarithmic time computation of the form
$\mathbf{y}^TA^{-1}\mathbf{y}$, where $\mathbf{y}$ is a dense vector and $A$ is
the covariance matrix. We hence show that quantum computing can be used to
estimate the LML of a GP with exponentially improved efficiency under certain
conditions.
| 0 | 0 | 0 | 1 | 0 | 0 |
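For context, the classical computation the abstract refers to evaluates the GP log marginal likelihood, log p(y|X) = -1/2 y^T K^{-1} y - 1/2 log|K| - (n/2) log(2 pi), with a Cholesky factorization at O(n^3) cost. A minimal numpy version (the kernel and the toy data are made up):

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, kernel, noise=1e-2):
    """Classical O(n^3) GP log marginal likelihood via a Cholesky factorization
    of K = kernel(X, X) + noise * I."""
    n = len(y)
    K = kernel(X, X) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    # log|K| = 2 * sum(log(diag(L)))
    return float(-0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi))

def rbf(X1, X2, lengthscale=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
for ls in (0.1, 1.0, 10.0):
    print(ls, round(gp_log_marginal_likelihood(X, y, lambda a, b: rbf(a, b, ls)), 2))
```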
20,792 | Detecting Oriented Text in Natural Images by Linking Segments | Most state-of-the-art text detection methods are specific to horizontal Latin
text and are not fast enough for real-time applications. We introduce Segment
Linking (SegLink), an oriented text detection method. The main idea is to
decompose text into two locally detectable elements, namely segments and links.
A segment is an oriented box covering a part of a word or text line; a link
connects two adjacent segments, indicating that they belong to the same word or
text line. Both elements are detected densely at multiple scales by an
end-to-end trained, fully-convolutional neural network. Final detections are
produced by combining segments connected by links. Compared with previous
methods, SegLink improves along the dimensions of accuracy, speed, and ease of
training. It achieves an f-measure of 75.0% on the standard ICDAR 2015
Incidental (Challenge 4) benchmark, outperforming the previous best by a large
margin. It runs at over 20 FPS on 512x512 images. Moreover, without
modification, SegLink is able to detect long lines of non-Latin text, such as
Chinese.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,793 | Factorizable Module Algebras | The aim of this paper is to introduce and study a large class of
$\mathfrak{g}$-module algebras which we call factorizable by generalizing the
Gauss factorization of (square or rectangular) matrices. This class includes
coordinate algebras of corresponding reductive groups $G$, their parabolic
subgroups, basic affine spaces and many others. It turns out that tensor
products of factorizable algebras are also factorizable and it is easy to
create a factorizable algebra out of virtually any $\mathfrak{g}$-module
algebra. We also have quantum versions of all these constructions in the
category of $U_q(\mathfrak{g})$-module algebras. Quite surprisingly, our
quantum factorizable algebras are naturally acted on by the quantized
enveloping algebra $U_q(\mathfrak{g}^*)$ of the dual Lie bialgebra
$\mathfrak{g}^*$ of $\mathfrak{g}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,794 | Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors | Robot awareness of human actions is an essential research problem in robotics
with many important real-world applications, including human-robot
collaboration and teaming. Over the past few years, depth sensors have become a
standard device widely used by intelligent robots for 3D perception, which can
also offer human skeletal data in 3D space. Several methods based on skeletal
data were designed to enable robot awareness of human actions with satisfactory
accuracy. However, previous methods treated all body parts and features equally
important, without the capability to identify discriminative body parts and
features. In this paper, we propose a novel simultaneous Feature And Body-part
Learning (FABL) approach that simultaneously identifies discriminative body
parts and features, and efficiently integrates all available information
together to enable real-time robot awareness of human behaviors. We formulate
FABL as a regression-like optimization problem with structured
sparsity-inducing norms to model interrelationships of body parts and features.
We also develop an optimization algorithm to solve the formulated problem,
which possesses a theoretical guarantee to find the optimal solution. To
evaluate FABL, three experiments were performed using public benchmark
datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter
robot in practical assistive living applications. Experimental results show
that our FABL approach obtains a high recognition accuracy with a processing
speed on the order of 10^4 Hz, which makes FABL a promising method
to enable real-time robot awareness of human behaviors in practical robotics
applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,795 | Off-diagonal estimates of some Bergman-type operators on tube domains over symmetric cones | We obtain some necessary and sufficient conditions for the boundedness of a
family of positive operators defined on symmetric cones, we then deduce
off-diagonal boundedness of associated Bergman-type operators in tube domains
over symmetric cones.
| 0 | 0 | 1 | 0 | 0 | 0 |
20,796 | Provenance and Pseudo-Provenance for Seeded Learning-Based Automated Test Generation | Many methods for automated software test generation, including some that
explicitly use machine learning (and some that use ML more broadly conceived)
derive new tests from existing tests (often referred to as seeds). Often, the
seed tests from which new tests are derived are manually constructed, or at
least simpler than the tests that are produced as the final outputs of such
test generators. We propose annotation of generated tests with a provenance
(trail) showing how individual generated tests of interest (especially failing
tests) derive from seed tests, and how the population of generated tests
relates to the original seed tests. In some cases, post-processing of generated
tests can invalidate provenance information, in which case we also propose a
method for attempting to construct "pseudo-provenance" describing how the tests
could have been (partly) generated from seeds.
| 1 | 0 | 0 | 1 | 0 | 0 |
20,797 | Learning Solving Procedure for Artificial Neural Network | It is expected that progress toward true artificial intelligence will be
achieved through the emergence of a system that integrates representation
learning and complex reasoning (LeCun et al. 2015). In response to this
prediction, research has been conducted on implementing the symbolic reasoning
of a von Neumann computer in an artificial neural network (Graves et al. 2016;
Graves et al. 2014; Reed et al. 2015). However, these studies have many
limitations in realizing neural-symbolic integration (Jaeger. 2016). Here, we
present a new learning paradigm: a learning solving procedure (LSP) that learns
the procedure for solving complex problems. This is not accomplished merely by
learning input-output data, but by learning algorithms through a solving
procedure that obtains the output as a sequence of tasks for a given input
problem. The LSP neural network system not only learns simple problems of
addition and multiplication, but also the algorithms of complicated problems,
such as complex arithmetic expression, sorting, and Hanoi Tower. To realize
this, the LSP neural network structure consists of a deep neural network and
long short-term memory, which are recursively combined. Through
experimentation, we demonstrate the efficiency and scalability of LSP and its
validity as a mechanism of complex reasoning.
| 1 | 0 | 0 | 0 | 0 | 0 |
20,798 | On primordial black holes from an inflection point | Recently, it has been claimed that inflationary models with an inflection
point in the scalar potential can produce a large resonance in the power
spectrum of curvature perturbation. In this paper however we show that the
previous analyses are incorrect. The reason is twofold: firstly, the inflaton
is over-shot from a stage of standard inflation and so deviates from the
slow-roll attractor before reaching the inflection. Secondly, on the (or close
to) the inflection point, the ultra-slow-roll trajectory supersede the
slow-roll one and thus, the slow-roll approximations used in the literature
cannot be used. We then reconsider the model and provide a recipe for how to
produce nevertheless a large peak in the matter power spectrum via fine-tuning
of parameters.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,799 | Dropout as a Low-Rank Regularizer for Matrix Factorization | Regularization for matrix factorization (MF) and approximation problems has
been carried out in many different ways. Due to its popularity in deep
learning, dropout has been applied also for this class of problems. Despite its
solid empirical performance, the theoretical properties of dropout as a
regularizer remain quite elusive for this class of problems. In this paper, we
present a theoretical analysis of dropout for MF, where Bernoulli random
variables are used to drop columns of the factors. We demonstrate the
equivalence between dropout and a fully deterministic model for MF in which the
factors are regularized by the sum of the product of squared Euclidean norms of
the columns. Additionally, we inspect the case of a variable-sized
factorization and we prove that dropout achieves the global minimum of a convex
approximation problem with (squared) nuclear norm regularization. As a result,
we conclude that dropout can be used as a low-rank regularizer with data
dependent singular-value thresholding.
| 1 | 0 | 0 | 1 | 0 | 0 |
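A small numpy sketch of the setting analyzed above: matrix factorization trained with dropout applied to the factor columns, with inverted-dropout rescaling. In expectation this adds a penalty proportional to sum_j ||u_j||^2 ||v_j||^2, the deterministic regularizer the abstract relates to nuclear-norm, low-rank regularization. The learning rate, sizes and names are assumptions, and this is only an illustrative training loop, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_mf(X, r=10, theta=0.5, lr=0.01, steps=3000):
    """Factorize X ~ U V^T; at each step every one of the r columns is kept
    independently with probability theta (dropout on columns)."""
    m, n = X.shape
    U = 0.1 * rng.normal(size=(m, r))
    V = 0.1 * rng.normal(size=(n, r))
    for _ in range(steps):
        keep = rng.random(r) < theta
        if not keep.any():
            continue
        Uk, Vk = U[:, keep], V[:, keep]
        E = (Uk / theta) @ Vk.T - X                  # residual of the dropped-out model
        U[:, keep] -= lr * (E @ Vk) / theta
        V[:, keep] -= lr * (E.T @ Uk) / theta
    return U, V

X = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))   # rank-3 ground truth
U, V = dropout_mf(X)
print(round(float(np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)), 3))  # relative fit error
print(np.round(np.linalg.svd(U @ V.T, compute_uv=False)[:6], 2))  # singular values decay fast
```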
20,800 | Introduction to a Temporal Graph Benchmark | A temporal graph is a data structure consisting of nodes and edges in which
the edges are associated with time labels. To analyze a temporal graph, the
first step is to find a proper graph dataset/benchmark. While many temporal
graph datasets exist online, none could be found that uses interval labels, in
which each edge is associated with a starting and an ending time. We therefore
create a temporal graph dataset based on the Wikipedia reference graph for
temporal analysis. This report aims to provide more details of this graph benchmark to
those who are interested in using it.
| 1 | 1 | 0 | 0 | 0 | 0 |
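To make the interval-label convention concrete, here is a minimal Python representation of such a benchmark's edges; the article names and timestamps are invented for illustration and are not taken from the benchmark itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntervalEdge:
    """An edge with an interval time label: the reference exists from
    `start` until `end` (e.g., revision timestamps)."""
    src: str
    dst: str
    start: float
    end: float

def active_edges(edges: List[IntervalEdge], t: float) -> List[IntervalEdge]:
    """Snapshot of the temporal graph at time t."""
    return [e for e in edges if e.start <= t <= e.end]

# toy interval-labelled reference graph
edges = [IntervalEdge("A", "B", 0.0, 10.0),
         IntervalEdge("B", "C", 3.0, 5.0),
         IntervalEdge("A", "C", 6.0, 8.0)]
print([(e.src, e.dst) for e in active_edges(edges, 4.0)])   # [('A', 'B'), ('B', 'C')]
```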