title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---
Competitive Advantage Attacks to Decentralized Federated Learning | Decentralized federated learning (DFL) enables clients (e.g., hospitals and
banks) to jointly train machine learning models without a central orchestration
server. In each global training round, each client trains a local model on its
own training data, and clients then exchange local models for aggregation. In this
work, we propose SelfishAttack, a new family of attacks to DFL. In
SelfishAttack, a set of selfish clients aim to achieve competitive advantages
over the remaining non-selfish ones, i.e., the final learnt local models of the
selfish clients are more accurate than those of the non-selfish ones. Towards
this goal, the selfish clients send carefully crafted local models to each
remaining non-selfish one in each global training round. We formulate finding
such local models as an optimization problem and propose methods to solve it
when DFL uses different aggregation rules. Theoretically, we show that our
methods find the optimal solutions to the optimization problem. Empirically, we
show that SelfishAttack successfully increases the accuracy gap (i.e.,
competitive advantage) between the final learnt local models of selfish clients
and those of non-selfish ones. Moreover, SelfishAttack achieves larger accuracy
gaps than poisoning attacks when extended to increase competitive advantages. | [
"Yuqi Jia",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | 2023-10-20 23:57:57 | http://arxiv.org/abs/2310.13862v1 | http://arxiv.org/pdf/2310.13862v1 | 2310.13862v1 |
Exponential weight averaging as damped harmonic motion | The exponential moving average (EMA) is a commonly used statistic for
providing stable estimates of stochastic quantities in deep learning
optimization. Recently, EMA has seen considerable use in generative models,
where it is computed with respect to the model weights, and significantly
improves the stability of the inference model during and after training. While
the practice of weight averaging at the end of training is well-studied and
known to improve estimates of local optima, the benefits of EMA over the course
of training are less understood. In this paper, we derive an explicit connection
between EMA and a damped harmonic system between two particles, where one
particle (the EMA weights) is drawn to the other (the model weights) via an
idealized zero-length spring. We then leverage this physical analogy to analyze
the effectiveness of EMA, and propose an improved training algorithm, which we
call BELAY. Finally, we demonstrate theoretically and empirically several
advantages enjoyed by BELAY over standard EMA. | [
"Jonathan Patsenker",
"Henry Li",
"Yuval Kluger"
] | 2023-10-20 23:15:46 | http://arxiv.org/abs/2310.13854v1 | http://arxiv.org/pdf/2310.13854v1 | 2310.13854v1 |
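For reference, the weight-EMA recurrence the abstract builds on is a one-line update; a minimal sketch assuming PyTorch-style models (this shows the standard EMA step, not the paper's BELAY algorithm):

```python
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """One EMA step: theta_ema <- decay * theta_ema + (1 - decay) * theta.
    The EMA weights are pulled toward the current weights -- the spring-like
    attraction the paper formalizes as damped harmonic motion."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)
```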
Gradual Domain Adaptation: Theory and Algorithms | Unsupervised domain adaptation (UDA) adapts a model from a labeled source
domain to an unlabeled target domain in a one-off way. Though widely applied,
UDA faces a great challenge whenever the distribution shift between the source
and the target is large. Gradual domain adaptation (GDA) mitigates this
limitation by using intermediate domains to gradually adapt from the source to
the target domain. In this work, we first theoretically analyze gradual
self-training, a popular GDA algorithm, and provide a significantly improved
generalization bound compared with Kumar et al. (2020). Our theoretical
analysis leads to an interesting insight: to minimize the generalization error
on the target domain, the sequence of intermediate domains should be placed
uniformly along the Wasserstein geodesic between the source and target domains.
The insight is particularly useful under the situation where intermediate
domains are missing or scarce, which is often the case in real-world
applications. Based on the insight, we propose $\textbf{G}$enerative Gradual
D$\textbf{O}$main $\textbf{A}$daptation with Optimal $\textbf{T}$ransport
(GOAT), an algorithmic framework that can generate intermediate domains in a
data-dependent way. More concretely, we first generate intermediate domains
along the Wasserstein geodesic between two given consecutive domains in a
feature space, then apply gradual self-training to adapt the source-trained
classifier to the target along the sequence of intermediate domains.
Empirically, we demonstrate that our GOAT framework can improve the performance
of standard GDA when the given intermediate domains are scarce, significantly
broadening the real-world application scenarios of GDA. Our code is available
at https://github.com/yifei-he/GOAT. | [
"Yifei He",
"Haoxiang Wang",
"Bo Li",
"Han Zhao"
] | 2023-10-20 23:02:08 | http://arxiv.org/abs/2310.13852v1 | http://arxiv.org/pdf/2310.13852v1 | 2310.13852v1 |
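The geodesic-placement insight admits a simple construction for equal-sized empirical samples: optimally match source and target points, then linearly interpolate matched pairs (displacement interpolation). A hedged sketch under those assumptions, not the paper's GOAT implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def geodesic_domains(Xs, Xt, num_intermediate=3):
    """Generate intermediate domains uniformly spaced along the discrete
    Wasserstein geodesic between equal-sized samples Xs, Xt of shape (n, d)."""
    cost = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared Euclidean
    _, col = linear_sum_assignment(cost)                     # optimal matching
    Xt_matched = Xt[col]
    ts = np.linspace(0.0, 1.0, num_intermediate + 2)[1:-1]   # uniform spacing
    return [(1 - t) * Xs + t * Xt_matched for t in ts]       # displacement interp.
```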
Augment with Care: Enhancing Graph Contrastive Learning with Selective Spectrum Perturbation | In recent years, Graph Contrastive Learning (GCL) has shown remarkable
effectiveness in learning representations on graphs. As a component of GCL,
good augmentation views are supposed to be invariant to the important
information while discarding the unimportant part. Existing augmentation views
with perturbed graph structures are usually based on random topology corruption
in the spatial domain; however, from perspectives of the spectral domain, this
approach may be ineffective as it fails to pose tailored impacts on the
information of different frequencies, thus weakening the agreement between the
augmentation views. Through a preliminary experiment, we show that the impacts
caused by spatial random perturbation are approximately evenly distributed
among frequency bands, which may harm the invariance of augmentations required
by contrastive learning frameworks. To address this issue, we argue that the
perturbation should be selectively posed on the information concerning
different frequencies. In this paper, we propose GASSER which poses tailored
perturbation on specific frequencies of graph structures in the spectral
domain, with edge perturbation selectively guided by spectral hints.
As shown by extensive experiments and theoretical analysis, the augmentation
views are adaptive and controllable, as well as heuristically fitting the
homophily ratios and spectrum of graph structures. | [
"Kaiqi Yang",
"Haoyu Han",
"Wei Jin",
"Hui Liu"
] | 2023-10-20 22:39:07 | http://arxiv.org/abs/2310.13845v1 | http://arxiv.org/pdf/2310.13845v1 | 2310.13845v1 |
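A minimal illustration of band-selective spectral perturbation follows. Note the assumption: GASSER itself guides discrete edge edits using spectral hints, whereas this sketch perturbs normalized-Laplacian eigenvalues directly to show what "targeting a frequency band" means:

```python
import numpy as np

def perturb_band(A, low=0.0, high=0.5, scale=0.05, seed=0):
    """Perturb only eigenvalues of the normalized Laplacian falling in a chosen
    band [low, high], expressed as fractions of the spectrum's range [0, 2]."""
    rng = np.random.default_rng(seed)
    d = A.sum(1)
    d_isqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - d_isqrt @ A @ d_isqrt      # normalized Laplacian
    lam, U = np.linalg.eigh(L)
    in_band = (lam >= 2 * low) & (lam < 2 * high)   # select target frequencies
    lam = lam + in_band * rng.normal(0.0, scale, lam.shape)
    return U @ np.diag(lam) @ U.T                   # band-perturbed Laplacian
```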
Fast hyperboloid decision tree algorithms | Hyperbolic geometry is gaining traction in machine learning for its
effectiveness at capturing hierarchical structures in real-world data.
Hyperbolic spaces, where neighborhoods grow exponentially, offer substantial
advantages and consistently deliver state-of-the-art results across diverse
applications. However, hyperbolic classifiers often grapple with computational
challenges. Methods reliant on Riemannian optimization frequently exhibit
sluggishness, stemming from the increased computational demands of operations
on Riemannian manifolds. In response to these challenges, we present hyperDT, a
novel extension of decision tree algorithms into hyperbolic space. Crucially,
hyperDT eliminates the need for computationally intensive Riemannian
optimization, numerically unstable exponential and logarithmic maps, or
pairwise comparisons between points by leveraging inner products to adapt
Euclidean decision tree algorithms to hyperbolic space. Our approach is
conceptually straightforward and maintains constant-time decision complexity
while mitigating the scalability issues inherent in high-dimensional Euclidean
spaces. Building upon hyperDT we introduce hyperRF, a hyperbolic random forest
model. Extensive benchmarking across diverse datasets underscores the superior
performance of these models, providing a swift, precise, accurate, and
user-friendly toolkit for hyperbolic data analysis. | [
"Philippe Chlenski",
"Ethan Turok",
"Antonio Moretti",
"Itsik Pe'er"
] | 2023-10-20 22:31:10 | http://arxiv.org/abs/2310.13841v1 | http://arxiv.org/pdf/2310.13841v1 | 2310.13841v1 |
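The primitive named in the abstract, replacing distances and manifold maps with inner products, can be illustrated in the hyperboloid (Lorentz) model, where a split routes points by the sign of a Minkowski inner product. A sketch under that assumption; hyperDT's actual split family may be more restricted:

```python
import numpy as np

def minkowski_dot(x, y):
    """Lorentzian inner product on the hyperboloid model; the first
    coordinate is the time-like one."""
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def hyperplane_split(points, normal):
    """Constant-time split: side of a geodesic hyperplane via an inner product,
    avoiding exponential/logarithmic maps and pairwise distances."""
    side = minkowski_dot(points, normal) >= 0.0
    return points[side], points[~side]
```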
CNN-based Prediction of Partition Path for VVC Fast Inter Partitioning Using Motion Fields | The Versatile Video Coding (VVC) standard has been recently finalized by the
Joint Video Experts Team (JVET). Compared to the High Efficiency Video
Coding (HEVC) standard, VVC offers about 50% compression efficiency gain, in
terms of Bjontegaard Delta-Rate (BD-rate), at the cost of a 10-fold increase in
encoding complexity. In this paper, we propose a method based on Convolutional
Neural Network (CNN) to speed up the inter partitioning process in VVC.
Firstly, a novel representation for the quadtree with nested multi-type tree
(QTMT) partition is introduced, derived from the partition path. Secondly, we
develop a U-Net-based CNN taking a multi-scale motion vector field as input at
the Coding Tree Unit (CTU) level. The purpose of CNN inference is to predict
the optimal partition path during the Rate-Distortion Optimization (RDO)
process. To achieve this, we divide the CTU into a grid and predict the Quaternary
Tree (QT) depth and Multi-type Tree (MT) split decisions for each cell of the
grid. Thirdly, an efficient partition pruning algorithm is introduced to employ
the CNN predictions at each partitioning level to skip RDO evaluations of
unnecessary partition paths. Finally, an adaptive threshold selection scheme is
designed, making the trade-off between complexity and efficiency scalable.
Experiments show that the proposed method can achieve acceleration ranging from
16.5% to 60.2% under the RandomAccess Group Of Picture 32 (RAGOP32)
configuration with a reasonable efficiency drop ranging from 0.44% to 4.59% in
terms of BD-rate, which surpasses other state-of-the-art solutions.
Additionally, our method stands out as one of the lightest approaches in the
field, which ensures its applicability to other encoders. | [
"Yiqun Liu",
"Marc Riviere",
"Thomas Guionnet",
"Aline Roumy",
"Christine Guillemot"
] | 2023-10-20 22:26:49 | http://arxiv.org/abs/2310.13838v1 | http://arxiv.org/pdf/2310.13838v1 | 2310.13838v1 |
Foundation Model's Embedded Representations May Detect Distribution Shift | Distribution shifts between train and test datasets obscure our ability to
understand the generalization capacity of neural network models. This topic is
especially relevant given the success of pre-trained foundation models as
starting points for transfer learning (TL) models across tasks and contexts. We
present a case study of TL from a pre-trained GPT-2 model onto the Sentiment140
dataset for sentiment classification. We show that Sentiment140's test dataset
$M$ is not sampled from the same distribution as the training dataset $P$, and
hence training on $P$ and measuring performance on $M$ does not actually
account for the model's generalization on sentiment classification. | [
"Adam Tsou",
"Max Vargas",
"Andrew Engel",
"Tony Chiang"
] | 2023-10-20 22:20:50 | http://arxiv.org/abs/2310.13836v1 | http://arxiv.org/pdf/2310.13836v1 | 2310.13836v1 |
GraphMaker: Can Diffusion Models Generate Large Attributed Graphs? | Large-scale graphs with node attributes are fundamental in real-world
scenarios, such as social and financial networks. The generation of synthetic
graphs that emulate real-world ones is pivotal in graph machine learning,
aiding network evolution understanding and data utility preservation when
original data cannot be shared. Traditional models for graph generation suffer
from limited model capacity. Recent developments in diffusion models have shown
promise in merely graph structure generation or the generation of small
molecular graphs with attributes. However, their applicability to large
attributed graphs remains unaddressed due to challenges in capturing intricate
patterns and scalability. This paper introduces GraphMaker, a novel diffusion
model tailored for generating large attributed graphs. We study the diffusion
models that either couple or decouple graph structure and node attribute
generation to address their complex correlation. We also employ node-level
conditioning and adopt a minibatch strategy for scalability. We further propose
a new evaluation pipeline using models trained on generated synthetic graphs
and tested on original graphs to evaluate the quality of synthetic data.
Empirical evaluations on real-world datasets showcase GraphMaker's superiority
in generating realistic and diverse large-attributed graphs beneficial for
downstream tasks. | [
"Mufei Li",
"Eleonora Kreačić",
"Vamsi K. Potluru",
"Pan Li"
] | 2023-10-20 22:12:46 | http://arxiv.org/abs/2310.13833v1 | http://arxiv.org/pdf/2310.13833v1 | 2310.13833v1 |
Universal Representation of Permutation-Invariant Functions on Vectors and Tensors | A main object of our study is multiset functions -- that is,
permutation-invariant functions over inputs of varying sizes. Deep Sets,
proposed by \cite{zaheer2017deep}, provides a \emph{universal representation}
for continuous multiset functions on scalars via a sum-decomposable model.
Restricting the domain of the functions to finite multisets of $D$-dimensional
vectors, Deep Sets also provides a \emph{universal approximation} that requires
a latent space dimension of $O(N^D)$ -- where $N$ is an upper bound on the size
of input multisets. In this paper, we strengthen this result by proving that
universal representation is guaranteed for continuous and discontinuous
multiset functions through a latent space dimension of $O(N^D)$. We then
introduce \emph{identifiable} multisets for which we can uniquely label their
elements using an identifier function, namely, finite-precision vectors are
identifiable. Using our analysis on identifiable multisets, we prove that a
sum-decomposable model for general continuous multiset functions only requires
a latent dimension of $2DN$. We further show that both encoder and decoder
functions of the model are continuous -- our main contribution to the existing
work, which lacks such a guarantee. This also provides a significant improvement
over the aforementioned $O(N^D)$ bound which was derived for universal
representation of continuous and discontinuous multiset functions. We then
extend our results and provide special sum-decomposition structures to
universally represent permutation-invariant tensor functions on identifiable
tensors. These families of sum-decomposition models enable us to design deep
network architectures and deploy them on a variety of learning tasks on
sequences, images, and graphs. | [
"Puoya Tabaghi",
"Yusu Wang"
] | 2023-10-20 22:00:59 | http://arxiv.org/abs/2310.13829v1 | http://arxiv.org/pdf/2310.13829v1 | 2310.13829v1 |
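For reference, the sum-decomposable form at the center of this analysis, rho(sum_i phi(x_i)), looks as follows as a neural model. This is a generic Deep-Sets-style sketch, not the paper's specific construction:

```python
import torch
import torch.nn as nn

class SumDecomposable(nn.Module):
    """Permutation-invariant multiset model: rho(sum_i phi(x_i))."""
    def __init__(self, in_dim, latent_dim, out_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU(),
                                 nn.Linear(latent_dim, latent_dim))
        self.rho = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                 nn.Linear(latent_dim, out_dim))

    def forward(self, x):                 # x: (batch, multiset_size, in_dim)
        # Summing over elements makes the output order-invariant.
        return self.rho(self.phi(x).sum(dim=1))
```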
Adversarial Attacks on Fairness of Graph Neural Networks | Fairness-aware graph neural networks (GNNs) have gained a surge of attention
as they can reduce the bias of predictions on any demographic group (e.g.,
female) in graph-based applications. Although these methods greatly improve the
algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully
designed adversarial attacks. In this paper, we investigate the problem of
adversarial attacks on fairness of GNNs and propose G-FairAttack, a general
framework for attacking various types of fairness-aware GNNs in terms of
fairness with an unnoticeable effect on prediction utility. In addition, we
propose a fast computation technique to reduce the time complexity of
G-FairAttack. The experimental study demonstrates that G-FairAttack
successfully corrupts the fairness of different types of GNNs while keeping the
attack unnoticeable. Our study on fairness attacks sheds light on potential
vulnerabilities in fairness-aware GNNs and guides further research on the
robustness of GNNs in terms of fairness. The open-source code is available at
https://github.com/zhangbinchi/G-FairAttack. | [
"Binchi Zhang",
"Yushun Dong",
"Chen Chen",
"Yada Zhu",
"Minnan Luo",
"Jundong Li"
] | 2023-10-20 21:19:54 | http://arxiv.org/abs/2310.13822v1 | http://arxiv.org/pdf/2310.13822v1 | 2310.13822v1 |
Geometric Learning with Positively Decomposable Kernels | Kernel methods are powerful tools in machine learning. Classical kernel
methods are based on positive-definite kernels, which map data spaces into
reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces,
positive-definite kernels are difficult to come by. In this case, we propose
the use of reproducing kernel Krein space (RKKS) based methods, which require
only kernels that admit a positive decomposition. We show that one does not
need to access this decomposition in order to learn in RKKS. We then
investigate the conditions under which a kernel is positively decomposable. We
show that invariant kernels admit a positive decomposition on homogeneous
spaces under tractable regularity assumptions. This makes them much easier to
construct than positive-definite kernels, providing a route for learning with
kernels for non-Euclidean data. By the same token, this provides theoretical
foundations for RKKS-based methods in general. | [
"Nathael Da Costa",
"Cyrus Mostajeran",
"Juan-Pablo Ortega",
"Salem Said"
] | 2023-10-20 21:18:04 | http://arxiv.org/abs/2310.13821v1 | http://arxiv.org/pdf/2310.13821v1 | 2310.13821v1 |
FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation | Liver transplantation often faces fairness challenges across subgroups
defined by sensitive attributes like age group, gender, and race/ethnicity.
Machine learning models for outcome prediction can introduce additional biases.
To address these, we introduce Fairness through the Equitable Rate of
Improvement in Multitask Learning (FERI) algorithm for fair predictions of
graft failure risk in liver transplant patients. FERI constrains subgroup loss
by balancing learning rates and preventing subgroup dominance in the training
process. Our experiments show that FERI maintains high predictive accuracy with
AUROC and AUPRC comparable to baseline models. More importantly, FERI
demonstrates an ability to improve fairness without sacrificing accuracy.
Specifically, for gender, FERI reduces the demographic parity disparity by
71.74%, and for the age group, it decreases the equalized odds disparity by
40.46%. Therefore, the FERI algorithm advances fairness-aware predictive
modeling in healthcare and provides an invaluable tool for equitable healthcare
systems. | [
"Can Li",
"Dejian Lai",
"Xiaoqian Jiang",
"Kai Zhang"
] | 2023-10-20 21:14:07 | http://arxiv.org/abs/2310.13820v1 | http://arxiv.org/pdf/2310.13820v1 | 2310.13820v1 |
FATA-Trans: Field And Time-Aware Transformer for Sequential Tabular Data | Sequential tabular data is one of the most commonly used data types in
real-world applications. Different from conventional tabular data, where rows
in a table are independent, sequential tabular data contains rich contextual
and sequential information, where some fields are dynamically changing over
time and others are static. Existing transformer-based approaches analyzing
sequential tabular data overlook the differences between dynamic and static
fields by replicating and filling static fields into each transformer, and
ignore temporal information between rows, which leads to three major
disadvantages: (1) computational overhead, (2) artificially simplified data for
the masked language modeling pre-training task that may yield less meaningful
representations, and (3) disregarding the temporal behavioral patterns implied
by time intervals. In this work, we propose FATA-Trans, a model with two field
transformers for modeling sequential tabular data, where each processes static
and dynamic field information separately. FATA-Trans is field- and time-aware
for sequential tabular data. The field-type embedding in the method enables
FATA-Trans to capture differences between static and dynamic fields. The
time-aware position embedding exploits both order and time interval information
between rows, which helps the model detect underlying temporal behavior in a
sequence. Our experiments on three benchmark datasets demonstrate that the
learned representations from FATA-Trans consistently outperform
state-of-the-art solutions in the downstream tasks. We also present
visualization studies to highlight the insights captured by the learned
representations, enhancing our understanding of the underlying data. Our codes
are available at https://github.com/zdy93/FATA-Trans. | [
"Dongyu Zhang",
"Liang Wang",
"Xin Dai",
"Shubham Jain",
"Junpeng Wang",
"Yujie Fan",
"Chin-Chia Michael Yeh",
"Yan Zheng",
"Zhongfang Zhuang",
"Wei Zhang"
] | 2023-10-20 21:12:11 | http://arxiv.org/abs/2310.13818v1 | http://arxiv.org/pdf/2310.13818v1 | 2310.13818v1 |
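The time-aware position embedding idea, combining row order with inter-row time intervals, can be sketched as below. The log-scale bucketing and additive combination here are illustrative assumptions, not necessarily the paper's exact design:

```python
import torch
import torch.nn as nn

class TimeAwarePositionEmbedding(nn.Module):
    """Row-order embedding plus a bucketized time-interval embedding."""
    def __init__(self, max_len, num_buckets, dim):
        super().__init__()
        self.order = nn.Embedding(max_len, dim)
        self.gap = nn.Embedding(num_buckets, dim)

    def forward(self, timestamps):             # (batch, seq_len), e.g. seconds
        pos = torch.arange(timestamps.size(1), device=timestamps.device)
        gaps = timestamps.diff(dim=1, prepend=timestamps[:, :1])
        # Log-scale bucketing of time intervals (an illustrative choice).
        buckets = torch.log1p(gaps.clamp(min=0).float()).long()
        buckets = buckets.clamp(max=self.gap.num_embeddings - 1)
        return self.order(pos)[None] + self.gap(buckets)
```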
A Better Match for Drivers and Riders: Reinforcement Learning at Lyft | To better match drivers to riders in our ridesharing application, we revised
Lyft's core matching algorithm. We use a novel online reinforcement learning
approach that estimates the future earnings of drivers in real time and use
this information to find more efficient matches. This change was the first
documented implementation of a ridesharing matching algorithm that can learn
and improve in real time. We evaluated the new approach during weeks of
switchback experimentation in most Lyft markets, and estimated how it benefited
drivers, riders, and the platform. In particular, it enabled our drivers to
serve millions of additional riders each year, leading to more than $30 million
per year in incremental revenue. Lyft rolled out the algorithm globally in
2021. | [
"Xabi Azagirre",
"Akshay Balwally",
"Guillaume Candeli",
"Nicholas Chamandy",
"Benjamin Han",
"Alona King",
"Hyungjun Lee",
"Martin Loncaric",
"Sébastien Martin",
"Vijay Narasiman",
"Zhiwei",
"Qin",
"Baptiste Richard",
"Sara Smoot",
"Sean Taylor",
"Garrett van Ryzin",
"Di Wu",
"Fei Yu",
"Alex Zamoshchin"
] | 2023-10-20 20:49:06 | http://arxiv.org/abs/2310.13810v1 | http://arxiv.org/pdf/2310.13810v1 | 2310.13810v1 |
Learning to (Learn at Test Time) | We reformulate the problem of supervised learning as learning to learn with
two nested loops (i.e. learning problems). The inner loop learns on each
individual instance with self-supervision before final prediction. The outer
loop learns the self-supervised task used by the inner loop, such that its
final prediction improves. Our inner loop turns out to be equivalent to linear
attention when the inner-loop learner is only a linear model, and to
self-attention when it is a kernel estimator. For practical comparison with
linear or self-attention layers, we replace each of them in a transformer with
an inner loop, so our outer loop is equivalent to training the architecture.
When each inner-loop learner is a neural network, our approach vastly
outperforms transformers with linear attention on ImageNet from 224 x 224 raw
pixels in both accuracy and FLOPs, while (regular) transformers cannot run. | [
"Yu Sun",
"Xinhao Li",
"Karan Dalal",
"Chloe Hsu",
"Sanmi Koyejo",
"Carlos Guestrin",
"Xiaolong Wang",
"Tatsunori Hashimoto",
"Xinlei Chen"
] | 2023-10-20 20:42:00 | http://arxiv.org/abs/2310.13807v1 | http://arxiv.org/pdf/2310.13807v1 | 2310.13807v1 |
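The nested-loop idea can be made concrete with a toy inner loop: adapt a linear model on a self-supervised task built from the single test instance, then predict. In the paper the self-supervised task itself is learned by the outer loop; here it is fixed to denoising reconstruction purely for illustration:

```python
import numpy as np

def inner_loop_predict(W0, x, lr=0.1, noise=0.1, seed=0):
    """One inner-loop step on instance x (shape (d,)) before final prediction.
    Self-supervised task (assumed): reconstruct x from a noisy copy of itself."""
    rng = np.random.default_rng(seed)
    x_noisy = x + noise * rng.standard_normal(x.shape)
    residual = W0 @ x_noisy - x                 # grad of 0.5 * ||W x_noisy - x||^2
    W = W0 - lr * np.outer(residual, x_noisy)   # one gradient step, per instance
    return W @ x                                # prediction with the adapted W
```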
RoseNet: Predicting Energy Metrics of Double InDel Mutants Using Deep Learning | An amino acid insertion or deletion, or InDel, can have profound and varying
functional impacts on a protein's structure. InDel mutations in the
transmembrane conductance regulator protein, for example, give rise to cystic
fibrosis. Unfortunately, performing InDel mutations on physical proteins and
studying their effects is a time-prohibitive process. Consequently, modeling
InDels computationally can supplement and inform wet lab experiments. In this
work, we make use of our data sets of exhaustive double InDel mutations for
three proteins which we computationally generated using a robotics inspired
inverse kinematics approach available in Rosetta. We develop and train a neural
network, RoseNet, on several structural and energetic metrics output by Rosetta
during the mutant generation process. We explore and present how RoseNet is
able to emulate the exhaustive data set using deep learning methods, and show
to what extent it can predict Rosetta metrics for unseen mutant sequences with
two InDels. RoseNet achieves a Pearson correlation coefficient median accuracy
of 0.775 over all Rosetta scores for the largest protein. Furthermore, a
sensitivity analysis is performed to determine the necessary quantity of data
required to accurately emulate the structural scores for computationally
generated mutants. We show that the model can be trained on minimal data (<50%)
and still retain a high level of accuracy. | [
"Sarah Coffland",
"Katie Christensen",
"Filip Jagodzinski",
"Brian Hutchinson"
] | 2023-10-20 20:36:13 | http://arxiv.org/abs/2310.13806v1 | http://arxiv.org/pdf/2310.13806v1 | 2310.13806v1 |
Normalizing flow-based deep variational Bayesian network for seismic multi-hazards and impacts estimation from InSAR imagery | Onsite disasters like earthquakes can trigger cascading hazards and impacts,
such as landslides and infrastructure damage, leading to catastrophic losses;
thus, rapid and accurate estimates are crucial for timely and effective
post-disaster responses. Interferometric Synthetic Aperture Radar (InSAR) data
is important in providing high-resolution onsite information for rapid hazard
estimation. Most recent methods using InSAR imagery signals predict a single
type of hazard and thus often suffer low accuracy due to noisy and complex
signals induced by co-located hazards, impacts, and irrelevant environmental
changes (e.g., vegetation changes, human activities). We introduce a novel
stochastic variational inference with normalizing flows derived to jointly
approximate posteriors of multiple unobserved hazards and impacts from noisy
InSAR imagery. | [
"Xuechun Li",
"Paula M. Burgi",
"Wei Ma",
"Hae Young Noh",
"David J. Wald",
"Susu Xu"
] | 2023-10-20 20:32:43 | http://arxiv.org/abs/2310.13805v1 | http://arxiv.org/pdf/2310.13805v1 | 2310.13805v1 |
Improving Molecular Properties Prediction Through Latent Space Fusion | Pre-trained Language Models have emerged as promising tools for predicting
molecular properties, yet their development is in its early stages,
necessitating further research to enhance their efficacy and address challenges
such as generalization and sample efficiency. In this paper, we present a
multi-view approach that combines latent spaces derived from state-of-the-art
chemical models. Our approach relies on two pivotal elements: the embeddings
derived from MHG-GNN, which represent molecular structures as graphs, and
MoLFormer embeddings rooted in chemical language. The attention mechanism of
MoLFormer is able to identify relations between two atoms even when their
distance is far apart, while the GNN of MHG-GNN can more precisely capture
relations among multiple atoms closely located. In this work, we demonstrate
the superior performance of our proposed multi-view approach compared to
existing state-of-the-art methods, including MoLFormer-XL, which was trained on
1.1 billion molecules, particularly in intricate tasks such as predicting
clinical trial drug toxicity and inhibiting HIV replication. We assessed our
approach using six benchmark datasets from MoleculeNet, where it outperformed
competitors in five of them. Our study highlights the potential of latent space
fusion and feature integration for advancing molecular property prediction. In
this work, we use small versions of MHG-GNN and MoLFormer, which opens up an
opportunity for further improvement when our approach uses a larger-scale
dataset. | [
"Eduardo Soares",
"Akihiro Kishimoto",
"Emilio Vital Brazil",
"Seiji Takeda",
"Hiroshi Kajino",
"Renato Cerqueira"
] | 2023-10-20 20:29:32 | http://arxiv.org/abs/2310.13802v1 | http://arxiv.org/pdf/2310.13802v1 | 2310.13802v1 |
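In its simplest form, the multi-view idea amounts to fusing the two latent spaces and training a downstream head. A minimal sketch with placeholder embeddings; the paper's fusion may be richer than plain concatenation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder embeddings for the same molecules from two views:
# a graph view (MHG-GNN-style) and a chemical-language view (MoLFormer-style).
emb_graph = np.random.randn(200, 64)
emb_lang = np.random.randn(200, 128)
y = np.random.randint(0, 2, 200)       # e.g., toxic / non-toxic labels

fused = np.concatenate([emb_graph, emb_lang], axis=1)  # latent space fusion
clf = LogisticRegression(max_iter=1000).fit(fused, y)  # downstream property head
```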
A Unified View of Evaluation Metrics for Structured Prediction | We present a conceptual framework that unifies a variety of evaluation
metrics for different structured prediction tasks (e.g. event and relation
extraction, syntactic and semantic parsing). Our framework requires
representing the outputs of these tasks as objects of certain data types, and
derives metrics through matching of common substructures, possibly followed by
normalization. We demonstrate how commonly used metrics for a number of tasks
can be succinctly expressed by this framework, and show that new metrics can be
naturally derived in a bottom-up way based on an output structure. We release a
library that enables this derivation to create new metrics. Finally, we
consider how specific characteristics of tasks motivate metric design
decisions, and suggest possible modifications to existing metrics in line with
those motivations. | [
"Yunmo Chen",
"William Gantt",
"Tongfei Chen",
"Aaron Steven White",
"Benjamin Van Durme"
] | 2023-10-20 20:02:02 | http://arxiv.org/abs/2310.13793v1 | http://arxiv.org/pdf/2310.13793v1 | 2310.13793v1 |
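For flat output structures such as relation triples, the substructure-matching recipe described above reduces to multiset intersection followed by normalization. A hedged sketch; the released library generalizes this to richer data types:

```python
from collections import Counter

def matched_f1(predicted, gold):
    """Precision/recall/F1 via matching of common substructures, represented
    here as hashable tuples (e.g., (head, relation, tail) triples)."""
    pred, ref = Counter(predicted), Counter(gold)
    matched = sum((pred & ref).values())        # multiset intersection
    precision = matched / max(sum(pred.values()), 1)
    recall = matched / max(sum(ref.values()), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)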
Comparative Analysis of Machine Learning Algorithms for Solar Irradiance Forecasting in Smart Grids | The increasing global demand for clean and environmentally friendly energy
resources has caused increased interest in harnessing solar power through
photovoltaic (PV) systems for smart grids and homes. However, the inherent
unpredictability of PV generation poses problems associated with smart grid
planning and management, energy trading and market participation, demand
response, reliability, etc. Therefore, solar irradiance forecasting is
essential for optimizing PV system utilization. This study applies
next-generation machine learning algorithms such as random forests, Extreme
Gradient Boosting (XGBoost), Light Gradient Boosted Machine (lightGBM)
ensemble, CatBoost, and Multilayer Perceptron Artificial Neural Networks
(MLP-ANNs) to forecast solar irradiance. In addition, Bayesian optimization is
applied for hyperparameter tuning. Unlike tree-based ensemble algorithms that
select the features intrinsically, MLP-ANN needs feature selection as a
separate step. The simulation results indicate that the performance of the
MLP-ANNs improves when feature selection is applied. Moreover, the random forest
outperforms the other learning algorithms. | [
"Saman Soleymani",
"Shima Mohammadzadeh"
] | 2023-10-20 19:52:37 | http://arxiv.org/abs/2310.13791v1 | http://arxiv.org/pdf/2310.13791v1 | 2310.13791v1 |
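The methodological point above, that tree ensembles select features intrinsically while an MLP-ANN needs an explicit selection step, corresponds to a pipeline like the following sketch. Hyperparameters are illustrative, not the study's tuned values:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor

# MLP with an explicit feature-selection step placed before it.
mlp = make_pipeline(StandardScaler(),
                    SelectKBest(f_regression, k=10),
                    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500))
# mlp.fit(X_train, y_train)   # X: weather features, y: solar irradiance
```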
Enhancing Illicit Activity Detection using XAI: A Multimodal Graph-LLM Framework | Financial cybercrime prevention is an increasing issue with many
organisations and governments. As deep learning models have progressed to
identify illicit activity on various financial and social networks, the
explainability behind the model decisions has been lacklustre with the
investigative analyst at the heart of any deep learning platform. In our paper,
we present a state-of-the-art, novel multimodal proactive approach to
addressing XAI in financial cybercrime detection.
We leverage a triad of deep learning models designed to distill essential
representations from transaction sequencing, subgraph connectivity, and
narrative generation to significantly streamline the analyst's investigative
process. Our narrative generation proposal leverages an LLM to ingest transaction
details and output contextual narrative for an analyst to understand a
transaction and its metadata much further. | [
"Jack Nicholls",
"Aditya Kuppa",
"Nhien-An Le-Khac"
] | 2023-10-20 19:33:44 | http://arxiv.org/abs/2310.13787v1 | http://arxiv.org/pdf/2310.13787v1 | 2310.13787v1 |
Fundamental Limits of Membership Inference Attacks on Machine Learning Models | Membership inference attacks (MIA) can reveal whether a particular data point
was part of the training dataset, potentially exposing sensitive information
about individuals. This article explores the fundamental statistical
limitations associated with MIAs on machine learning models. More precisely, we
first derive the statistical quantity that governs the effectiveness and
success of such attacks. Then, we investigate several situations for which we
provide bounds on this quantity of interest. This allows us to infer the
accuracy of potential attacks as a function of the number of samples and other
structural parameters of learning models, which in some cases can be directly
estimated from the dataset. | [
"Eric Aubinais",
"Elisabeth Gassiat",
"Pablo Piantanida"
] | 2023-10-20 19:32:54 | http://arxiv.org/abs/2310.13786v1 | http://arxiv.org/pdf/2310.13786v1 | 2310.13786v1 |
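For context, the simplest attack whose success such statistical bounds govern is the loss-threshold membership test: flag a point as a training member when the model's loss on it is unusually small. A sketch with synthetic loss distributions:

```python
import numpy as np

def is_member(losses, threshold):
    """Loss-threshold membership inference: low loss suggests the point
    was seen during training."""
    return losses < threshold

# Hypothetical loss distributions for calibration and evaluation.
member_losses = np.random.exponential(0.2, 1000)
nonmember_losses = np.random.exponential(1.0, 1000)
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
balanced_acc = 0.5 * (is_member(member_losses, threshold).mean()
                      + (~is_member(nonmember_losses, threshold)).mean())
```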
TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models | We present TexFusion (Texture Diffusion), a new method to synthesize textures
for given 3D geometries, using large-scale text-guided image diffusion models.
In contrast to recent works that leverage 2D text-to-image diffusion models to
distill 3D objects using a slow and fragile optimization process, TexFusion
introduces a new 3D-consistent generation technique specifically designed for
texture synthesis that employs regular diffusion model sampling on different 2D
rendered views. Specifically, we leverage latent diffusion models, apply the
diffusion model's denoiser on a set of 2D renders of the 3D object, and
aggregate the different denoising predictions on a shared latent texture map.
Final output RGB textures are produced by optimizing an intermediate neural
color field on the decodings of 2D renders of the latent texture. We thoroughly
validate TexFusion and show that we can efficiently generate diverse, high
quality and globally coherent textures. We achieve state-of-the-art text-guided
texture synthesis performance using only image diffusion models, while avoiding
the pitfalls of previous distillation-based methods. The text-conditioning
offers detailed control and we also do not rely on any ground truth 3D textures
for training. This makes our method versatile and applicable to a broad range
of geometry and texture types. We hope that TexFusion will advance AI-based
texturing of 3D assets for applications in virtual reality, game design,
simulation, and more. | [
"Tianshi Cao",
"Karsten Kreis",
"Sanja Fidler",
"Nicholas Sharp",
"Kangxue Yin"
] | 2023-10-20 19:15:29 | http://arxiv.org/abs/2310.13772v1 | http://arxiv.org/pdf/2310.13772v1 | 2310.13772v1 |
Graph AI in Medicine | In clinical artificial intelligence (AI), graph representation learning,
mainly through graph neural networks (GNNs), stands out for its capability to
capture intricate relationships within structured clinical datasets. With
diverse data -- from patient records to imaging -- GNNs process data
holistically by viewing modalities as nodes interconnected by their
relationships. Graph AI facilitates model transfer across clinical tasks,
enabling models to generalize across patient populations without additional
parameters or minimal re-training. However, the importance of human-centered
design and model interpretability in clinical decision-making cannot be
overstated. Since graph AI models capture information through localized neural
transformations defined on graph relationships, they offer both an opportunity
and a challenge in elucidating model rationale. Knowledge graphs can enhance
interpretability by aligning model-driven insights with medical knowledge.
Emerging graph models integrate diverse data modalities through pre-training,
facilitate interactive feedback loops, and foster human-AI collaboration,
paving the way to clinically meaningful predictions. | [
"Ruth Johnson",
"Michelle M. Li",
"Ayush Noori",
"Owen Queen",
"Marinka Zitnik"
] | 2023-10-20 19:01:01 | http://arxiv.org/abs/2310.13767v1 | http://arxiv.org/pdf/2310.13767v1 | 2310.13767v1 |
Learning Interatomic Potentials at Multiple Scales | The need to use a short time step is a key limit on the speed of molecular
dynamics (MD) simulations. Simulations governed by classical potentials are
often accelerated by using a multiple-time-step (MTS) integrator that evaluates
certain potential energy terms that vary more slowly than others less
frequently. This approach is enabled by the simple but limiting analytic forms
of classical potentials. Machine learning interatomic potentials (MLIPs), in
particular recent equivariant neural networks, are much more broadly applicable
than classical potentials and can faithfully reproduce the expensive but
accurate reference electronic structure calculations used to train them. They
still, however, require the use of a single short time step, as they lack the
inherent term-by-term scale separation of classical potentials. This work
introduces a method to learn a scale separation in complex interatomic
interactions by co-training two MLIPs. Initially, a small and efficient model
is trained to reproduce short-time-scale interactions. Subsequently, a large
and expressive model is trained jointly to capture the remaining interactions
not captured by the small model. When running MD, the MTS integrator then
evaluates the smaller model for every time step and the larger model less
frequently, accelerating simulation. Compared to a conventionally trained MLIP,
our approach can achieve a significant speedup (~3x in our experiments) without
a loss of accuracy on the potential energy or simulation-derived quantities. | [
"Xiang Fu",
"Albert Musaelian",
"Anders Johansson",
"Tommi Jaakkola",
"Boris Kozinsky"
] | 2023-10-20 18:34:32 | http://arxiv.org/abs/2310.13756v1 | http://arxiv.org/pdf/2310.13756v1 | 2310.13756v1 |
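The described integration scheme follows the classic multiple-time-step (r-RESPA) pattern: the cheap model's force drives every inner step, while the expensive model's force is applied only as outer-step half-kicks. A generic sketch, with `fast_force`/`slow_force` standing in for the small and large MLIPs:

```python
def respa_step(x, v, m, fast_force, slow_force, dt_outer, k):
    """One outer MTS step: slow (expensive) half-kicks bracket k inner
    velocity-Verlet steps driven by the fast (cheap) force."""
    dt_inner = dt_outer / k
    v = v + 0.5 * dt_outer * slow_force(x) / m     # slow half-kick
    for _ in range(k):                             # inner loop: cheap model only
        v = v + 0.5 * dt_inner * fast_force(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * fast_force(x) / m
    v = v + 0.5 * dt_outer * slow_force(x) / m     # slow half-kick
    return x, v
```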
FairBranch: Fairness Conflict Correction on Task-group Branches for Fair Multi-Task Learning | The generalization capacity of Multi-Task Learning (MTL) becomes limited when
unrelated tasks negatively impact each other by updating shared parameters with
conflicting gradients, resulting in negative transfer and a reduction in MTL
accuracy compared to single-task learning (STL). Recently, there has been an
increasing focus on the fairness of MTL models, necessitating the optimization
of both accuracy and fairness for individual tasks. Similarly to how negative
transfer affects accuracy, task-specific fairness considerations can adversely
influence the fairness of other tasks when there is a conflict of fairness loss
gradients among jointly learned tasks, termed bias transfer. To address both
negative and bias transfer in MTL, we introduce a novel method called
FairBranch. FairBranch branches the MTL model by assessing the similarity of
learned parameters, grouping related tasks to mitigate negative transfer.
Additionally, it incorporates fairness loss gradient conflict correction
between adjoining task-group branches to address bias transfer within these
task groups. Our experiments in tabular and visual MTL problems demonstrate
that FairBranch surpasses state-of-the-art MTL methods in terms of both
fairness and accuracy. | [
"Arjun Roy",
"Christos Koutlis",
"Symeon Papadopoulos",
"Eirini Ntoutsi"
] | 2023-10-20 18:07:15 | http://arxiv.org/abs/2310.13746v1 | http://arxiv.org/pdf/2310.13746v1 | 2310.13746v1 |
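One standard way to correct a conflicting gradient, projecting away the component that opposes another branch's gradient (as in PCGrad), gives a flavor of the fairness-conflict correction described above. This is an assumed analogy, not FairBranch's exact rule:

```python
import torch

def project_conflict(g_fair, g_other):
    """If the fairness-loss gradient conflicts with an adjoining branch's
    gradient (negative inner product), remove the conflicting component."""
    dot = torch.dot(g_fair.flatten(), g_other.flatten())
    if dot < 0:  # opposing directions -> risk of bias transfer
        g_fair = g_fair - (dot / g_other.norm() ** 2) * g_other
    return g_fair
```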
CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages | This work introduces CAPIVARA, a cost-efficient framework designed to enhance
the performance of multilingual CLIP models in low-resource languages. While
CLIP has excelled in zero-shot vision-language tasks, the resource-intensive
nature of model training remains challenging. Many datasets lack linguistic
diversity, featuring solely English descriptions for images. CAPIVARA addresses
this by augmenting text data using image captioning and machine translation to
generate multiple synthetic captions in low-resource languages. We optimize the
training pipeline with LiT, LoRA, and gradient checkpointing to alleviate the
computational cost. Through extensive experiments, CAPIVARA emerges as state of
the art in zero-shot tasks involving images and Portuguese texts. We show the
potential for significant improvements in other low-resource languages,
achieved by fine-tuning the pre-trained multilingual CLIP using CAPIVARA on a
single GPU for 2 hours. Our model and code are available at
https://github.com/hiaac-nlp/CAPIVARA. | [
"Gabriel Oliveira dos Santos",
"Diego A. B. Moreira",
"Alef Iury Ferreira",
"Jhessica Silva",
"Luiz Pereira",
"Pedro Bueno",
"Thiago Sousa",
"Helena Maia",
"Nádia Da Silva",
"Esther Colombini",
"Helio Pedrini",
"Sandra Avila"
] | 2023-10-20 17:44:25 | http://arxiv.org/abs/2310.13683v2 | http://arxiv.org/pdf/2310.13683v2 | 2310.13683v2 |
Optimizing Retrieval-augmented Reader Models via Token Elimination | Fusion-in-Decoder (FiD) is an effective retrieval-augmented language model
applied across a variety of open-domain tasks, such as question answering, fact
checking, etc. In FiD, supporting passages are first retrieved and then
processed using a generative model (Reader), which can cause a significant
bottleneck in decoding time, particularly with long outputs. In this work, we
analyze the contribution and necessity of all the retrieved passages to the
performance of reader models, and propose eliminating some of the retrieved
information, at the token level, that might not contribute essential
information to the answer generation process. We demonstrate that our method
can reduce run-time by up to 62.2%, with only a 2% reduction in performance,
and in some cases, even improve the performance results. | [
"Moshe Berchansky",
"Peter Izsak",
"Avi Caciularu",
"Ido Dagan",
"Moshe Wasserblat"
] | 2023-10-20 17:41:36 | http://arxiv.org/abs/2310.13682v1 | http://arxiv.org/pdf/2310.13682v1 | 2310.13682v1 |
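The token-level elimination idea can be sketched as keeping only the top-scoring encoder token states before they reach the decoder. The scoring signal (e.g., cross-attention mass) and pruning schedule are assumptions here, not the paper's exact procedure:

```python
import torch

def eliminate_tokens(encoder_states, scores, keep_ratio=0.4):
    """Keep the top-scoring fraction of retrieved-passage token states,
    preserving their original order, before decoding.
    encoder_states: (num_tokens, d); scores: (num_tokens,)."""
    k = max(1, int(keep_ratio * encoder_states.size(0)))
    idx = scores.topk(k).indices.sort().values   # top-k tokens, original order
    return encoder_states[idx]
```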
RealFM: A Realistic Mechanism to Incentivize Data Contribution and Device Participation | Edge device participation in federating learning (FL) has been typically
studied under the lens of device-server communication (e.g., device dropout)
and assumes an undying desire from edge devices to participate in FL. As a
result, current FL frameworks are flawed when implemented in real-world
settings, with many encountering the free-rider problem. In a step to push FL
towards realistic settings, we propose RealFM: the first truly federated
mechanism which (1) realistically models device utility, (2) incentivizes data
contribution and device participation, and (3) provably removes the free-rider
phenomena. RealFM does not require data sharing and allows for a non-linear
relationship between model accuracy and utility, which improves the utility
gained by the server and participating devices compared to non-participating
devices as well as devices participating in other FL mechanisms. On real-world
data, RealFM improves device and server utility, as well as data contribution,
by up to 3 magnitudes and 7x respectively compared to baseline mechanisms. | [
"Marco Bornstein",
"Amrit Singh Bedi",
"Anit Kumar Sahu",
"Furqan Khan",
"Furong Huang"
] | 2023-10-20 17:40:39 | http://arxiv.org/abs/2310.13681v1 | http://arxiv.org/pdf/2310.13681v1 | 2310.13681v1 |
Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models | One challenge in speech translation is that plenty of spoken content is
long-form, but short units are necessary for obtaining high-quality
translations. To address this mismatch, we adapt large language models (LLMs)
to split long ASR transcripts into segments that can be independently
translated so as to maximize the overall translation quality. We overcome the
tendency of hallucination in LLMs by incorporating finite-state constraints
during decoding; these eliminate invalid outputs without requiring additional
training. We discover that LLMs are adaptable to transcripts containing ASR
errors through prompt-tuning or fine-tuning. Relative to a state-of-the-art
automatic punctuation baseline, our best LLM improves the average BLEU by 2.9
points for English-German, English-Spanish, and English-Arabic TED talk
translation in 9 test sets, just by improving segmentation. | [
"Arya D. McCarthy",
"Hao Zhang",
"Shankar Kumar",
"Felix Stahlberg",
"Ke Wu"
] | 2023-10-20 17:31:39 | http://arxiv.org/abs/2310.13678v2 | http://arxiv.org/pdf/2310.13678v2 | 2310.13678v2 |
Using Human-like Mechanism to Weaken Effect of Pre-training Weight Bias in Face-Recognition Convolutional Neural Network | Convolutional neural network (CNN), as an important model in artificial
intelligence, has been widely used and studied in different disciplines. The
computational mechanisms of CNNs are still not fully revealed due to their
complex nature. In this study, we focused on four extensively studied CNNs
(AlexNet, VGG11, VGG13, and VGG16), which have been analyzed as human-like
models by neuroscientists with ample evidence. We trained these CNNs on an
emotion valence classification task via transfer learning. Comparing their
performance with human data revealed that these CNNs partly perform as humans
do. We then updated the object-based AlexNet using a self-attention
mechanism based on neuroscience and behavioral data. The updated FE-AlexNet
outperformed all the other tested CNNs and closely resembles human perception.
The results further unveil the computational mechanisms of these CNNs.
Moreover, this study offers a new paradigm to better understand and improve CNN
performance via human data. | [
"Haojiang Ying",
"Yi-Fan Li",
"Yiyang Chen"
] | 2023-10-20 17:22:57 | http://arxiv.org/abs/2310.13674v1 | http://arxiv.org/pdf/2310.13674v1 | 2310.13674v1 |
ManifoldNeRF: View-dependent Image Feature Supervision for Few-shot Neural Radiance Fields | Novel view synthesis has recently made significant progress with the advent
of Neural Radiance Fields (NeRF). DietNeRF is an extension of NeRF that aims to
achieve this task from only a few images by introducing a new loss function for
unknown viewpoints with no input images. The loss function assumes that a
pre-trained feature extractor should output the same feature even if input
images are captured at different viewpoints since the images contain the same
object. However, while that assumption is ideal, in reality it is known that
as viewpoints change continuously, feature vectors also change continuously.
Thus, the assumption can harm training. To avoid this harmful training, we
propose ManifoldNeRF, a method for supervising feature vectors at unknown
viewpoints using interpolated features from neighboring known viewpoints. Since
the method provides appropriate supervision for each unknown viewpoint by the
interpolated features, the volume representation is learned better than
DietNeRF. Experimental results show that the proposed method performs better
than others in a complex scene. We also experimented with several subsets of
viewpoints from a set of viewpoints and identified an effective set of
viewpoints for real environments. This provided a basic policy of viewpoint
patterns for real-world application. The code is available at
https://github.com/haganelego/ManifoldNeRF_BMVC2023 | [
"Daiju Kanaoka",
"Motoharu Sonogashira",
"Hakaru Tamukoh",
"Yasutomo Kawanishi"
] | 2023-10-20 17:13:52 | http://arxiv.org/abs/2310.13670v1 | http://arxiv.org/pdf/2310.13670v1 | 2310.13670v1 |
Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis | The advent of large pre-trained language models in the domain of Code
Synthesis has shown remarkable performance on various benchmarks, treating the
problem of Code Generation in a fashion similar to Natural Language Generation,
trained with a Language Modelling (LM) objective. In addition, the property of
programming language code being precisely evaluable with respect to its
semantics -- through the use of Unit Tests to check its functional correctness
-- lends itself to using Reinforcement Learning (RL) as a further training
paradigm. Previous work has shown that RL can be applied as such to improve
models' coding capabilities; however, such RL-based methods rely on a reward
signal based on defined Unit Tests, which are much harder to obtain compared to
the huge crawled code datasets used in LM objectives. In this work, we present
a novel approach to automatically obtain data consisting of function signatures
and associated Unit Tests, suitable for RL training of Code Synthesis models.
We also introduce a straightforward, simple yet effective Actor-Critic RL
training scheme and show that it, in conjunction with automatically generated
training data, improves a pre-trained code language model's
performance by up to 9.9% over the original underlying code
synthesis LM, and up to 4.3% over RL-based models trained with standard PPO or
CodeRL. | [
"Philip John Gorinski",
"Matthieu Zimmer",
"Gerasimos Lampouras",
"Derrick Goh Xin Deik",
"Ignacio Iacobacci"
] | 2023-10-20 17:13:16 | http://arxiv.org/abs/2310.13669v1 | http://arxiv.org/pdf/2310.13669v1 | 2310.13669v1 |
An experimental study for early diagnosing Parkinson's disease using machine learning | One of the most catastrophic neurological disorders worldwide is Parkinson's
Disease. Along with it, the treatment is complicated and abundantly expensive.
The only effective action to control the progression is diagnosing it in the
early stage. However, this is challenging because early detection necessitates
a large and complex clinical study. This experimental work used Machine
Learning techniques to automate the early detection of Parkinson's Disease from
clinical characteristics, voice features and motor examination. In this study,
we develop ML models utilizing a public dataset of 130 individuals, 30 of whom
are untreated Parkinson's Disease patients, 50 of whom are Rapid Eye Movement
Sleep Behaviour Disorder patients who are at a greater risk of contracting
Parkinson's Disease, and 50 of whom are Healthy Controls. We use MinMax Scaler
to rescale the data points, Local Outlier Factor to remove outliers, and SMOTE
to balance existing class frequencies. Afterwards, we apply a number of Machine
Learning techniques. We implement the approaches in such a way that data
leakage and overfitting are not possible. Finally, we obtained 100% accuracy in
classifying PD and RBD patients, as well as 92% accuracy in classifying PD and
HC individuals. | [
"Md. Taufiqul Haque Khan Tusar",
"Md. Touhidul Islam",
"Abul Hasnat Sakil"
] | 2023-10-20 16:59:18 | http://arxiv.org/abs/2310.13654v1 | http://arxiv.org/pdf/2310.13654v1 | 2310.13654v1 |
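Making resampling and scaling leak-proof, as the abstract emphasizes, means fitting them inside each training fold only. An imbalanced-learn pipeline does this automatically; a sketch under that assumption (LOF outlier removal would likewise be applied to training data only):

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier

# The scaler and SMOTE are (re)fit on each training fold; SMOTE is never
# applied to validation/test data, preventing data leakage.
pipe = Pipeline([
    ("scale", MinMaxScaler()),
    ("smote", SMOTE(random_state=42)),
    ("clf", RandomForestClassifier(random_state=42)),
])
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(pipe, X, y, cv=5)
```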
Optimal Transport for Measures with Noisy Tree Metric | We study optimal transport (OT) problem for probability measures supported on
a tree metric space. It is known that such OT problem (i.e., tree-Wasserstein
(TW)) admits a closed-form expression, but depends fundamentally on the
underlying tree structure over supports of input measures. In practice, the
given tree structure may be, however, perturbed due to noisy or adversarial
measurements. In order to mitigate this issue, we follow the max-min robust OT
approach which considers the maximal possible distances between two input
measures over an uncertainty set of tree metrics. In general, this approach is
hard to compute, even for measures supported in $1$-dimensional space, due to
its non-convexity and non-smoothness, which hinder its practical applications,
especially for large-scale settings. In this work, we propose \emph{novel
uncertainty sets of tree metrics} from the lens of edge deletion/addition which
covers a diversity of tree structures in an elegant framework. Consequently, by
building upon the proposed uncertainty sets, and leveraging the tree structure
over supports, we show that the max-min robust OT also admits a closed-form
expression enabling fast computation, like its standard OT counterpart (i.e., TW).
Furthermore, we demonstrate that the max-min robust OT satisfies the metric
property and is negative definite. We then exploit its negative definiteness to
propose \emph{positive definite kernels} and test them in several simulations
on various real-world datasets on document classification and topological data
analysis for measures with noisy tree metric. | [
"Tam Le",
"Truyen Nguyen",
"Kenji Fukumizu"
] | 2023-10-20 16:56:08 | http://arxiv.org/abs/2310.13653v1 | http://arxiv.org/pdf/2310.13653v1 | 2310.13653v1 |
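The closed-form expression for standard tree-Wasserstein referenced above sums, over edges, the edge weight times the absolute difference of the two measures' masses in the subtree below that edge. A sketch for a rooted tree:

```python
def tree_wasserstein(children, weight, mu, nu, root=0):
    """TW(mu, nu) = sum over edges w_e * |mu(subtree_e) - nu(subtree_e)|.
    children: node -> list of children; weight: (parent, child) -> edge weight;
    mu, nu: node -> probability mass (missing nodes carry mass 0)."""
    total = 0.0

    def mass_below(node):
        nonlocal total
        m, n = mu.get(node, 0.0), nu.get(node, 0.0)
        for child in children.get(node, []):
            cm, cn = mass_below(child)
            total += weight[(node, child)] * abs(cm - cn)
            m, n = m + cm, n + cn
        return m, n

    mass_below(root)
    return total
```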
Weighted Joint Maximum Mean Discrepancy Enabled Multi-Source-Multi-Target Unsupervised Domain Adaptation Fault Diagnosis | Despite the remarkable results that can be achieved by data-driven
intelligent fault diagnosis techniques, they presuppose the same distribution
of training and test data as well as sufficient labeled data. Various operating
states often exist in practical scenarios, leading to the problem of domain
shift that hinders the effectiveness of fault diagnosis. While recent
unsupervised domain adaptation methods enable cross-domain fault diagnosis,
they struggle to effectively utilize information from multiple source domains
and to achieve effective fault diagnosis in multiple target domains
simultaneously. In this paper, we innovatively proposed a weighted joint
maximum mean discrepancy enabled multi-source-multi-target unsupervised domain
adaptation (WJMMD-MDA), which realizes domain adaptation under
multi-source-multi-target scenarios in the field of fault diagnosis for the
first time. The proposed method extracts sufficient information from multiple
labeled source domains and achieves domain alignment between source and target
domains through an improved weighted distance loss. As a result,
domain-invariant and discriminative features between multiple source and target
domains are learned with cross-domain fault diagnosis realized. The performance
of the proposed method is evaluated in comprehensive comparative experiments on
three datasets, and the experimental results demonstrate the superiority of
this method. | [
"Zixuan Wang",
"Haoran Tang",
"Haibo Wang",
"Bo Qin",
"Mark D. Butala",
"Weiming Shen",
"Hongwei Wang"
] | 2023-10-20 16:53:31 | http://arxiv.org/abs/2310.14790v1 | http://arxiv.org/pdf/2310.14790v1 | 2310.14790v1 |
Contrastive Preference Learning: Learning from Human Feedback without RL | Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular
paradigm for aligning models with human intent. Typically RLHF algorithms
operate in two phases: first, use human preferences to learn a reward function
and second, align the model by optimizing the learned reward via reinforcement
learning (RL). This paradigm assumes that human preferences are distributed
according to reward, but recent work suggests that they instead follow the
regret under the user's optimal policy. Thus, learning a reward function from
feedback is not only based on a flawed assumption of human preference, but also
leads to unwieldy optimization challenges that stem from policy gradients or
bootstrapping in the RL phase. Because of these optimization challenges,
contemporary RLHF methods restrict themselves to contextual bandit settings
(e.g., as in large language models) or limit observation dimensionality (e.g.,
state-based robotics). We overcome these limitations by introducing a new
family of algorithms for optimizing behavior from human feedback using the
regret-based model of human preferences. Using the principle of maximum
entropy, we derive Contrastive Preference Learning (CPL), an algorithm for
learning optimal policies from preferences without learning reward functions,
circumventing the need for RL. CPL is fully off-policy, uses only a simple
contrastive objective, and can be applied to arbitrary MDPs. This enables CPL
to elegantly scale to high-dimensional and sequential RLHF problems while being
simpler than prior methods. | [
"Joey Hejna",
"Rafael Rafailov",
"Harshit Sikchi",
"Chelsea Finn",
"Scott Niekum",
"W. Bradley Knox",
"Dorsa Sadigh"
] | 2023-10-20 16:37:56 | http://arxiv.org/abs/2310.13639v1 | http://arxiv.org/pdf/2310.13639v1 | 2310.13639v1 |
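The regret-based objective described above reduces to a simple contrastive (logistic) loss on discounted sums of scaled policy log-probabilities over the preferred and rejected segments. A sketch consistent with that description; hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

def cpl_loss(logp_pos, logp_neg, alpha=0.1, gamma=0.99):
    """Contrastive preference loss over two behavior segments.
    logp_pos / logp_neg: (batch, T) log pi(a_t | s_t) along the preferred
    and rejected segments; no reward model or RL rollout is needed."""
    disc = gamma ** torch.arange(logp_pos.size(1), dtype=logp_pos.dtype)
    s_pos = (alpha * disc * logp_pos).sum(-1)
    s_neg = (alpha * disc * logp_neg).sum(-1)
    return -F.logsigmoid(s_pos - s_neg).mean()
```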
Analyzing the contribution of different passively collected data to predict Stress and Depression | The possibility of recognizing diverse aspects of human behavior and
environmental context from passively captured data motivates its use for mental
health assessment. In this paper, we analyze the contribution of different
passively collected sensor data types (WiFi, GPS, Social interaction, Phone
Log, Physical Activity, Audio, and Academic features) to predict daily
self-report stress and PHQ-9 depression scores. First, we compute 125 mid-level
features from the original raw data. These 125 features include groups of
features from the different sensor data types. Then, we evaluate the
contribution of each feature type by comparing the performance of Neural
Network models trained with all features against Neural Network models trained
with specific feature groups. Our results show that WiFi features (which encode
mobility patterns) and Phone Log features (which encode information correlated
with sleep patterns) provide significant information for stress and
depression prediction. | [
"Irene Bonafonte",
"Cristina Bustos",
"Abraham Larrazolo",
"Gilberto Lorenzo Martinez Luna",
"Adolfo Guzman Arenas",
"Xavier Baro",
"Isaac Tourgeman",
"Mercedes Balcells",
"Agata Lapedriza"
] | 2023-10-20 15:57:22 | http://arxiv.org/abs/2310.13607v1 | http://arxiv.org/pdf/2310.13607v1 | 2310.13607v1 |
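The group-wise evaluation described above amounts to training one model on all 125 features and one per sensor group, then comparing scores. A hedged sketch of that ablation loop; the column indices per group are hypothetical placeholders (the real feature layout is dataset-specific), and treating both targets as regression is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical column ranges for each sensor group within the 125 features.
feature_groups = {
    "wifi": range(0, 20), "gps": range(20, 40),
    "phone_log": range(40, 60), "activity": range(60, 85),
    "audio": range(85, 105), "academic": range(105, 125),
}

def group_contribution(X, y):
    """Compare a model trained on all features against models trained
    on each single feature group, via cross-validated scores."""
    scores = {"all": cross_val_score(MLPRegressor(max_iter=500), X, y).mean()}
    for name, cols in feature_groups.items():
        Xg = X[:, list(cols)]
        scores[name] = cross_val_score(MLPRegressor(max_iter=500), Xg, y).mean()
    return scores
```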
Towards equilibrium molecular conformation generation with GFlowNets | Sampling diverse, thermodynamically feasible molecular conformations plays a
crucial role in predicting properties of a molecule. In this paper, we propose
to use GFlowNet for sampling conformations of small molecules from the
Boltzmann distribution, as determined by the molecule's energy. The proposed
approach can be used in combination with energy estimation methods of different
fidelity and discovers a diverse set of low-energy conformations for highly
flexible drug-like molecules. We demonstrate that GFlowNet can reproduce
molecular potential energy surfaces by sampling proportionally to the Boltzmann
distribution. | [
"Alexandra Volokhova",
"Michał Koziarski",
"Alex Hernández-García",
"Cheng-Hao Liu",
"Santiago Miret",
"Pablo Lemos",
"Luca Thiede",
"Zichao Yan",
"Alán Aspuru-Guzik",
"Yoshua Bengio"
] | 2023-10-20 15:41:50 | http://arxiv.org/abs/2310.14782v1 | http://arxiv.org/pdf/2310.14782v1 | 2310.14782v1 |
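Sampling proportionally to the Boltzmann distribution, as described above, is what GFlowNet training objectives enforce at their optimum. A minimal sketch using the common trajectory-balance loss, with log R(x) = -beta * E(x); whether the paper uses exactly this objective is an assumption:

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, energy, beta=1.0):
    """Trajectory balance objective for a GFlowNet.

    log_pf / log_pb: summed forward / backward policy log-probabilities
    along one sampled trajectory ending in conformation x. The target
    reward is the Boltzmann weight R(x) = exp(-beta * E(x)), so
    log R(x) = -beta * E(x). At the optimum of this loss, terminal
    states are sampled proportionally to R(x).
    """
    log_reward = -beta * energy
    return (log_Z + log_pf - log_reward - log_pb) ** 2
```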
ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction | Predicting chemical reactions, a fundamental challenge in chemistry, involves
forecasting the resulting products from a given reaction process. Conventional
techniques, notably those employing Graph Neural Networks (GNNs), are often
limited by insufficient training data and their inability to utilize textual
information, limiting their applicability in real-world settings. In
this work, we propose ReLM, a novel framework that leverages the chemical
knowledge encoded in language models (LMs) to assist GNNs, thereby enhancing
the accuracy of real-world chemical reaction predictions. To further enhance
the model's robustness and interpretability, we incorporate the confidence
score strategy, enabling the LMs to self-assess the reliability of their
predictions. Our experimental results demonstrate that ReLM improves the
performance of state-of-the-art GNN-based methods across various chemical
reaction datasets, especially in out-of-distribution settings. Codes are
available at https://github.com/syr-cn/ReLM. | [
"Yaorui Shi",
"An Zhang",
"Enzhi Zhang",
"Zhiyuan Liu",
"Xiang Wang"
] | 2023-10-20 15:33:23 | http://arxiv.org/abs/2310.13590v1 | http://arxiv.org/pdf/2310.13590v1 | 2310.13590v1 |
Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering | Despite the impressive growth of the abilities of multilingual language
models, such as XLM-R and mT5, it has been shown that they still face
difficulties when tackling typologically-distant languages, particularly in the
low-resource setting. One obstacle for effective cross-lingual transfer is
variability in word-order patterns. It can be potentially mitigated via source-
or target-side word reordering, and numerous approaches to reordering have been
proposed. However, they rely on language-specific rules, work on the level of
POS tags, or only target the main clause, leaving subordinate clauses intact.
To address these limitations, we present a new powerful reordering method,
defined in terms of Universal Dependencies, that is able to learn fine-grained
word-order patterns conditioned on the syntactic context from a small amount of
annotated data and can be applied at all levels of the syntactic tree. We
conduct experiments on a diverse set of tasks and show that our method
consistently outperforms strong baselines over different language pairs and
model architectures. This performance advantage holds true in both zero-shot
and few-shot scenarios. | [
"Ofir Arviv",
"Dmitry Nikolaev",
"Taelin Karidi",
"Omri Abend"
] | 2023-10-20 15:25:53 | http://arxiv.org/abs/2310.13583v1 | http://arxiv.org/pdf/2310.13583v1 | 2310.13583v1 |
Tree Search in DAG Space with Model-based Reinforcement Learning for Causal Discovery | Identifying causal structure is central to many fields ranging from strategic
decision-making to biology and economics. In this work, we propose a
model-based reinforcement learning method for causal discovery based on tree
search, which builds directed acyclic graphs incrementally. We also formalize
and prove the correctness of an efficient algorithm for excluding edges that
would introduce cycles, which enables deeper discrete search and sampling in
DAG space. We evaluate our approach on two real-world tasks, achieving
substantially better performance than the state-of-the-art model-free method
and greedy search, constituting a promising advancement for combinatorial
methods. | [
"Victor-Alexandru Darvariu",
"Stephen Hailes",
"Mirco Musolesi"
] | 2023-10-20 15:14:18 | http://arxiv.org/abs/2310.13576v1 | http://arxiv.org/pdf/2310.13576v1 | 2310.13576v1 |
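The edge-exclusion idea above, cheaply rejecting any edge that would close a cycle during incremental DAG construction, can be realized by maintaining transitive ancestor/descendant sets. A sketch of one such bookkeeping scheme; the paper's exact algorithm may differ:

```python
class DAGState:
    """Incremental DAG construction with O(1) cycle checks via
    maintained ancestor/descendant sets (a simple dynamic transitive
    closure; treating this as the paper's method is an assumption)."""

    def __init__(self, n):
        self.desc = [set() for _ in range(n)]  # strict descendants
        self.anc = [set() for _ in range(n)]   # strict ancestors

    def creates_cycle(self, u, v):
        # Adding u -> v closes a cycle iff u is already reachable from v.
        return u == v or u in self.desc[v]

    def add_edge(self, u, v):
        assert not self.creates_cycle(u, v)
        heads = {u} | self.anc[u]   # u and everything above it
        tails = {v} | self.desc[v]  # v and everything below it
        for a in heads:
            self.desc[a] |= tails   # all heads now reach all tails
        for d in tails:
            self.anc[d] |= heads
```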
Progressive Dual Priori Network for Generalized Breast Tumor Segmentation | To promote the generalization ability of breast tumor segmentation models, as
well as to improve segmentation performance for breast tumors of smaller
size, low contrast, and irregular shape, we propose a progressive dual priori
network (PDPNet) to segment breast tumors from dynamic contrast-enhanced
magnetic resonance images (DCE-MRI) acquired at different sites. PDPNet first
crops tumor regions with a coarse-segmentation-based localization module;
the breast tumor mask is then progressively refined using weak semantic
priors and cross-scale correlation prior knowledge. To validate the
effectiveness of PDPNet, we compared it with several state-of-the-art methods
on multi-center datasets. The results showed that, compared with the
second-best method, the DSC, SEN, KAPPA, and HD95 of PDPNet improved by 3.63\%,
8.19\%, 5.52\%, and 3.66\%, respectively. In addition, through ablations, we
demonstrated that the proposed localization module decreases the influence
of normal tissues and therefore improves the generalization ability of the
model. The weak semantic priors allow the model to focus on tumor regions, avoiding
missed small and low-contrast tumors. The cross-scale correlation
priors are beneficial for promoting shape awareness for irregular
tumors. Integrating them in a unified framework thus improved multi-center
breast tumor segmentation performance.
"Li Wang",
"Lihui Wang",
"Zixiang Kuai",
"Lei Tang",
"Yingfeng Ou",
"Chen Ye",
"Yuemin Zhu"
] | 2023-10-20 15:12:06 | http://arxiv.org/abs/2310.13574v1 | http://arxiv.org/pdf/2310.13574v1 | 2310.13574v1 |
Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space | Double descent presents a counter-intuitive aspect within the machine
learning domain, and researchers have observed its manifestation in various
models and tasks. While some theoretical explanations have been proposed for
this phenomenon in specific contexts, an accepted theory to account for its
occurrence in deep learning remains yet to be established. In this study, we
revisit the phenomenon of double descent and demonstrate that its occurrence is
strongly influenced by the presence of noisy data. Through conducting a
comprehensive analysis of the feature space of learned representations, we
unveil that double descent arises in imperfect models trained with noisy data.
We argue that double descent is a consequence of the model first learning the
noisy data until interpolation, and then, through over-parameterization, gaining
implicit regularization and thereby the capability to separate the
information from the noise. We postulate that double descent should never occur
in well-regularized models. | [
"Yufei Gu",
"Xiaoqing Zheng",
"Tomaso Aste"
] | 2023-10-20 15:10:16 | http://arxiv.org/abs/2310.13572v1 | http://arxiv.org/pdf/2310.13572v1 | 2310.13572v1 |
Reward Shaping for Happier Autonomous Cyber Security Agents | As machine learning models become more capable, they have exhibited increased
potential in solving complex tasks. One of the most promising directions uses
deep reinforcement learning to train autonomous agents in computer network
defense tasks. This work studies the impact of the reward signal that is
provided to the agents when training for this task. Due to the nature of
cybersecurity tasks, the reward signal is typically 1) in the form of penalties
(e.g., when a compromise occurs), and 2) distributed sparsely across each
defense episode. Such reward characteristics are atypical of classic
reinforcement learning tasks where the agent is regularly rewarded for progress
(as opposed to being only occasionally penalized for failures). We investigate reward
shaping techniques that could bridge this gap so as to enable agents to train
more sample-efficiently and potentially converge to a better performance. We
first show that deep reinforcement learning algorithms are sensitive to the
magnitude of the penalties and their relative size. Then, we combine penalties
with positive external rewards and study their effect compared to penalty-only
training. Finally, we evaluate intrinsic curiosity as an internal positive
reward mechanism and discuss why it might not be as advantageous for high-level
network monitoring tasks. | [
"Elizabeth Bates",
"Vasilios Mavroudis",
"Chris Hicks"
] | 2023-10-20 15:04:42 | http://arxiv.org/abs/2310.13565v1 | http://arxiv.org/pdf/2310.13565v1 | 2310.13565v1 |
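One way to combine scaled penalties with positive external rewards, as studied above, is an environment wrapper around the defense task. A hedged sketch using the Gymnasium API, with illustrative coefficients; the paper's exact reward terms are assumptions here:

```python
import gymnasium as gym

class ShapedRewardWrapper(gym.Wrapper):
    """Softens sparse penalties and adds a small dense bonus for each
    step survived without compromise (an illustrative shaping scheme)."""

    def __init__(self, env, penalty_scale=0.1, step_bonus=0.05):
        super().__init__(env)
        self.penalty_scale = penalty_scale
        self.step_bonus = step_bonus

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if reward < 0:
            reward *= self.penalty_scale  # scale down the penalty magnitude
        else:
            reward += self.step_bonus     # dense positive progress signal
        return obs, reward, terminated, truncated, info
```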
Cache & Distil: Optimising API Calls to Large Language Models | Large-scale deployment of generative AI tools often depends on costly API
calls to a Large Language Model (LLM) to fulfil user queries. To curtail the
frequency of these calls, one can employ a smaller language model -- a student
-- which is continuously trained on the responses of the LLM. This student
gradually gains proficiency in independently handling an increasing number of
user requests, a process we term neural caching. The crucial element in neural
caching is a policy that decides which requests should be processed by the
student alone and which should be redirected to the LLM, subsequently aiding
the student's learning. In this study, we focus on classification tasks, and we
consider a range of classic active learning-based selection criteria as the
policy. Our experiments suggest that Margin Sampling and Query by Committee
bring consistent benefits across tasks and budgets. | [
"Guillem Ramírez",
"Matthias Lindemann",
"Alexandra Birch",
"Ivan Titov"
] | 2023-10-20 15:01:55 | http://arxiv.org/abs/2310.13561v1 | http://arxiv.org/pdf/2310.13561v1 | 2310.13561v1 |
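The neural-caching policy built on Margin Sampling, described above, can be sketched as follows: route a query to the LLM when the student's top-two class probabilities are too close, and answer locally otherwise (the threshold value is an illustrative assumption):

```python
import numpy as np

def route_request(student_probs, threshold=0.2):
    """Margin Sampling routing policy for neural caching: defer to the
    LLM when the student is uncertain (small top-two margin), and keep
    the LLM's answer as a new training label for the student."""
    top2 = np.sort(student_probs)[-2:]
    margin = top2[1] - top2[0]
    return "llm" if margin < threshold else "student"

# e.g. route_request(np.array([0.05, 0.48, 0.47])) -> "llm"
```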
On sample complexity of conditional independence testing with Von Mises estimator with application to causal discovery | Motivated by conditional independence testing, an essential step in
constraint-based causal discovery algorithms, we study the nonparametric Von
Mises estimator for the entropy of multivariate distributions built on a kernel
density estimator. We establish an exponential concentration inequality for
this estimator. We design a test for conditional independence (CI) based on our
estimator, called VM-CI, which achieves optimal parametric rates under
smoothness assumptions. Leveraging the exponential concentration, we prove a
tight upper bound for the overall error of VM-CI. This, in turn, allows us to
characterize the sample complexity of any constraint-based causal discovery
algorithm that uses VM-CI for CI tests. To the best of our knowledge, this is
the first sample complexity guarantee for causal discovery for continuous
variables. Furthermore, we empirically show that VM-CI outperforms other
popular CI tests in terms of either time or sample complexity (or both), which
translates to a better performance in structure learning as well. | [
"Fateme Jamshidi",
"Luca Ganassali",
"Negar Kiyavash"
] | 2023-10-20 14:52:25 | http://arxiv.org/abs/2310.13553v1 | http://arxiv.org/pdf/2310.13553v1 | 2310.13553v1 |
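A CI test built from entropy estimates, as above, reduces to thresholding the conditional mutual information I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z). A sketch using a plain plug-in KDE entropy estimate in place of the paper's Von Mises estimator; that substitution, and the fixed threshold, are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_entropy(sample):
    """Plug-in (resubstitution) entropy estimate from a Gaussian KDE.
    `sample` has shape (n, d). The Von Mises estimator adds correction
    terms on top of this plug-in version."""
    kde = gaussian_kde(sample.T)           # scipy expects shape (d, n)
    return -np.mean(np.log(kde(sample.T)))

def conditional_mi(x, y, z):
    """I(X;Y|Z) from the four joint entropies."""
    return (kde_entropy(np.hstack([x, z])) + kde_entropy(np.hstack([y, z]))
            - kde_entropy(z) - kde_entropy(np.hstack([x, y, z])))

def ci_test(x, y, z, threshold=0.05):
    """Declare X independent of Y given Z if the estimated
    conditional mutual information falls below the threshold."""
    return conditional_mi(x, y, z) < threshold
```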
Provable Benefits of Multi-task RL under Non-Markovian Decision Making Processes | In multi-task reinforcement learning (RL) under Markov decision processes
(MDPs), the presence of shared latent structures among multiple MDPs has been
shown to yield significant benefits to the sample efficiency compared to
single-task RL. In this paper, we investigate whether such a benefit can extend
to more general sequential decision making problems, such as partially
observable MDPs (POMDPs) and more general predictive state representations
(PSRs). The main challenge here is that the large and complex model space makes
it hard to identify what types of common latent structure of multi-task PSRs
can reduce the model complexity and improve sample efficiency. To this end, we
posit a joint model class for tasks and use the notion of $\eta$-bracketing
number to quantify its complexity; this number also serves as a general metric
to capture the similarity of tasks and thus determines the benefit of
multi-task over single-task RL. We first study upstream multi-task learning
over PSRs, in which all tasks share the same observation and action spaces. We
propose a provably efficient algorithm UMT-PSR for finding near-optimal
policies for all PSRs, and demonstrate that the advantage of multi-task
learning manifests if the joint model class of PSRs has a smaller
$\eta$-bracketing number compared to that of individual single-task learning.
We also provide several example multi-task PSRs with small $\eta$-bracketing
numbers, which reap the benefits of multi-task learning. We further investigate
downstream learning, in which the agent needs to learn a new target task that
shares some commonalities with the upstream tasks via a similarity constraint.
By exploiting the learned PSRs from the upstream, we develop a sample-efficient
algorithm that provably finds a near-optimal policy. | [
"Ruiquan Huang",
"Yuan Cheng",
"Jing Yang",
"Vincent Tan",
"Yingbin Liang"
] | 2023-10-20 14:50:28 | http://arxiv.org/abs/2310.13550v1 | http://arxiv.org/pdf/2310.13550v1 | 2310.13550v1 |
Towards Understanding Sycophancy in Language Models | Reinforcement learning from human feedback (RLHF) is a popular technique for
training high-quality AI assistants. However, RLHF may also encourage model
responses that match user beliefs over truthful responses, a behavior known as
sycophancy. We investigate the prevalence of sycophancy in RLHF-trained models
and whether human preference judgements are responsible. We first demonstrate
that five state-of-the-art AI assistants consistently exhibit sycophantic
behavior across four varied free-form text-generation tasks. To understand if
human preferences drive this broadly observed behavior of RLHF models, we
analyze existing human preference data. We find that when a response matches a
user's views, it is more likely to be preferred. Moreover, both humans and
preference models (PMs) prefer convincingly-written sycophantic responses over
correct ones a non-negligible fraction of the time. Optimizing model outputs
against PMs also sometimes sacrifices truthfulness in favor of sycophancy.
Overall, our results indicate that sycophancy is a general behavior of RLHF
models, likely driven in part by human preference judgements favoring
sycophantic responses. | [
"Mrinank Sharma",
"Meg Tong",
"Tomasz Korbak",
"David Duvenaud",
"Amanda Askell",
"Samuel R. Bowman",
"Newton Cheng",
"Esin Durmus",
"Zac Hatfield-Dodds",
"Scott R. Johnston",
"Shauna Kravec",
"Timothy Maxwell",
"Sam McCandlish",
"Kamal Ndousse",
"Oliver Rausch",
"Nicholas Schiefer",
"Da Yan",
"Miranda Zhang",
"Ethan Perez"
] | 2023-10-20 14:46:48 | http://arxiv.org/abs/2310.13548v1 | http://arxiv.org/pdf/2310.13548v1 | 2310.13548v1 |
Positive-Unlabeled Node Classification with Structure-aware Graph Learning | Node classification on graphs is an important research problem with many
applications. Real-world graph data sets may not be balanced and accurate as
assumed by most existing works. A challenging setting is positive-unlabeled
(PU) node classification, where labeled nodes are restricted to positive nodes.
It has diverse applications, e.g., pandemic prediction or network anomaly
detection. Existing works on PU node classification overlook information in the
graph structure, which can be critical. In this paper, we propose to better
utilize graph structure for PU node classification. We first propose a
distance-aware PU loss that uses homophily in graphs to introduce more accurate
supervision. We also propose a regularizer to align the model with graph
structure. Theoretical analysis shows that minimizing the proposed loss also
leads to minimizing the expected loss with both positive and negative labels.
Extensive empirical evaluation on diverse graph data sets demonstrates its
superior performance over existing state-of-the-art methods. | [
"Hansi Yang",
"Yongqi Zhang",
"Quanming Yao",
"James Kwok"
] | 2023-10-20 14:32:54 | http://arxiv.org/abs/2310.13538v1 | http://arxiv.org/pdf/2310.13538v1 | 2310.13538v1 |
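For context, the classical non-negative PU risk estimator that losses like the above build on can be sketched as follows; using it as a stand-in for the paper's distance-aware, homophily-weighted PU loss is an assumption:

```python
import torch

def nn_pu_loss(scores_pos, scores_unl, prior,
               loss_fn=torch.nn.functional.softplus):
    """Non-negative PU risk estimator (Kiryo et al. style).

    scores_*: raw model outputs for labeled-positive and unlabeled nodes;
    `prior` is the class prior of positives. softplus(-z) is the logistic
    loss for predicting positive, softplus(z) for predicting negative.
    """
    risk_pos = prior * loss_fn(-scores_pos).mean()
    # Negative-class risk on unlabeled data, corrected by positives,
    # clamped at zero to keep the estimator non-negative.
    risk_neg = loss_fn(scores_unl).mean() - prior * loss_fn(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)
```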
Technical Report for ICCV 2023 Visual Continual Learning Challenge: Continuous Test-time Adaptation for Semantic Segmentation | The goal of the challenge is to develop a test-time adaptation (TTA) method,
which could adapt the model to gradually changing domains in video sequences
for semantic segmentation task. It is based on a synthetic driving video
dataset - SHIFT. The source model is trained on images taken during daytime in
clear weather. Domain changes at test-time are mainly caused by varying weather
conditions and times of day. The TTA methods are evaluated in each image
sequence (video) separately, meaning the model is reset to the source model
state before the next sequence. Images come one by one and a prediction has to
be made at the arrival of each frame. Each sequence is composed of 401 images
and starts with the source domain, then gradually drifts to a different one
(changing weather or time of day) until the middle of the sequence. In the
second half of the sequence, the domain gradually shifts back to the source
one. Ground truth data is available only for the validation split of the SHIFT
dataset, in which there are only six sequences that start and end with the
source domain. We conduct an analysis specifically on those sequences. Ground
truth data for the test split, on which the developed TTA methods are evaluated for
leaderboard ranking, are not publicly available.
The proposed solution secured 3rd place in the challenge and received an
innovation award. Contrary to the solutions that scored better, we did not use
any external pretrained models or specialized data augmentations, to keep the
solutions as general as possible. We have focused on analyzing the
distributional shift and developing a method that could adapt to changing data
dynamics and generalize across different scenarios. | [
"Damian Sójka",
"Yuyang Liu",
"Dipam Goswami",
"Sebastian Cygert",
"Bartłomiej Twardowski",
"Joost van de Weijer"
] | 2023-10-20 14:20:21 | http://arxiv.org/abs/2310.13533v1 | http://arxiv.org/pdf/2310.13533v1 | 2310.13533v1 |
Controlled Randomness Improves the Performance of Transformer Models | During the pre-training step of natural language models, the main objective
is to learn a general representation of the pre-training dataset, usually
requiring large amounts of textual data to capture the complexity and diversity
of natural language. Contrasting this, in most cases, the size of the data
available to solve the specific downstream task is often dwarfed by the
aforementioned pre-training dataset, especially in domains where data is
scarce. We introduce controlled randomness, i.e. noise, into the training
process to improve the fine-tuning of language models, and explore the effect of
targeted noise added to the parameters of these models. We find that
adding such noise can improve the performance in our two downstream tasks of
joint named entity recognition and relation extraction and text summarization. | [
"Tobias Deußer",
"Cong Zhao",
"Wolfgang Krämer",
"David Leonhard",
"Christian Bauckhage",
"Rafet Sifa"
] | 2023-10-20 14:12:55 | http://arxiv.org/abs/2310.13526v1 | http://arxiv.org/pdf/2310.13526v1 | 2310.13526v1 |
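A minimal sketch of injecting controlled noise into model parameters before fine-tuning, scaling each matrix's perturbation by its own standard deviation; the uniform distribution, the scale, and the schedule are assumptions rather than the paper's exact recipe:

```python
import torch

def add_controlled_noise(model, scale=0.15):
    """Perturb each weight matrix with uniform noise proportional to
    that matrix's standard deviation (a sketch of 'controlled
    randomness' applied once before fine-tuning)."""
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() > 1:  # skip biases and layer-norm vectors
                noise = torch.empty_like(param).uniform_(-0.5, 0.5)
                param.add_(noise * param.std() * scale)
```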
Variational measurement-based quantum computation for generative modeling | Measurement-based quantum computation (MBQC) offers a fundamentally unique
paradigm to design quantum algorithms. Indeed, due to the inherent randomness
of quantum measurements, the natural operations in MBQC are not deterministic
and unitary, but are rather augmented with probabilistic byproducts. Yet, the
main algorithmic use of MBQC so far has been to completely counteract this
probabilistic nature in order to simulate unitary computations expressed in the
circuit model. In this work, we propose designing MBQC algorithms that embrace
this inherent randomness and treat the random byproducts in MBQC as a resource
for computation. As a natural application where randomness can be beneficial,
we consider generative modeling, a task in machine learning centered around
generating complex probability distributions. To address this task, we propose
a variational MBQC algorithm equipped with control parameters that allow to
directly adjust the degree of randomness to be admitted in the computation. Our
numerical findings indicate that this additional randomness can lead to
significant gains in learning performance in certain generative modeling tasks.
These results highlight the potential advantages in exploiting the inherent
randomness of MBQC and motivate further research into MBQC-based algorithms. | [
"Arunava Majumder",
"Marius Krumm",
"Tina Radkohl",
"Hendrik Poulsen Nautrup",
"Sofiene Jerbi",
"Hans J. Briegel"
] | 2023-10-20 14:11:58 | http://arxiv.org/abs/2310.13524v1 | http://arxiv.org/pdf/2310.13524v1 | 2310.13524v1 |
Feature Selection and Hyperparameter Fine-tuning in Artificial Neural Networks for Wood Quality Classification | Quality classification of wood boards is an essential task in the sawmill
industry, which is still usually performed by human operators in small to
medium-sized companies in developing countries. Machine learning algorithms have been
successfully employed to investigate the problem, offering a more affordable
alternative compared to other solutions. However, such approaches usually
present some drawbacks regarding the proper selection of their hyperparameters.
Moreover, the models are susceptible to the features extracted from wood board
images, which influence the induction of the model and, consequently, its
generalization power. Therefore, in this paper, we investigate the problem of
simultaneously tuning the hyperparameters of an artificial neural network (ANN)
as well as selecting a subset of characteristics that better describes the wood
board quality. Experiments were conducted over a private dataset composed of
images obtained from a sawmill industry and described using different feature
descriptors. The predictive performance of the model was compared against five
baseline methods as well as a random search performing both ANN
hyperparameter tuning and feature selection. Experimental results suggest that
hyperparameters should be adjusted according to the feature set, or the
features should be selected considering the hyperparameter values. In summary,
the best predictive performance, i.e., a balanced accuracy of $0.80$, was
achieved in two distinct scenarios: (i) performing only feature selection, and
(ii) performing both tasks concomitantly. Thus, we suggest that at least one of
the two approaches should be considered in the context of industrial
applications. | [
"Mateus Roder",
"Leandro Aparecido Passos",
"João Paulo Papa",
"André Luis Debiaso Rossi"
] | 2023-10-20 13:32:45 | http://arxiv.org/abs/2310.13490v1 | http://arxiv.org/pdf/2310.13490v1 | 2310.13490v1 |
Personalized identification, prediction, and stimulation of neural oscillations via data-driven models of epileptic network dynamics | Neural oscillations are considered to be brain-specific signatures of
information processing and communication in the brain. They also reflect
pathological brain activity in neurological disorders, thus offering a basis
for diagnoses and forecasting. Epilepsy is one of the most common neurological
disorders, characterized by abnormal synchronization and desynchronization of
the oscillations in the brain. About one third of epilepsy cases are
pharmacoresistant, which emphasizes the need for novel therapy approaches,
where brain stimulation appears to be a promising therapeutic option. The
development of brain stimulation paradigms, however, is often based on
generalized assumptions about brain dynamics, although it is known that
significant differences occur between patients and brain states. We developed a
framework to extract individualized predictive models of epileptic network
dynamics directly from EEG data. The models are based on the dominant coherent
oscillations and their dynamical coupling, thus combining an established
interpretation of dynamics through neural oscillations, with accurate
patient-specific features. We show that it is possible to build a direct
correspondence between the models of brain-network dynamics under periodic
driving, and the mechanism of neural entrainment via periodic stimulation. When
our framework is applied to EEG recordings of patients in status epilepticus (a
brain state of perpetual seizure activity), it yields a model-driven predictive
analysis of the therapeutic performance of periodic brain stimulation. This
suggests that periodic brain stimulation can drive pathological states of
epileptic network dynamics towards a healthy functional brain state. | [
"Tena Dubcek",
"Debora Ledergerber",
"Jana Thomann",
"Giovanna Aiello",
"Marc Serra-Garcia",
"Lukas Imbach",
"Rafael Polania"
] | 2023-10-20 13:21:31 | http://arxiv.org/abs/2310.13480v1 | http://arxiv.org/pdf/2310.13480v1 | 2310.13480v1 |
Segment, Select, Correct: A Framework for Weakly-Supervised Referring Segmentation | Referring Image Segmentation (RIS) - the problem of identifying objects in
images through natural language sentences - is a challenging task currently
mostly solved through supervised learning. However, while collecting referred
annotation masks is a time-consuming process, the few existing
weakly-supervised and zero-shot approaches fall significantly short in
performance compared to fully-supervised learning ones. To bridge the
performance gap without mask annotations, we propose a novel weakly-supervised
framework that tackles RIS by decomposing it into three steps: obtaining
instance masks for the object mentioned in the referencing instruction
(segment), using zero-shot learning to select a potentially correct mask for
the given instruction (select), and bootstrapping a model which allows for
fixing the mistakes of zero-shot selection (correct). In our experiments, using
only the first two steps (zero-shot segment and select) outperforms other
zero-shot baselines by as much as 19%, while our full method improves upon this
much stronger baseline and sets the new state-of-the-art for weakly-supervised
RIS, reducing the gap between the weakly-supervised and fully-supervised
methods in some cases from around 33% to as little as 14%. Code is available at
https://github.com/fgirbal/segment-select-correct. | [
"Francisco Eiras",
"Kemal Oksuz",
"Adel Bibi",
"Philip H. S. Torr",
"Puneet K. Dokania"
] | 2023-10-20 13:20:17 | http://arxiv.org/abs/2310.13479v2 | http://arxiv.org/pdf/2310.13479v2 | 2310.13479v2 |
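The zero-shot "select" step described above can be sketched as scoring each candidate instance crop against the referring expression with a vision-language model. This assumes CLIP as the scorer (the paper may use a different model or scoring rule) and crops given as HxWx3 uint8 arrays:

```python
import torch
import clip  # OpenAI CLIP package
from PIL import Image

def select_mask(image_crops, expression, device="cpu"):
    """Return the index of the candidate crop whose CLIP image
    embedding best matches the referring expression."""
    model, preprocess = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([expression]).to(device)
    images = torch.stack(
        [preprocess(Image.fromarray(c)) for c in image_crops]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)   # (N, D)
        txt_feat = model.encode_text(text)      # (1, D)
        sims = torch.nn.functional.cosine_similarity(img_feat, txt_feat)
    return int(sims.argmax())
```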
An Analysis of $D^α$ seeding for $k$-means | One of the most popular clustering algorithms is the celebrated $D^\alpha$
seeding algorithm (also known as $k$-means++ when $\alpha=2$) by Arthur and
Vassilvitskii (2007), who showed that it guarantees in expectation an
$O(2^{2\alpha}\cdot \log k)$-approximate solution to the ($k$,$\alpha$)-means
cost (where euclidean distances are raised to the power $\alpha$) for any
$\alpha\ge 1$. More recently, Balcan, Dick, and White (2018) observed
experimentally that using $D^\alpha$ seeding with $\alpha>2$ can lead to a
better solution with respect to the standard $k$-means objective (i.e. the
$(k,2)$-means cost).
In this paper, we provide a rigorous understanding of this phenomenon. For
any $\alpha>2$, we show that $D^\alpha$ seeding guarantees in expectation an
approximation factor of $$ O_\alpha \left((g_\alpha)^{2/\alpha}\cdot
\left(\frac{\sigma_{\mathrm{max}}}{\sigma_{\mathrm{min}}}\right)^{2-4/\alpha}\cdot
(\min\{\ell,\log k\})^{2/\alpha}\right)$$ with respect to the standard
$k$-means cost of any underlying clustering; where $g_\alpha$ is a parameter
capturing the concentration of the points in each cluster,
$\sigma_{\mathrm{max}}$ and $\sigma_{\mathrm{min}}$ are the maximum and minimum
standard deviation of the clusters around their means, and $\ell$ is the number
of distinct mixing weights in the underlying clustering (after rounding them to
the nearest power of $2$). We complement these results by some lower bounds
showing that the dependency on $g_\alpha$ and
$\sigma_{\mathrm{max}}/\sigma_{\mathrm{min}}$ is tight.
Finally, we provide an experimental confirmation of the effects of the
aforementioned parameters when using $D^\alpha$ seeding. Further, we
corroborate the observation that $\alpha>2$ can indeed improve the $k$-means
cost compared to $D^2$ seeding, and that this advantage remains even if we run
Lloyd's algorithm after the seeding. | [
"Etienne Bamas",
"Sai Ganesh Nagarajan",
"Ola Svensson"
] | 2023-10-20 13:15:18 | http://arxiv.org/abs/2310.13474v1 | http://arxiv.org/pdf/2310.13474v1 | 2310.13474v1 |
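For reference, $D^\alpha$ seeding itself is a one-line generalization of k-means++: sample each new center with probability proportional to the $\alpha$-th power of its distance to the nearest chosen center. A minimal sketch:

```python
import numpy as np

def d_alpha_seeding(X, k, alpha=2.0, rng=np.random.default_rng(0)):
    """D^alpha seeding: first center uniform at random, then each next
    center drawn with probability proportional to d(x)^alpha, where
    d(x) is the distance to the nearest chosen center. alpha = 2
    recovers classical k-means++."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        probs = d ** alpha
        probs /= probs.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```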
Stable Nonconvex-Nonconcave Training via Linear Interpolation | This paper presents a theoretical analysis of linear interpolation as a
principled method for stabilizing (large-scale) neural network training. We
argue that instabilities in the optimization process are often caused by the
nonmonotonicity of the loss landscape and show how linear interpolation can
help by leveraging the theory of nonexpansive operators. We construct a new
optimization scheme called relaxed approximate proximal point (RAPP), which is
the first explicit method to achieve last iterate convergence rates for the
full range of cohypomonotone problems. The construction extends to constrained
and regularized settings. By replacing the inner optimizer in RAPP we
rediscover the family of Lookahead algorithms for which we establish
convergence in cohypomonotone problems even when the base optimizer is taken to
be gradient descent ascent. The range of cohypomonotone problems in which
Lookahead converges is further expanded by exploiting that Lookahead inherits
the properties of the base optimizer. We corroborate the results with
experiments on generative adversarial networks which demonstrates the benefits
of the linear interpolation present in both RAPP and Lookahead. | [
"Thomas Pethick",
"Wanyun Xie",
"Volkan Cevher"
] | 2023-10-20 12:45:12 | http://arxiv.org/abs/2310.13459v1 | http://arxiv.org/pdf/2310.13459v1 | 2310.13459v1 |
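The linear interpolation analyzed above is exactly what Lookahead performs: every k inner steps, the slow weights are pulled a fraction alpha of the way toward the fast weights, and the inner loop restarts from there. A minimal sketch as a wrapper around any PyTorch optimizer:

```python
import torch

class Lookahead:
    """Wrap a base optimizer; every k steps, linearly interpolate the
    slow weights toward the fast (inner) weights and sync back."""

    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base, self.k, self.alpha, self.steps = base_optimizer, k, alpha, 0
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()
        self.steps += 1
        if self.steps % self.k == 0:
            for group, slow_group in zip(self.base.param_groups, self.slow):
                for p, s in zip(group["params"], slow_group):
                    s.add_(p.detach() - s, alpha=self.alpha)  # slow += a*(fast-slow)
                    p.data.copy_(s)  # restart the inner loop from the slow weights
```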
Correspondence learning between morphologically different robots through task demonstrations | We observe a large variety of robots in terms of their bodies, sensors, and
actuators. Given the commonalities in the skill sets, teaching each skill to
each different robot independently is inefficient and not scalable when the
large variety in the robotic landscape is considered. If we can learn the
correspondences between the sensorimotor spaces of different robots, we can
expect a skill that is learned in one robot can be more directly and easily
transferred to the other robots. In this paper, we propose a method to learn
correspondences between robots that have significant differences in their
morphologies: a fixed-based manipulator robot with joint control and a
differential drive mobile robot. For this, both robots are first given
demonstrations that achieve the same tasks. A common latent representation is
formed while learning the corresponding policies. After this initial learning
stage, the observation of a new task execution by one robot becomes sufficient
to generate a latent space representation pertaining to the other robot to
achieve the same task. We verified our system in a set of experiments where the
correspondence between two simulated robots is learned (1) when the robots need
to follow the same paths to achieve the same task, (2) when the robots need to
follow different trajectories to achieve the same task, and (3) when
complexities of the required sensorimotor trajectories are different for the
robots considered. We also provide a proof-of-the-concept realization of
correspondence learning between a real manipulator robot and a simulated mobile
robot. | [
"Hakan Aktas",
"Yukie Nagai",
"Minoru Asada",
"Erhan Oztop",
"Emre Ugur"
] | 2023-10-20 12:42:06 | http://arxiv.org/abs/2310.13458v1 | http://arxiv.org/pdf/2310.13458v1 | 2310.13458v1 |
Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption | We propose a theoretical framework to analyze semi-supervised classification
under the low density separation assumption in a high-dimensional regime. In
particular, we introduce QLDS, a linear classification model, where the low
density separation assumption is implemented via quadratic margin maximization.
The algorithm has an explicit solution with rich theoretical properties, and we
show that particular cases of our algorithm are the least-square support vector
machine in the supervised case, the spectral clustering in the fully
unsupervised regime, and a class of semi-supervised graph-based approaches. As
such, QLDS establishes a smooth bridge between these supervised and
unsupervised learning methods. Using recent advances in the random matrix
theory, we formally derive a theoretical evaluation of the classification error
in the asymptotic regime. As an application, we derive a hyperparameter
selection policy that finds the best balance between the supervised and the
unsupervised terms of our learning criterion. Finally, we provide extensive
illustrations of our framework, as well as an experimental study on several
benchmarks to demonstrate that QLDS, while being computationally more
efficient, improves over cross-validation for hyperparameter selection,
indicating a high promise of the usage of random matrix theory for
semi-supervised model selection. | [
"Vasilii Feofanov",
"Malik Tiomoko",
"Aladin Virmaux"
] | 2023-10-20 11:46:12 | http://arxiv.org/abs/2310.13434v1 | http://arxiv.org/pdf/2310.13434v1 | 2310.13434v1 |
Y-Diagonal Couplings: Approximating Posteriors with Conditional Wasserstein Distances | In inverse problems, many conditional generative models approximate the
posterior measure by minimizing a distance between the joint measure and its
learned approximation. While this approach also controls the distance between
the posterior measures in the case of the Kullback-Leibler divergence, it does
not hold true for the Wasserstein distance. We will introduce a conditional
Wasserstein distance with a set of restricted couplings that equals the
expected Wasserstein distance of the posteriors. By deriving its dual, we find
a rigorous way to motivate the loss of conditional Wasserstein GANs. We outline
conditions under which the vanilla and the conditional Wasserstein distance
coincide. Furthermore, we will show numerical examples where training with the
conditional Wasserstein distance yields favorable properties for posterior
sampling. | [
"Jannis Chemseddine",
"Paul Hagemann",
"Christian Wald"
] | 2023-10-20 11:46:05 | http://arxiv.org/abs/2310.13433v1 | http://arxiv.org/pdf/2310.13433v1 | 2310.13433v1 |
HRTF Interpolation using a Spherical Neural Process Meta-Learner | Several individualization methods have recently been proposed to estimate a
subject's Head-Related Transfer Function (HRTF) using convenient input
modalities such as anthropometric measurements or pinnae photographs. There
exists a need for adaptively correcting the estimation error committed by such
methods using a few data point samples from the subject's HRTF, acquired using
acoustic measurements or perceptual feedback. To this end, we introduce a
Convolutional Conditional Neural Process meta-learner specialized in HRTF error
interpolation. In particular, the model includes a Spherical Convolutional
Neural Network component to accommodate the spherical geometry of HRTF data. It
also exploits potential symmetries between the HRTF's left and right channels
about the median axis. In this work, we evaluate the proposed model's
performance purely on time-aligned spectrum interpolation grounds under a
simplified setup where a generic population-mean HRTF forms the initial
estimates prior to corrections instead of individualized ones. The trained
model achieves up to 3 dB relative error reduction compared to state-of-the-art
interpolation methods despite being trained using only 85 subjects. This
improvement translates up to nearly a halving of the data point count required
to achieve comparable accuracy, in particular from 50 to 28 points to reach an
average of -20 dB relative error per interpolated feature. Moreover, we show
that the trained model provides well-calibrated uncertainty estimates.
Accordingly, such estimates can inform the sequential decision problem of
acquiring as few correcting HRTF data points as needed to meet a desired level
of HRTF individualization accuracy. | [
"Etienne Thuillier",
"Craig Jin",
"Vesa Välimäki"
] | 2023-10-20 11:41:54 | http://arxiv.org/abs/2310.13430v1 | http://arxiv.org/pdf/2310.13430v1 | 2310.13430v1 |
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning | Federated Learning (FL) is a promising distributed learning approach that
enables multiple clients to collaboratively train a shared global model.
However, recent studies show that FL is vulnerable to various poisoning
attacks, which can degrade the performance of global models or introduce
backdoors into them. In this paper, we first conduct a comprehensive study on
prior FL attacks and detection methods. The results show that all existing
detection methods are only effective against limited and specific attacks. Most
detection methods suffer from high false positives, which lead to significant
performance degradation, especially in non-independent and identically
distributed (non-IID) settings. To address these issues, we propose FLTracer,
the first FL attack provenance framework to accurately detect various attacks
and trace the attack time, objective, type, and poisoned location of updates.
Different from existing methodologies that rely solely on cross-client anomaly
detection, we propose a Kalman filter-based cross-round detection to identify
adversaries by seeking the behavior changes before and after the attack. Thus,
this makes it resilient to data heterogeneity and is effective even in non-IID
settings. To further improve the accuracy of our detection method, we employ
four novel features and capture their anomalies with the joint decisions.
Extensive evaluations show that FLTracer achieves an average true positive rate
of over $96.88\%$ at an average false positive rate of less than $2.67\%$,
significantly outperforming SOTA detection methods. \footnote{Code is available
at \url{https://github.com/Eyr3/FLTracer}.} | [
"Xinyu Zhang",
"Qingyu Liu",
"Zhongjie Ba",
"Yuan Hong",
"Tianhang Zheng",
"Feng Lin",
"Li Lu",
"Kui Ren"
] | 2023-10-20 11:24:38 | http://arxiv.org/abs/2310.13424v1 | http://arxiv.org/pdf/2310.13424v1 | 2310.13424v1 |
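A simplified stand-in for the cross-round detection described above: a per-client scalar Kalman filter tracks one update statistic (e.g., its norm) across rounds and flags behavior changes via the normalized innovation. FLTracer itself combines four features with joint decisions; the single-feature version and the gate value are assumptions:

```python
import numpy as np

class CrossRoundDetector:
    """Random-walk Kalman filter over one per-round update statistic;
    a large normalized innovation signals a behavior change."""

    def __init__(self, q=1e-3, r=1e-2, gate=3.0):
        self.q, self.r, self.gate = q, r, gate  # process/obs. noise, gate
        self.x, self.p = None, 1.0              # state estimate, variance

    def score(self, z):
        if self.x is None:
            self.x = z                          # initialize on first round
            return 0.0
        p_pred = self.p + self.q                # predict
        innovation = z - self.x
        s = p_pred + self.r                     # innovation variance
        k = p_pred / s                          # Kalman gain
        self.x += k * innovation                # update state
        self.p = (1 - k) * p_pred
        return abs(innovation) / np.sqrt(s)     # normalized residual

    def is_anomalous(self, z):
        return self.score(z) > self.gate
```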
BRFL: A Blockchain-based Byzantine-Robust Federated Learning Model | With the increasing importance of machine learning, the privacy and security
of training data have become critical. Federated learning, which stores data in
distributed nodes and shares only model parameters, has gained significant
attention for addressing this concern. However, a challenge arises in federated
learning due to the Byzantine Attack Problem, where malicious local models can
compromise the global model's performance during aggregation. This article
proposes the Blockchain-based Byzantine-Robust Federated Learning (BRFL) model
that combines federated learning with blockchain technology. This integration
enables traceability of malicious models and provides incentives for locally
trained clients. Our approach involves selecting the aggregation node based on
Pearson's correlation coefficient, and we perform spectral clustering and
calculate the average gradient within each cluster, validating its accuracy
using the local datasets of the aggregation nodes. Experimental results on public
datasets demonstrate the superior Byzantine robustness of our secure
aggregation algorithm compared to other baseline Byzantine-robust aggregation
methods, and prove the effectiveness of our proposed model in addressing the resource
consumption problem.
"Yang Li",
"Chunhe Xia",
"Chang Li",
"Tianbo Wang"
] | 2023-10-20 10:21:50 | http://arxiv.org/abs/2310.13403v1 | http://arxiv.org/pdf/2310.13403v1 | 2310.13403v1 |
Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability | Bayesian inference allows expressing the uncertainty of posterior belief
under a probabilistic model given prior information and the likelihood of the
evidence. Predominantly, the likelihood function is only implicitly established
by a simulator posing the need for simulation-based inference (SBI). However,
the existing algorithms can yield overconfident posteriors (Hermans *et al.*,
2022) defeating the whole purpose of credibility if the uncertainty
quantification is inaccurate. We propose to include a calibration term directly
into the training objective of the neural model in selected amortized SBI
techniques. By introducing a relaxation of the classical formulation of
calibration error we enable end-to-end backpropagation. The proposed method is
not tied to any particular neural model and brings moderate computational
overhead compared to the benefits it brings. It is directly applicable to
existing computational pipelines allowing reliable black-box posterior
inference. We empirically show on six benchmark problems that the proposed
method achieves competitive or better results in terms of coverage and expected
posterior density than the previously existing approaches. | [
"Maciej Falkiewicz",
"Naoya Takeishi",
"Imahn Shekhzadeh",
"Antoine Wehenkel",
"Arnaud Delaunoy",
"Gilles Louppe",
"Alexandros Kalousis"
] | 2023-10-20 10:20:45 | http://arxiv.org/abs/2310.13402v1 | http://arxiv.org/pdf/2310.13402v1 | 2310.13402v1 |
Equivariant Deep Weight Space Alignment | Permutation symmetries of deep networks make simple operations like model
averaging and similarity estimation challenging. In many cases, aligning the
weights of the networks, i.e., finding optimal permutations between their
weights, is necessary. More generally, weight alignment is essential for a wide
range of applications, from model merging, through exploring the optimization
landscape of deep neural networks, to defining meaningful distance functions
between neural networks. Unfortunately, weight alignment is an NP-hard problem.
Prior research has mainly focused on solving relaxed versions of the alignment
problem, leading to either time-consuming methods or sub-optimal solutions. To
accelerate the alignment process and improve its quality, we propose a novel
framework aimed at learning to solve the weight alignment problem, which we
name Deep-Align. To that end, we first demonstrate that weight alignment
adheres to two fundamental symmetries and then, propose a deep architecture
that respects these symmetries. Notably, our framework does not require any
labeled data. We provide a theoretical analysis of our approach and evaluate
Deep-Align on several types of network architectures and learning setups. Our
experimental results indicate that a feed-forward pass with Deep-Align produces
better or equivalent alignments compared to those produced by current
optimization algorithms. Additionally, our alignments can be used as an
initialization for other methods to gain even better solutions with a
significant speedup in convergence. | [
"Aviv Navon",
"Aviv Shamsian",
"Ethan Fetaya",
"Gal Chechik",
"Nadav Dym",
"Haggai Maron"
] | 2023-10-20 10:12:06 | http://arxiv.org/abs/2310.13397v1 | http://arxiv.org/pdf/2310.13397v1 | 2310.13397v1 |
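For contrast with the learned approach above, the classical relaxed baseline aligns one layer at a time by solving a linear assignment over unit similarities. A minimal sketch for a single pair of weight matrices:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_layer(W_a, W_b):
    """Find the permutation of W_b's hidden units that best matches
    W_a, by maximizing the inner products between unit weight vectors.
    W_a, W_b: arrays of shape (num_units, in_dim)."""
    cost = -W_a @ W_b.T              # negate: assignment solvers minimize
    _, perm = linear_sum_assignment(cost)
    return perm                      # unit i of W_a matches unit perm[i] of W_b
```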
RL-X: A Deep Reinforcement Learning Library (not only) for RoboCup | This paper presents the new Deep Reinforcement Learning (DRL) library RL-X
and its application to the RoboCup Soccer Simulation 3D League and classic DRL
benchmarks. RL-X provides a flexible and easy-to-extend codebase with
self-contained single directory algorithms. Through the fast JAX-based
implementations, RL-X can reach up to 4.5x speedups compared to well-known
frameworks like Stable-Baselines3. | [
"Nico Bohlinger",
"Klaus Dorer"
] | 2023-10-20 10:06:03 | http://arxiv.org/abs/2310.13396v1 | http://arxiv.org/pdf/2310.13396v1 | 2310.13396v1 |
Optimal Best Arm Identification with Fixed Confidence in Restless Bandits | We study best arm identification in a restless multi-armed bandit setting
with finitely many arms. The discrete-time data generated by each arm forms a
homogeneous Markov chain taking values in a common, finite state space. The
state transitions in each arm are captured by an ergodic transition probability
matrix (TPM) that is a member of a single-parameter exponential family of TPMs.
The real-valued parameters of the arm TPMs are unknown and belong to a given
space. Given a function $f$ defined on the common state space of the arms, the
goal is to identify the best arm -- the arm with the largest average value of
$f$ evaluated under the arm's stationary distribution -- with the fewest number
of samples, subject to an upper bound on the decision's error probability
(i.e., the fixed-confidence regime). A lower bound on the growth rate of the
expected stopping time is established in the asymptote of a vanishing error
probability. Furthermore, a policy for best arm identification is proposed, and
its expected stopping time is proved to have an asymptotic growth rate that
matches the lower bound. It is demonstrated that tracking the long-term
behavior of a certain Markov decision process and its state-action visitation
proportions are the key ingredients in analyzing the converse and achievability
bounds. It is shown that under every policy, the state-action visitation
proportions satisfy a specific approximate flow conservation constraint and
that these proportions match the optimal proportions dictated by the lower
bound under any asymptotically optimal policy. The prior studies on best arm
identification in restless bandits focus on independent observations from the
arms, rested Markov arms, and restless Markov arms with known arm TPMs. In
contrast, this work is the first to study best arm identification in restless
bandits with unknown arm TPMs. | [
"P. N. Karthik",
"Vincent Y. F. Tan",
"Arpan Mukherjee",
"Ali Tajer"
] | 2023-10-20 10:04:05 | http://arxiv.org/abs/2310.13393v1 | http://arxiv.org/pdf/2310.13393v1 | 2310.13393v1 |
Learning Successor Representations with Distributed Hebbian Temporal Memory | This paper presents a novel approach to address the challenge of online
hidden representation learning for decision-making under uncertainty in
non-stationary, partially observable environments. The proposed algorithm,
Distributed Hebbian Temporal Memory (DHTM), is based on factor graph formalism
and a multicomponent neuron model. DHTM aims to capture sequential data
relationships and make cumulative predictions about future observations,
forming Successor Representation (SR). Inspired by neurophysiological models of
the neocortex, the algorithm utilizes distributed representations, sparse
transition matrices, and local Hebbian-like learning rules to overcome the
instability and slow learning process of traditional temporal memory algorithms
like RNN and HMM. Experimental results demonstrate that DHTM outperforms
classical LSTM and performs comparably to more advanced RNN-like algorithms,
speeding up Temporal Difference learning for SR in changing environments.
Additionally, we compare the SRs produced by DHTM to another biologically
inspired HMM-like algorithm, CSCG. Our findings suggest that DHTM is a
promising approach for addressing the challenges of online hidden
representation learning in dynamic environments. | [
"Evgenii Dzhivelikian",
"Petr Kuderov",
"Aleksandr I. Panov"
] | 2023-10-20 10:03:14 | http://arxiv.org/abs/2310.13391v1 | http://arxiv.org/pdf/2310.13391v1 | 2310.13391v1 |
Music Augmentation and Denoising For Peak-Based Audio Fingerprinting | Audio fingerprinting is a well-established solution for song identification
from short recording excerpts. Popular methods rely on the extraction of sparse
representations, generally spectral peaks, and have proven to be accurate,
fast, and scalable to large collections. However, real-world applications of
audio identification often happen in noisy environments, which can cause these
systems to fail. In this work, we tackle this problem by introducing and
releasing a new audio augmentation pipeline that adds noise to music snippets
in a realistic way, by stochastically mimicking real-world scenarios. We then
propose and release a deep learning model that removes noisy components from
spectrograms in order to improve peak-based fingerprinting systems' accuracy.
We show that the addition of our model improves the identification performance
of commonly used audio fingerprinting systems, even under noisy conditions. | [
"Kamil Akesbi",
"Dorian Desblancs",
"Benjamin Martin"
] | 2023-10-20 09:56:22 | http://arxiv.org/abs/2310.13388v1 | http://arxiv.org/pdf/2310.13388v1 | 2310.13388v1 |
Assumption violations in causal discovery and the robustness of score matching | When domain knowledge is limited and experimentation is restricted by
ethical, financial, or time constraints, practitioners turn to observational
causal discovery methods to recover the causal structure, exploiting the
statistical properties of their data. Because causal discovery without further
assumptions is an ill-posed problem, each algorithm comes with its own set of
usually untestable assumptions, some of which are hard to meet in real
datasets. Motivated by these considerations, this paper extensively benchmarks
the empirical performance of recent causal discovery methods on observational
i.i.d. data generated under different background conditions, allowing for
violations of the critical assumptions required by each selected approach. Our
experimental findings show that score matching-based methods demonstrate
surprising performance in the false positive and false negative rate of the
inferred graph in these challenging scenarios, and we provide theoretical
insights into their performance. This work is also the first effort to
benchmark the stability of causal discovery algorithms with respect to the
values of their hyperparameters. Finally, we hope this paper will set a new
standard for the evaluation of causal discovery methods and can serve as an
accessible entry point for practitioners interested in the field, highlighting
the empirical implications of different algorithm choices. | [
"Francesco Montagna",
"Atalanti A. Mastakouri",
"Elias Eulig",
"Nicoletta Noceti",
"Lorenzo Rosasco",
"Dominik Janzing",
"Bryon Aragam",
"Francesco Locatello"
] | 2023-10-20 09:56:07 | http://arxiv.org/abs/2310.13387v1 | http://arxiv.org/pdf/2310.13387v1 | 2310.13387v1 |
Tuna: Instruction Tuning using Feedback from Large Language Models | Instruction tuning of open-source large language models (LLMs) like LLaMA,
using direct outputs from more powerful LLMs such as Instruct-GPT and GPT-4,
has proven to be a cost-effective way to align model behaviors with human
preferences. However, the instruction-tuned model has only seen one response
per instruction, lacking the knowledge of potentially better responses. In this
paper, we propose finetuning an instruction-tuned LLM using our novel
\textit{probabilistic ranking} and \textit{contextual ranking} approaches to
increase the likelihood of generating better responses. Probabilistic ranking
enables the instruction-tuned model to inherit the relative rankings of
high-quality and low-quality responses from the teacher LLM. On the other hand,
learning with contextual ranking allows the model to refine its own response
distribution using the contextual understanding ability of stronger LLMs.
Furthermore, we apply probabilistic ranking and contextual ranking sequentially
to the instruction-tuned LLM. The resulting model, which we call \textbf{Tuna},
consistently improves the performance on Super Natural Instructions (119 test
tasks), LMentry (25 test tasks), Vicuna QA, and can even obtain better results
than several strong reinforcement learning baselines. Our code and data are
available at \url{ https://github.com/microsoft/LMOps}. | [
"Haoran Li",
"Yiran Liu",
"Xingxing Zhang",
"Wei Lu",
"Furu Wei"
] | 2023-10-20 09:55:06 | http://arxiv.org/abs/2310.13385v1 | http://arxiv.org/pdf/2310.13385v1 | 2310.13385v1 |
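Probabilistic ranking, as described above, can be sketched as a pairwise margin loss over the model's sequence log-likelihoods of k responses ordered best-to-worst by the teacher LLM; the margin formulation below is an assumption about the exact loss:

```python
import torch

def probabilistic_ranking_loss(logps, margin=0.1):
    """logps: tensor [..., k] of sequence log-likelihoods for k
    responses, sorted best-to-worst by the teacher LLM. Each better
    response should out-score each worse one by at least `margin`."""
    k = logps.shape[-1]
    losses = [torch.relu(margin - (logps[..., i] - logps[..., j]))
              for i in range(k) for j in range(i + 1, k)]
    return torch.stack(losses).mean()
```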
Salted Inference: Enhancing Privacy while Maintaining Efficiency of Split Inference in Mobile Computing | Split inference partitions a deep neural network (DNN) to run the early part
at the edge and the later part in the cloud. This meets two key requirements
for on-device machine learning: input privacy and compute efficiency. Still, an
open question in split inference is output privacy, given that the output of a
DNN is visible to the cloud. While encrypted computing can protect output
privacy, it mandates extensive computation and communication resources. In this
paper, we introduce "Salted DNNs": a novel method that lets clients control the
semantic interpretation of DNN output at inference time while maintaining
accuracy and efficiency very close to that of a standard DNN. Experimental
evaluations conducted on both image and sensor data show that Salted DNNs
achieve classification accuracy very close to standard DNNs, particularly when
the salted layer is positioned within the early part to meet the requirements
of split inference. Our method is general and can be applied to various DNNs.
We open-source our code and results, as a benchmark for future studies. | [
"Mohammad Malekzadeh",
"Fahim Kawsar"
] | 2023-10-20 09:53:55 | http://arxiv.org/abs/2310.13384v1 | http://arxiv.org/pdf/2310.13384v1 | 2310.13384v1 |
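The client-controlled semantics of the output can be illustrated with a salt-derived permutation of class indices that only the client can invert. Note this is a post-hoc simplification: the paper trains the DNN with a salted layer rather than permuting logits after the fact, and the key derivation below is an assumption:

```python
import hashlib
import numpy as np

def salted_permutation(num_classes, salt: bytes, client_key: bytes = b"secret"):
    """Derive a class-index permutation from a client-held salt and key,
    so the cloud sees only permuted outputs it cannot interpret."""
    digest = hashlib.sha256(salt + client_key).digest()
    seed = int.from_bytes(digest[:4], "big")
    return np.random.default_rng(seed).permutation(num_classes)

def decode_logits(cloud_logits, perm):
    """Client-side inverse mapping: if permuted[j] = original[perm[j]],
    then original = permuted[argsort(perm)]."""
    inv = np.argsort(perm)
    return cloud_logits[..., inv]
```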
Accelerated sparse Kernel Spectral Clustering for large scale data clustering problems | An improved version of the sparse multiway kernel spectral clustering (KSC)
is presented in this brief. The original algorithm is derived from weighted
kernel principal component analysis (KPCA) formulated within the primal-dual
least-squares support vector machine (LS-SVM) framework. Sparsity is then achieved
by combining the incomplete Cholesky decomposition (ICD) based
low-rank approximation of the kernel matrix with the so-called reduced set
method. The original ICD-based sparse KSC algorithm was reported to be
computationally far too demanding, especially when applied to the large-scale data
clustering problems it was actually designed for, which has so far prevented it
from gaining more than theoretical relevance. This is altered by the
modifications reported in this brief, which drastically improve the computational
characteristics. Solving the alternative, symmetrized version of the
computationally most demanding core eigenvalue problem eliminates the need
to form and compute the SVD of large matrices during model construction. As a
result, clustering problems that were reported to require hours can now be solved
within seconds without altering the results. Furthermore, sparsity is also
improved significantly, leading to a more compact model representation that
further increases not only the computational efficiency but also the
descriptive power. These modifications make the original, previously only theoretically relevant
ICD-based sparse KSC algorithm applicable to large-scale practical clustering
problems. Theoretical results and improvements are demonstrated by
computational experiments on carefully selected synthetic data as well as on
real life problems such as image segmentation. | [
"Mihaly Novak",
"Rocco Langone",
"Carlos Alzate",
"Johan Suykens"
] | 2023-10-20 09:51:42 | http://arxiv.org/abs/2310.13381v1 | http://arxiv.org/pdf/2310.13381v1 | 2310.13381v1 |
Physics-Informed Graph Convolutional Networks: Towards a generalized framework for complex geometries | Since the seminal work of [9] and their Physics-Informed neural networks
(PINNs), many efforts have been conducted towards solving partial differential
equations (PDEs) with Deep Learning models. However, some challenges remain,
for instance the extension of such models to complex three-dimensional
geometries, and a study on how such approaches could be combined to classical
numerical solvers. In this work, we justify the use of graph neural networks
for these problems, based on the similarity between these architectures and the
meshes used in traditional numerical techniques for solving partial
differential equations. After proving an issue with the Physics-Informed
framework for complex geometries during the computation of PDE residuals, an
alternative procedure is proposed that combines classical numerical solvers with
the Physics-Informed framework. Finally, we propose an implementation of this
approach, which we test on a three-dimensional problem on an irregular geometry. | [
"Marien Chenaud",
"José Alves",
"Frédéric Magoulès"
] | 2023-10-20 09:46:12 | http://arxiv.org/abs/2310.14948v1 | http://arxiv.org/pdf/2310.14948v1 | 2310.14948v1 |
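One way to picture the proposed combination is to evaluate PDE residuals through a discrete operator assembled by a classical method on the mesh graph, instead of through autograd derivatives; the Poisson example and names below are illustrative assumptions:

```python
import torch

def poisson_residual_loss(u: torch.Tensor, L: torch.Tensor, f: torch.Tensor):
    # u: nodal values predicted by the graph network on the mesh vertices.
    # L: discrete Laplacian assembled by a classical numerical method
    #    (e.g., finite elements) on the same mesh graph.
    # Residual of -Laplace(u) = f via the solver's operator, sidestepping
    # autograd PDE derivatives on the irregular geometry.
    return ((L @ u - f) ** 2).mean()
```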
SigFormer: Signature Transformers for Deep Hedging | Deep hedging is a promising direction in quantitative finance, incorporating
models and techniques from deep learning research. While such models yield
excellent hedging strategies, they inherently require careful treatment in
designing neural network architectures. To mitigate such difficulties, we introduce
SigFormer, a novel deep learning model that combines the power of path
signatures and transformers to handle sequential data, particularly in cases
with irregularities. Path signatures effectively capture complex data patterns,
while transformers provide superior sequential attention. Our proposed model is
empirically compared to existing methods on synthetic data, showcasing faster
learning and enhanced robustness, especially in the presence of irregular
underlying price data. Additionally, we validate our model performance through
a real-world backtest on hedging the S&P 500 index, demonstrating positive
outcomes. | [
"Anh Tong",
"Thanh Nguyen-Tang",
"Dongeun Lee",
"Toan Tran",
"Jaesik Choi"
] | 2023-10-20 09:25:35 | http://arxiv.org/abs/2310.13369v1 | http://arxiv.org/pdf/2310.13369v1 | 2310.13369v1 |
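For readers unfamiliar with path signatures, the depth-2 signature of a piecewise-linear path can be computed directly with a standard formula — independent of SigFormer's actual implementation:

```python
import numpy as np

def signature_depth2(path: np.ndarray):
    """Depth-2 signature of a piecewise-linear path of shape (T, d)."""
    dx = np.diff(path, axis=0)              # segment increments, (T-1, d)
    level1 = dx.sum(axis=0)                 # total increment, x_T - x_0
    before = np.cumsum(dx, axis=0) - dx     # increments strictly before step t
    # Iterated integral: cross terms between ordered segments
    # plus the within-segment contribution (1/2) dx (x) dx.
    level2 = before.T @ dx + 0.5 * np.einsum('ti,tj->ij', dx, dx)
    return level1, level2
```

These signature terms capture (order-sensitive) pattern statistics of the path, which is what makes them robust to irregular sampling of the underlying prices.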
VFedMH: Vertical Federated Learning for Training Multi-party Heterogeneous Models | Vertical Federated Learning (VFL) has gained increasing attention as a novel
training paradigm that integrates sample alignment and feature union. However,
existing VFL methods face challenges when dealing with heterogeneous local
models among participants, which affects optimization convergence and
generalization. To address this issue, this paper proposes a novel approach
called Vertical Federated learning for training Multi-party Heterogeneous
models (VFedMH). VFedMH focuses on aggregating the embeddings of each
participant's knowledge instead of intermediate results during forward
propagation. In VFedMH, the active party, which possesses both the labels and
features of the samples, securely aggregates the local embeddings to obtain
global knowledge embeddings and sends them to the passive parties. The passive
parties, which own only the features of the samples, then use the global embeddings
to propagate
forward on their local heterogeneous networks. However, the passive party does
not own the labels, so the local model gradient cannot be calculated locally.
To overcome this limitation, the active party assists the passive party in
computing its local heterogeneous model gradients. Then, each participant
trains their local model using the heterogeneous model gradients. The objective
is to minimize the loss value of their respective local heterogeneous models.
Additionally, the paper provides a theoretical analysis of VFedMH's convergence
performance. Extensive experiments are conducted to demonstrate that VFedMH can
simultaneously train multiple heterogeneous models with heterogeneous
optimization and outperform some recent methods in model performance. | [
"Shuo Wang",
"Keke Gai",
"Jing Yu",
"Liehuang Zhu"
] | 2023-10-20 09:22:51 | http://arxiv.org/abs/2310.13367v1 | http://arxiv.org/pdf/2310.13367v1 | 2310.13367v1 |
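A minimal sketch of one such exchange, under simplifying assumptions: the received embeddings are detached tensors and are combined by a plain sum (VFedMH secures this aggregation step); all names are illustrative:

```python
import torch

def active_party_round(local_embeddings, active_head, labels, loss_fn):
    # 1) Aggregate every participant's knowledge embedding into a global one
    #    (plain sum here; the paper applies secure aggregation).
    z_global = torch.stack(local_embeddings, dim=0).sum(dim=0)
    z_global.requires_grad_(True)
    # 2) Only the active party holds labels, so it computes the loss and
    #    returns the gradient w.r.t. the embedding; each passive party then
    #    backpropagates that gradient through its own heterogeneous network.
    loss = loss_fn(active_head(z_global), labels)
    grad_z = torch.autograd.grad(loss, z_global)[0]
    return z_global.detach(), grad_z
```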
Dissecting Causal Biases | Accurately measuring discrimination in machine learning-based automated
decision systems is required to address the vital issue of fairness between
subpopulations and/or individuals. Any bias in measuring discrimination can
lead to either amplification or underestimation of the true value of
discrimination. This paper focuses on a class of bias originating in the way
training data is generated and/or collected. We call this class causal biases
and use tools from the field of causality to formally define and analyze such
biases. Four sources of bias are considered, namely, confounding, selection,
measurement, and interaction. The main contribution of this paper is to
provide, for each source of bias, a closed-form expression in terms of the
model parameters. This makes it possible to analyze the behavior of each source
of bias, in particular, in which cases they are absent and in which other cases
they are maximized. We hope that the provided characterizations help the
community better understand the sources of bias in machine learning
applications. | [
"Rūta Binkytė",
"Sami Zhioua",
"Yassine Turki"
] | 2023-10-20 09:12:10 | http://arxiv.org/abs/2310.13364v1 | http://arxiv.org/pdf/2310.13364v1 | 2310.13364v1 |
Towards General Error Diagnosis via Behavioral Testing in Machine Translation | Behavioral testing offers a crucial means of diagnosing linguistic errors and
assessing the capabilities of NLP models. However, applying behavioral testing to
machine translation (MT) systems is challenging as it generally requires human
efforts to craft references for evaluating the translation quality of such
systems on newly generated test cases. Existing works in behavioral testing of
MT systems circumvent this by evaluating translation quality without
references, but this restricts diagnosis to specific types of errors, such as
incorrect translation of single numeric or currency words. In order to diagnose
general errors, this paper proposes a new Bilingual Translation Pair Generation
based Behavior Testing (BTPGBT) framework for conducting behavioral testing of
MT systems. The core idea of BTPGBT is to employ a novel bilingual translation
pair generation (BTPG) approach that automates the construction of high-quality
test cases and their pseudoreferences. Experimental results on various MT
systems demonstrate that BTPGBT could provide comprehensive and accurate
behavioral testing results for general error diagnosis, which further leads to
several insightful findings. Our code and data are available at
https://github.com/wujunjie1998/BTPGBT. | [
"Junjie Wu",
"Lemao Liu",
"Dit-Yan Yeung"
] | 2023-10-20 09:06:41 | http://arxiv.org/abs/2310.13362v1 | http://arxiv.org/pdf/2310.13362v1 | 2310.13362v1 |
DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency for Federated Learning with Static and Streaming Dataset | Federated Learning (FL) is a distributed learning paradigm that can
coordinate heterogeneous edge devices to perform model training without sharing
private data. While prior works have focused on analyzing FL convergence with
respect to hyperparameters like batch size and aggregation frequency, the joint
effects of adjusting these parameters on model performance, training time, and
resource consumption have been overlooked, especially when facing dynamic data
streams and network characteristics. This paper introduces novel analytical
models and optimization algorithms that leverage the interplay between batch
size and aggregation frequency to navigate the trade-offs among convergence,
cost, and completion time for dynamic FL training. We establish a new
convergence bound for training error considering heterogeneous datasets across
devices and derive closed-form solutions for co-optimized batch size and
aggregation frequency that are consistent across all devices. Additionally, we
design an efficient algorithm for assigning different batch configurations
across devices, improving model accuracy and addressing the heterogeneity of
both data and system characteristics. Further, we propose an adaptive control
algorithm that dynamically estimates network states, efficiently samples
appropriate data batches, and effectively adjusts batch sizes and aggregation
frequency on the fly. Extensive experiments demonstrate the superiority of our
offline optimal solutions and online adaptive algorithm. | [
"Weijie Liu",
"Xiaoxi Zhang",
"Jingpu Duan",
"Carlee Joe-Wong",
"Zhi Zhou",
"Xu Chen"
] | 2023-10-20 08:36:12 | http://arxiv.org/abs/2310.14906v1 | http://arxiv.org/pdf/2310.14906v1 | 2310.14906v1 |
DeepFDR: A Deep Learning-based False Discovery Rate Control Method for Neuroimaging Data | Voxel-based multiple testing is widely used in neuroimaging data analysis.
Traditional false discovery rate (FDR) control methods often ignore the spatial
dependence among the voxel-based tests and thus suffer from substantial loss of
testing power. While recent spatial FDR control methods have emerged, their
validity and optimality remain questionable when handling the complex spatial
dependencies of the brain. Concurrently, deep learning methods have
revolutionized image segmentation, a task closely related to voxel-based
multiple testing. In this paper, we propose DeepFDR, a novel spatial FDR
control method that leverages unsupervised deep learning-based image
segmentation to address the voxel-based multiple testing problem. Numerical
studies, including comprehensive simulations and Alzheimer's disease FDG-PET
image analysis, demonstrate DeepFDR's superiority over existing methods.
DeepFDR not only excels in FDR control and effectively diminishes the false
nondiscovery rate, but also boasts exceptional computational efficiency highly
suited for tackling large-scale neuroimaging data. | [
"Taehyo Kim",
"Hai Shu",
"Qiran Jia",
"Mony de Leon"
] | 2023-10-20 08:27:13 | http://arxiv.org/abs/2310.13349v1 | http://arxiv.org/pdf/2310.13349v1 | 2310.13349v1 |
Boosting for Bounding the Worst-class Error | This paper tackles the problem of the worst-class error rate, instead of the
standard error rate averaged over all classes. For example, a three-class
classification task with class-wise error rates of 10\%, 10\%, and 40\% has a
worst-class error rate of 40\%, whereas the average is 20\% under the
class-balanced condition. The worst-class error is important in many
applications. For example, in a medical image classification task, it would not
be acceptable for the malignant tumor class to have a 40\% error rate, while
the benign and healthy classes have 10\% error rates. We propose a boosting
algorithm that guarantees an upper bound of the worst-class training error and
derive its generalization bound. Experimental results show that the algorithm
lowers worst-class test error rates while avoiding overfitting to the training
set. | [
"Yuya Saito",
"Shinnosuke Matsuo",
"Seiichi Uchida",
"Daiki Suehiro"
] | 2023-10-20 07:49:10 | http://arxiv.org/abs/2310.14890v1 | http://arxiv.org/pdf/2310.14890v1 | 2310.14890v1 |
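As a hypothetical flavor of the idea — upweighting samples of the currently worst class inside a boosting round — consider the sketch below; this is an illustration only, not the paper's algorithm with its stated guarantees:

```python
import numpy as np

def reweight_for_worst_class(weights, y, errors_per_class, gamma=1.0):
    # Upweight samples of the class with the highest current training error,
    # steering the next weak learner toward the worst class.
    worst = int(np.argmax(errors_per_class))
    w = weights * np.where(y == worst, np.exp(gamma), 1.0)
    return w / w.sum()
```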
Non-Negative Spherical Relaxations for Universe-Free Multi-Matching and Clustering | We propose a novel non-negative spherical relaxation for optimization
problems over binary matrices with injectivity constraints, which in particular
has applications in multi-matching and clustering. We relax respective binary
matrix constraints to the (high-dimensional) non-negative sphere. To optimize
our relaxed problem, we use a conditional power iteration method to iteratively
improve the objective function, while at the same time sweeping over a continuous
scalar parameter that is (indirectly) related to the universe size (or number
of clusters). As opposed to existing procedures that require fixing the integer
universe size before optimization, our method automatically adjusts the
analogous continuous parameter. Furthermore, while our approach shares
similarities with spectral multi-matching and spectral clustering, our
formulation has the strong advantage that we do not rely on additional
post-processing procedures to obtain binary results. Our method shows
compelling results in various multi-matching and clustering settings, even when
compared to methods that use the ground truth universe size (or number of
clusters). | [
"Johan Thunberg",
"Florian Bernard"
] | 2023-10-20 07:01:29 | http://arxiv.org/abs/2310.13311v1 | http://arxiv.org/pdf/2310.13311v1 | 2310.13311v1 |
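The core optimizer can be pictured as power iteration interleaved with a projection onto the non-negative unit sphere; a bare-bones sketch that omits the paper's sweep over the universe-size parameter:

```python
import numpy as np

def conditional_power_iteration(A, x0, iters=100):
    # Power iteration with projection onto the non-negative unit sphere
    # (the scalar-parameter sweep tied to the universe size is omitted).
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = np.maximum(A @ x, 0.0)   # improve objective, keep non-negativity
        n = np.linalg.norm(x)
        if n == 0.0:
            break
        x /= n                       # project back onto the sphere
    return x
```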
Test-Time Self-Adaptive Small Language Models for Question Answering | Recent instruction-finetuned large language models (LMs) have achieved
notable performances in various tasks, such as question-answering (QA).
However, despite their ability to memorize a vast amount of general knowledge
across diverse tasks, they might be suboptimal on specific tasks due to their
limited capacity to transfer and adapt knowledge to target tasks. Moreover,
further finetuning LMs with labeled datasets is often infeasible due to their
absence, and it is also questionable whether smaller LMs with limited knowledge
can be adapted using only unlabeled test data. In this work, we show and
investigate the capabilities of smaller self-adaptive LMs, only with unlabeled
test data. In particular, we first stochastically generate multiple answers,
and then ensemble them while filtering out low-quality samples to mitigate
noise from inaccurate labels. Our proposed self-adaptation strategy demonstrates
significant performance improvements on benchmark QA datasets with higher
robustness across diverse prompts, enabling LMs to stay stable. Code is
available at: https://github.com/starsuzi/T-SAS. | [
"Soyeong Jeong",
"Jinheon Baek",
"Sukmin Cho",
"Sung Ju Hwang",
"Jong C. Park"
] | 2023-10-20 06:49:32 | http://arxiv.org/abs/2310.13307v1 | http://arxiv.org/pdf/2310.13307v1 | 2310.13307v1 |
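The generate-filter-ensemble loop fits in a few lines; `generate`, the thresholds, and the answer normalization below are illustrative assumptions, not the paper's exact procedure:

```python
from collections import Counter

def self_adaptive_answer(generate, question, n_samples=16, min_count=2):
    # `generate` stochastically samples one answer string (e.g., temperature
    # sampling from the small LM); all names here are placeholders.
    answers = [generate(question) for _ in range(n_samples)]
    counts = Counter(a.strip().lower() for a in answers)
    # Filter out low-quality (rare) samples to mitigate label noise,
    # then ensemble the survivors by majority vote.
    kept = {a: c for a, c in counts.items() if c >= min_count} or dict(counts)
    return max(kept, key=kept.get)
```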
Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting | Automatic response forecasting for news media plays a crucial role in
enabling content producers to efficiently predict the impact of news releases
and prevent unexpected negative outcomes such as social conflict and moral
injury. To effectively forecast responses, it is essential to develop measures
that leverage the social dynamics and contextual information surrounding
individuals, especially in cases where explicit profiles or historical actions
of the users are limited (referred to as lurkers). As shown in a previous
study, 97% of all tweets are produced by only the most active 25% of users.
However, existing approaches have limited exploration of how to best process
and utilize these important features. To address this gap, we propose a novel
framework, named SocialSense, that leverages a large language model to induce a
belief-centered graph on top of an existing social network, along with
graph-based propagation to capture social dynamics. We hypothesize that the
induced graph that bridges the gap between distant users who share similar
beliefs allows the model to effectively capture the response patterns. Our
method surpasses the existing state of the art in experimental evaluations for both
zero-shot and supervised settings, demonstrating its effectiveness in response
forecasting. Moreover, the analysis reveals the framework's capability to
effectively handle unseen user and lurker scenarios, further highlighting its
robustness and practical applicability. | [
"Chenkai Sun",
"Jinning Li",
"Yi R. Fung",
"Hou Pong Chan",
"Tarek Abdelzaher",
"ChengXiang Zhai",
"Heng Ji"
] | 2023-10-20 06:17:02 | http://arxiv.org/abs/2310.13297v1 | http://arxiv.org/pdf/2310.13297v1 | 2310.13297v1 |
CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training | A large-scale image-text pair dataset has greatly contributed to the
development of vision-language pre-training (VLP) models, which enable
zero-shot or few-shot classification without costly annotation. However, in the
medical domain, the scarcity of data remains a significant challenge for
developing a powerful VLP model. In this paper, we tackle the lack of
image-text data in chest X-ray by expanding image-label pairs into image-text
pairs via general prompts and by utilizing multiple images and multiple sections in a
radiologic report. We also design two contrastive losses, named ICL and TCL,
for learning study-level characteristics of medical images and reports,
respectively. Our model outperforms the state-of-the-art models trained under
the same conditions. Also, the enlarged dataset improves the discriminative
power of our pre-trained model for classification, at the cost of a marginal
drop in retrieval performance. Code is available at https://github.com/kakaobrain/cxr-clip. | [
"Kihyun You",
"Jawook Gu",
"Jiyeon Ham",
"Beomhee Park",
"Jiho Kim",
"Eun Kyoung Hong",
"Woonhyunk Baek",
"Byungseok Roh"
] | 2023-10-20 05:44:55 | http://arxiv.org/abs/2310.13292v1 | http://arxiv.org/pdf/2310.13292v1 | 2310.13292v1 |
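The backbone of this kind of contrastive pretraining is a symmetric InfoNCE objective over paired embeddings; the sketch below shows the generic recipe only, not the paper's specific ICL/TCL losses:

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(img_emb, txt_emb, temperature=0.07):
    # Generic CLIP-style objective: matched image/report pairs sit on the
    # diagonal of the similarity matrix and serve as positives.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```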
Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks | Large language models have revolutionized the field of NLP by achieving
state-of-the-art performance on various tasks. However, there is a concern that
these models may disclose information in the training data. In this study, we
focus on the summarization task and investigate the membership inference (MI)
attack: given a sample and black-box access to a model's API, it is possible to
determine if the sample was part of the training data. We exploit text
similarity and the model's resistance to document modifications as potential MI
signals and evaluate their effectiveness on widely used datasets. Our results
demonstrate that summarization models are at risk of exposing data membership,
even in cases where the reference summary is not available. Furthermore, we
discuss several safeguards for training summarization models to protect against
MI attacks and discuss the inherent trade-off between privacy and utility. | [
"Ruixiang Tang",
"Gord Lueck",
"Rodolfo Quispe",
"Huseyin A Inan",
"Janardhan Kulkarni",
"Xia Hu"
] | 2023-10-20 05:44:39 | http://arxiv.org/abs/2310.13291v1 | http://arxiv.org/pdf/2310.13291v1 | 2310.13291v1 |
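One of the two signals — resistance to document modifications — can be sketched as below, where `model`, `perturb`, and `sim` (a text-similarity function such as ROUGE) are assumed callables rather than the paper's exact setup:

```python
import numpy as np

def resistance_signal(model, doc, perturb, sim, n=8):
    # Members tend to be more "resistant": their summaries change less when
    # the input document is lightly modified.
    base = model(doc)
    sims = [sim(base, model(perturb(doc))) for _ in range(n)]
    return float(np.mean(sims))   # higher -> more likely a training member
```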
Learning Recurrent Models with Temporally Local Rules | Fitting generative models to sequential data typically involves two recursive
computations through time, one forward and one backward. The latter could be a
computation of the loss gradient (as in backpropagation through time), or an
inference algorithm (as in the RTS/Kalman smoother). The backward pass in
particular is computationally expensive (since it is inherently serial and
cannot exploit GPUs), and difficult to map onto biological processes.
Work-arounds have been proposed; here we explore a very different one:
requiring the generative model to learn the joint distribution over current and
previous states, rather than merely the transition probabilities. We show on
toy datasets that different architectures employing this principle can learn
aspects of the data typically requiring the backward pass. | [
"Azwar Abdulsalam",
"Joseph G. Makin"
] | 2023-10-20 05:30:30 | http://arxiv.org/abs/2310.13284v1 | http://arxiv.org/pdf/2310.13284v1 | 2310.13284v1 |
FedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning | Federated learning (FL) is an emerging machine learning paradigm in which a
central server coordinates multiple participants (a.k.a. FL clients) to train a
model collaboratively on decentralized data with privacy protection. This
paradigm requires all clients to train models with the same structure
(homogeneous models). In practice, FL often faces statistical
heterogeneity, system heterogeneity and model heterogeneity challenges. These
challenging issues inspire the field of Model-Heterogeneous Personalized
Federated Learning (MHPFL) which aims to train a personalized and heterogeneous
local model for each FL client. Existing MHPFL approaches cannot achieve
satisfactory model performance, acceptable computational overhead and efficient
communication simultaneously. To bridge this gap, we propose a novel
computation- and communication-efficient model-heterogeneous personalized
Federated learning framework based on LoRA tuning (FedLoRA). It is designed to
incorporate a homogeneous small adapter for each client's heterogeneous local
model. Both models are trained following the proposed iterative training for
global-local knowledge exchange. The homogeneous small local adapters are sent
to the FL server to be aggregated into a global adapter. In this way, FL
clients can train heterogeneous local models without incurring high computation
and communication costs. We theoretically prove the non-convex convergence rate
of FedLoRA. Extensive experiments on two real-world datasets demonstrate that
FedLoRA outperforms six state-of-the-art baselines, beating the best approach
by 1.35% in test accuracy while reducing computation overhead by a factor of
11.81 and communication cost by a factor of 7.41. | [
"Liping Yi",
"Han Yu",
"Gang Wang",
"Xiaoguang Liu"
] | 2023-10-20 05:24:28 | http://arxiv.org/abs/2310.13283v1 | http://arxiv.org/pdf/2310.13283v1 | 2310.13283v1 |
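A minimal sketch of the homogeneous-adapter idea: each heterogeneous local model carries a small LoRA adapter of identical shape, and only the adapters are averaged on the server. The hyperparameters and the plain FedAvg step are assumptions for illustration:

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    # Homogeneous low-rank adapter attached to each heterogeneous local model.
    def __init__(self, dim_in, dim_out, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, dim_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(dim_out, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return (x @ self.A.t()) @ self.B.t() * self.scale

def aggregate_adapters(adapters):
    # Server-side FedAvg over the small homogeneous adapters only; the large
    # heterogeneous local models never leave the clients.
    with torch.no_grad():
        avg_A = torch.stack([a.A for a in adapters]).mean(dim=0)
        avg_B = torch.stack([a.B for a in adapters]).mean(dim=0)
        for a in adapters:
            a.A.copy_(avg_A)
            a.B.copy_(avg_B)
```

Because only the rank-r matrices travel to the server, both the communication payload and the server-side aggregation cost stay small regardless of the local model sizes.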
An Event based Prediction Suffix Tree | This article introduces the Event based Prediction Suffix Tree (EPST), a
biologically inspired, event-based prediction algorithm. The EPST learns a
model online based on the statistics of an event-based input and can make
predictions over multiple overlapping patterns. The EPST uses a representation
specific to event based data, defined as a portion of the power set of event
subsequences within a short context window. It is explainable, and possesses
many promising properties such as fault tolerance, resistance to event noise,
and the capability for one-shot learning. The computational features of
the EPST are examined in a synthetic data prediction task with additive event
noise, event jitter, and dropout. The resulting algorithm outputs predicted
projections for the near term future of the signal, which may be applied to
tasks such as event based anomaly detection or pattern recognition. | [
"Evie Andrew",
"Travis Monk",
"André van Schaik"
] | 2023-10-20 05:07:45 | http://arxiv.org/abs/2310.14944v1 | http://arxiv.org/pdf/2310.14944v1 | 2310.14944v1 |
InvGC: Robust Cross-Modal Retrieval by Inverse Graph Convolution | Over recent decades, significant advancements in cross-modal retrieval have
been driven mainly by breakthroughs in visual and linguistic modeling. However, a
recent study shows that multi-modal data representations tend to cluster within
a limited convex cone (the representation degeneration problem), which hinders
retrieval performance due to the inseparability of these representations. In
our study, we first empirically validate the presence of the representation
degeneration problem across multiple cross-modal benchmarks and methods. Next,
to address it, we introduce a novel method, called InvGC, a post-processing
technique inspired by graph convolution and average pooling. Specifically,
InvGC defines the graph topology within the datasets and then applies graph
convolution in a subtractive manner. This method effectively separates
representations by increasing the distances between data points. To improve the
efficiency and effectiveness of InvGC, we propose an advanced graph topology,
LocalAdj, which only aims to increase the distances between each data point and
its nearest neighbors. To understand why InvGC works, we present a detailed
theoretical analysis, proving that the lower bound of recall will be improved
after deploying InvGC. Extensive empirical results show that InvGC and InvGC
w/LocalAdj significantly mitigate the representation degeneration problem,
thereby enhancing retrieval performance.
Our code is available at
https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval | [
"Xiangru Jian",
"Yimu Wang"
] | 2023-10-20 04:45:44 | http://arxiv.org/abs/2310.13276v1 | http://arxiv.org/pdf/2310.13276v1 | 2310.13276v1 |
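Since InvGC is a post-processing step, its essence fits in a few lines: apply graph convolution subtractively so each representation moves away from its neighborhood mean. A sketch under assumed dense tensors:

```python
import torch

def inv_gc(reps, adj, alpha=0.1):
    # Subtractive graph convolution: push each representation away from its
    # neighborhood mean to increase pairwise distances (post-processing only).
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return reps - alpha * (adj @ reps) / deg

# For the LocalAdj variant, build `adj` as a k-nearest-neighbor graph so only
# distances to each point's nearest neighbors are increased.
```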
Meta-learning of Physics-informed Neural Networks for Efficiently Solving Newly Given PDEs | We propose a neural network-based meta-learning method to efficiently solve
partial differential equation (PDE) problems. The proposed method is designed
to meta-learn how to solve a wide variety of PDE problems, and uses the
knowledge for solving newly given PDE problems. We encode a PDE problem into a
problem representation using neural networks, where governing equations are
represented by coefficients of a polynomial function of partial derivatives,
and boundary conditions are represented by a set of point-condition pairs. We
use the problem representation as an input of a neural network for predicting
solutions, which enables us to efficiently predict problem-specific solutions
by the forwarding process of the neural network without updating model
parameters. To train our model, we minimize the expected error when adapted to
a PDE problem based on the physics-informed neural network framework, by which
we can evaluate the error even when solutions are unknown. We demonstrate that
our proposed method outperforms existing methods in predicting solutions of PDE
problems. | [
"Tomoharu Iwata",
"Yusuke Tanaka",
"Naonori Ueda"
] | 2023-10-20 04:35:59 | http://arxiv.org/abs/2310.13270v1 | http://arxiv.org/pdf/2310.13270v1 | 2310.13270v1 |
An Exploratory Study on Simulated Annealing for Feature Selection in Learning-to-Rank | Learning-to-rank is an applied domain of supervised machine learning. As
feature selection has been found to be effective for improving the accuracy of
learning models in general, it is intriguing to investigate this process for
the learning-to-rank domain. In this study, we investigate the use of a popular
meta-heuristic approach called simulated annealing for this task. Under the
general framework of simulated annealing, we explore various neighborhood
selection strategies and temperature cooling schemes. We further introduce a
new hyper-parameter called the progress parameter that can effectively be used
to traverse the search space. Our algorithms are evaluated on five public
learning-to-rank benchmark datasets. For further validation, we also
compare the simulated annealing-based feature selection algorithm with another
effective meta-heuristic algorithm, namely local beam search. Extensive
experimental results show the efficacy of our proposed models. | [
"Mohd. Sayemul Haque",
"Md. Fahim",
"Muhammad Ibrahim"
] | 2023-10-20 04:30:44 | http://arxiv.org/abs/2310.13269v1 | http://arxiv.org/pdf/2310.13269v1 | 2310.13269v1 |
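A generic simulated-annealing loop over feature subsets, with a flip-one-feature neighborhood and geometric cooling, looks as follows; the scoring function and hyperparameters are placeholders, and the paper's specific neighborhood strategies and progress parameter are not reproduced here:

```python
import math
import random

def sa_feature_selection(features, score, t0=1.0, cooling=0.95, steps=500):
    # `score(subset)` trains/evaluates a ranker on the subset and returns a
    # ranking metric such as NDCG; all hyperparameters are placeholders.
    current = set(random.sample(features, k=max(1, len(features) // 2)))
    cur_s = score(current)
    best, best_s, t = set(current), cur_s, t0
    for _ in range(steps):
        cand = set(current)
        cand.symmetric_difference_update({random.choice(features)})  # flip one
        if not cand:
            continue
        s = score(cand)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if s > cur_s or random.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = set(cand), s
        t *= cooling   # geometric cooling scheme
    return best, best_s
```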
DPM-Solver-v3: Improved Diffusion ODE Solver with Empirical Model Statistics | Diffusion probabilistic models (DPMs) have exhibited excellent performance
for high-fidelity image generation while suffering from inefficient sampling.
Recent works accelerate the sampling procedure by proposing fast ODE solvers
that leverage the specific ODE form of DPMs. However, they rely heavily on a
specific parameterization during inference (such as noise/data prediction),
which might not be the optimal choice. In this work, we propose a novel
formulation towards the optimal parameterization during sampling that minimizes
the first-order discretization error of the ODE solution. Based on such
formulation, we propose \textit{DPM-Solver-v3}, a new fast ODE solver for DPMs
by introducing several coefficients efficiently computed on the pretrained
model, which we call \textit{empirical model statistics}. We further
incorporate multistep methods and a predictor-corrector framework, and propose
some techniques for improving sample quality at small numbers of function
evaluations (NFE) or large guidance scales. Experiments show that DPM-Solver-v3
achieves consistently better or comparable performance in both unconditional
and conditional sampling with both pixel-space and latent-space DPMs,
especially in 5$\sim$10 NFEs. We achieve FIDs of 12.21 (5 NFE), 2.51 (10 NFE)
on unconditional CIFAR10, and MSE of 0.55 (5 NFE, 7.5 guidance scale) on Stable
Diffusion, bringing a speed-up of 15\%$\sim$30\% compared to previous
state-of-the-art training-free methods. Code is available at
\url{https://github.com/thu-ml/DPM-Solver-v3}. | [
"Kaiwen Zheng",
"Cheng Lu",
"Jianfei Chen",
"Jun Zhu"
] | 2023-10-20 04:23:12 | http://arxiv.org/abs/2310.13268v1 | http://arxiv.org/pdf/2310.13268v1 | 2310.13268v1 |
On the Language Encoder of Contrastive Cross-modal Models | Contrastive cross-modal models such as CLIP and CLAP aid various
vision-language (VL) and audio-language (AL) tasks. However, there has been
limited investigation of and improvement in their language encoder, which is
the central component of encoding natural language descriptions of image/audio
into vector representations. We extensively evaluate how unsupervised and
supervised sentence embedding training affect language encoder quality and
cross-modal task performance. In VL pretraining, we found that sentence
embedding training language encoder quality and aids in cross-modal tasks,
improving contrastive VL models such as CyCLIP. In contrast, AL pretraining
benefits less from sentence embedding training, which may result from the
limited amount of pretraining data. We analyze the representation spaces to
understand the strengths of sentence embedding training, and find that it
improves text-space uniformity, at the cost of decreased cross-modal alignment. | [
"Mengjie Zhao",
"Junya Ono",
"Zhi Zhong",
"Chieh-Hsin Lai",
"Yuhta Takida",
"Naoki Murata",
"Wei-Hsiang Liao",
"Takashi Shibuya",
"Hiromi Wakaki",
"Yuki Mitsufuji"
] | 2023-10-20 04:21:09 | http://arxiv.org/abs/2310.13267v1 | http://arxiv.org/pdf/2310.13267v1 | 2310.13267v1 |
Enhancing drug and cell line representations via contrastive learning for improved anti-cancer drug prioritization | Due to cancer's complex nature and variable response to therapy, precision
oncology informed by omics sequence analysis has become the current standard of
care. However, the amount of data produced for each patient makes it difficult
to quickly identify the best treatment regimen. Moreover, limited data
availability has hindered computational methods' abilities to learn patterns
associated with effective drug-cell line pairs. In this work, we propose the
use of contrastive learning to improve learned drug and cell line
representations by preserving relationship structures associated with drug
mechanism of action and cell line cancer types. In addition to achieving
enhanced performance relative to a state-of-the-art method, we find that
classifiers using our learned representations exhibit a more balanced reliance
on drug- and cell line-derived features when making predictions. This
facilitates more personalized drug prioritizations that are informed by signals
related to drug resistance. | [
"Patrick J. Lawrence",
"Xia Ning Ph. D"
] | 2023-10-20 04:18:47 | http://arxiv.org/abs/2310.13725v1 | http://arxiv.org/pdf/2310.13725v1 | 2310.13725v1 |
DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee | Mixed-integer linear programming (MILP) stands as a notable NP-hard problem
pivotal to numerous crucial industrial applications. The development of
effective algorithms, the tuning of solvers, and the training of machine
learning models for MILP resolution all hinge on access to extensive, diverse,
and representative data. Yet compared to the abundant naturally occurring data
in image and text realms, MILP is markedly data deficient, underscoring the
vital role of synthetic MILP generation. We present DIG-MILP, a deep generative
framework based on variational auto-encoder (VAE), adept at extracting
deep-level structural features from highly limited MILP data and producing
instances that closely mirror the target data. Notably, by leveraging the MILP
duality, DIG-MILP guarantees a correct and complete generation space as well as
ensures the boundedness and feasibility of the generated instances. Our
empirical study highlights the novelty and quality of the instances generated
by DIG-MILP through two distinct downstream tasks: (S1) Data sharing, where
solver solution times correlate highly positively between original and
DIG-MILP-generated instances, allowing data sharing for solver tuning without
publishing the original data; (S2) Data Augmentation, wherein the
DIG-MILP-generated instances bolster the generalization performance of machine
learning models tasked with resolving MILP problems. | [
"Haoyu Wang",
"Jialin Liu",
"Xiaohan Chen",
"Xinshang Wang",
"Pan Li",
"Wotao Yin"
] | 2023-10-20 03:45:29 | http://arxiv.org/abs/2310.13261v1 | http://arxiv.org/pdf/2310.13261v1 | 2310.13261v1 |
ManiCast: Collaborative Manipulation with Cost-Aware Human Forecasting | Seamless human-robot manipulation in close proximity relies on accurate
forecasts of human motion. While there has been significant progress in
learning forecast models at scale, when applied to manipulation tasks, these
models accrue high errors at critical transition points, leading to degradation
in downstream planning performance. Our key insight is that instead of
predicting the most likely human motion, it is sufficient to produce forecasts
that capture how future human motion would affect the cost of a robot's plan.
We present ManiCast, a novel framework that learns cost-aware human forecasts
and feeds them to a model predictive control planner to execute collaborative
manipulation tasks. Our framework enables fluid, real-time interactions between
a human and a 7-DoF robot arm across a number of real-world tasks such as
reactive stirring, object handovers, and collaborative table setting. We
evaluate both the motion forecasts and the end-to-end forecaster-planner system
against a range of learned and heuristic baselines while additionally
contributing new datasets. We release our code and datasets at
https://portal-cornell.github.io/manicast/. | [
"Kushal Kedia",
"Prithwish Dan",
"Atiksh Bhardwaj",
"Sanjiban Choudhury"
] | 2023-10-20 03:34:31 | http://arxiv.org/abs/2310.13258v1 | http://arxiv.org/pdf/2310.13258v1 | 2310.13258v1 |
Knowledge Graph Context-Enhanced Diversified Recommendation | The field of Recommender Systems (RecSys) has been extensively studied to
enhance accuracy by leveraging users' historical interactions. Nonetheless,
this persistent pursuit of accuracy frequently engenders diminished diversity,
culminating in the well-recognized "echo chamber" phenomenon. Diversified
RecSys has emerged as a countermeasure, placing diversity on par with accuracy
and garnering noteworthy attention from academic circles and industry
practitioners. This research explores the realm of diversified RecSys within
the intricate context of knowledge graphs (KG). These KGs act as repositories
of interconnected information concerning entities and items, offering a
propitious avenue to amplify recommendation diversity through the incorporation
of insightful contextual information. Our contributions include introducing two
innovative metrics, Entity Coverage and Relation Coverage, which effectively
quantify diversity within the KG domain. Additionally, we introduce the
Diversified Embedding Learning (DEL) module, meticulously designed to formulate
user representations that possess an innate awareness of diversity. In tandem
with this, we introduce a novel technique named Conditional Alignment and
Uniformity (CAU). It adeptly encodes KG item embeddings while preserving
contextual integrity. Collectively, our contributions signify a substantial
stride towards augmenting the panorama of recommendation diversity within the
realm of KG-informed RecSys paradigms. | [
"Xiaolong Liu",
"Liangwei Yang",
"Zhiwei Liu",
"Mingdai Yang",
"Chen Wang",
"Hao Peng",
"Philip S. Yu"
] | 2023-10-20 03:18:57 | http://arxiv.org/abs/2310.13253v1 | http://arxiv.org/pdf/2310.13253v1 | 2310.13253v1 |
FLEE-GNN: A Federated Learning System for Edge-Enhanced Graph Neural Network in Analyzing Geospatial Resilience of Multicommodity Food Flows | Understanding and measuring the resilience of food supply networks is a
global imperative to tackle increasing food insecurity. However, the complexity
of these networks, with their multidimensional interactions and decisions,
presents significant challenges. This paper proposes FLEE-GNN, a novel
Federated Learning System for Edge-Enhanced Graph Neural Network, designed to
overcome these challenges and enhance the analysis of geospatial resilience of
multicommodity food flow networks, which are one type of spatial network.
FLEE-GNN addresses the limitations of current methodologies, such as
entropy-based methods, in terms of generalizability, scalability, and data
privacy. It combines the robustness and adaptability of graph neural networks
with the privacy-conscious and decentralized aspects of federated learning on
food supply network resilience analysis across geographical regions. This paper
also discusses FLEE-GNN's innovative data generation techniques, experimental
designs, and future directions for improvement. The results show the
advancements of this approach to quantifying the resilience of multicommodity
food flow networks, contributing to efforts towards ensuring global food
security using AI methods. The developed FLEE-GNN has the potential to be
applied in other spatial networks with spatially heterogeneous sub-network
distributions. | [
"Yuxiao Qu",
"Jinmeng Rao",
"Song Gao",
"Qianheng Zhang",
"Wei-Lun Chao",
"Yu Su",
"Michelle Miller",
"Alfonso Morales",
"Patrick Huber"
] | 2023-10-20 03:06:41 | http://arxiv.org/abs/2310.13248v1 | http://arxiv.org/pdf/2310.13248v1 | 2310.13248v1 |