Update
data.json CHANGED
@@ -107,21 +107,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20666,
-    "title": "Outlier-Robust Subsampling Techniques for Persistent Homology",
-    "authors": [
-      "Bernadette J. Stolz"
-    ],
-    "abstract": "In recent years, persistent homology has been successfully applied to real-world data in many different settings. Despite significant computational advances, persistent homology algorithms do not yet scale to large datasets preventing interesting applications. One approach to address computational issues posed by persistent homology is to select a set of landmarks by subsampling from the data. Currently, these landmark points are chosen either at random or using the maxmin algorithm. Neither is ideal as random selection tends to favour dense areas of the data while the maxmin algorithm is very sensitive to noise. Here, we propose a novel approach to select landmarks specifically for persistent homology that preserves coarse topological information of the original dataset. Our method is motivated by the Mayer-Vietoris sequence and requires only local persistent homology calculations thus enabling efficient computation. We test our landmarks on artificial data sets which contain different levels of noise and compare them to standard landmark selection techniques. We demonstrate that our landmark selection outperforms standard methods as well as a subsampling technique based on an outlier-robust version of the k-means algorithm for low sampling densities in noisy data with respect to robustness to outliers.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2103.14743",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 18844,
     "title": "Unraveling the Key Components of OOD Generalization via Diversification",
@@ -221,22 +206,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20662,
-    "title": "Be More Active! Understanding the Differences Between Mean and Sampled Representations of Variational Autoencoders",
-    "authors": [
-      "Lisa Bonheme",
-      "Marek Grzes"
-    ],
-    "abstract": "The ability of Variational Autoencoders to learn disentangled representations has made them appealing for practical applications. However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterpart, on which disentanglement is usually measured. In this paper, we refine this observation through the lens of selective posterior collapse, which states that only a subset of the learned representations, the active variables, is encoding useful information while the rest (the passive variables) is discarded. We first extend the existing definition to multiple data examples and show that active variables are equally disentangled in mean and sampled representations. Based on this extension and the pre-trained models from disentanglement_lib, we then isolate the passive variables and show that they are responsible for the discrepancies between mean and sampled representations. Specifically, passive variables exhibit high correlation scores with other variables in mean representations while being fully uncorrelated in sampled ones. We thus conclude that despite what their higher correlation might suggest, mean representations are still good candidates for downstream tasks applications. However, it may be beneficial to remove their passive variables, especially when used with models sensitive to correlated features.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2109.12679",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 17434,
     "title": "Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks",
@@ -1416,24 +1385,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20667,
-    "title": "A Framework and Benchmark for Deep Batch Active Learning for Regression",
-    "authors": [
-      "David Holzm\u00fcller",
-      "Viktor Zaverkin",
-      "Johannes K\u00e4stner",
-      "Ingo Steinwart"
-    ],
-    "abstract": "The acquisition of labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select batches of unlabeled data for labeling. We present a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based on Gaussian process approximations of neural networks as well as non-Bayesian methods. Additionally, we propose to replace the commonly used last-layer features with sketched finite-width neural tangent kernels and to combine them with a novel clustering method. To evaluate different methods, we introduce an open-source benchmark consisting of 15 large tabular regression data sets. Our proposed method outperforms the state-of-the-art on our benchmark, scales to large data sets, and works out-of-the-box without adjusting the network architecture or training code. We provide open-source code that includes efficient implementations of all kernels, kernel transformations, and selection methods, and can be used for reproducing our results.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2203.09410",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19730,
     "title": "An Analytical Solution to Gauss-Newton Loss for Direct Image Alignment",
@@ -1453,25 +1404,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20659,
-    "title": "Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees",
-    "authors": [
-      "Jonathan Brophy",
-      "Zayd Hammoudeh",
-      "Daniel Lowd"
-    ],
-    "abstract": "Influence estimation analyzes how changes to the training data can lead to different model predictions; this analysis can help us better understand these predictions, the models making those predictions, and the data sets they are trained on. However, most influence-estimation techniques are designed for deep learning models with continuous parameters. Gradient-boosted decision trees (GBDTs) are a powerful and widely-used class of models; however, these models are black boxes with opaque decision-making processes. In the pursuit of better understanding GBDT predictions and generally improving these models, we adapt recent and popular influence-estimation methods designed for deep learning models to GBDTs. Specifically, we adapt representer-point methods and TracIn, denoting our new methods TREX and BoostIn, respectively; source code is available at https://github.com/jjbrophy47/treeinfluence. We compare these methods to LeafInfluence and other baselines using 5 different evaluation measures on 22 real-world data sets with 4 popular GBDT implementations. These experiments give us a comprehensive overview of how different approaches to influence estimation work in GBDT models. We find BoostIn is an efficient influence-estimation method for GBDTs that performs equally well or better than existing work while being four orders of magnitude faster. Our evaluation also suggests the gold-standard approach of leave-one-out (LOO) retraining consistently identifies the single-most influential training example but performs poorly at finding the most influential set of training examples for a given target prediction.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2205.00359",
-    "GitHub": [
-      "https://github.com/jjbrophy47/tree_influence"
-    ],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19780,
     "title": "Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks",
@@ -1577,24 +1509,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20664,
-    "title": "Quantifying Network Similarity using Graph Cumulants",
-    "authors": [
-      "Gecia Bravo-Hermsdorff",
-      "Lee M. Gunderson",
-      "Pierre-Andr\u00e9 Maugis",
-      "Carey E. Priebe"
-    ],
-    "abstract": "How might one test the hypothesis that networks were sampled from the same distribution? Here, we compare two statistical tests that use subgraph counts to address this question. The first uses the empirical subgraph densities themselves as estimates of those of the underlying distribution. The second test uses a new approach that converts these subgraph densities into estimates of the graph cumulants of the distribution (without any increase in computational complexity). We demonstrate --- via theory, simulation, and application to real data --- the superior statistical power of using graph cumulants. In summary, when analyzing data using subgraph/motif densities, we suggest using the corresponding graph cumulants instead.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2107.11403",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19027,
     "title": "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment",
@@ -1845,44 +1759,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20665,
-    "title": "Hard-Constrained Deep Learning for Climate Downscaling",
-    "authors": [
-      "Paula Harder",
-      "Alex Hernandez-Garcia",
-      "Venkatesh Ramesh",
-      "Qidong Yang",
-      "Prasanna Sattigeri",
-      "Daniela Szwarcman",
-      "Campbell Watson",
-      "David Rolnick"
-    ],
-    "abstract": "The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and, therefore, often generate coarse-resolution predictions. Statistical downscaling, including super-resolution methods from deep learning, can provide an efficient method of upsampling low-resolution data. However, despite achieving visually compelling results in some cases, such models frequently violate conservation laws when predicting physical variables. In order to conserve physical quantities, here we introduce methods that guarantee statistical constraints are satisfied by a deep learning downscaling model, while also improving their performance according to traditional metrics. We compare different constraining approaches and demonstrate their applicability across different neural architectures as well as a variety of climate and weather data sets. Besides enabling faster and more accurate climate predictions through downscaling, we also show that our novel methodologies can improve super-resolution for satellite data and natural images data sets.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2208.05424",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
-  {
-    "id": 20657,
-    "title": "Analytically Tractable Hidden-States Inference in Bayesian Neural Networks",
-    "authors": [
-      "Luong-Ha Nguyen",
-      "James-A. Goulet"
-    ],
-    "abstract": "With few exceptions, neural networks have been relying on backpropagation and gradient descent as the inference engine in order to learn the model parameters, because closed-form Bayesian inference for neural networks has been considered to be intractable. In this paper, we show how we can leverage the tractable approximate Gaussian inference's (TAGI) capabilities to infer hidden states, rather than only using it for inferring the network's parameters. One novel aspect is that it allows inferring hidden states through the imposition of constraints designed to achieve specific objectives, as illustrated through three examples: (1) the generation of adversarial-attack examples, (2) the usage of a neural network as a black-box optimization method, and (3) the application of inference on continuous-action reinforcement learning. In these three examples, the constrains are in (1), a target label chosen to fool a neural network, and in (2 and 3) the derivative of the network with respect to its input that is set to zero in order to infer the optimal input values that are either maximizing or minimizing it. These applications showcase how tasks that were previously reserved to gradient-based optimization approaches can now be approached with analytically tractable inference.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2107.03759",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19023,
     "title": "Robust Model Based Reinforcement Learning Using $\\mathcal{L}_1$ Adaptive Control",
@@ -2894,24 +2770,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20658,
-    "title": "Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks",
-    "authors": [
-      "Khurram Javed",
-      "Haseeb Shah",
-      "Richard Sutton",
-      "Martha White"
-    ],
-    "abstract": "Constructing states from sequences of observations is an important component of reinforcement learning agents. One solution for state construction is to use recurrent neural networks. Back-propagation through time (BPTT), and real-time recurrent learning (RTRL) are two popular gradient-based methods for recurrent learning. BPTT requires complete trajectories of observations before it can compute the gradients and is unsuitable for online updates. RTRL can do online updates but scales poorly to large networks. In this paper, we propose two constraints that make RTRL scalable. We show that by either decomposing the network into independent modules or learning the network in stages, we can make RTRL scale linearly with the number of parameters. Unlike prior scalable gradient estimation algorithms, such as UORO and Truncated-BPTT, our algorithms do not add noise or bias to the gradient estimate. Instead, they trade off the functional capacity of the network for computationally efficient learning. We demonstrate the effectiveness of our approach over Truncated-BPTT on a prediction benchmark inspired by animal learning and by doing policy evaluation of pre-trained policies for Atari 2600 games.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2302.05326",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 18595,
     "title": "Dropout-Based Rashomon Set Exploration for Efficient Predictive Multiplicity Estimation",
@@ -2989,40 +2847,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20663,
-    "title": "Nevis'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research",
-    "authors": [
-      "Jorg Bornschein",
-      "Alexandre Galashov",
-      "Ross Hemsley",
-      "Amal Rannen-Triki",
-      "Yutian Chen",
-      "Arslan Chaudhry",
-      "Owen He",
-      "Arthur Douillard",
-      "Massimo Caccia",
-      "Qixuan Feng",
-      "Jiajun Shen",
-      "Sylvestre-Alvise Rebuffi",
-      "Kitty Stacpoole",
-      "Diego de las Casas",
-      "Will Hawkins",
-      "Angeliki Lazaridou",
-      "Yee Whye Teh",
-      "Andrei A. Rusu",
-      "Razvan Pascanu",
-      "Marc\u2019Aurelio Ranzato"
-    ],
-    "abstract": "A shared goal of several machine learning communities like continual learning, meta-learning and transfer learning, is to design algorithms and models that efficiently and robustly adapt to unseen tasks. An even more ambitious goal is to build models that never stop adapting, and that become increasingly more efficient through time by suitably transferring the accrued knowledge. Beyond the study of the actual learning algorithm and model architecture, there are several hurdles towards our quest to build such models, such as the choice of learning protocol, metric of success and data needed to validate research hypotheses. In this work, we introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks, sorted chronologically and extracted from papers sampled uniformly from computer vision proceedings spanning the last three decades. The resulting stream reflects what the research community thought was meaningful at any point in time, and it serves as an ideal test bed to assess how well models can adapt to new tasks, and do so better and more efficiently as time goes by. Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth. The diversity is also reflected in the wide range of dataset sizes, spanning over four orders of magnitude. Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks, yet with a low entry barrier as it is limited to a single modality and well understood supervised learning problems. Moreover, we provide a reference implementation including strong baselines and an evaluation protocol to compare methods in terms of their trade-off between accuracy and compute. We hope that NEVIS'22 can be useful to researchers working on continual learning, meta-learning, AutoML and more generally sequential learning, and help these communities join forces towards more robust models that efficiently adapt to a never ending stream of data.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2211.11747",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 18992,
     "title": "In-Context Learning through the Bayesian Prism",
@@ -9530,23 +9354,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20660,
-    "title": "Random Feature Amplification: Feature Learning and Generalization in Neural Networks",
-    "authors": [
-      "Spencer Frei",
-      "Niladri Chatterji",
-      "Peter L. Bartlett"
-    ],
-    "abstract": "In this work, we provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent on the logistic loss following random initialization. We consider data with binary labels that are generated by an XOR-like function of the input features. We permit a constant fraction of the training labels to be corrupted by an adversary. We show that, although linear classifiers are no better than random guessing for the distribution we consider, two-layer ReLU networks trained by gradient descent achieve generalization error close to the label noise rate. We develop a novel proof technique that shows that at initialization, the vast majority of neurons function as random features that are only weakly correlated with useful features, and the gradient descent dynamics `amplify\u2019 these weak, random features to strong, useful features.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2202.07626",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19760,
     "title": "Neural Fine-Tuning Search for Few-Shot Learning",
@@ -10057,24 +9864,6 @@
     "Model": [],
     "Dataset": []
   },
-  {
-    "id": 20661,
-    "title": "A Unified Experiment Design Approach for Cyclic and Acyclic Causal Models",
-    "authors": [
-      "Ehsan Mokhtarian",
-      "Saber Salehkaleybar",
-      "AmirEmad Ghassami",
-      "Negar Kiyavash"
-    ],
-    "abstract": "We study experiment design for unique identification of the causal graph of a simple SCM, where the graph may contain cycles. The presence of cycles in the structure introduces major challenges for experiment design as, unlike acyclic graphs, learning the skeleton of causal graphs with cycles may not be possible from merely the observational distribution. Furthermore, intervening on a variable in such graphs does not necessarily lead to orienting all the edges incident to it. In this paper, we propose an experiment design approach that can learn both cyclic and acyclic graphs and hence, unifies the task of experiment design for both types of graphs. We provide a lower bound on the number of experiments required to guarantee the unique identification of the causal graph in the worst case, showing that the proposed approach is order-optimal in terms of the number of experiments up to an additive logarithmic term. Moreover, we extend our result to the setting where the size of each experiment is bounded by a constant. For this case, we show that our approach is optimal in terms of the size of the largest experiment required for uniquely identifying the causal graph in the worst case.",
-    "type": "Poster",
-    "OpenReview": "",
-    "arxiv_id": "2205.10083",
-    "GitHub": [],
-    "Space": [],
-    "Model": [],
-    "Dataset": []
-  },
   {
     "id": 19185,
     "title": "Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations",
@@ -44389,4 +44178,4 @@
     "Model": [],
     "Dataset": []
   }
-]
+]
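For reference, a minimal sketch of consuming this file downstream, using only the fields visible in the hunks above ("id", "title", "authors", "abstract", "type", "OpenReview", "arxiv_id", "GitHub", "Space", "Model", "Dataset"). It assumes data.json is a top-level JSON array of such objects, as the diff context suggests; the path and the example filter are illustrative, not part of this commit.

import json

# Sketch only: assumes data.json is a top-level JSON array of paper entries
# with the fields shown in this diff.
with open("data.json", encoding="utf-8") as f:
    papers = json.load(f)

# Example query: entries that link at least one GitHub repository.
for paper in papers:
    if paper.get("GitHub"):
        print(paper["id"], paper["title"], paper["GitHub"])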