Datasets:

source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (string, lengths 9 to 11) | ic (unknown) | target (sequence)
---|---|---|---|---|---|
[
"Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect.",
"Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms.",
"In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks.",
"We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum.",
"Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks.",
"One particular conclusion is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum.",
"In the past decade, deep neural networks BID8 have become a popular tool that has successfully solved many challenging tasks in a variety of areas such as machine learning, artificial intelligence, computer vision, and natural language processing, etc.",
"As the understandings of deep neural networks from different aspects are mostly based on empirical studies, there is a rising need and interest to develop understandings of neural networks from theoretical aspects such as generalization error, representation power, and landscape (also referred to as geometry) properties, etc.",
"In particular, the landscape properties of loss functions (that are typically nonconex for neural networks) play a central role to determine the iteration path and convergence performance of optimization algorithms.One major landscape property is the nature of critical points, which can possibly be global minima, local minima, saddle points.",
"There have been intensive efforts in the past into understanding such an issue for various neural networks.",
"For example, it has been shown that every local minimum of the loss function is also a global minimum for shallow linear networks under the autoencoder setting and invertibility assumptions BID1 and for deep linear networks BID11 ; BID14 ; Yun et al. (2017) respectively under different assumptions.",
"The conditions on the equivalence between local minimum or critical point and global minimum has also been established for various nonlinear neural networks Yu & Chen (1995) ; BID9 ; BID15 ; BID17 ; BID6 under respective assumptions.However, most previous studies did not provide characterization of analytical forms for critical points of loss functions for neural networks with only very few exceptions.",
"In BID1 , the authors provided an analytical form for the critical points of the square loss function of shallow linear networks under certain conditions.",
"Such an analytical form further helps to establish the landscape properties around the critical points.",
"Further in BID13 , the authors characterized certain sufficient form of critical points for the square loss function of matrix factorization problems and deep linear networks.The focus of this paper is on characterizing the sufficient and necessary forms of critical points for broader scenarios, i.e., shallow and deep linear networks with no assumptions on data matrices and network dimensions, and shallow ReLU networks over certain parameter space.",
"In particular, such analytical forms of critical points capture the corresponding loss function values and the necessary and sufficient conditions to achieve global minimum.",
"This further enables us to establish new landscape properties around these critical points for the loss function of these networks under general settings, and provides alternative (yet simpler and more intuitive) proofs for existing understanding of the landscape properties.OUR CONTRIBUTION",
"1) For the square loss function of linear networks with one hidden layer, we provide a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers.",
"These results generalize the characterization in BID1 to arbitrary network parameter dimensions and any data matrices.",
"Such a generalization further enables us to establish the landscape property, i.e., every local minimum is also a global minimum and all other critical points are saddle points, under no assumptions on parameter dimensions and data matrices.",
"From a technical standpoint, we exploit the analytical forms of critical points to provide a new proof for characterizing the landscape around the critical points under full relaxation of assumptions, where the corresponding approaches in BID1 are not applicable.",
"As a special case of linear networks, the matrix factorization problem satisfies all these landscape properties.2) For the square loss function of deep linear networks, we establish a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers.",
"Such characterizations are new and have not been established in the existing art.",
"Furthermore, such analytical form divides the set of non-global-minimum critical points into different categories.",
"We identify the directions along which the loss function value decreases for two categories of the critical points, for which our result directly implies the equivalence between the local minimum and the global minimum.",
"For these cases, our proof generalizes the result in BID11 under no assumptions on the network parameter dimensions and data matrices.3) For the square loss function of one-hidden-layer nonlinear neural networks with ReLU activation function, we provide a full characterization of both the existence and the analytical forms of the critical points in certain types of regions in the parameter space.",
"Particularly, in the case where there is one hidden unit, our results fully characterize the existence and the analytical forms of the critical points in the entire parameter space.",
"Such characterization were not provided in previous work on nonlinear neural networks.",
"Moreover, we apply our results to a concrete example to demonstrate that both local minimum that is not a global minimum and local maximum do exist in such a case.",
"In this paper, we provide full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks, namely, shallow linear networks, deep linear networks, and shallow ReLU nonlinear networks.",
"We show that such analytical forms of the critical points have direct implications on the values of the corresponding loss functions, achievement of global minimum, and various landscape properties around these critical points.",
"As a consequence, the loss function for linear networks has no spurious local minimum, while such point does exist for nonlinear networks with ReLU activation.",
"In the future, it is interesting to further explore nonlinear neural networks.",
"In particular, we wish to characterize the analytical form of critical points for deep nonlinear networks and over the full parameter space.",
"Such results will further facilitate the understanding of the landscape properties around these critical points."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.30188679695129395,
0.3720930218696594,
0.6037735939025879,
0.5714285373687744,
0.7234042286872864,
0.15094339847564697,
0.16129031777381897,
0.2222222238779068,
0.3478260934352875,
0.2380952388048172,
0.1875,
0.3589743673801422,
0.3829787075519562,
0.3589743673801422,
0.3243243098258972,
0.4680851101875305,
0.4067796468734741,
0.4444444477558136,
0.1463414579629898,
0.19672130048274994,
0.38596490025520325,
0.4516128897666931,
0.10526315122842789,
0.25641024112701416,
0.2745097875595093,
0.3561643660068512,
0.3265306055545807,
0.10810810327529907,
0.08163265138864517,
0.5185185074806213,
0.5,
0.1666666567325592,
0.21621620655059814,
0.43478259444236755,
0.3589743673801422
] | SysEexbRb | true | [
"We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks."
] |
[
"The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain.",
"One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways.",
"To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets.",
"However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet.",
"Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs.",
"We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO).",
"Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks.",
"These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures.",
"Deep learning models today are highly successful in task performance, learning useful representations, and even matching representations in the brain BID26 BID24 .",
"However, it remains a contentious issue whether these models reflect how the brain learns.",
"Core to the problem is the fact that backpropagation, the learning algorithm underlying most of today's deep networks, is difficult to implement in the brain given what we know about the brain's hardware BID2 however, see Hinton 2007) .",
"One main reason why backpropagation seems implausible in the brain is that it requires sharing of feedforward and feedback weights.",
"Since synapses are unidirectional in the brain, feedforward and feedback connections are physically distinct.",
"Requiring them to shared their weights, even as weights are adjusted during learning, seems highly implausible.One approach to addressing this issue is to relax the requirement for weight-symmetry in error backpropagation.",
"Surprisingly, when the feedback weights share only the sign but not the magnitude of the feedforward weights BID16 or even when the feedback weights are random (but fixed) BID17 , they can still guide useful learning in the network, with performance comparable to and sometimes even better than performance of backpropagation, on datasets such as MNIST and CIFAR.",
"Here, we refer to these two algorithms, respectively, as \"sign-symmetry\" and \"feedback alignment.\"",
"Since weight symmetry in backpropagation is required for accurately propagating the derivative of the loss function through layers, the success of asymmetric feedback algorithms indicates that learning can be supported even by inaccurate estimation of the error derivative.",
"In feedback alignment, the authors propose that the feedforward weights learn to align with the random feedback weights, thereby allowing feedback to provide approximate yet useful learning signals BID17 .However",
", a recent paper by BID0 finds that feedback alignment and a few other biologically-plausible algorithms, including variants of target propagation, do not generalize to larger and more difficult problems such as ImageNet BID4 ) and perform much worse than backpropagation. Nevertheless",
", the specific conditions Bartunov et al. tested are somewhat restrictive. They only tested",
"locally-connected networks (i.e., weight sharing is not allowed among convolution filters at different spatial locations), a choice that is motivated by biological plausibility but in practice limits the size of the network (without weight sharing, each convolutional layer needs much more memory to store its weights), making it unclear whether poor performance was attributable solely to the algorithm, or to the algorithm on those architectures.1 Second, Bartunov",
"et al. did not test sign-symmetry, which may be more powerful than feedback alignment since signsymmetric feedback weights may carry more information about the feedforward weights than the random feedback weights used in feedback alignment.In this work, we re-examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using standard ConvNet architectures (i.e., ResNet-18, AlexNet, and RetinaNet). We find that sign-symmetry",
"can in fact train networks on both tasks, achieving similar performance to backpropagation on ImageNet and reasonable performance on MS COCO. In addition, we test the use",
"of backpropagation exclusively in the last layer while otherwise using feedback alignment, hypothesizing that in the brain, the classifier layer may not be a fully-connected layer and may deliver the error signal through some other unspecified mechanism. Such partial feedback alignment",
"can achieve better performance (relative to backpropagation) than in BID0 . Taken together, these results extend",
"previous findings and indicate that existing biologicallyplausible learning algorithms remain viable options both for training artificial neural networks and for modeling how learning can occur in the brain.",
"Recent work shows that biologically-plausible learning algorithms do not scale to challenging problems such as ImageNet.",
"We evaluated sign-symmetry and re-evaluated feedback alignment on their effectiveness training ResNet and AlexNet on ImageNet and RetinaNet on MS COCO.",
"We find that",
"1) sign-symmetry performed nearly as well as backpropagation on ImageNet,",
"2) slightly modified feedback alignment performed better than previously reported, and",
"3) both algorithms had reasonable performance on MS COCO with minimal hyperparameter tuning.",
"Taken together, these results indicate that biologically-plausible learning algorithms, in particular sign-symmetry, remain promising options for training artificial neural networks and modeling learning in the brain."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.1304347813129425,
0.1428571343421936,
0,
0.11764705181121826,
0,
0.1111111044883728,
0.06666666269302368,
0,
0.0476190447807312,
0,
0,
0,
0.072727270424366,
0.0833333283662796,
0.0476190447807312,
0.05714285373687744,
0.08163265138864517,
0,
0.027397258207201958,
0.09677419066429138,
0.11764705181121826,
0,
0,
0.05714285373687744,
0.23076923191547394,
0.14814814925193787,
0,
0.21052631735801697,
0,
0.08695651590824127,
0.1764705777168274
] | SygvZ209F7 | true | [
"Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet"
] |
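The record above contrasts backpropagation, feedback alignment (fixed random feedback weights), and sign-symmetry (feedback weights that share only the signs of the feedforward weights). The plain-numpy sketch below illustrates the three feedback rules for a single linear layer; it is not the authors' implementation, and taking the sign-symmetric magnitudes from the random matrix B is an assumption (a constant magnitude would also fit the description in the record).

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 6, 4
W = rng.standard_normal((n_out, n_in))   # feedforward weights of one linear layer
B = rng.standard_normal((n_out, n_in))   # fixed random feedback weights (feedback alignment)

x = rng.standard_normal(n_in)            # layer input
delta = rng.standard_normal(n_out)       # error signal arriving at the layer output

# Error signal sent back to the layer input under each feedback rule:
bp_signal = W.T @ delta                          # backpropagation: transposed feedforward weights
fa_signal = B.T @ delta                          # feedback alignment: fixed random weights
ss_signal = (np.sign(W) * np.abs(B)).T @ delta   # sign-symmetry: signs of W, magnitudes not shared

# The local weight update is the same outer product in all three cases.
dW = np.outer(delta, x)
print(bp_signal.shape, fa_signal.shape, ss_signal.shape, dW.shape)
```

The point of the toy example is only that the three rules differ solely in which matrix carries the error backward, which is what makes them drop-in replacements for the backward pass.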
[
"We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors.",
"We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.\n",
"Deep learning contains many differentiable algorithms for computing with learned representations.",
"These representations form vector spaces, sometimes equipped with additional structure.",
"A recent example is the Transformer (Vaswani et al., 2017) in which there is a vector space V of value vectors and an inner product space H of query and key vectors.",
"This structure supports a kind of messagepassing, where a value vector v j ∈ V derived from entity j is propagated to update an entity i with weight q i · k j , where q i ∈ H is a query vector derived from entity i, k j ∈ H is a key vector derived from entity j, and the inner product on H is written as a dot product.",
"The Transformer therefore represents a relational inductive bias, where a relation from entity j to entity i is perceived to the extent that q i · k j is large and positive.",
"However, the real world has structure beyond entities and their direct relationships: for example, the three blocks in Figure 1 are arranged in such a way that if either of the supporting blocks is removed, the top block will fall.",
"This is a simple 3-way relationship between entities i, j, k that is complex to represent as a system of 2-way relationships.",
"It is natural to make the hypothesis that such higher-order relationships are essential to extracting the full predictive power of data, across many domains.",
"In accordance with this hypothesis, we introduce a generalisation of the Transformer architecture, the 2-simplicial Transformer, which incorporates both 2-and 3-way interactions.",
"Mathematically, the key observation is that higher-order interactions between entities can be understood using algebras.",
"This is nothing but Boole's insight (Boole, 1847) which set in motion the development of modern logic.",
"In our situation, an appropriate algebra is the Clifford algebra Cl(H) of the space H of queries and keys, which contains that space H ⊆ Cl(H) and in which queries and keys can be multiplied.",
"To represent a 3-way interaction we map each entity i to a triple (p i , l k ) using a natural continuous function η : Cl(H) −→ R associated to the Z-grading of Cl(H).",
"This scalar measures how strongly the network perceives a 3-way interaction involving i, j, k.",
"In summary, the 2-simplicial Transformer learns how to represent entities in its environment as vectors v ∈ V , and how to transform those entities to queries and (pairs of) keys in H, so that the signals provided by the scalars q i · k j and η(p i l 1 j l 2 k ) are informative about higher-order structure in the environment.",
"As a toy example of higher-order structure, we consider the reinforcement learning problem in a variant of the BoxWorld environment from (Zambaldi et al., 2019) .",
"The original BoxWorld is played on a rectangular grid populated by keys and locked boxes of varying colours, with the goal being to open the box containing the \"Gem\".",
"In our variant of the BoxWorld environment, bridge BoxWorld, the agent must use two keys simultaneously to obtain the Gem; this structure in the environment creates many 3-way relationships between entities, including for example the relationship between the locked boxes j, k providing the two keys and the Gem entity i.",
"This structure in the environment is fundamentally logical in nature, and encodes a particular kind of conjunction; see Appendix I.",
"The architecture of our deep reinforcement learning agent largely follows (Zambaldi et al., 2019) and the details are given in Section 4.",
"The key difference between our simplicial agent and the relational agent of (Zambaldi et al., 2019) is that in place of a standard Transformer block we use a 2-simplicial Transformer block.",
"Our experiments show that the simplicial agent confers an advantage over the relational agent as an inductive bias in our reasoning task.",
"Motivation from neuroscience for a simplicial inductive bias for abstract reasoning is contained in Appendix J.",
"Our use of tensor products of value vectors is inspired by the semantics of linear logic in vector spaces (Girard, 1987; Mellis, 2009; Clift & Murfet, 2017; Wallbridge, 2018) in which an algorithm with multiple inputs computes on the tensor product of those inputs, but this is an old idea in natural language processing, used in models including the second-order RNN (Giles et al., 1989; Pollack, 1991; Goudreau et al., 1994; Giles et al., 1991) , multiplicative RNN (Sutskever et al., 2011; Irsoy & Cardie, 2015) , Neural Tensor Network (Socher et al., 2013 ) and the factored 3-way Restricted Boltzmann Machine (Ranzato et al., 2010) , see Appendix A. Tensors have been used to model predicates in a number of neural network architectures aimed at logical reasoning (Serafini & Garcez, 2016; Dong et al., 2019) .",
"The main novelty in our model lies in the introduction of the 2-simplicial attention, which allows these ideas to be incorporated into the Transformer architecture.",
"On general grounds one might expect that in the limit of infinite experience, any reinforcement learning agent with a sufficiently deep neural network will be able to solve any environment, in-cluding those like bridge BoxWorld that involve higher-order relations between entities.",
"In practice, however, we do not care about the infinite computation limit.",
"In the regime of bounded computation it is reasonable to introduce biases towards learning representations of structures that are found in a wide range of environments that we consider important.",
"We argue that higher-order relations between entities are an important example of such structures, and that the 2-simplicial Transformer is a natural inductive bias for 3-way interactions between entities.",
"We have given preliminary evidence for the utility of this bias by showing that in the bridge BoxWorld environment the simplicial agent has better performance than a purely relational agent, and that this performance involves in a meaningful way the prediction of 3-way interactions (or 2-simplices).",
"We believe that simplicial Transformers may be useful for any problem in which higher-order relations between entities are important.",
"The long history of interactions between logic and algebra is a natural source of inspiration for the design of inductive biases in deep learning.",
"In this paper we have exhibited one example: Boole's idea, that relationships between entities can be modeled by multiplication in an algebra, may be realised in the context of deep learning as an augmentation to the Transformer architecture using Clifford algebras of spaces of representations."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0.8888888955116272,
0.11428570747375488,
0,
0.26923075318336487,
0.1515151411294937,
0.2800000011920929,
0.2711864411830902,
0.1818181723356247,
0.17391303181648254,
0.31111109256744385,
0.1538461446762085,
0.19512194395065308,
0.2448979616165161,
0.1111111044883728,
0.10256409645080566,
0.1666666567325592,
0.25531914830207825,
0.19607841968536377,
0.1846153736114502,
0.3255814015865326,
0.3404255211353302,
0.3529411852359772,
0.3255814015865326,
0.3589743673801422,
0.140625,
0.260869562625885,
0.2539682388305664,
0.0555555522441864,
0.31372547149658203,
0.47999998927116394,
0.32786884903907776,
0.23255813121795654,
0.43478259444236755,
0.317460298538208
] | rkecJ6VFvr | true | [
"We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning."
] |
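The record above describes ordinary dot-product attention (the scalars q_i · k_j) plus a 2-simplicial term η(p_i l_j^1 l_k^2) computed in a Clifford algebra. The numpy sketch below only illustrates the shape of such a higher-order update for one query entity: the elementwise trilinear score and the elementwise combination of value pairs are stand-ins for the Clifford-algebra scalar η and the learned map on tensor products of values, not the paper's exact construction.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_simplicial_update(p, l1, l2, v1, v2):
    """Toy higher-order attention update for a single query entity i.

    p      : (d,)   query vector of entity i
    l1, l2 : (N, d) the two key vectors of every entity j and k
    v1, v2 : (N, dv) value vectors
    """
    N = l1.shape[0]
    # Trilinear score for each 2-simplex (i, j, k); a stand-in for the Clifford scalar eta.
    scores = np.einsum('a,ja,ka->jk', p, l1, l2)
    weights = softmax(scores.reshape(-1)).reshape(N, N)
    # Combine pairs of values; the elementwise product stands in for a map on v_j (x) v_k.
    return np.einsum('jk,ja,ka->a', weights, v1, v2)

rng = np.random.default_rng(2)
N, d, dv = 5, 4, 3
out = two_simplicial_update(rng.standard_normal(d),
                            rng.standard_normal((N, d)), rng.standard_normal((N, d)),
                            rng.standard_normal((N, dv)), rng.standard_normal((N, dv)))
print(out.shape)  # (3,)
```

The sketch makes the cost structure visible: attention weights now live on pairs (j, k) rather than single entities j, which is the extra expressiveness the record argues is useful for 3-way relationships.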
[
"We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics.",
"Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation.",
"Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions.",
"Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance.",
"We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs.",
"We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data.",
"One of the central questions in science is forecasting: given the past history, how well can we predict the future?",
"In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics.",
"Examples of such systems abound in science and engineering, from biological neural network activity, fluid turbulence, to climate and traffic systems (see FIG0 ).",
"Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting.",
"Therefore, a key challenge is accurately modeling nonlinear dynamics and obtaining stable long-term predictions, given a dataset of realizations of the dynamics.",
"Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps?",
"Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as hidden Markov model (HMM), and deep neural networks.",
"We refer readers to a survey on time series forecasting by BID2 and the references therein.",
"A recurrent neural network (RNN), as well as its memory-based extensions such as the LSTM, is a class of models that have achieved good performance on sequence prediction tasks from demand forecasting BID5 to speech recognition BID15 and video analysis BID9 .",
"Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons.To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting.",
"These models have two key features: they",
"1) explicitly model the higher-order dynamics, by using a longer history of previous hidden states and high-order state interactions with multiplicative memory units; and",
"2) they are scalable by using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters, while mostly preserving the correlation structure of the full-rank model.In this work, we analyze Tensor-Train RNNs theoretically, and also experimentally validate them over a wide range of forecasting domains.",
"Our contributions can be summarized as follows:• We describe how TT-RNNs encode higher-order non-Markovian dynamics and high-order state interactions.",
"To address the memory issue, we propose a tensor-train (TT) decomposition that makes learning tractable and fast.•",
"We provide theoretical guarantees for the representation power of TT-RNNs for nonlinear dynamics, and obtain the connection between the target dynamics and TT-RNN approximation. In",
"contrast, no such theoretical results are known for standard recurrent networks.• We",
"validate TT-RNNs on simulated data and two real-world environments with nonlinear dynamics (climate and traffic). Here",
", we show that TT-RNNs can forecast more accurately for significantly longer time horizons compared to standard RNNs and LSTMs.",
"In this work, we considered forecasting under nonlinear dynamics.We propose a novel class of RNNs -TT-RNN.",
"We provide approximation guarantees for TT-RNN and characterize its representation power.",
"We demonstrate the benefits of TT-RNN to forecast accurately for significantly longer time horizon in both synthetic and real-world multivariate time series data.As we observed, chaotic dynamics still present a significant challenge to any sequential prediction model.",
"Hence, it would be interesting to study how to learn robust models for chaotic dynamics.",
"In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process.",
"It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.06666666269302368,
0.06451612710952759,
0.060606054961681366,
0.13793103396892548,
0.06666666269302368,
0.052631575614213943,
0,
0.05882352590560913,
0,
0.06896550953388214,
0,
0.1428571343421936,
0.11428570747375488,
0.1538461446762085,
0.04081632196903229,
0.17241379618644714,
0,
0.060606054961681366,
0.14814814925193787,
0,
0.0714285671710968,
0,
0,
0,
0.20000000298023224,
0.14814814925193787,
0,
0.043478257954120636,
0,
0,
0
] | HJJ0w--0W | true | [
"Accurate forecasting over very long time horizons using tensor-train RNNs"
] |
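The record above compresses high-order state transition tensors with the tensor-train (TT) decomposition. The sketch below shows, in plain numpy, how a full tensor is reconstructed from TT cores and how the parameter count drops; the shapes and TT ranks are illustrative only and are not taken from the paper.

```python
import numpy as np

def tt_full(cores):
    """Reconstruct a full tensor from its tensor-train cores.

    cores[k] has shape (r_k, n_k, r_{k+1}), with boundary ranks r_0 = r_K = 1.
    """
    out = cores[0]
    for core in cores[1:]:
        # Contract the trailing TT rank of `out` with the leading rank of the next core.
        out = np.tensordot(out, core, axes=([-1], [0]))
    # Drop the two boundary rank-1 axes.
    return np.squeeze(out, axis=(0, out.ndim - 1))

# A 3rd-order weight tensor of shape (8, 8, 8) stored with TT rank 2:
rng = np.random.default_rng(3)
shapes = [(1, 8, 2), (2, 8, 2), (2, 8, 1)]
cores = [rng.standard_normal(s) for s in shapes]
W = tt_full(cores)
print(W.shape)                           # (8, 8, 8)
full_params = 8 * 8 * 8                  # 512 parameters for the dense tensor
tt_params = sum(c.size for c in cores)   # 16 + 32 + 16 = 64 parameters in TT form
print(full_params, tt_params)
```

The same contraction idea is what lets a TT-RNN keep a high-order transition over several past hidden states without materializing the full transition tensor.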
[
"Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.",
"We propose a variational message-passing algorithm for variational inference in such models.",
"We make three contributions.",
"First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE).",
"Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE.",
"Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference.",
"By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.",
"To analyze real-world data, machine learning relies on models that can extract useful patterns.",
"Deep Neural Networks (DNNs) are a popular choice for this purpose because they can learn flexible representations.",
"Another popular choice are probabilistic graphical models (PGMs) which can find interpretable structures in the data.",
"Recent work on combining these two types of models hopes to exploit their complimentary strengths and provide powerful models that are also easy to interpret BID10 BID14 BID0 BID3 .To",
"apply such hybrid models to real-world problems, we need efficient algorithms that can extract useful structure from the data. However",
", the two fields of deep learning and PGMs traditionally use different types of algorithms. For deep",
"learning, stochastic-gradient methods are the most popular choice, e.g., those based on back-propagation. These algorithms",
"are not only widely applicable, but can also employ amortized inference to enable fast inference at test time BID17 BID12 . On the other hand",
", most popular algorithms for PGMs exploit the model's graphical conjugacy structure to gain computational efficiency, e.g., variational message passing (VMP) BID18 , expectation propagation BID16 , Kalman filtering BID4 BID5 , and more recently natural-gradient variational inference BID9 and stochastic variational inference BID8 . In short, the two",
"fields of deep learning and probabilistic modelling employ fundamentally different inferential strategies and a natural question is, whether we can design algorithms that combine their respective strengths.There have been several attempts to design such methods in the recent years, e.g., BID14 ; BID3 ; BID0 ; BID10 ; BID2 . Our work in this",
"paper is inspired by the previous work of BID10 that aims to combine message-passing, natural-gradient, and amortized inference. Our proposed method",
"in this paper simplifies and generalizes the method of BID10 .To do so, we propose",
"Structured Inference Networks (SIN) that incorporate the PGM structure in the standard inference networks used in variational auto-encoders (VAE) BID12 BID17 . We derive conditions",
"under which such inference networks can enable fast amortized inference similar to VAE. By using a recent VMP",
"method of BID11 , we The generative models are just like the decoder in VAE but they employ a structured prior, e.g., Fig. (a) has a mixture-model prior while Fig. (b) has a dynamical system prior. SINs, just like the encoder",
"in VAE, mimic the structure of the generative model by using parameters φ. One main difference is that",
"in SIN the arrows between y n and x n are reversed compared to the model, while rest of the arrows have the same direction.derive a variational message-passing algorithm whose messages automatically reduce to stochasticgradients for the deep components of the model, while perform natural-gradient updates for the PGM part. Overall, our algorithm enables",
"Structured, Amortized, and Natural-gradient (SAN) updates and therefore we call our algorithm the SAN algorithm. We show that our algorithm give",
"comparable performance to the method of BID10 while simplifying and generalizing it. The code to reproduce our results",
"is available at https://github.com/emtiyaz/vmp-for-svae/.",
"We propose an algorithm to simplify and generalize the algorithm of BID10 for models that contain both deep networks and graphical models.",
"Our proposed VMP algorithm enables structured, amortized, and natural-gradient updates given that the structured inference networks satisfy two conditions.",
"The two conditions derived in this paper generally hold for PGMs that do not force dense correlations in the latent variables x.",
"However, it is not clear how to extend our method to models where this is the case, e.g., Gaussian process models.",
"It is possible to use ideas from sparse Gaussian process models and we will investigate this in the future.",
"An additional issue is that our results are limited to small scale data.",
"We found that it is non-trivial to implement a message-passing framework that goes well with the deep learning framework.",
"We are going to pursue this direction in the future and investigate good platforms to integrate the capabilities of these two different flavors of algorithms."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.277777761220932,
0.5714285373687744,
0.0952380895614624,
0.34285715222358704,
0,
0.2222222238779068,
0.17142856121063232,
0.12903225421905518,
0.11764705181121826,
0.24242423474788666,
0.13333332538604736,
0.1621621549129486,
0.1875,
0.05882352590560913,
0.04999999329447746,
0.17241379618644714,
0.17910447716712952,
0.15789473056793213,
0.1875,
0.20512819290161133,
0.05882352590560913,
0.11764705181121826,
0.1764705777168274,
0.27586206793785095,
0.29411762952804565,
0.11764705181121826,
0,
0.6666666865348816,
0.2222222238779068,
0.15789473056793213,
0.10810810327529907,
0.1666666567325592,
0.06666666269302368,
0.3529411852359772,
0.1538461446762085
] | HyH9lbZAW | true | [
"We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model."
] |
[
"Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.",
"One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix.",
"However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance.",
"In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input.",
"We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks.",
"Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) its accuracy compared with the full-rank counterparts.",
"Modern neural networks usually contain millions of parameters BID4 BID8 , and they are difficult to be deployed on mobile devices with limited computation resources.",
"To solve this problem, model compression techniques are proposed in recent years.",
"Low-rank factorization is a popular way of reducing the matrix size.",
"It has been extensively explored in the literature BID5 BID6 BID3 BID10 .",
"Mathematically, a large weight matrix W ∈ R m×n is factorized to two small rank-d matrices U ∈ R m×d , V ∈ R n×d with W = U V T .",
"Since both U and V are dense, no sparsity support is required from specialized hardware.",
"It naturally fits the general-purpose, off-the-shelf CPUs and GPUs.To significantly reduce the model size and computation, the rank d in the low-rank factorization needs to be small.",
"However, a small rank can limit the expressiveness of the model BID9 and lead to worse performance.",
"To understand the limitations, given a n-dim feature vector h, we observe that DISPLAYFORM0 , is a linear projection from a high-dimensional space (n dims) to a low-dimensional space (d dims).",
"This can lead to a significant loss of information.",
"The conflict between the rank d and the model expressiveness prevents us from obtaining a both compact and accurate model.To address the dilemma, we propose to increase the expressiveness by learning an adaptive, inputdependent factorization, rather than performing a fixed factorization of a weight matrix.",
"To do so, we use a mixture of multiple low-rank factorizations.",
"The mixing weights are computed based on the input.",
"This creates an adaptive linear projection from a high-dimensional space to a low-dimensional space.",
"Compared to the conventional low-rank factorization, the proposed approach can significantly improve its performance while only introducing a small additional cost.",
"DISPLAYFORM1 where z can be treated as the middle layer.",
"Techniques like pooling can be applied to compute π to make it efficient."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.04651162400841713,
0.21052631735801697,
0.1621621549129486,
0.1304347813129425,
0.23529411852359772,
0.09756097197532654,
0.09090908616781235,
0,
0.06666666269302368,
0,
0.04651162400841713,
0.11764705181121826,
0.1860465109348297,
0.11428570747375488,
0.08888888359069824,
0.0714285671710968,
0.17543859779834747,
0.06666666269302368,
0,
0.06451612710952759,
0.1538461446762085,
0,
0.06451612710952759
] | B1eHgu-Fim | true | [
"A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact."
] |
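The record above replaces a fixed factorization W = U V^T with a mixture of low-rank factorizations whose coefficients depend on the input; the exact equations are elided in the extracted text (the DISPLAYFORM placeholders). The numpy sketch below is one plausible reading, not the authors' formulation: the softmax gate G and the way the mixture is applied to the factors are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_low_rank(h, Us, Vs, G):
    """One plausible form of y ~= W h using an input-dependent mixture of K rank-d factorizations.

    h  : (n,) input feature vector
    Us : list of K matrices of shape (m, d)
    Vs : list of K matrices of shape (n, d)
    G  : (K, n) gating matrix used to compute the mixture coefficients from the input
    """
    pi = softmax(G @ h)  # (K,) input-dependent mixture coefficients
    return sum(p * (U @ (V.T @ h)) for p, U, V in zip(pi, Us, Vs))

# Tiny usage example: m=8 outputs, n=16 inputs, rank d=2, K=3 factorizations.
rng = np.random.default_rng(0)
m, n, d, K = 8, 16, 2, 3
Us = [rng.standard_normal((m, d)) for _ in range(K)]
Vs = [rng.standard_normal((n, d)) for _ in range(K)]
G = rng.standard_normal((K, n))
h = rng.standard_normal(n)
print(adaptive_low_rank(h, Us, Vs, G).shape)  # (8,)
```

The design point the record makes is visible in the parameter count: K small factor pairs plus a gate are still far cheaper than a dense m×n matrix, while the input-dependent projection avoids the expressiveness loss of a single fixed rank-d map.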
[
"Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices.",
"We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs).",
"PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size.",
"Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model.",
"We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression.",
"Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.",
"Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network).",
"A plethora of work has investigated scaling deep learning from a compute-or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Alistarh et al., 2017; Wen et al., 2017; Wangni et al., 2018; .",
"However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced.",
"Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018) .",
"For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018) , which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012) .",
"This, combined with the memory wall-a lack of bandwidth between compute and memory-suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010) .",
"The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes.",
"In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets.",
"Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity.",
"PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application's needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels.",
"Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level.",
"As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy.",
"Overall, we make the following contributions:",
"1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks.",
"2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data.",
"PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth.",
"3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression.",
"This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance.",
"To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems.",
"Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats.",
"We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy.",
"PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches.",
"PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically.",
"While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.30434781312942505,
0.1599999964237213,
0.2222222238779068,
0.2181818187236786,
0.1904761791229248,
0.25,
0.10526315122842789,
0.1463414579629898,
0.08510638028383255,
0.17910447716712952,
0.1269841194152832,
0.16326530277729034,
0.19999998807907104,
0.14999999105930328,
0.2083333283662796,
0.1599999964237213,
0.2711864411830902,
0,
0.19672130048274994,
0.2702702581882477,
0.22727271914482117,
0.1860465109348297,
0.13333332538604736,
0.08695651590824127,
0.10256409645080566,
0.2857142686843872,
0.1428571343421936,
0.24390242993831635,
0.09090908616781235
] | S1e0ZlHYDB | true | [
"We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time"
] |
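The record above splits each training example into fidelity deltas and groups deltas of the same fidelity together, so a reader can fetch only as many groups as the desired fidelity requires. The sketch below illustrates that layout with placeholder byte strings; real PCRs would store progressive JPEG scans, which are not reproduced here, and the function names are invented for illustration.

```python
from typing import List

def build_pcr(examples: List[List[bytes]]) -> List[List[bytes]]:
    """Group per-example fidelity deltas by level: group[g] holds delta g of every example.

    examples[i] is a list of byte deltas of increasing fidelity for example i
    (e.g., the scans of a progressively compressed image). All examples are assumed
    to have the same number of deltas.
    """
    n_levels = len(examples[0])
    return [[ex[g] for ex in examples] for g in range(n_levels)]

def read_pcr(groups: List[List[bytes]], fidelity: int) -> List[bytes]:
    """Read only the first `fidelity` groups and reassemble each example at that fidelity."""
    n_examples = len(groups[0])
    return [b"".join(groups[g][i] for g in range(fidelity)) for i in range(n_examples)]

# Toy data: 3 examples, each split into 3 fidelity deltas (placeholder bytes, not real scans).
examples = [[f"ex{i}-scan{g};".encode() for g in range(3)] for i in range(3)]
groups = build_pcr(examples)
low_fidelity_batch = read_pcr(groups, fidelity=1)   # reads only 1/3 of the stored bytes
print(low_fidelity_batch[0])  # b'ex0-scan0;'
```

Because the low-fidelity groups sit contiguously, dropping fidelity translates directly into fewer bytes read from storage, which is the bandwidth saving the record reports.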
[
"It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist.",
"Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning?",
"In this work, we study this question and propose gradient rescaling (GR) to solve it.",
"GR modifies the magnitude of logit vector’s gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.",
"Apart from regularisation, we connect GR to examples weighting and designing robust loss functions.",
"We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels.",
"It is also significantly superior to standard regularisers in both clean and abnormal settings.",
"Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.",
"DNNs have been successfully applied in diverse applications (Socher et al., 2011; Krizhevsky et al., 2012; LeCun et al., 2015) .",
"However, their success is heavily reliant on the quality of training data, especially accurate semantic labels for learning supervision.",
"Unfortunately, on the one hand, maintaining the quality of semantic labels as the scale of training data increases is expensive and almost impossible when the scale becomes excessively large.",
"On the other hand, it has been demonstrated that DNNs are capable of memorising the whole training data even when all training labels are random (Zhang et al., 2017) .",
"Therefore, DNNs struggle to discern meaningful data patterns and ignore semantically abnormal examples 1 simultaneously (Krueger et al., 2017; Arpit et al., 2017) .",
"Consequently, it becomes an inevitable demand for DNNs to hold robustness when training data contains anomalies (Larsen et al., 1998; Natarajan et al., 2013; Sukhbaatar & Fergus, 2014; Xiao et al., 2015; Patrini et al., 2017; Vahdat, 2017; Veit et al., 2017; Li et al., 2017) .",
"Recently, great progress has been made towards robustness against anomalies when training DNNs (Krueger et al., 2017) .",
"There are three appealing perspectives in terms of their simplicity and effectiveness:",
"1) Examples weighting.",
"For example, knowledge distilling from auxiliary models is popular for heuristically designing weighting schemes.",
"However, it is challenging to select and train reliable auxiliary models in practice (Li et al., 2017; Malach & Shalev-Shwartz, 2017; Jiang et al., 2018; Ren et al., 2018; Han et al., 2018b) .",
"2) Robust loss functions (Van Rooyen et al., 2015; Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019b) ; 3) Explicit regularisation techniques (Arpit et al., 2017; .",
"Although designing robust losses or explicit regularisation is easier and more flexible in practice, the performance is not the optimal yet.",
"1 One training example is composed of an input and its corresponding label.",
"A semantically abnormal example means the input is semantically unrelated to its label, which may come from corrupted input or label.",
"For example, in Figure 3 in the supplementary material:",
"1) Out-of-distribution anomalies: An image may contain only background or an object which does not belong to any training class;",
"2) In-distribution anomalies: An image of class a may be annotated to class b or an image may contain more than one semantic object.",
"Regarding examples weighting, there is a core research question which is not well answered yet:",
"What training examples should be focused on and how large the emphasis spread should be?",
"In this work, we present a thorough study of this practical question under different settings.",
"For better analysis, we propose two basic and necessary concepts: emphasis focus and spread with explicit definition in Sec. 3.2.",
"They are conceptually introduced as follows:",
"Emphasis focus.",
"It is a common practice to focus on harder instances when training DNNs (Shrivastava et al., 2016; Lin et al., 2017) .",
"When a dataset is clean, it achieves faster convergence and better performance to emphasise on harder examples because they own larger gradient magnitude, which means more information and a larger update step for model's parameters.",
"However, when severe noise exists, as demonstrated in (Krueger et al., 2017; Arpit et al., 2017) , DNNs learn simple meaningful patterns first before memorising abnormal ones.",
"In other words, anomalies are harder to fit and own larger gradient magnitude in the later stage.",
"Consequently, if we use the default sample weighting in categorical cross entropy (CCE) where harder samples obtain higher weights, anomalies tend to be fitted well especially when a network has large enough capacity.",
"That is why we need to move the emphasis focus towards relatively easier ones, which serves as emphasis regularisation.",
"Emphasis spread.",
"We term the weighting variance of training examples emphasis spread.",
"The key concept is that we should not treat all examples equally, neither should we let only a few be emphasised and contribute to the training.",
"Therefore, when emphasis focus changes, the emphasis spread should be adjusted accordingly.",
"We integrate emphasis focus and spread into a unified example weighting framework.",
"Emphasis focus defines what training examples own higher weights while emphasis spread indicates how large variance over their weights.",
"Specifically, we propose gradient rescaling (GR), which modifies the magnitude of logit vector's gradient.",
"The logit vector is the output of the last fully connected (FC) layer of a network.",
"We remark that we do not design the weighting scheme heuristically from scratch.",
"Instead, it is naturally motivated by the gradient analysis of several loss functions.",
"Interestingly, GR can be naturally connected to examples weighting, robust losses, explicit regularisation:",
"1) The gradient magnitude of logit vector can be regarded as weight assignment that is built-in in loss functions (Gopal, 2016; Alain et al., 2016; Zhang et al., 2018b) .",
"Therefore, rescaling the gradient magnitude equals to adjusting the weights of examples;",
"2) A specific loss function owns a fixed gradient derivation.",
"Adjusting the gradient can be treated as a more direct and flexible way of modifying optimisation objectives;",
"3) Instead of focusing on harder examples 2 by default, we can adjust emphasis focus to relative easier ones when noise is severe.",
"GR serves as emphasis regularisation and is different from standard regularisers, e.g., L2 weight decay constraints on weight parameters and Dropout samples neural units randomly (Srivastava et al., 2014) ; GR is simple yet effective.",
"We demonstrate its effectiveness on diverse computer vision tasks using different net architectures:",
"1) Image classification with clean training data;",
"2) Image classification with synthetic symmetric label noise, which is more challenging than asymmetric noise evaluated by (Vahdat, 2017; ; 3) Image classification with real-world unknown anomalies, which may contain open-set noise , e.g., images with only background, or outliers, etc.",
";",
"4) Video person re-identification, a video retrieval task containing diverse anomalies.",
"Beyond, we show that GR is notably better than other standard regularisers, e.g., L2 weight decay and dropout.",
"Besides, to comprehensively understand GR's behaviours, we present extensive ablation studies.",
"Main contribution.",
"Intuitively and principally, we claim that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.",
"To the best of our knowledge, we are the first to thoroughly study and analyse them together in a unified framework.",
"In this work, we present three main contributions:",
"1) We analyse and answer a core research question: What training examples should be focused on and how large the emphasis spread should be?",
"2) We uncover and analyse that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.",
"Consequently, we propose a simple yet effective gradient rescaling framework serving as emphasis regularisation.",
"3) Extensive experiments on different tasks using different network architectures are reported for better understanding and demonstration of GR's effectiveness, which are also valuable for applying GR in practice.",
"(Zheng et al., 2016) .",
"Out-of-distribution anomalies:",
"1) The first image in the 3rd row contains only background and no semantic information at all."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rylUOn4Yvr | true | [
"ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE"
] |
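The record above proposes rescaling the magnitude of the logit vector's gradient so that the emphasis focus and spread over training examples can be controlled explicitly. The numpy sketch below shows the idea on a single example: the norm of the cross-entropy logit gradient acts as the built-in example weight, and it is replaced by a chosen emphasis function. The Gaussian-shaped emphasis used here is only an illustration of shifting the focus toward easier examples, not the paper's GR formula.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_logit_gradient(logits, label):
    """Gradient of categorical cross entropy w.r.t. the logit vector: p - onehot(label)."""
    p = softmax(logits)
    g = p.copy()
    g[label] -= 1.0
    return g

def rescaled_gradient(logits, label, emphasis):
    """Rescale the logit gradient so the per-example weight is emphasis(p_y) instead of
    the CCE default, which grows as the example gets harder (i.e. as p_y shrinks)."""
    g = ce_logit_gradient(logits, label)
    norm = np.linalg.norm(g) + 1e-12       # CCE's built-in example weight
    p_y = softmax(logits)[label]           # probability of the annotated class
    return g / norm * emphasis(p_y)

# Example emphasis: peak the weight on moderately easy examples (p_y around 0.7),
# the kind of shift the record argues for when label noise is severe.
emphasis = lambda p_y: np.exp(-8.0 * (p_y - 0.7) ** 2)

logits = np.array([2.0, 0.5, -1.0])
print(rescaled_gradient(logits, label=0, emphasis=emphasis))
```

Changing only the gradient magnitude leaves its direction untouched, which is why the record can describe the scheme interchangeably as example weighting, a modified loss, or a regulariser.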
[
"Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images.",
"In most applications, GAN models share two aspects in common.",
"On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions.",
"On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks.",
"The goal of this paper is to disentangle the contribution of these two factors to the success of GANs.",
"In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems.",
"Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.",
"Generative Adversarial Networks (GANs) BID15 are a powerful framework to learn generative models of natural images.",
"GANs learn these generative models by setting up an adversarial game between two learning machines.",
"On the one hand, a generator plays to transform noise vectors into fake samples, which resemble real samples drawn from a distribution of natural images.",
"On the other hand, a discriminator plays to distinguish between real and fake samples.",
"During training, the generator and the discriminator learn in turns.",
"First, the discriminator learns to assign high scores to real samples, and low scores to fake samples.",
"Then, the generator learns to increase the scores of fake samples, as to fool the discriminator.",
"After proper training, the generator is able to produce realistic natural images from noise vectors.Recently, GANs have been used to produce high-quality images resembling handwritten digits, human faces, and house interiors BID36 .",
"Furthermore, GANs exhibit three strong signs of generalization.",
"First, the generator translates linear interpolations in the noise space into semantic interpolations in the image space.",
"In other words, a linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images.",
"Second, the generator allows linear arithmetic in the noise space.",
"Similarly to word embeddings BID31 , linear arithmetic indicates that the generator organizes the noise space to disentangle the nonlinear factors of variation of natural images into linear statistics.",
"Third, the generator is able to to synthesize new images that resemble those of the data distribution.",
"This allows for applications such as image in-painting BID18 and super-resolution BID26 .Despite",
"their success, training and evaluating GANs is notoriously difficult. The adversarial",
"optimization problem implemented by GANs is sensitive to random initialization, architectural choices, and hyper-parameter settings. In many cases,",
"a fair amount of human care is necessary to find the correct configuration to train a GAN in a particular dataset. It is common to",
"observe generators with similar architectures and hyper-parameters to exhibit dramatically different behaviors. Even when properly",
"trained, the resulting generator may synthesize samples that resemble only a few localized regions (or modes) of the data distribution BID14 . While several advances",
"have been made to stabilize the training of GANs BID37 , this task remains more art than science.The difficulty of training GANs is aggravated by the challenges in their evaluation: since evaluating the likelihood of a GAN with respect to the data is an intractable problem, the current gold standard to evaluate the quality of GANs is to eyeball the samples produced by the generator. The evaluation of discriminators",
"is also difficult, since their visual features do not always transfer well to supervised tasks BID12 BID13 . Finally, the application of GANs",
"to non-image data has been relatively limited.Research question To model natural images with GANs, the generator and discriminator are commonly parametrized as deep Convolutional Networks (convnets) BID24 . Therefore, it is reasonable to hypothesize",
"that the reasons for the success of GANs in modeling natural images come from two complementary sources: (A1) Leveraging the powerful inductive bias of deep convnets. (A2) The adversarial training protocol.This",
"work",
"attempts to disentangle the factors of success (A1) and (A2) in GAN models. Specifically, we propose and study one algorithm",
"that relies on (A1) and avoids (A2), but still obtains competitive results when compared to a GAN.",
"The experimental results presented in this work suggest that, in the image domain, we can recover many of the properties of GAN models by using convnets trained with simple reconstruction losses.",
"While this does not invalidate the promise of GANs as generic models of uncertainty or as methods for building generative models, our results suggest that, in order to more fully test the adversarial construction, research needs to move beyond images and convnets.",
"On the other hand, practitioners who care only about generating images for a particular application, and find that the parameterized discriminator does improve their results can use reconstruction losses in their model searches, alleviating some of the instability of GAN training.While the visual quality of the results are promising, especially on the CelebA dataset, they are not yet to the level of the results obtained by GANs on the LSUN bedrooms.",
"This suggest several research directions: one possibility, suggested by 3, is that being able to cover the entire dataset is too onerous a task if all that is required is to generate a few nice samples.",
"In that figure we see that GANs have trouble reconstructing randomly chosen images at the same level of fidelity as their generations.",
"However, GANs can produce good images after a single pass through the data with SGD.",
"In future work we hope to better understand the tension between these two observations.",
"There are many possibilities for improving the quality of GLO samples beyond understanding the effects of coverage.",
"For example other loss functions (e.g. a VGG metric, as in BID32 ), model architectures (here we stayed close to DCGAN for ease of comparison), and more sophisticated sampling methods after training the model all may improve the visual quality of the samples.There is also much work to be done in adding structure to the Z space.",
"Because the methods here keep track of the correspondence between samples and their representatives, and because the Z space is free, we hope to be able to organize the Z in interesting ways as we train."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.04999999701976776,
0.25925925374031067,
0.17391303181648254,
0.13333332538604736,
0.1428571343421936,
0.35483869910240173,
0.08695651590824127,
0.08888888359069824,
0.2222222238779068,
0.13636362552642822,
0.1538461446762085,
0.09090908616781235,
0.1395348757505417,
0.19999998807907104,
0.10526315122842789,
0.1428571343421936,
0.17391303181648254,
0.1538461446762085,
0.14814814925193787,
0.13333332538604736,
0.04651162400841713,
0.19512194395065308,
0.1249999925494194,
0.1599999964237213,
0.08888888359069824,
0.15094339847564697,
0.202531635761261,
0.11538460850715637,
0.1269841194152832,
0.16949151456356049,
0.16326530277729034,
0.12765957415103912,
0.3103448152542114,
0.1764705777168274,
0.20930232107639313,
0.06666666269302368,
0.11764705181121826,
0.17777776718139648,
0.045454539358615875,
0.13333332538604736,
0.14814814925193787,
0.1355932205915451
] | ryj38zWRb | true | [
"Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads many of the desirable properties of a GAN."
] |
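The following is a minimal, illustrative sketch of the Generative Latent Optimization (GLO) training loop summarized above: one learnable latent vector per training image, jointly optimized with the generator under a reconstruction loss, with the latents kept inside the unit ball. The toy MLP generator, the sizes, and the plain L2 loss are assumptions for illustration, not the authors' exact setup.

```python
# Minimal GLO sketch (illustrative, not the authors' code).
# Assumptions: a toy MLP "generator", L2 reconstruction loss, unit-ball projection of latents.
import torch
import torch.nn as nn

def project_to_unit_ball(z):
    # Keep each latent inside the L2 unit ball after every update.
    norms = z.norm(dim=1, keepdim=True).clamp(min=1.0)
    return z / norms

n_images, z_dim, x_dim = 256, 32, 784          # toy sizes (assumed)
images = torch.rand(n_images, x_dim)           # stand-in for real training images

generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
latents = nn.Parameter(torch.randn(n_images, z_dim) * 0.01)  # one learnable code per image

opt = torch.optim.Adam(list(generator.parameters()) + [latents], lr=1e-3)

for step in range(1000):
    idx = torch.randint(0, n_images, (64,))    # minibatch of image indices
    recon = generator(latents[idx])
    loss = ((recon - images[idx]) ** 2).mean() # simple reconstruction loss (other metrics could be swapped in)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        latents.data = project_to_unit_ball(latents.data)
```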
[
"In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical performance of MMD GAN.",
"Different from common forests with deterministic routings, a probabilistic routing variant is used in our innovated random-forest kernel, which is possible to merge with the CNN frameworks.",
"Our proposed random-forest kernel has the following advantages: From the perspective of random forest, the output of GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning.",
"In the aspect of kernel method, random-forest kernel is proved to be characteristic, and therefore suitable for the MMD structure.",
"Besides, being an asymmetric kernel, our random-forest kernel is much more flexible, in terms of capturing the differences between distributions.",
"Sharing the advantages of CNN, kernel method, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performances on CIFAR-10, CelebA and LSUN bedroom data sets.",
"Furthermore, for the sake of completeness, we also put forward comprehensive theoretical analysis to support our experimental results.",
"Generative adversarial nets (GANs; Goodfellow et al., 2014) are well-known generative models, which largely attribute to the sophisticated design of a generator and a discriminator which are trained jointly in an adversarial fashion.",
"Nowadays GANs are intensely used in a variety of practical tasks, such as image-to-image translation (Tang et al., 2019; Mo et al., 2019) ; 3D reconstruction (Gecer et al., 2019) ; video prediction (Kwon & Park, 2019) ; text-to-image generation (Zhu et al., 2019) ; just to name a few.",
"However, it's well-known that the training of GANs is a little tricky, see e.g. (Salimans et al., 2016) .",
"One reason of instability of GAN training lies in the distance used in discriminator to measure the divergence between the generated distribution and the target distribution.",
"For instance, concerning with the Jensen-Shannon divergence based GANs proposed in Goodfellow et al. (2014) , points out that if the generated distribution and the target distribution are supported on manifolds where the measure of intersection is zero, Jensen-Shannon divergence will be constant and the KL divergences be infinite.",
"Consequently, the generator fails to obtain enough useful gradient to update, which undermines GAN training.",
"Moreover, two non-overlapping distributions may be judged to be quite different by the Jensen-Shannon divergence, even if they are nearby with high probability.",
"As a result, to better measure the difference between two distributions, Integral Probability Metrics (IPM) based GANs have been proposed.",
"For instance, utilizes Wasserstein distance in GAN discriminator, while Li et al. (2017) adopts maximum mean discrepancy (MMD), managing to project and discriminate data in reproducing kernel Hilbert space (RKHS).",
"To mention, the RKHS with characteristic kernels including Gaussian RBF kernel (Li et al., 2017) and rational quadratic kernel (Bińkowski et al., 2018) has strong power in the discrimination of two distributions, see e.g. (Sriperumbudur et al., 2010) .",
"In this paper, inspired by non-linear discriminating power of decision forests, we propose a new type of kernel named random-forest kernel to improve the performance of MMD GAN discriminator.",
"In order to fit with back-propagation training procedure, we borrow the decision forest model with stochastic and differentiable decision trees from Kontschieder et al. (2015) in our random-forest kernel.",
"To be specific, each dimension of the GAN discriminator outputs is randomly connected to one internal node of a soft decision forest, serving as the candidate to-be-split dimension.",
"Then, the tree is split with a soft decision function through a probabilistic routing.",
"Other than the typical decision forest used in classification tasks where the value of each leaf node is a label, the leaf value of our random forest is the probability of a sample x i falling into a certain leaf node of the forest.",
"If the output of the discriminator is denoted as h θ N (x i ) and the probability output of the t-th tree is denoted as µ t (h θ N (x i ); θ F ), the random forest kernel k RF can be formulated as",
"where T is the total number of trees in the forest, θ N and θ F denote the parameters of the GAN discriminator and the random forest respectively.",
"Recall that random forest and deep neural networks are first combined in Kontschieder et al. (2015) , where differentiable decision tree model and deep convolutional networks are trained together in an end-to-end manner to solve classification tasks.",
"Then, Shen et al. (2017) extends the idea to label distribution learning, and Shen et al. (2018) makes further extensions in regression regime.",
"Moreover, Zuo & Drummond (2017) , Zuo et al. (2018) and Avraham et al. (2019) also introduce deep decision forests.",
"Apart from the typical ensemble method that averages the results across trees, they aggregate the results by multiplication.",
"As for the combination of random forest and GAN, Zuo et al. (2018) introduce forests structure in GAN discriminator, combining CNN network and forest as a composited classifier, while Avraham et al. (2019) uses forest structure as one of non-linear mapping functions in regularization part.",
"On the other hand, in the aspect of relationship between random forest and kernel method, Breiman (2000) initiates the literature concerning the link.",
"He shows the fact that a purely random tree partition is equivalent to a kernel acting on the true margin, of which form can be viewed as the probability of two samples falling into the same terminal node.",
"Shen & Vogelstein (2018) proves that random forest kernel is characteristic.",
"Some more theoretical analysis can be found in Davies & Ghahramani (2014) , Arlot & Genuer (2014) , Scornet (2016) .",
"However, despite their theoretical breakthroughs, forest decision functions used in these forest kernels are non-differentiable hard margins rather than differentiable soft ones, and thus cannot be directly used in back propagation regime.",
"To the best of our knowledge, MMD GAN with our proposed random-forest kernel is the first to combine random forest with deep neural network in the form of kernel MMD GAN.",
"Through theoretical analysis and numerical experiments, we evaluate the effectiveness of MMD GAN with our random-forest kernel.",
"From the theoretical point of view, our random-forest kernel enjoys the property of being characteristic, and the gradient estimators used in the training process of random-forest kernel GAN are unbiased.",
"In numerical experiments, we evaluate our random-forest kernel under the setting of both the original MMD GAN (Li et al., 2017) and the one with repulsive loss (Wang et al., 2019) .",
"Besides, we also compare our random-forest kernel with Gaussian RBF kernel (Li et al., 2017) , rational quadratic kernel (Bińkowski et al., 2018) , and bounded RBF kernel (Wang et al., 2019) .",
"As a result, MMD GAN with our random-forest kernel outperforms its counterparts with respect to both accuracy and training stability.",
"This paper is organized as follows.",
"First of all, we introduce some preliminaries of MMD GAN in Section 2.",
"Then we review the concept of deep random forest and show how it is embedded within a CNN in 3.1.",
"After that, random-forest kernels and MMD GAN with random-forest kernels are proposed in 3.2 and 3.3 respectively.",
"Besides, the training techniques of MMD GAN with random-forest kernel are demonstrated in Section 3.4 and the theoretical results are shown in Section 3.5.",
"Eventually, Section 4 presents the experimental setups and results, including the comparison between our proposed random-forest kernel and other kernels.",
"In addition, all detailed theoretical proofs are included in the Appendices.",
"The generative model captures the data distribution P X , by building a mapping function G : Z → X from a prior noise distribution P Z to data space.",
"While the discriminative model D : X → R is used to distinguish generated distribution P Y from real data distribution P X .",
"Taking X, X ∼ P X and Y, Y ∼ P Y := P G (Z) where Y := G(Z) and Y := G(Z ), the squared MMD is expressed as",
"The loss of generator and discriminator in MMD GAN proposed in Li et al. (2017) is:",
"Wang et al. (2019) proposed MMD GAN with repulsive loss, where the objective functions for G and D are:",
"we can write an unbiased estimator of the squared MMD in terms of k as",
"When k is a characteristic kernel, we have MMD 2 [P X , P Y ] ≥ 0 with equality applies if and only if P X = P Y .",
"The best-known characteristic kernels are gaussian RBF kernel and rational quadratic kernel (Bińkowski et al., 2018) ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.1818181723356247,
0.125,
0.23076923191547394,
0.1428571343421936,
0.1764705777168274,
0,
0.052631575614213943,
0.08695651590824127,
0.1428571343421936,
0,
0.0833333283662796,
0,
0.06666666269302368,
0.1428571343421936,
0.054054051637649536,
0.0952380895614624,
0.29411762952804565,
0.17142856121063232,
0.060606058686971664,
0.1904761791229248,
0.0555555522441864,
0.04999999701976776,
0,
0,
0,
0,
0,
0.045454543083906174,
0.0714285671710968,
0.09756097197532654,
0.10526315122842789,
0,
0,
0.25806450843811035,
0.3199999928474426,
0.12903225421905518,
0.2222222238779068,
0.19354838132858276,
0.37037035822868347,
0,
0.09999999403953552,
0.06896550953388214,
0.27272728085517883,
0.27586206793785095,
0.1538461446762085,
0,
0.0624999962747097,
0,
0.06896550953388214,
0.08695651590824127,
0.14814814925193787,
0.09090908616781235,
0.1818181723356247,
0.0833333283662796
] | HJxhWa4KDr | true | [
"Equip MMD GANs with a new random-forest kernel."
] |
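Since the record above leaves the unbiased squared-MMD estimator implicit, here is a small sketch of the standard estimator it refers to. The Gaussian RBF kernel and the toy data are assumptions; any characteristic kernel (such as the proposed random-forest kernel) could be plugged in as `kernel`.

```python
# Illustrative numpy sketch of the standard unbiased MMD^2 estimator.
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, kernel=rbf_kernel):
    m, n = len(x), len(y)
    k_xx = kernel(x, x)
    k_yy = kernel(y, y)
    k_xy = kernel(x, y)
    # Drop the diagonal terms so that the estimator is unbiased.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    term_xy = 2.0 * k_xy.mean()
    return term_xx + term_yy - term_xy

x = np.random.randn(100, 5)          # samples from P_X
y = np.random.randn(100, 5) + 0.5    # samples from P_Y
print(mmd2_unbiased(x, y))
```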
[
"Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic.",
"However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function.",
"A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target. \n",
"We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic.\n",
"The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation.\n",
"Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks.",
"In recent years it was shown that reinforcement learning algorithms are capable of solving very complex tasks, surpassing human expert performance in games like Go , Starcraft (DeepMind) or Dota (OpenAI).",
"However, usually a large amount of training time is needed to achieve these results (e.g. 45,000 years of gameplay for Dota).",
"For many important problems (e.g. in robotics) it is prohibitively expensive for the reinforcement learning agent to interact with its environment that much.",
"This makes it difficult to apply such algorithms in the real world.",
"Off-policy reinforcement learning holds the promise of being more data-efficient than on-policy methods as old experience can be reused several times for training.",
"Unfortunately, the combination of temporal-difference (TD) learning, function approximation and off-policy training can be unstable, which is why it has been called the deadly triad (Sutton & Barto, 2018; van Hasselt et al., 2018) .",
"If the action space is discrete, solutions like Double DQN (Van Hasselt et al., 2016) are very effective at preventing divergence of the value estimates by eliminating an otherwise prevailing overestimation bias.",
"For continuous action spaces, which characterize many tasks, it was shown that Double DQN can not solve the overestimation problem Fujimoto et al. (2018) .",
"In an actor-critic setting it is important that the value estimates of the critic are accurate in order for the actor to learn a policy from the critic.",
"The TD3 Fujimoto et al. (2018) algorithm uses Clipped Double Q-learning (CDQ) to produce a critic without an overestimation bias, which greatly improved the performance of the algorithm.",
"In CDQ two critics are trained at the same time and the TD target for both of them is the minimum over the two single TD targets.",
"While the authors note that the CDQ critic update tends to underestimate the true values, this is not further examined.",
"We show that this underestimation bias occurs in practice and propose a method that accounts for over-and underestimation of the critic at the same time.",
"Similarly to CDQ we train two function approximators for the Q-values, but we regress them not on the same quantity.",
"The TD target for each of the two critics is a weighted average of the single TD target for that critic and the TD target from CDQ.",
"The weighting parameter is learned by comparing the value estimates for the most recent state-action pairs with the observed discounted returns for these pairs.",
"As the one term of the average has an underestimation bias while the other one has an overestimation bias, the weighted average balances these biases and we show empirically that this method obtains much more accurate estimates of the Q-values.",
"We verify that the more accurate critics improve the performance of the reinforcement learning agent as our method achieves state of the art results on a range of continuous control tasks from OpenAi gym Brockman et al. (2016) .",
"To guarantee reproducibility we open source our code which is easy to execute and evaluate our algorithm on a large number of different random seeds.",
"We showed that Clipped Double Q-learning (CDQ) induces an underestimation bias in the critic, while an overestimation bias occurs if just one Q-network is used.",
"From that we derived the Balanced Clipped Double Q-learning algorithm (BCDQ) that updates the critic through a weighted average of the two mentioned update mechanisms.",
"The weighting parameter is adjusted over the course of training by comparing the Q-values of recently visited state-action pairs with the actual discounted return observed from that pair onwards.",
"It was shown that BCDQ achieves much more accurate value estimates by adjusting the weighting parameter.",
"Replacing CDQ with BCDQ leads to the Balanced Twin Delayed Deep Deterministic policy gradient algorithm (BTD3).",
"Our method achieves state of the art performance on a range of continuous control tasks.",
"Furthermore, BCDQ can be added to any other actor-critic algorithm while it only minimally increases the computational complexity compared to CDQ.",
"It is also be possible to use BCDQ for discrete action spaces.",
"Evaluating that approach is an interesting area for future research."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4166666567325592,
0.06896550953388214,
0.12121211737394333,
0.10526315122842789,
0.054054051637649536,
0.20689654350280762,
0.1463414579629898,
0.06451612710952759,
0.23529411852359772,
0.09090908616781235,
0.24242423474788666,
0,
0.0476190447807312,
0,
0.29411762952804565,
0.0555555522441864,
0.0624999962747097,
0.0714285671710968,
0.25,
0.0714285671710968,
0.13793103396892548,
0.13333332538604736,
0.19512194395065308,
0.23255813121795654,
0,
0.060606054961681366,
0.0624999962747097,
0,
0.23076923191547394,
0,
0.0833333283662796,
0,
0.09090908616781235,
0.09999999403953552
] | r1xyayrtDS | true | [
"A method for more accurate critic estimates in reinforcement learning."
] |
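A minimal sketch of the balanced TD target described above: a weighted average of the Clipped Double Q-learning target (minimum of two target critics) and a single-critic target, with the weight nudged toward whichever side corrects the measured bias between recent value estimates and observed discounted returns. The exact adjustment rule and step size are assumptions.

```python
# Illustrative sketch of a balanced clipped double-Q target (not the authors' exact code).
import numpy as np

def balanced_targets(rewards, q1_next, q2_next, beta, gamma=0.99, dones=None):
    # rewards, q1_next, q2_next, dones: numpy arrays of equal length.
    dones = np.zeros_like(rewards) if dones is None else dones
    cdq_target = np.minimum(q1_next, q2_next)      # underestimation-biased component
    single_target = q1_next                        # overestimation-biased component
    mixed = beta * cdq_target + (1.0 - beta) * single_target
    return rewards + gamma * (1.0 - dones) * mixed

def adjust_beta(beta, value_estimates, observed_returns, step=0.01):
    # If the critic overestimates recent returns, lean more on the clipped (min) target; otherwise less.
    bias = np.mean(value_estimates - observed_returns)
    beta = beta + step if bias > 0 else beta - step
    return float(np.clip(beta, 0.0, 1.0))
```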
[
"We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos.",
"As part of this framework, we construct ImageNet-Vid-Robust, a human-expert--reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset.",
"We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16\\%.",
"Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points.",
"Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions.",
"Despite their strong performance on various computer vision benchmarks, convolutional neural networks (CNNs) still have many troubling failure modes.",
"At one extreme,`padversarial examples can cause large drops in accuracy for state of the art models with visually imperceptible changes to the input image BID4 .",
"But since carefully crafted`pperturbations are unlikely to occur naturally in the real world, they usually do not pose a problem outside a fully adversarial context.To study more realistic failure modes, researchers have investigated benign image perturbations such as rotations & translations, colorspace changes, and various image corruptions [7, 8, 4] .",
"However, it is still unclear whether these perturbations reflect the robustness challenges commonly arising in real data since the perturbations also rely on synthetic image modifications.Recent work has therefore turned to videos as a source of naturally occurring perturbations of images [6, BID0 . In contrast to other failure modes, the perturbed images are taken from existing image data without further modifications that make the task more difficult. As a result, robustness to such perturbations directly corresponds to performance improvements on real data. However, it is currently unclear to what extent such video perturbations pose a significant robustness challenge. Azulay and Weiss BID0 only provide anecdotal evidence from a small number of videos. While [6] work with a larger video dataset to obtain accuracy estimates, they only observe a small drop in accuracy of around 2.7% on videoperturbed images, suggesting that small perturbations in videos may not actually reduce the accuracy of current CNNs significantly.We address this question by conducting a thorough evaluation of robustness to natural perturbations arising in videos.",
"As a cornerstone of our investigation, we introduce ImageNet-Vid-Robust, a carefully curated subset of ImageNet-Vid [12] .",
"In contrast to earlier work, all images in ImageNet-Vid-Robust were screened by a set of expert labelers to ensure a high annotation quality and to minimize selection biases that arise when filtering with CNNs.",
"Overall, ImageNet-Vid-Robust contains 22,668 images grouped into 1,145 sets of temporally adjacent and visually similar images of a total of 30 classes.We then utilize ImageNet-Vid-Robust to measure the accuracy of current CNNs to small, naturally occurring perturbations.",
"Our testbed contains over 40 different model types, varying both architecture and training methodology (adversarial training, data augmentation, etc).",
"We find that natural perturbations from ImageNet-Vid-Robust induce a median 16% accuracy drop for classification tasks and a median 14% drop in mAP for detection tasks.",
"Even for the best-performing model, we observe an accuracy drop of 14% -significantly larger than the 2.7% drop in [6] over the same time horizon in the video.Our results show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs.",
"As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency (e.g., autonomous vehicles), ensuring reliable predictions on every frame of a video is an important direction for future work."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.21276594698429108,
0.25,
0.26923075318336487,
0.25,
0,
0.23255813121795654,
0.1764705777168274,
0.1818181723356247,
0.1818181723356247,
0.19999998807907104,
0.3529411852359772,
0,
0.25,
0.3050847351551056,
0.14035087823867798
] | SklRoy3qaN | true | [
"We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos."
] |
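One simple way to compute the kind of accuracy drop reported above is to compare standard accuracy on anchor frames against a worst-case accuracy that also requires correct predictions on every perceptually similar neighbor frame. The data layout below is an assumption, and the benchmark's actual metric may differ in detail.

```python
# Illustrative sketch: each entry of `frame_sets` holds the predicted labels for one anchor
# frame followed by its perceptually similar neighbor frames; `labels` holds the true class.
def accuracy_drop(frame_sets, labels):
    benign = sum(preds[0] == y for preds, y in zip(frame_sets, labels))
    # Worst case: the prediction must be correct on the anchor and on every nearby frame.
    robust = sum(all(p == y for p in preds) for preds, y in zip(frame_sets, labels))
    n = len(labels)
    return benign / n, robust / n, (benign - robust) / n

sets = [[3, 3, 3], [7, 7, 1], [2, 2, 2]]   # toy predictions per frame set
truth = [3, 7, 2]
print(accuracy_drop(sets, truth))          # -> (1.0, 0.666..., 0.333...)
```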
[
"Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey.",
"Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data.",
"The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.",
"In this paper, we propose the SuperTML method, which borrows the idea of Super Characters method and two-dimensional embedding to address the problem of classification on tabular data.",
"For each input of tabular data, the features are first projected into two-dimensional embedding like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification.",
"Experimental results have shown that the proposed SuperTML method have achieved state-of-the-art results on both large and small datasets.",
"In data science, data is categorized into structured data and unstructured data.",
"Structured data is also known as tabular data, and the terms will be used interchangeably.",
"Anthony Goldbloom, the founder and CEO of Kaggle observed that winning techniques have been divided by whether the data was structured or unstructured BID12 .",
"Currently, DNN models are widely applied for usage on unstructured data such as image, speech, and text.",
"According to Anthony, \"When the data is unstructured, its definitely CNNs and RNNs that are carrying the day\" BID12 .",
"The successful CNN model in the ImageNet competition BID8 has outperformed human Preliminary work.",
"Under review by the International Conference on Machine Learning (ICML).",
"Do not distribute.for image classification task by ResNet BID6 since 2015.On the other side of the spectrum, machine learning models such as Support Vector Machine (SVM), Gradient Boosting Trees (GBT), Random Forest, and Logistic Regression, have been used to process structured data.",
"According to a recent survey of 14,000 data scientists by Kaggle (2017) , a subdivision of structured data known as relational data is reported as the most popular type of data in industry, with at least 65% working daily with relational data.",
"Regarding structured data competitions, Anthony says that currently XGBoost is winning practically every competition in the structured data category BID4 .",
"XGBoost BID2 is one popular package implementing the Gradient Boosting method.Recent research has tried using one-dimensional embedding and implementing RNNs or one-dimensional CNNs to address the TML (Tabular Machine Learning) tasks, or tasks that deal with structured data processing BID7 BID11 , and also categorical embedding for tabular data with categorical features BID5 .",
"However, this reliance upon onedimensional embeddings may soon come to change.",
"Recent NLP research has shown that the two-dimensional embedding of the Super Characters method BID9 is capable of achieving state-of-the-art results on large dataset benchmarks.",
"The Super Characters method is a two-step method that was initially designed for text classification problems.",
"In the first step, the characters of the input text are drawn onto a blank image.",
"In the second step, the image is fed into two-dimensional CNN models for classification.",
"The two-dimensional CNN models are trained by fine-tuning from pretrained models on large image dataset, e.g. ImageNet.In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems.",
"For each input, tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification.",
"The proposed SuperTML method handles the categorical type and missing values in tabular data automatically, without need for explicit conversion into numerical type values.",
"The proposed SuperTML method borrows the idea of twodimensional embedding from Super Characters and transfers the knowledge learned from computer vision to the structured tabular data.",
"Experimental results shows that the proposed SuperTML method has achieved state-of-the-art results on both large and small tabular dataset TAB2"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.12121211737394333,
0.1818181723356247,
0.05405404791235924,
0.10526315122842789,
0.1904761791229248,
0,
0.1818181723356247,
0.1428571343421936,
0.1111111044883728,
0.13333332538604736,
0.06451612710952759,
0.2222222238779068,
0,
0.178571417927742,
0.08888888359069824,
0.12903225421905518,
0.17241379618644714,
0,
0,
0.0714285671710968,
0,
0.1538461446762085,
0.12244897335767746,
0.1818181723356247,
0.17142856121063232,
0.2222222238779068,
0.0624999962747097
] | r1MCjkn5pV | true | [
"Deep learning for structured tabular data machine learning using pre-trained CNN model from ImageNet."
] |
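A minimal sketch of the two-dimensional embedding step described above: each tabular feature value is drawn as text onto a blank image, which can then be passed to an ImageNet-pretrained CNN for fine-tuning. Image size, grid layout, and font handling are assumptions for illustration.

```python
# Illustrative SuperTML-style embedding of one tabular row into an image (not the authors' code).
from PIL import Image, ImageDraw

def tabular_row_to_image(values, size=224, cols=2):
    img = Image.new("RGB", (size, size), color="black")
    draw = ImageDraw.Draw(img)
    rows = (len(values) + cols - 1) // cols
    cell_w, cell_h = size // cols, size // rows
    for i, v in enumerate(values):
        x = (i % cols) * cell_w + 4
        y = (i // cols) * cell_h + 4
        # Categorical and missing values are written as-is; no explicit numeric conversion needed.
        draw.text((x, y), "NA" if v is None else str(v), fill="white")
    return img

img = tabular_row_to_image([5.1, 3.5, "setosa", None])   # toy feature row (assumed)
img.save("supertml_example.png")                         # this image would be fed to the CNN
```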
[
"Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning.",
"Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images.",
"In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image.",
"Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches.",
"We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks.",
"Specifically, we benchmark our model on semi-supervised ImageNet classification which has become a popular benchmark recently for semi-supervised and self-supervised learning methods.",
"Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained only using 1% and 10% of the labels on ImageNet showing the promise for generative pre-training methods.",
"Deep neural networks are capable of learning rich abstract representations from raw high dimensional data in an end-to-end fashion (LeCun et al., 2015) .",
"A big weakness of these neural networks is the reliance on abundant labeled datasets.",
"Self-supervised and unsupervised representation learning approaches have been proposed to address this problem (Bengio et al., 2007) .",
"It is still an open problem in the field to figure out how to take advantage of large unlabeled datasets, use them for learning rich representations and improving the data-efficiency of supervised learning systems.",
"A classic example of successful unsupervised learning of rich representations is word2vec (Mikolov et al., 2013) where the authors showed that distributed vector representations of words could be learned by contrastively predicting the neighboring words given surrounding words.",
"The shift from word embeddings to sequence embeddings in recent times began when (Dai & Le, 2015) showed that pre-trained sequence to sequence autoencoders on text corpora could be useful for a number of downstream tasks such as text classification and sentiment analysis.",
"Followed by this, it was shown in (Peters et al., 2018 ) that language modeling is useful in providing deep contextual sentence embeddings that could be fine-tuned on a number of natural language understanding tasks.",
"(Howard & Ruder, 2018 ) is another example of such a success.",
"In more recent times, the transformer (Vaswani et al., 2017) has emerged as a powerful architecture to model complex dependencies across a long sequence using global self-attention.",
"OpenAI Generative Pre-Training (GPT) (Radford et al., 2018) showed that training large Transformer models on BooksCorpus could lead to rich and useful representations that could be fine-tuned on a variety of downstream tasks covering language understanding, commonsense reasoning and question-answering.",
"The biggest success in unsupervised pre-training was achieved by BERT (Devlin et al., 2018) where the assumption for using causal language modeling was pointed out as unnecessary and it was shown that training deep transformers in a bi-directional fashion to perform the objective of masked language modeling and next sentence prediction could lead to rich and useful representations covering a wide span of natural language understanding downstream tasks.",
"Therefore, it is useful to address the following question: How do we translate the successes of masked language modeling and deep transformers to images?",
"Unlike language which is a layer of abstraction to be able to understand the world and communicate thoughts, images are raw sensory observations.",
"It is therefore much harder to model the relationship across pixels both spatially and temporally simply because the dimensionality is much higher.",
"Let's first look at the question of whether generative pre-training is well suited for images or not.",
"There is a belief that generative approaches are more suited to abstract inputs such as language wordpieces but not for less abstract entities like pixels or audio waveform bits (van den Oord et al., 2018; Hjelm et al., 2018; Bachman et al., 2019; Trinh et al., 2019) .",
"While it may as well turn out to be true, it is useful to investigate how far we could push generative approaches for pre-training even on domains they are not well suited for, such as images.",
"A successful example of such an approach is the adversarial method BiGAN (Donahue et al., 2016; Donahue & Simonyan, 2019) .",
"While BiGAN (and BigBiGAN) are meant for learning useful highlevel representations of raw images, they still retain the generative modeling aspect of unsupervised learning by learning to jointly model an encoder and a generator using the generative adversarial loss.",
"On the other hand, there has been incredible progress in recent years in generative modeling of raw pixels and audio waveforms using maximum likelihood.",
"Beginning with (Oord et al., 2016b), we have seen successes in generating diverse images by modeling the conditional distribution of pixels given context of neighboring pixels.",
"WaveNet (Oord et al., 2016a ) is an example of successful deployment of such techniques for modeling the distribution of raw audio waveforms when conditioned on text.",
"(Kalchbrenner et al., 2017 ) adopt a similar technique for generating future frames of a video conditioned on the past.",
"More recently, (Child et al., 2019 ) have pushed on using strided self-attention to achieve high-quality unconditional samples of ImageNet building upon successes of (Parmar et al., 2018) and (Menick & Kalchbrenner, 2018) .",
"Therefore, it is very reasonable to ask ourselves the following question: If generative models can work on such high dimensional data, is it necessarily the case that they would be ill-suited from a representation learning perspective?",
"If no, how do we leverage these successes for representation learning?",
"Further, how do we take inspiration from the big representation learning successes in natural language processing (Devlin et al., 2018) and the generative modeling successes for images and audio and design a representation learning approach for images?",
"As far as representation learning on images goes, the state-of-the-art systems at the moment are contrastive methods.",
"Specifically, Contrastive Predictive Coding (CPC) (van den Oord et al., 2018) which learns to contrastively predict the future given the past by sampling negatives across and between sequences has been shown to be a universally powerful representation learning approach for multiple modalities (audio, images, text, control) .",
"(Hénaff et al., 2019) and (Bachman et al., 2019) achieve impressive linear classifier probe metrics for their representations that were trained contrastively to maximize mutual information across views and space.",
"(Hénaff et al., 2019) also show that these representations could be used for downstream tasks such as semi-supervised image classification in the low-data regime going on to record impressive results in the 1% and 10% ImageNet classification.",
"While such impressive results have been shown using the contrastive methods, methods of such quality for generative approaches are ye to be shown on images.",
"Secondly, CPC and related methods adopt convolutional architectures for learning the representations.",
"We believe it is worth the research effort to investigate architectures that incorporate self-attention so that we could translate language domain's success to other domains.",
"Stand-Alone Self-Attention (Ramachandran et al., 2019) has shown that self-attentive architectures could be designed to match convolutional architectures on image classification and object detection.",
"Such a result is promising in the sense that we now know that self-attentive architectures are not a limiting factor for downstream classification performance.",
"In this paper, we attempt to inspire from a few key engineering deicisons that have benefitted the various successful approaches discussed above to motivate our design of a generative pre-training method for images.",
"1. Predicting subscales and low-bit depth for pixels: (Menick & Kalchbrenner, 2018) showed that modeling pixels by sequentially modeling the subscales and low-bit depth versions of the raw image is extremely useful.",
"(Oord et al., 2016a ) also attempted to initially model 8-bit audio rather than 16-bit.",
"Therefore, it makes sense to model the only the most significant few bits while attempting to decode pixels for representation learning.",
"Higher order bits are more relevant for texture and finer-details and may not be crucial for representation learning performance.",
"2. Use of self-attention for aggregating global context: Self-Attention (Vaswani et al., 2017 ) is an extremely powerful approach for aggregating global contextual representations across large sequences.",
"The adoption of self-attention for images began with (Wang et al., 2018) who used non-local layers for activity recognition.",
"(Zhang et al., 2018) and (Brock et al., 2018 ) exploit non-local layers for high-fidelity image generation.",
"has also shown that self-attention can be used to good effect for modeling distribution of latents for likelihood-based image generation while (Parmar et al., 2018; Menick & Kalchbrenner, 2018; Child et al., 2019) are examples for self-attentive density models.",
"3. Learning spatial dependencies across patches: CPC learns to spatially predict neighboring patches given context of surrounding patches.",
"Image Transformers (Parmar et al., 2018) adopts self-attention that takes into account local as well as global dependencies behaving like a patch-based generative model.",
"(Menick & Kalchbrenner, 2018) explot modeling spatial PixelCNNs over subscales for global image dependencies.",
"(Trinh et al., 2019) attempt to modify CPC for image representation learning by using the patch-based data extraction and modeling dependencies in a BERT-like fashion using self-attention.",
"Our key contributions are as follows:",
"1. We propose a new architecture, PatchFormer, for modeling bi-directional dependencies across patches.",
"Our architecture learning to decode missing patches in an image by extracting represenstations of the given patches, using attention-pooling to aggregate the context, and decode the low-bit grayscale sub-sampled versions of the missing patches.",
"Specifically, we decode only the 2-bit grayscale version of the missing patch.",
"2. We show that our model could be pre-trained on the unsupervised objective of decoding missing patches and fine-tuned on downstream low-data regime classification tasks.",
"3. We achieve somewhat competitive downstream ImageNet classification results with CPC (Hénaff et al., 2019) and are surprisingly even better than the other contrastive approach for semi-supervised downstream classification, Selfie (Trinh et al., 2019) in spite of adopting a generative approach.",
"We have proposed a new architecture for generative pre-training on images called the PatchFormer.",
"We highlighted the key tricks to making our model learn useful representations for downstream classification tasks in spite of decoding pixels.",
"We have shown that we are competitive with state-ofthe-art contrastive pre-training methods such as CPC on the low data-regime ImageNet classification benchmark."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0714285671710968,
0.07407406717538834,
0.2631579041481018,
0,
0.07692307233810425,
0.20000000298023224,
0.09999999403953552,
0.05882352590560913,
0.0833333283662796,
0.1428571343421936,
0.14999999105930328,
0.04651162400841713,
0.0833333283662796,
0.04651162400841713,
0,
0,
0.04255318641662598,
0.02985074371099472,
0,
0.0624999962747097,
0.06896550953388214,
0.14814814925193787,
0.07999999821186066,
0.1428571343421936,
0,
0.13636362552642822,
0.060606054961681366,
0.11428570747375488,
0.1111111044883728,
0.13333332538604736,
0.04999999701976776,
0.23255813121795654,
0.1904761791229248,
0.19512194395065308,
0.307692289352417,
0.1090909093618393,
0.054054051637649536,
0.08888888359069824,
0.1818181723356247,
0.1818181723356247,
0,
0.05882352590560913,
0.0624999962747097,
0.09756097197532654,
0.1111111044883728,
0,
0.27586206793785095,
0.2222222238779068,
0.05714285373687744,
0.13793103396892548,
0.07692307233810425,
0.08888888359069824,
0,
0,
0.0833333283662796,
0.1621621549129486,
0,
0.08695651590824127,
0.0555555522441864,
0,
0.05882352590560913,
0.04255318641662598,
0.25,
0.12903225421905518,
0.0624999962747097
] | SJg1lxrYwS | true | [
"Decoding pixels can still work for representation learning on images"
] |
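A minimal sketch of the decoding target described above: a missing patch is converted to grayscale, spatially sub-sampled, and quantized to 2 bits (four intensity levels), giving per-pixel class targets for the decoder. Patch size and sub-sampling factor are assumptions.

```python
# Illustrative construction of a 2-bit grayscale decoding target for a missing patch.
import numpy as np

def two_bit_grayscale_target(patch_rgb, subsample=2):
    # patch_rgb: (H, W, 3) uint8 array for the missing patch.
    gray = patch_rgb.astype(np.float32).mean(axis=-1)             # simple grayscale conversion
    gray = gray[::subsample, ::subsample]                         # coarse spatial sub-sampling
    levels = np.clip((gray / 256.0 * 4).astype(np.int64), 0, 3)   # 2-bit quantization: classes {0,1,2,3}
    return levels                                                 # per-pixel targets for a softmax decoder

patch = (np.random.rand(16, 16, 3) * 255).astype(np.uint8)
print(two_bit_grayscale_target(patch).shape)                      # (8, 8)
```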
[
"Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix.",
"Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.",
"We show how to modify full-matrix adaptive regularization in order to make it practical and effective.",
"We also provide novel theoretical analysis\n",
"for adaptive regularization in non-convex optimization settings.",
"The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices.",
"Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks.",
"Stochastic gradient descent is the workhorse behind the recent deep learning revolution.",
"This simple and age-old algorithm has been supplemented with a variety of enhancements to improve its practical performance, and sometimes its theoretical guarantees.Amongst the acceleration methods there are three main categories: momentum, adaptive regularization, and variance reduction.",
"Momentum (in its various incarnations, like heavy-ball or Nesterov acceleration) is the oldest enhancement.",
"It has a well-developed theory, and is known to improve practical convergence in a variety of tasks, small and large.",
"It is also easy to implement.",
"Variance reduction is the most recent advancement; in theory and practice, it is mostly applicable to convex optimization, and is thus less influential in deep learning.This brings us to adaptive regularization: the most sophisticated, hard to implement, and debated acceleration method.",
"While state-of-the-art optimizers such as Adam and AdaGrad (Kingma & Ba, 2014; BID13 do use adaptive regularization, they do so in a very limited form: with diagonal matrices, often marketed as per-coordinate adaptive learning-rate methods.",
"Despite solid theoretical guarantees, the practical value of diagonal adaptive regularization as compared to \"vanilla\" SGD has been the subject of much debate BID48 .",
"However, the efficacy of full-matrix adaptive regularization has been relatively unexplored.",
"This is due to the prohibitive computational cost associated with full-matrix operations: full AdaGrad requires taking the inverse square root of a large matrix.In this paper, we present GGT, a practical solution to the computational problems plaguing fullmatrix adaptive regularization, making this technique scalable for modern deep models.",
"At the heart of our method is a simple, GPU-friendly way to apply the inverse square root of the low-rank second-moment matrix of recent gradients; see FIG0 .",
"GGT's running time is comparable to state-of-the-art optimizers.We proceed to show that full-matrix preconditioning allows for much better exploitation of anisotropic curvature in loss landscapes.",
"First, we show synthetic experiments which demonstate clear benefits of GGT over baselines, especially when the problem is ill-conditioned.",
"Then, we implement GGT at scale, and show that the benefits translate to faster training on standard deep learning benchmarks.",
"Our improvement is most salient in complicated landscapes like RNN training.Our algorithm comes with theoretical guarantees.",
"We give the first proof of convergence to firstorder critical points for an algorithm with adaptive regularization in a stochastic non-convex setting, featuring a rate which is dependent on an adaptive ratio.",
"We show examples where our bound is stronger than that for SGD, providing some theoretical basis for our empirical findings.",
"This work investigates full-matrix adaptive regularization: our main contribution is to make this technique viable for large-scale optimization, by a method for efficient multiplication by the inverse square root of a full second-moment matrix over a short window of gradients.",
"This leads to a new algorithm, GGT, a truly scalable optimization algorithm with full-matrix adaptive preconditioning.Through synthetic experiments, we have shown that GGT accelerates optimization in ill-conditioned loss landscapes; this is supported by accompanying adaptive convergence guarantees.",
"Preliminary experiments show accelerated convergence on standard deep learning benchmarks, with very different training dynamics from existing diagonal adaptive methods.",
"We accompany our algorithm and experiments with the first theoretical characterization of the benefits of adaptive regularization in a non-convex setting.",
"We hope that GGT will be the first of a new class of algorithms for the modern large-scale optimization toolbox, and to foster new discussion towards an ever-elusive understanding of loss landscapes in deep learning."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.0714285671710968,
0.14814814925193787,
0,
0.42105263471603394,
0,
0,
0,
0.08510638028383255,
0,
0,
0,
0.08888888359069824,
0.09090908616781235,
0.05882352590560913,
0.17391303181648254,
0.1818181723356247,
0,
0.10810810327529907,
0,
0,
0.0714285671710968,
0.24390242993831635,
0.06666666269302368,
0.12765957415103912,
0.25531914830207825,
0.1249999925494194,
0.19354838132858276,
0.09302325546741486
] | rkxd2oR9Y7 | true | [
"fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization"
] |
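A minimal numpy sketch of the low-rank trick described above: with the r most recent gradients stacked as the columns of a d-by-r matrix G, the preconditioned step (G Gᵀ)^(-1/2) g can be computed from the small r-by-r Gram matrix GᵀG, never forming a d-by-d matrix. The epsilon floor and the handling of gradient components outside the span of G are simplifications.

```python
# Illustrative GGT-style preconditioning via the small Gram matrix (not the authors' code).
import numpy as np

def ggt_precondition(G, g, eps=1e-6):
    # Eigendecomposition of the r x r Gram matrix: G^T G = V diag(lam) V^T, with lam = sigma^2.
    M = G.T @ G
    lam, V = np.linalg.eigh(M)
    lam = np.maximum(lam, eps)
    # On the span of G, (G G^T)^(-1/2) equals G V diag(lam^(-3/2)) V^T G^T, applied here to g.
    coeffs = V @ ((V.T @ (G.T @ g)) * lam ** -1.5)
    return G @ coeffs

d, r = 10_000, 20                     # parameter dimension and gradient-window size (toy values)
G = np.random.randn(d, r)             # window of recent gradients, one per column
g = np.random.randn(d)                # current gradient
step = ggt_precondition(G, g)
print(step.shape)                     # (10000,)
```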
[
"Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans.",
"For example, different domains (e.g., restaurant reservation, train ticket booking) of goal-oriented dialogue systems can be viewed as different skills, and so does ordinary chatting abilities of chit-chat dialogue systems.",
"In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).",
"The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ (Budzianowski et al., 2018), In-Car Assistant (Eric et al.,2017), and Persona-Chat (Zhang et al., 2018).",
"Finally, we demonstrate that each dialogue skill is effectively learned and can be combined with other skills to produce selective responses.",
"Unlike humans who can do both, goal-oriented dialogues (Williams & Young, 2007; Young et al., 2013) and chit-chat conversations (Serban et al., 2016a; Vinyals & Le, 2015) are often learned with separate models.",
"A more desirable approach for the users would be to have a single chat interface that can handle both casual talk and tasks such as reservation or scheduling.",
"This can be formulated as a problem of learning different conversational skills across multiple domains.",
"A skill can be either querying a database, generating daily conversational utterances, or interacting with users in a particular task-domain (e.g. booking a restaurant).",
"One challenge of having multiple skills is that existing datasets either focus only on chit-chat or on goal-oriented dialogues.",
"This is due to the fact that traditional goal-oriented systems are modularized (Williams & Young, 2007; Hori et al., 2009; Lee et al., 2009; Levin et al., 2000; Young et al., 2013) ; thus, they cannot be jointly trained with end-to-end architecture as in chit-chat.",
"However, recently proposed end-to-end trainable models Wu et al., 2019; Reddy et al., 2018; Yavuz et al., 2018) and datasets (Bordes & Weston, 2017; allow us to combine goal-oriented (Budzianowski et al., 2018; and chit-chat (Zhang et al., 2018) into a single benchmark dataset with multiple conversational skills as shown in Table 1.",
"A straight forward solution would be to have a single model for all the conversational skills, which has shown to be effective to a certain extent by (Zhao et al., 2017) and (McCann et al., 2018) .",
"Putting aside the performance in the tasks, such fixed shared-parameter framework, without any task-specific designs, would lose controllability and interpretability in the response generation.",
"In this paper, instead, we propose to model multiple conversational skills using the Mixture of Experts (MoE) (Jacobs et al., 1991) paradigm, i.e., a model that learns and combine independent specialized experts using a gating function.",
"For instance, each expert could specialize in different dialogues domains (e.g., Hotel, Train, ChitChat etc.) and skills (e.g., generate SQL query).",
"A popular implementation of MoE ) uses a set of linear transformation (i.e., experts) in between two LSTM (Schmidhuber, 1987) layers.",
"However, several problems arise with this implementation:",
"1) the model is computationally expensive as it has to decode multiple times each expert and make the combination at the representation-level;",
"2) no prior knowledge is injected in the expert selection (e.g., domains);",
"3) Seq2Seq model has limited ability in extracting information from a Knowledge Base (KB) (i.e., generated by the SQL query) , as required in end-to-end task-oriented dialogues Table 1 : An example from the dataset which includes both chit-chat and task-oriented conversations.",
"The model has to predict all the Sys turn, which includes SQL query and generating response from a the Memory content, which is dynamically updated with the queries results.",
"The skills are the prior knowledge needed for the response, where Persona refers to chit-chat.",
"Spk.",
"Conversation Skills Usr: Can you help me find a cheap 2 star hotel?",
"In this paper, we propose a novel way to train a single end-to-end dialogue model with multiple composable and interpretable skills.",
"Unlike previous work, that mostly focused on the representationlevel mixing , our proposed approach, Attention over Parameters, learns how to softly combine independent sets of specialized parameters (i.e., making SQL-Query, conversing with consistent persona, etc.) into a single set of parameters.",
"By doing so, we not only achieve compositionality and interpretability but also gain algorithmically faster inference speed.",
"To train and evaluate our model, we organize a multi-domain task-oriented datasets into end-to-end trainable formats and combine it with a conversational dataset (i.e. Persona-Chat).",
"Our model learns to consider each task and domain as a separate skill that can be composed with each other, or used independently, and we verify the effectiveness of the interpretability and compositionality with competitive experimental results and thorough analysis."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.178571417927742,
0.9818181991577148,
0.17543859779834747,
0.2448979616165161,
0.03389830142259598,
0.1428571343421936,
0.1395348757505417,
0.039215680211782455,
0.08695651590824127,
0.05970148742198944,
0.11267605423927307,
0.1355932205915451,
0.04081632196903229,
0.380952388048172,
0.11764705181121826,
0.07999999821186066,
0.05714285373687744,
0.1249999925494194,
0,
0.05970148742198944,
0.1111111044883728,
0.0476190447807312,
0.04878048226237297,
0.375,
0.23188404738903046,
0.08888888359069824,
0.1538461446762085,
0.25806450843811035
] | BJepraEFPr | true | [
"In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). "
] |
[
"Model distillation aims to distill the knowledge of a complex model into a simpler one.",
"In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one.",
"The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.",
"For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization.",
"We evaluate our method in various initialization settings. ",
"Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. ",
"Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images.",
"proposed network distillation as a way to transfer the knowledge from an ensemble of many separately-trained networks into a single, typically compact network, performing a type of model compression.",
"In this paper, we are considering a related but orthogonal task: rather than distilling the model, we propose to distill the dataset.",
"Unlike network distillation, we keep the model fixed but encapsulate the knowledge of the entire training dataset, which typically contains thousands to millions of images, into a small number of synthetic training images.",
"We show that we can go as low as one synthetic image per category, training the same model to reach surprisingly good performance on these synthetic images.",
"For example, in Figure 1a , we compress 60, 000 training images of MNIST digit dataset into only 10 synthetic images (one per category), given a fixed network initialization.",
"Training the standard LENET on these 10 images yields test-time MNIST recognition performance of 94%, compared to 99% for the original dataset.",
"For networks with unknown random weights, 100 synthetic images train to 89%.",
"We name our method Dataset Distillation and these images distilled images.",
"But why is dataset distillation interesting?",
"First, there is the purely scientific question of how much data is encoded in a given training set and how compressible it is?",
"Second, we wish to know whether it is possible to \"load up\" a given network with an entire dataset-worth of knowledge by a handful of images.",
"This is in contrast to traditional training that often requires tens of thousands of data samples.",
"Finally, on the practical side, dataset distillation enables applications that require compressing data with its task.",
"We demonstrate that under the continual learning setting, storing distilled images as memory of past task and data can alleviate catastrophic forgetting (McCloskey and Cohen, 1989) .",
"A key question is whether it is even possible to compress a dataset into a small set of synthetic data samples.",
"For example, is it possible to train an image classification model on synthetic images that are not on the manifold of natural images?",
"Conventional wisdom would suggest that the answer is no, as the synthetic training data may not follow the same distribution of the real test data.",
"Yet, in this work, we show that this is indeed possible.",
"We present an optimization algorithm for synthesizing a small number of synthetic data samples not only capturing much of the original training data but also tailored explicitly for fast model training with only a few data point.",
"To achieve our goal, we first derive the network weights as a We distill the knowledge of tens of thousands of images into a few synthetic training images called distilled images.",
"On MNIST, 100 distilled images can train a standard LENET with a random initialization to 89% test accuracy, compared to 99% when fully trained.",
"On CIFAR10, 100 distilled images can train a network with a random initialization to 41% test accuracy, compared to 80% when fully trained.",
"In Section 3.6, we show that these distilled images can efficiently store knowledge of previous tasks for continual learning.",
"differentiable function of our synthetic training data.",
"Given this connection, instead of optimizing the network weights for a particular training objective, we optimize the pixel values of our distilled images.",
"However, this formulation requires access to the initial weights of the network.",
"To relax this assumption, we develop a method for generating distilled images for randomly initialized networks.",
"To further boost performance, we propose an iterative version, where the same distilled images are reused over multiple gradient descent steps so that the knowledge can be fully transferred into the model.",
"Finally, we study a simple linear model, deriving a lower bound on the size of distilled data required to achieve the same performance as training on the full dataset.",
"We demonstrate that a handful of distilled images can be used to train a model with a fixed initialization to achieve surprisingly high performance.",
"For networks pre-trained on other tasks, our method can find distilled images for fast model fine-tuning.",
"We test our method on several initialization settings: fixed initialization, random initialization, fixed pre-trained weights, and random pre-trained weights.",
"Extensive experiments on four publicly available datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, show that our approach often outperforms existing methods.",
"Finally, we demonstrate that for continual learning methods that store limited-size past data samples as episodic memory (Lopez-Paz and Ranzato, 2017; Kirkpatrick et al., 2017) , storing our distilled data instead is much more effective.",
"Our distilled images contain richer information about the past data and tasks, and we show experimental evidence on standard continual learning benchmarks.",
"Our code, data, and models will be available upon publication.",
"In this paper, we have presented dataset distillation for compressing the knowledge of entire training data into a few synthetic training images.",
"We demonstrate how to train a network to reach surprisingly good performance with only a small number of distilled images.",
"Finally, the distilled images can efficiently store the memory of previous tasks in the continual learning setting.",
"Many challenges remain for knowledge distillation of data.",
"Although our method generalizes well to random initializations, it is still limited to a particular network architecture.",
"Since loss surfaces for different architectures might be drastically different, a more flexible method of applying the distilled data may overcome this difficulty.",
"Another limitation is the increasing computation and memory requirements for finding the distilled data as the number of images and steps increases.",
"To compress large-scale datasets such as ImageNet, we may need first-order gradient approximations to make the optimization computationally feasible.",
"Nonetheless, we are encouraged by the findings in this paper on the possibilities of training large models with a few distilled data, leading to potential applications such as accelerating network evaluation in neural architecture search (Zoph and Le, 2017) .",
"We believe that the ideas developed in this work might give new insights into the quantity and type of data that deep networks are able to process, and hopefully inspire others to think along this direction."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.2857142686843872,
0.2545454502105713,
0.25925925374031067,
0.13333332538604736,
0.1463414579629898,
0.23076923191547394,
0.21276594698429108,
0.24390242993831635,
0.2448979616165161,
0.260869562625885,
0.20408162474632263,
0.2380952388048172,
0.24242423474788666,
0.06451612710952759,
0.07407406717538834,
0.1904761791229248,
0.13636362552642822,
0.2222222238779068,
0.1621621549129486,
0.21739129722118378,
0.44999998807907104,
0.23255813121795654,
0.1904761791229248,
0.06451612710952759,
0.2745097875595093,
0.260869562625885,
0.1860465109348297,
0.1904761791229248,
0.1463414579629898,
0.2142857164144516,
0.0952380895614624,
0.1249999925494194,
0.1111111044883728,
0.15686273574829102,
0.260869562625885,
0.380952388048172,
0.10810810327529907,
0.0555555522441864,
0.04878048226237297,
0.07407406717538834,
0.0476190410554409,
0,
0.2857142686843872,
0.3589743673801422,
0.1111111044883728,
0.13793103396892548,
0.10810810327529907,
0.13636362552642822,
0.09999999403953552,
0.04999999329447746,
0.13793103396892548,
0.26923075318336487
] | ryxO3gBtPB | true | [
"We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance. "
] |
[
"We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively.",
"This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.",
"The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training.",
"The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods.",
"A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN.",
"Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.",
"Generative adversarial networks (GANs) are a class of game theoretical methods for learning data distributions.",
"It trains the generative model by maintaining two deep neural networks, namely the discriminator network D and the generator network G. The generator aims to produce samples resembling real data samples, while the discriminator aims to distinguish the generated samples and real data samples.The standard GAN training procedure is formulated as the following minimax game: DISPLAYFORM0 where p d (x) is the data distribution and p z (z) is the noise distribution.",
"The generated samples G(z) induces a generated distribution p g (x).",
"Theoretically, the optimal solution to (1) is p * g = p d and D * (x) = 1/2 for all x in the support of data distribution.In practice, the discriminator network and the generator network are parameterized by θ θ θ d and θ θ θ g , respectively.",
"The neural network parameters are updated iteratively according to gradient descent.",
"In particular, the discriminator is first updated either with multiple gradient descent steps until convergence or with a single gradient descent step, then the generator is updated with a single descent step.",
"However, the analysis of the convergence properties on the training approaches is challenging, as noted by Ian Goodfellow in BID10 , \"For GANs, there is no theoretical prediction as to whether simultaneous gradient descent should converge or not. Settling this theoretical question, and developing algorithms guaranteed to converge, remain important open research problems.\".",
"There have been some recent studies on the convergence behaviours of GAN training (Nowozin et al., 2016; BID18 BID14 BID24 BID22 .The",
"simultaneous gradient descent method is proved to converge assuming the objective function is convex-concave in the network parameters (Nowozin et al., 2016) . The",
"local stability property is established in BID14 BID24 .One",
"notable inconvergence issue with GAN training is referred to as mode collapse, where the generator characterizes only a few modes of the true data distribution BID11 BID18 . Various",
"methods have been proposed to alleviate the mode collapse problem. Feature",
"matching for intermediate layers of the discriminator has been proposed in (Salimans et al., 2016) . In BID23",
", the generator is updated based on a sequence of previous unrolled discriminators. A mixture",
"of neural networks are used to generate diverse samples (Tolstikhin et al., 2017; BID15 BID2 . In , it was",
"proposed that adding noise perturbation on the inputs to the discriminator can alleviate the mode collapse problem. It is shown",
"that this training-with-noise technique is equivalent to adding a regularizer on the gradient norm of the discriminator (Roth et al., 2017) . The Wasserstein",
"divergence is proposed to resolve the problem of incontinuous divergence when the generated distribution and the data distribution have disjoint supports BID12 . Mode regularization",
"is used in the loss function to penalize the missing modes BID6 Srivastava et al., 2017) . The regularization",
"is usually based on heuristics, which tries to minimize the distance between the data samples and the generated samples, but lacks theoretical convergence guarantee.In this paper, we formulate the minimax optimization for GAN training (1) as finding the saddle points of the Lagrangian function for a convex optimization problem. In the convex optimization",
"problem, the discriminator function D(·) and the probabilities of generator outputs p g (·) play the roles of the primal variables and dual variables, respectively. This connection not only provides",
"important insights in understanding the convergence of GAN training, but also enables us to leverage the primal-dual subgradient methods to design a novel objective function that helps to alleviate mode collapse. A toy example reveals that for some",
"cases when standard GAN or WGAN inevitably leads to mode collapse, our proposed method can effectively avoid mode collapse and converge to the optimal point.In this paper, we do not aim at achieving superior performance over other GANs, but rather provide a new perspective of understanding GANs, and propose an improved training technique that can be applied on top of existing GANs. The contributions of the paper are",
"as follows:• The standard training of GANs in the function space is formulated as primal-dual subgradient methods for solving convex optimizations.• This formulation enables us to show",
"that with a proper gradient descent step size, updating the discriminator and generator probabilities according to the primal-dual algorithms will provably converge to the optimal point.• This formulation results in a novel",
"training objective for the generator. With the proposed objective function,",
"the generator is updated such that the probabilities of generator outputs are pushed to the optimal update direction derived by the primal-dual algorithms. Experiments have shown that this simple",
"objective function can effectively alleviate mode collapse in GAN training.• The convex optimization framework incorporates",
"different variants of GANs including the family of f -GAN (Nowozin et al., 2016) and an approximate variant of WGAN. For all these variants, the training objective can",
"be improved by including the optimal update direction of the generated probabilities.",
"In this paper, we propose a primal-dual formulation for generative adversarial learning.",
"This formulation interprets GANs from the perspective of convex optimization, and gives the optimal update of the discriminator and the generated distribution with convergence guarantee.",
"By framing different variants of GANs under the convex optimization framework, the corresponding training algorithms can all be improved by pushing the generated distribution along the optimal direction.",
"Experiments on two synthetic datasets demonstrate that the proposed formulation can effectively avoid mode collapse.",
"It also achieves competitive quantitative evaluation scores on two benchmark real-world image datasets.",
"The proof of convergence for dual-driven algorithms can be found in BID4 , Chapter 3).The",
"primal-dual-driven algorithm for continuous time update has been studied in BID8 . Here",
", we show the convergence for the discrete-time case.We choose a step size α(t) that satisfies DISPLAYFORM0 Let z(t) = [x(t), λ λ λ(t)] T be a vector consisting of the primal and dual variables at the t-th iteration. The",
"primal-dual-driven update can be expressed as: DISPLAYFORM1 where DISPLAYFORM2 and DISPLAYFORM3 Since the subgradient is bounded by assumption, there exists M > 0 such that ||T (·)|| 2 2 < M , where ||.|| 2",
"stands for the L 2 norm."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.16326530277729034,
0.3125,
0.21052631735801697,
0.11764705181121826,
0.1463414579629898,
0.11428570747375488,
0.13333332538604736,
0.0624999962747097,
0.07999999821186066,
0.07999999821186066,
0,
0.05405404791235924,
0.0952380895614624,
0.052631575614213943,
0.05405404791235924,
0,
0.1428571343421936,
0.1538461446762085,
0.060606054961681366,
0.06666666269302368,
0,
0.12121211737394333,
0.10526315122842789,
0.05714285373687744,
0,
0.17241379618644714,
0.04999999701976776,
0.2448979616165161,
0.2631579041481018,
0.2380952388048172,
0.13636362552642822,
0.17391303181648254,
0.10256409645080566,
0.25806450843811035,
0.1463414579629898,
0,
0.37037035822868347,
0.11428570747375488,
0.09999999403953552,
0.19999998807907104,
0,
0.06666666269302368,
0.07407406717538834,
0.15686273574829102,
0.08695651590824127,
0.0952380895614624
] | BJNRFNlRW | true | [
"We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse."
] |
[
"Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior.",
"The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond.",
"This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward.",
"Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference.",
"For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior.",
"We put this to the test in a systematic analysis of the effect of irrationality on reward inference.",
"We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses.",
"We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.",
"The application of reinforcement learning (RL) in increasingly complex environments has been most successful for problems that are already represented by a specified reward function (Lillicrap et al., 2015; Mnih et al., 2015; .",
"Unfortunately, not only do real-world tasks usually lack an explicit exogenously-specified reward function, but attempting to specify one tends to lead to unexpected side-effects as the agent is faced with new situations (Lehman et al., 2018) .",
"This has motivated the area of reward inference: the process of estimating a reward function from human inputs.",
"The inputs are traditionally demonstrations, leading to inverse reinforcement learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004) or inverse optimal control (IOC) (Kalman, 1964; Jameson & Kreindler, 1973; Mombaur et al., 2010; Finn et al., 2016) .",
"Recent work has expanded the range of inputs significantly,to comparisons (Wirth et al., 2017; Sadigh et al., 2017; Christiano et al., 2017) , natural language instructions (MacGlashan et al., 2015; Fu et al., 2019) , physical corrections (Jain et al., 2015; Bajcsy et al., 2017) , proxy rewards Ratner et al., 2018) , or scalar reward values (Griffith et al., 2013; Loftin et al., 2014) .",
"The central assumption behind these methods is that human behavior is rational, i.e. optimal with respect to the desired reward (cumulative, in expectation).",
"Unfortunately, decades of research in behavioral economics and cognitive science Chipman (2014) has unearthed a deluge of irrationalities, i.e. of ways in which people deviate from optimal decision making: hyperbolic discounting, scope insensitivity, optimism bias, decision noise, certainty effects, loss aversion, status quo bias, etc.",
"Work on reward inference has predominantly used one model of irrationality: decision-making noise, where the probability of an action relates to the value that action has.",
"The most widely used model by far is a Bolzmann distribution stemming from the Luce-Sherpard rule (Luce, 1959; Shepard, 1957; Lucas et al., 2009 ) and the principle of maximum (causal) entropy in (Ziebart et al., 2008; , which we will refer to as Bolzmann-rationality (Fisac et al., 2017) .",
"Recent work has started to incorporate systematic biases though, like risk-aversion (Singh et al., 2017) , having the wrong dynamics belief (Reddy et al., 2018) , and myopia and hyperbolic discounting (Evans & Goodman, 2015; Evans et al., 2016) .",
"Learning from irrational experts feels like daunting task: reward inference is already hard with rational behavior, but now a learner needs to make sense of behavior that is noisy or systematically biased.",
"Our goal in this work is to characterize just how muddied the waters are -how (and how much) do different irrationalities affect reward inference?",
"Our insight is that, contrary to expectations, irrationality can actually help, rather than hinder, reward inference.",
"Our explanation is that how good reward inference is depends on the mutual information between the policies produced by the expert and the reward parameters to be inferred.",
"While it is often possible for two reward parameters to produce the same rational behavior, irrationalities can sometimes produce different behaviors that disambiguate between those same two reward parameters.",
"For instance, noise can help when it is related to the value function, as Boltzmann noise is, because it distinguishes the difference in values even when the optimal action stays the same.",
"Optimism can be helpful because the expert takes fewer risk-avoiding actions and acts more directly on their goal.",
"Overall, we contribute",
"1) an analysis and comparison of the effects of different biases on reward inference testing our insight,",
"2) a way to systematically formalize and cover the space of irrationalities in order to conduct such an analysis, and",
"3) evidence for the importance of assuming the right type of irrationality during inference.",
"Our good news is that irrationalities can indeed be an ally for inference.",
"Of course, this is not always true -the details of which irrationality type and how much of it also matter.",
"We see these results as opening the door to a better understanding of reward inference, as well as to practical ways of making inference easier by asking for the right kind of expert demonstrations -after all, in some cases it might be easier for people to act optimistically or myopically than to act rationally.",
"Our results reinforce that optimal teaching is different from optimal doing, but point out that some forms of teaching might actually be easier than doing."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.0624999962747097,
0.0833333283662796,
0.1249999925494194,
0.17142856121063232,
0.1249999925494194,
0.1875,
0.1395348757505417,
0.0714285671710968,
0.08510638028383255,
0.039215680211782455,
0.12903225421905518,
0,
0,
0.05128204822540283,
0.07017543166875839,
0.10526315122842789,
0.06666666269302368,
0,
0.1702127605676651,
0,
0.1249999925494194,
0.10256409645080566,
0.09999999403953552,
0.0952380895614624,
0.1764705777168274,
0,
0.0624999962747097,
0.11764705181121826,
0.0714285671710968,
0.20689654350280762,
0.05714285373687744,
0.10169491171836853,
0.10526315122842789
] | BJlo91BYPr | true | [
"We find that irrationality from an expert demonstrator can help a learner infer their preferences. "
] |
[
"Natural Language Processing models lack a unified approach to robustness testing.",
"In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspelling occur.",
"We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on aspects introduced in the framework.",
"In particular, we focus on a comparison between recent state-of-the- art text representations and non-contextualized word embeddings.",
"In order to improve robust- ness, we perform adversarial training on se- lected aspects and check its transferability to the improvement of models with various cor- ruption types.",
"We find that the high perfor- mance of models does not ensure sufficient robustness, although modern embedding tech- niques help to improve it.",
"We release cor- rupted datasets and code for WildNLP frame- work for the community.",
"Adversarial examples have been shown to severely degrade performance of deep learning models BID10 BID14 .",
"Natural Language Processing systems are no different in this respect.",
"Multiple areas of NLP, such as machine translation BID1 , question answering BID12 , or text classification have been studied to assess the impact of adversaries generated with various methods.",
"However, these works tend to focus on one area only, often with attacks designed just for the selected problem.",
"It makes comparisons between models, datasets, and NLP areas impossible.",
"In particular, the robustness of modern word embedding systems -such as ELMo BID17 , Flair BID0 and language model based BERT BID5 remains unstudied.In this article, we evaluate the behavior of natural language models in the wild.",
"We propose WildNLP -a systematic and comprehensive robustness testing framework which can be used for any NLP model.",
"Instead of focusing on elaborate attacks, which are unlikely to originate by accident, we measure the quality of models in a natural setting, where input data is poisoned with errors involuntarily generated by actual users.",
"We put these notions into a set of tests called aspects.",
"Moreover, we introduce the concept of corruption severity and prove that it is critical to model improvement via adversarial training.",
"The framework is aimed at any NLP problem irrespective of its form of input and output.In summary, our contributions are the following:1.",
"We offer a systematic framework for testing corruption robustness -the WildNLP.In total, we introduce 11 aspects of robustness testing, with multiple severity levels.",
"We release the code and a collection of popular datasets that are corrupted with WildNLP for the community 1 .",
"The framework is easy to extend.",
"New aspects can be defined by the community.2.",
"We test corruption robustness of a number of NLP tasks: question answering (Q&A), natural language inference (NLI), named entity recognition (NER), and sentiment analysis (SA).",
"We verify stability of models trained on contextualized embeddings like ELMo and Flair in contrast to noncontextualized FastText BID2 and GloVe BID16 .We",
"also analyze BERT in the task of Q&A. We",
"find that new forms of text representation, despite greater contextual awareness, do not offer a sufficient increase in robustness.3. We",
"find that model training on one aspect does improve performance on another aspect, contrary to previous studies BID1 . For",
"this to be true, two corruption types must be similar to some extent.In section 2 we present related literature in the domain of NLP robustness. In",
"section 3 we present WildNLP framework, describing in detail each introduced aspect. In",
"section 4 we compare robustness of NER, Q&A, NLI and Sentiment Analysis. In",
"section 5 we perform adversarial training on Qwerty aspect with different severities and test these models on other aspects. We",
"conclude in section 6.",
"In this work, we have presented the WildNLP framework for corruption robustness testing.",
"We have introduced 11 text corruption types (at various severity levels) which can occur naturally in model deployment setting: misspellings, keyboard errors, attempts at masking emotional language, and others.",
"We test on four NLP areas and 12 models in total, verifying corruption robustness of state-of-the-art BERT system and new LM-based embeddings: ELMo and Flair, contrasted with GloVe and Fasttext.",
"We find that the problem of lacking corruption robustness is not solved by these recent systems.",
"However, we find that the issue can be partially alleviated by adversarial training, even across aspects.",
"We believe that problem of adversarial examples in NLP is still vague and hard to quantify.",
"Without doubt, more work is needed to make models robust to natural noise, whether by robust word embeddings, model architectures, or better datasets."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1764705777168274,
0.04081632196903229,
0.8571428656578064,
0.09999999403953552,
0.1599999964237213,
0.1304347813129425,
0.1111111044883728,
0.15789473056793213,
0,
0.039215680211782455,
0.0476190410554409,
0.12121211737394333,
0.1428571343421936,
0.24390242993831635,
0.1428571343421936,
0.11764705181121826,
0.09302324801683426,
0.13333332538604736,
0.17391303181648254,
0.19512194395065308,
0,
0.0624999962747097,
0.25531914830207825,
0.22727271914482117,
0.1249999925494194,
0.13636362552642822,
0.09756097197532654,
0.12765957415103912,
0,
0.4444444477558136,
0.1904761791229248,
0,
0.1111111044883728,
0.07692307233810425,
0.2800000011920929,
0.20512819290161133,
0.05128204822540283,
0.20512819290161133,
0.09090908616781235
] | SkxgBPr3iN | true | [
"We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs."
] |
[
"Training generative models like Generative Adversarial Network (GAN) is challenging for noisy data.",
"A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper.",
"The curriculum construction is based on the centrality of underlying clusters in data points. ",
"The data points of high centrality takes priority of being fed into generative models during training.",
"To make our algorithm scalable to large-scale data, the active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality.",
"Moreover, the geometric analysis is presented to interpret the necessity of cluster curriculum for generative models.",
"The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data.",
"An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.",
"Deep generative models have piqued researchers' interest in the past decade.",
"The fruitful progress has been achieved on this topic, such as auto-encoder (Hinton & Salakhutdinov, 2006) and variational auto-encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) , generative adversarial network (GAN) (Goodfellow et al., 2014; , normalizing flow (Rezende & Mohamed, 2015; Dinh et al., 2015; Kingma & Dhariwal, 2018) , and autoregressive models (van den Oord et al., 2016b; a; .",
"However, it is non-trivial to train a deep generative model that can converge to a proper minimum of the associated optimization.",
"For example, GAN suffers non-stability, mode collapse, and generative distortion during training.",
"Many insightful algorithms have been proposed to circumvent those issues, including feature engineering (Salimans et al., 2016) , various discrimination metrics (Mao et al., 2016; Berthelot et al., 2017) , distinctive gradient penalties (Gulrajani et al., 2017; Mescheder et al., 2018) , spectral normalization to discriminator (Miyato et al., 2018) , and orthogonal regularization to generator (Brock et al., 2019) .",
"What is particularly of interest is that the breakthrough for GANs has been made with a simple technique of progressively growing neural networks of generators and discriminators from low-resolution images to high-resolution counterparts (Karras et al., 2018a) .",
"This kind of progressive growing also helps push the state of the arts to a new level by enabling StyleGAN to produce photo-realistic and detail-sharp results (Karras et al., 2018b) , shedding new light on wide applications of GANs in solving real problems.",
"This idea of progressive learning is actually a general manner of cognition process (Elman, 1993; Oudeyer et al., 2007) , which has been formally named curriculum learning in machine learning (Bengio et al., 2009) .",
"The central topic of this paper is to explore a new curriculum for training deep generative models.",
"To facilitate robust training of deep generative models with noisy data, we propose curriculum learning with clustering.",
"The key contributions are listed as follows:",
"• We first summarize four representative curricula for generative models, i.e. architecture (generation capacity), semantics (data content), dimension (data space), and cluster (data structure).",
"Among these curricula, cluster curriculum is newly proposed in this paper.",
"• Cluster curriculum is to treat data according to the centrality of each data point, which is pictorially illustrated and explained in detail.",
"To foster large-scale learning, we devise the active set algorithm that only needs an active data subset of small fixed size for training.",
"• The geometric principle is formulated to analyze hardness of noisy data and advantage of cluster curriculum.",
"The geometry pertains to counting a small sphere packed in an ellipsoid, on which is based the percolation theory we use.",
"The research on curriculum learning is diverse.",
"Our work focuses on curricula that are closely related to data attributes, beyond which is not the scope we concern in this paper.",
"Cluster curriculum is proposed for robust training of generative models.",
"The active set of cluster curriculum is devised to facilitate scalable learning.",
"The geometric principle behind cluster curriculum is analyzed in detail as well.",
"The experimental results on the LSUN cat dataset and CelebA face dataset demonstrate that the generative models trained with cluster curriculum is capable of learning the optimal parameters with respect to the specified quality metric such as Fréchet inception distance and sliced Wasserstein distance.",
"Geometric analysis indicates that the optimal curricula obtained from generative models are closely related to the critical points of the associated percolation processes established in this paper.",
"This intriguing geometric phenomenon is worth being explored deeply in terms of the theoretical connection between generative models and high-dimensional geometry.",
"It is worth emphasizing that the meaning of model optimality refers to the global minimum of the centrality-FID curve.",
"As we already noted, the optimality is metric-dependent.",
"We are able to obtain the optimal model with cluster curriculum, which does not mean that the algorithm only serves to this purpose.",
"We know that more informative data can help learn a more powerful model covering the large data diversity.",
"Here a trade-off arises, i.e. the robustness against noise and the capacity of fitting more data.",
"The centrality-FID curve provides a visual tool to monitor the state of model training, thus aiding us in understanding the learning process and selecting suitable models according to noisy degree of given data.",
"For instance, we can pick the trained model close to the optimal curriculum for heavily noisy data or the one near the end of the centrality-FID curve for datasets of little noise.",
"In fact, this may be the most common way of using cluster curriculum.",
"In this paper, we do not investigate the cluster-curriculum learning for the multi-class case, e.g. the ImageNet dataset with BigGAN (Brock et al., 2019) .",
"The cluster-curriculum learning of multiple classes is more complex than that we have already analyzed on the face and cat data.",
"We leave this study for future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.20689654350280762,
0.5161290168762207,
0.25806450843811035,
0.25806450843811035,
0.23076923191547394,
0.4516128897666931,
0.2666666507720947,
0.2702702581882477,
0.2222222238779068,
0.0615384578704834,
0.2857142686843872,
0.1428571343421936,
0.06896550953388214,
0.15686273574829102,
0.1111111044883728,
0.17777776718139648,
0.42424240708351135,
0.4375,
0,
0.05128204822540283,
0.2222222238779068,
0.277777761220932,
0.21052631735801697,
0.25,
0.1621621549129486,
0.260869562625885,
0.1538461446762085,
0.6153846383094788,
0.3571428656578064,
0.1428571343421936,
0.30188679695129395,
0.24390242993831635,
0.2702702581882477,
0.25,
0.1666666567325592,
0.1621621549129486,
0.0624999962747097,
0.1249999925494194,
0.21739129722118378,
0.1904761791229248,
0.20689654350280762,
0.09999999403953552,
0.21621620655059814,
0
] | BklTQCEtwH | true | [
"A novel cluster-based algorithm of curriculum learning is proposed to solve the robust training of generative models."
] |
[
"Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect prediction on the testset with the same trigger embedded.",
"While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities.",
"In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL.",
"DBA decomposes a global trigger pattern into separate local patterns and embed them into the training set of different adversarial parties respectively.",
"Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.",
"We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings.",
"Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors.",
"We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking.\n",
"To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval.",
"Our proposed DBA and thorough evaluation results shed lights on characterizing the robustness of FL.",
"Federated learning (FL) has been recently proposed to address the problems for training machine learning models without direct access to diverse training data, especially for privacy-sensitive tasks (Smith et al., 2017; McMahan et al., 2017; Zhao et al., 2018) .",
"Utilizing local training data of participants (i.e., parties), FL helps train a shared global model with improved performance.",
"There have been prominent applications and ever-growing trends in deploying FL in practice, such as loan status prediction, health situation assessment (e.g. potential cancer risk assessment), and next-word prediction while typing (Hard et al., 2018; Yang et al., 2018; 2019) .",
"Although FL is capable of aggregating dispersed (and often restricted) information provided by different parties to train a better model, its distributed learning methodology as well as inherently heterogeneous (i.e., non-i.i.d.) data distribution across different parties may unintentionally provide a venue to new attacks.",
"In particular, the fact of limiting access to individual party's data due to privacy concerns or regulation constraints may facilitate backdoor attacks on the shared model trained with FL.",
"Backdoor attack is a type of data poisoning attacks that aim to manipulate a subset of training data such that machine learning models trained on the tampered dataset will be vulnerable to the test set with similar trigger embedded (Gu et al., 2019) .",
"Backdoor attacks on FL have been recently studied in (Bagdasaryan et al., 2018; Bhagoji et al., 2019) .",
"However, current attacks do not fully exploit the distributed learning methodology of FL, as they embed the same global trigger pattern to all adversarial parties.",
"We call such attacking scheme Figure 1: Overview of centralized and distributed backdoor attacks (DBA) on FL.",
"The aggregator at round t + 1 combines information from local parties (benign and adversarial) in the previous round t, and update the shared model G t+1 .",
"When implementing backdoor attacks, centralized attacker uses a global trigger while distributed attacker uses a local trigger which is part of the global one.",
"centralized backdoor attack.",
"Leveraging the power of FL in aggregating dispersed information from local parties to train a shared model, in this paper we propose distributed backdoor attack (DBA) against FL.",
"Given the same global trigger pattern as the centralized attack, DBA decomposes it into local patterns and embed them to different adversarial parties respectively.",
"A schematic comparison between the centralized and distributed backdoor attacks is illustrated in Fig.1 .",
"Through extensive experiments on several financial and image datasets and in-depth analysis, we summarize our main contributions and findings as follows.",
"• We propose a novel distributed backdoor attack strategy DBA on FL and show that DBA is more persistent and effective than centralized backdoor attack.",
"Based on extensive experiments, we report a prominent phenomenon that although each adversarial party is only implanted with a local trigger pattern via DBA, their assembled pattern (i.e., global trigger) attains significantly better attack performance on the global model compared with the centralized attack.",
"The results are consistent across datasets and under different attacking scenarios such as one-time (single-shot) and continuous (multiple-shot) poisoning settings.",
"To the best of our knowledge, this paper is the first work studying distributed backdoor attacks.",
"• When evaluating the robustness of two recent robust FL methods against centralized backdoor attack (Fung et al., 2018; Pillutla et al., 2019) , we find that DBA is more effective and stealthy, as its local trigger pattern is more insidious and hence easier to bypass the robust aggregation rules.",
"• We provide in-depth explanations for the effectiveness of DBA from different perspectives, including feature visual interpretation and feature importance ranking.",
"• We perform comprehensive analysis and ablation studies on several trigger factors in DBA, including the size, gap, and location of local triggers, scaling effect in FL, poisoning interval, data poisoning ratio, and data distribution.",
"Specifically, at round t, the central server sends the current shared model G t to n ∈ [N ] selected parties, where [N ] denotes the integer set {1, 2, . . . , N }.",
"The selected party i locally computes the function f i by running an optimization algorithm such as stochastic gradient descent (SGD) for E local epochs with its own dataset D i and learning rate l r to obtain a new local model L t+1 i",
". The local party then sends model update L t+1 i − G t back to the central server, who will averages over all updates with its own learning rate η to generate a new global model G t+1 :",
"This aggregation process will be iterated until FL finds the final global model.",
"Unless specified otherwise, we use G t (L t i ) to denote the model parameters of the global (local) model at round t.",
"Attacker ability.",
"Based on the Kerckhoffs's theory (Shannon, 1949) , we consider the strong attacker here who has full control of their local training process, such as backdoor data injection and updating local training hyperparameters including E and l r .",
"This scenario is quite practical since each local dataset is usually owned by one of the local parties.",
"However, attackers do not have the ability to influence the privilege of central server such as changing aggregation rules, nor tampering the training process and model updates of other parties.",
"Objective of backdoor attack.",
"Backdoor attack is designed to mislead the trained model to predict a target label τ on any input data that has an attacker-chosen pattern (i.e., a trigger) embedded.",
"Instead of preventing the convergence in accuracy as Byzantine attacks (Blanchard et al., 2017) , the purpose of backdoor attacks in FL is to manipulate local models and simultaneously fit the main task and backdoor task, so that the global model would behave normally on untampered data samples while achieving high attack success rate on backdoored data samples.",
"The adversarial objective for attacker i in round t with local datatset D i and target label τ is:",
"Here, the poisoned dataset",
"The function R transforms clean data in any class into backdoored data that have an attacker-chosen trigger pattern using a set of parameters φ.",
"For example, for image data, φ is factored into trigger location TL, trigger size TS and trigger gap TG (φ = {TS, TG, TL}), which are shown in Fig.2 .",
"The attacker can design his own trigger pattern and choose an optimal poison ratio r to result in a better model parameter w * i , with which G t+1 can both assign the highest probability to target label τ for backdoored data R(x i j , φ) and the ground truth label y i j for benign data x i j .",
"Through extensive experiments on diverse datasets including LOAN and three image datasets in different settings, we show that in standard FL our proposed DBA is more persistent and effective than centralized backdoor attack: DBA achieves higher attack success rate, faster convergence and better resiliency in single-shot and multiple-shot attack scenarios.",
"We also demonstrate that DBA is more stealthy and can successfully evade two robust FL approaches.",
"The effectiveness of DBA is explained using feature visual interpretation for inspecting its role in aggregation.",
"We also perform an in-depth analysis on the important factors that are unique to DBA to explore its properties and limitations.",
"Our results suggest DBA is a new and more powerful attack on FL than current backdoor attacks.",
"Our analysis and findings can provide new threat assessment tools and novel insights for evaluating the adversarial robustness of FL.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19999998807907104,
0.21212120354175568,
0.28169015049934387,
0.0714285671710968,
0.3333333432674408,
0.24561403691768646,
0.21052631735801697,
0.11538460850715637,
0.08955223113298416,
0.1599999964237213,
0.09090908616781235,
0.1090909019112587,
0.0555555522441864,
0.17721518874168396,
0.16129031777381897,
0.24657534062862396,
0.07843136787414551,
0.1355932205915451,
0.26923075318336487,
0.03389830142259598,
0.2222222238779068,
0.15789473056793213,
0.19672130048274994,
0.13793103396892548,
0.20000000298023224,
0.07407406717538834,
0.5357142686843872,
0.2432432323694229,
0.03703703358769417,
0.11999999731779099,
0.307692289352417,
0.072727270424366,
0.0923076868057251,
0.0307692252099514,
0.15789473056793213,
0.11428570747375488,
0.0833333283662796,
0.0363636314868927,
0.08695651590824127,
0.07843136787414551,
0.09677419066429138,
0.10256410390138626,
0.1904761791229248,
0.19512194395065308,
0.07547169178724289,
0,
0.06896550953388214,
0.0634920597076416,
0.0952380895614624,
0.33766233921051025,
0.31372547149658203,
0.039215680211782455,
0.2181818187236786,
0.307692289352417,
0.1111111044883728
] | rkgyS0VFvr | true | [
"We proposed a novel distributed backdoor attack on federated learning and show that it is not only more effective compared with standard centralized attacks, but also harder to be defended by existing robust FL methods"
] |
[
"Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning.",
"These methods typically work by generating node representations that are propagated throughout a given weighted graph.\n\n",
"Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead.",
"Towards this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm.",
"This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically.",
"Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism.\n\n",
"Experiments in two distinct settings demonstrate the utility of our approach.\n",
"We study the problem of graph-based semi-supervised learning (SSL), where the goal is to correctly label all nodes of a graph, of which only a few are labeled.",
"Methods for this problem are often based on assumptions regarding the relation between the graph and the predicted labels.",
"One such assumption is smoothness, which states that adjacent nodes are likely to have similar labels.",
"Smoothness can be encouraged by optimizing an objective where a loss term L over the labeled nodes is augmented with a quadratic penalty over edges: (1) Here, y are the true labels, f are \"soft\" label predictions, S is the set of labeled nodes, and w are non-negative edge weights.",
"The quadratic term in Eq. (1) is often referred to as Laplacian Regularization since (for directed graphs) it can equivalently be expressed using the graph Laplacian BID5 .",
"Many early methods for SSL have adopted the general form of Eq. (1) BID51 BID50 BID4 BID6 BID0 BID42 BID47 .",
"Algorithms such as the seminal Label Propagation BID51 are simple, efficient, and theoretically grounded but are limited in two important ways.",
"First, predictions are parameterized either naïvely or not at all.",
"Second, edge weights are assumed to be given as input, and in practice are often set heuristically.Recent deep learning methods address the first point by offering intricate predictive models that are trained discriminatively BID47 BID38 BID48 BID28 BID20 BID21 BID34 .",
"Nonetheless, many of them still require w as input, which may be surprising given the large body of work highlighting the importance of good weights BID51 BID24 BID46 BID4 BID25 .",
"While some methods consider some form of weight learning BID45 BID35 , to some extent they have drifted away from the original quadratic criterion.Other works address the second point by proposing disciplined ways for learning w.",
"However, these either assume specific simple parameterizations BID49 BID25 , or altogether consider weights disjointly from predictions BID46 BID32 .Our",
"goal in this paper is to simultaneously addresses both issues. We",
"propose a framework that, given a graph, jointly learns both a parametric predictive model and the edge weights. To",
"do this, we begin by revisiting the Label Propagation (LP), and casting it as a differentiable neural network. Each",
"layer in the network corresponds to a single iterative update, making a forward pass equivalent to a full run of the algorithm. Since",
"the network is differentiable, we can then optimize the weights of the LP solution using gradient descent. As we",
"show, this can be done efficiently with a suitable loss function.The key modeling point in our work is that labeled information is used as input to both the loss and the network. In contrast",
"to most current methods, our network's hidden layers directly propagate labeling information, rather than node or feature representations. Each layer",
"is therefore a self-map over the probability simplex; special care is therefore needed when introducing non-linearities. To this end",
", we introduce two novel architectural components that are explicitly designed to operate on distributions. The first",
"is an information-gated attention mechanism, where attention is directed based on the informativeness and similarity of neighboring nodes' states. The second",
"is a novel \"bifurcation\" operator that dynamically controls label convergence, and acts as a balancing factor to the model's depth.Our main guideline in designing our model was to tailor it to the semi-supervised setting. The result",
"is a slim model having relatively few parameters and only one model-specific hyper-parameter (depth), making it suitable for tasks where only few labeled nodes are available. The final",
"network provides a powerful generalization of the original propagation algorithm that can be trained efficiently. Experiments",
"on benchmark datasets in two distinct learning settings show that our model compares favorably against strong baselines.",
"In this work we presented a deep network for graph-based SSL.",
"Our design process revolved around two main ideas: that edge weights should be learned, and that labeled data should be propagated.",
"We began by revisiting the classic LP algorithm, whose simple structure allowed us to encode it as a differentiable neural network.",
"We then proposed two novel ad-hoc components: information-gated attention and bifurcation, and kept our design slim and lightly parameterized.",
"The resulting model is a powerful generalization of the original algorithm, that can be trained efficiently using the leave-one-out loss using few labeled nodes.We point out two avenues for future work.",
"First, despite its non-linearities, the current network still employs the same simple averaging updates that LP does.",
"An interesting challenge is to design general parametric update schemes, that can perhaps be learned.",
"Second, since the Laplacian's eigenvalues play an important role in both theory and in practice, an interesting question is whether these can be used as the basis for an explicit form of regularization.",
"We leave this for future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19354838132858276,
0.060606054961681366,
0.17142856121063232,
0.0624999962747097,
0.060606054961681366,
0.04999999701976776,
0.0714285671710968,
0.14999999105930328,
0.1818181723356247,
0,
0.06896550953388214,
0.0476190410554409,
0.1111111044883728,
0.1111111044883728,
0,
0.072727270424366,
0.04651162400841713,
0.08163265138864517,
0,
0,
0.12121211737394333,
0.11428570747375488,
0.05714285373687744,
0.0624999962747097,
0.0833333283662796,
0.2222222238779068,
0.060606054961681366,
0,
0.11428570747375488,
0.12244897335767746,
0.0952380895614624,
0.0624999962747097,
0,
0.14814814925193787,
0.05882352590560913,
0.05405404791235924,
0.060606054961681366,
0.08695651590824127,
0.0624999962747097,
0,
0.13333332538604736,
0.09090908616781235
] | r1g7y2RqYX | true | [
"Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations"
] |
[
"Neural architecture search (NAS) has made rapid progress incomputervision,wherebynewstate-of-the-artresultshave beenachievedinaseriesoftaskswithautomaticallysearched neural network (NN) architectures.",
"In contrast, NAS has not made comparable advances in natural language understanding (NLU).",
"Corresponding to encoder-aggregator meta architecture of typical neural networks models for NLU tasks (Gong et al. 2018), we re-define the search space, by splittingitinto twoparts:encodersearchspace,andaggregator search space.",
"Encoder search space contains basic operations such as convolutions, RNNs, multi-head attention and its sparse variants, star-transformers.",
"Dynamic routing is included in the aggregator search space, along with max (avg) pooling and self-attention pooling.",
"Our search algorithm is then fulfilled via DARTS, a differentiable neural architecture search framework.",
"We progressively reduce the search space every few epochs, which further reduces the search time and resource costs.",
"Experiments on five benchmark data-sets show that, the new neural networks we generate can achieve performances comparable to the state-of-the-art models that does not involve language model pre-training.\n",
"Neural architecture search (NAS) has recently attracted intensive attention.",
"On one hand, promising methodological innovation for NAS have been developed, e.g. the seminal gradient-based NAS approach DARTS (Liu, Simonyan, and Yang 2018) , followed by improvements such as SNAS (Xie et al. 2018 ), P-DARTS , PC-DARTS (Xu et al. 2019) , etc.",
"On the other hand, NAS has helped to discover better models to for a variety of vision tasks, e.g., image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018) , semantic segmentation , object detection (Ghiasi, Lin, and Le 2019) , superresolution (Ahn, Kang, and Sohn 2018) , etc.",
"For natural language processing tasks, NAS is relatively less studied.",
"Except for the general methodology-wise innovations NASNet (Zoph and Le 2016) , ENAS (Pham et al. 2018) and DARTS (Liu, Simonyan, and Yang 2018) which pay slight extra effort on searching for new RNN cells on Copyright c 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org).",
"All rights reserved.",
"language modeling (LM) tasks, there is little studies tailored to the NLU task.",
"One such an example is the evolved transformer (So, Liang, and Le 2019) , which uses the evolutionbased NAS algorithm to search for better transformer architecture for machine translation.",
"Although state-of-the-art performance has been achieved on 4 machine translation tasks, the computation cost is exceedingly high since they have to evaluate a large number of models.",
"In fact, NAS has not been fully investigated for a wide variety of fundamental natural language understanding (NLU) tasks, such as classification (e.g. or sentiment analysis), natural language inference (NLI), sequence tagging tasks such as named entity recognition (NER).",
"Especially, there is no existing work on the effectiveness of one-shot architecture search (Bender et al. 2018 ) methods on NLU tasks, which could also otherwise significantly reduce the search cost as done in vision tasks.",
"A typical neural network architecture for NLU includes an encoder which contextualizes the embedded text inputs and extracts higher-level features, and an aggregator that aggregates the encoded inputs to a fix-length vector to make a prediction (Gong et al. 2018) .",
"In terms of encoders, many previous NAS literature restrict the search space to nonlinear maps such as tanh and sigmoid, and the objective to be the discovery of a new recurrent cell to form a new type of recurrent neural network (RNN).",
"However, other than RNNs, there are many other available encoders, for example, convolutional networks (CNN) (Kim 2014) , and attentionbased model such as transformer (Vaswani et al. 2017) , etc.",
"In addition, recent works e.g. star-transformer (Guo et al. 2019) have proposed more sparse versions of transformer to reduce the computational complexity and improve the generalization when there is no pre-trained language model.",
"In addition, as far as we know, there is no existing work on searching for an aggregator.",
"A collection of aggregators are available (Gong et al. 2018) .",
"However, one have to choose manually in a trial-and-error fashion.",
"In this work, we design an encoder search space that contains a rich collection of encoders.",
"The involved operations include:",
"i) the zero map and identity map;",
"ii) the two most commonly used RNNs, LSTM (Hochreiter and Schmidhuber 1997) and GRU (Cho et al. 2014) ;",
"iii) highway network (Srivastava, Greff, and Schmidhuber 2015) ;",
"iv) a series of convolutional networks with different kernel sizes;",
"v) multi-head attention from (Vaswani et al. 2017) ;",
"vi) startransformer (Guo et al. 2019) and its variants, which will be explained later in the next section.",
"The combination of encoder operations is searched in a encoder search cell, which is a directed acyclic graph (DAG) of intermediate nodes collected by the encoder operations from the encoder search space.",
"To further reduce the human designs, we propose to search for a suitable aggregator along with the search of encoder cell via an aggregator search cell which includes max (average) pooling, self-attention pooling and dynamic routing (Gong et al. 2018) .",
"The aggregator search cell is a DAG with only one step in which the only node is connected to the inputs by a mixture of aggregators.",
"Our search strategy is mainly based on DARTS (Liu, Simonyan, and Yang 2018) .",
"To reduce computation cost, we employ a progressive search space reduction strategy similar to P-DARTS .",
"Experiments are performed on three different kinds of NLU tasks, i.e., text classification, NLI and NER, with 5 benchmark datasets.",
"For fair comparison, we only compare our results with former state-of-the-art (SOTA) models without large-scale LM pre-training, or any other outside resources like knowledge bases, or any human designed features.",
"Results have shown that with the help of NAS on our search space, we achieve results that are comparable to the SOTA on these 5 tasks, indicating the effectiveness of NAS in the field of NLU research.",
"Our work contributes the field by the following aspects:",
"• We re-define the search space for neural architecture search in NLU tasks, by extending and modifying the encoder search space from the evolved transformer, and define the aggregator search space.",
"• To the best of our knowledge, we are the first to conduct NAS experiments on NLU tasks such as classification, NLI, NER tasks, with one-shot NAS.",
"• Our approach achieves the results that are comparable to the state-of-the-art models designed by human experts, on various NLU tasks (classification, NLI, NER), by using neural architecture search over the search space defined above.",
"In addition, we demonstrate the effectiveness of one-shot architecture search for NLU tasks.",
"• We propose a modularized version of star-transformer and its variant, thus including a sparse version of transformer into the search space, which is also novel in the literature.",
"The resulting advantage is that the search cost can be reduced notably and the network's generalization capability can also be improved.",
"Related Work Recently, a new research field named neural architecture search (NAS) has been drawing more and more attention.",
"The goal is to find automatic mechanisms for generating new neural architectures to replace conventional handcrafted ones.",
"Recently, it is widely applied to computer vision tasks, such as image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018) , semantic segmentation , object detection (Ghiasi, Lin, and Le 2019) , super-resolution (Ahn, Kang, and Sohn 2018) , etc.",
"However, NAS is less well studied in the field of natural language understanding (NLU).",
"Recent works (Zoph and Le 2016; Pham et al. 2018; Liu, Simonyan, and Yang 2018) search new recurrent cells for the language modeling (LM) task on the PennTreebank dataset 1 .",
"The recurrent cell discovered by (Liu, Simonyan, and Yang 2018) achieves the test perplexity of 56.1, which is competitive with the stateof-the-art model enhanced by a mixture of softmaxes .",
"The evolved transformer (So, Liang, and Le 2019) applies NAS to discover better versions of the transformer architecture.",
"Eploying an evolution-based search algorithm, and the vanilla transformer as the initial population, it generates a better transformer architecture that consistently outperform the vanilla transformer on 4 benchmark machine translation tasks.",
"Our work contributes by going beyond the RNN structure and re-defining the search space to include a richer connection of operations.",
"Our work is implemented on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS .",
"DARTS relaxes the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent.",
"Due to its simplicity, DARTS has inspired a series follow-up work to improve the search stability and efficiency.",
"Based on DARTS, P-DARTS ) divides the search process into multiple stages and progressively increase the network depth at the end of each stage.",
"Our work contributes to the gradient-based NAS (and more generally, one-shot NAS) research by investigating its effectiveness in discovering new NN architectures for a series of NLU tasks.",
"Our search space design takes advantages of the recent advances in the NLU field.",
"One of the most import advances in sentence encoding is the application of various self-attention mechanisms, among which the transformer (Vaswani et al. 2017 ) is the most prominent one, which has become ubiquitous in NLU research.",
"Specifically, the QANet ) modifies the transformer architecture to obtain the first place on the SQuaD leaderboard 2 .",
"The transformer is powerful due to its multi-head self-attention mechanism, which can well capture the contextual information.",
"However, the transformer maybe be difficult to train and generalize well on a small or medium sized data-set (Guo et al. 2019 ).",
"Thus, many other self-attention operations are proposed, e.g., dynamic self-attention (Yoon, Lee, and Lee 2018) and DiSAN (Shen et al. 2018) .",
"Recently, (Guo et al. 2019) propose the star-transformer, a sparser version of the multi-head attention model, and achieves competitive results on a series of benchmark datasets like SST-1, SNLI, CoNLL2003.",
"On the aggregation side, an important advancement is the application of capsule networks and dynamic routing policy in text classification Gong et al. 2018) .",
"Capsule networks can dynamically decide what and how much information need to be transferred from each word to the final encoding of the text sequence, thus achieving better results even with simple encoders (Gong et al. 2018 ).",
"Our work is built upon these work and contributes by:",
"i) include some of the most prominent attention based encoders and aggregators into the search space, and experiment on whether NAS can generate new architectures that have competitive results;",
"ii) we are the first to propose the aggregator search space;",
"iii) we include a modularized version of the star-transformer and its variant into the search space, thus we are the first to combine the dense and sparse multi-head self-attention operations into the same search space.",
"Results on SST Results on SST-1 and SST-2 datasets are listed in Table 2 .",
"On the SST-1, DARTS generate a network architecture (DARTS-SST-1-V0) that performs better than most of the traditional NN models.",
"Not that the encoder cell of DARTS-SST-1-V0 contains only RNN and CNN operations, but the exact details of combination of different level of features are impossible to design manually.",
"The best ar- (Le and Mikolov 2014) 48.7 87.8 MT-LSTM (F2S) 49.1 87.2 Tree-LSTM (Tai, Socher, and Manning 2015) 51.0 88.0 CNN-Tensor (Lei, Barzilay, and Jaakkola 2015) 51.2 -BiLSTM + max pooling (Gong et al. 2018) 48.0 87.0 BiLSTM + average pooling (Gong et al. 2018) 46.2 85.2 BiLSTM + self-att (Gong et al. 2018) 48.2 86.4 BiLSTM + dynamic routing (Gong et al. 2018) 50.5 87.6 Emb + self-att (Shen et al. 2018) 48.9 -DiSAN (Shen et al. 2018) 51.7 -BiLSTM + self-att (Yoon, Lee, and Lee 2018) 50.4 88.2 CNN + self-att (Yoon, Lee, and Lee 2018) 50.6 88.3 Dynamic self-att (Yoon, Lee, and Lee 2018) 50.6 88.5 Transformer (Guo et al. 2019) 50 chitecture (DARTS-SST-2-V0) we obtained on the SST-2 dataset involves a star-transformer operation and an identity map.",
"Note that since (Guo et al. 2019 ) did not provide results on SST-2, we use the code from fastNLP 4 to run the transformer and the original star-transformer on SST-2.",
"The results given by us are all the average of 10 different runs.",
"We can see that DARTS-SST-2-V0 can obtain results comparable to the SOTA on SST-2.",
"We also experiment on the transferability of the learned architectures.",
"From Table 2 , we can see that DARTS-SST-2-V0 performs worse than DARTS-SST-1-V0 on SST-1 with a significant margin, but DARTS-SST-1-V0 also performs competitively on SST-2.",
"Results on NLI tasks Among the architecture candidates derived from the search on SciTail, we find that the one obtained by accepting the null operation when it gets the highest score (DARTS-SciTail-V0) performs best.",
"In addition, this search run gives the average pooling as the aggregator instead of dynamic-routing.",
"The results are presented in Table 3 : Test accuracy (%) on the SciTail dataset.",
"Model ACC 600D ESIM 70.6 Decomposable Attention 72.3 DGEM 72.3 AdvEntuRe 79.0 HCRN (Tay, Luu, and Hui 2018) 80.0 DeIsTe (Yin, Schütze, and Roth 2018) 82.1 CAFE (Yin, Schütze, and Roth 2018) 83.3 MIMN 84.0 ConSeqNet 85.2 HBMP (Mihaylov et al. 2018) 86.0 star-transformer (Guo et al. 2019) 79 Table 3 .",
"DARTS-SciTail-V0 achieves a competitive performance on the test set, outperforming the baseline models such as ESIM and decomposable attention by a large margin.",
"It also outperforms the results of the star-transformer and transformer even after extensively parameters tuning.",
"Our model is actually the best one that has no inter-sentence attentions other than the final interaction before the prediction layer, and uses no outside resources, no manually designed features and no extra training mechanism like adversarial training.",
"As we can see from Figure 5 that, on the MedNLI dataset, the search gives out a architecture (DARTS-MedNLI-V0) that quite resembles the original implementation of the multi-head attention inside the transformer block, except the residual connection is replaced by a sep conv with kernel size 3.",
"DARTS-MedNLI-V0 performs worse than the original star-transformer, but it is better than the original transformer, and the baseline ESIM and InferSent.",
"We also look into the transferability between the two task.",
"We find that although the datasets are from different domains, the architecture searched on one performs comparable on the other.",
"This paper addresses NAS for a series of NLU tasks.",
"Corresponding to the encoder-aggregator architecture of typical NN models for NLU (Gong et al. 2018) , we redefine the search space, by splitting it into encoder search space and aggregator search space.",
"Our search strategy is based on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS .",
"Experiments shows that architectures discovered by NAS achieves results that are comparable to the previous SOTA models.",
"In the further, we would like to investigate one-shot architecture search on more large-scale NLU tasks."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.15789473056793213,
0,
0.4000000059604645,
0.09756097197532654,
0.09999999403953552,
0.21621620655059814,
0.14999999105930328,
0.1538461446762085,
0.1818181723356247,
0.0624999962747097,
0.2028985470533371,
0,
0.12121211737394333,
0,
0.1621621549129486,
0.19999998807907104,
0.19607841968536377,
0.13333332538604736,
0.21052631735801697,
0.20689654350280762,
0.25,
0.038461532443761826,
0.10526315122842789,
0.04999999701976776,
0.05882352590560913,
0.11764705181121826,
0.19999998807907104,
0,
0.06451612710952759,
0.0476190410554409,
0,
0.1764705777168274,
0,
0.0476190410554409,
0.21276594698429108,
0.20338982343673706,
0.21739129722118378,
0.05405404791235924,
0.20512820780277252,
0.08695651590824127,
0.038461532443761826,
0.18867923319339752,
0.0624999962747097,
0.260869562625885,
0.20408162474632263,
0.290909081697464,
0.37837836146354675,
0.16326530277729034,
0.0952380895614624,
0.1904761791229248,
0.14999999105930328,
0.03278687968850136,
0.10526315122842789,
0.1538461446762085,
0.11764705181121826,
0.24390242993831635,
0.19999998807907104,
0.27272728085517883,
0,
0.21739129722118378,
0.24390242993831635,
0.1304347813129425,
0.3461538553237915,
0.2702702581882477,
0.11320754140615463,
0.1538461446762085,
0.09756097197532654,
0.12765957415103912,
0,
0.15686273574829102,
0.08510638028383255,
0.09999999403953552,
0,
0.15686273574829102,
0.1764705777168274,
0.23529411852359772,
0,
0.2380952388048172,
0.12244897335767746,
0.03999999538064003,
0.07692307233810425,
0.10810810327529907,
0.10810810327529907,
0.12121211737394333,
0.04255318641662598,
0.15094339847564697,
0.15789473056793213,
0.05128204822540283,
0,
0.13333332538604736,
0.10526315122842789,
0.0363636314868927,
0.1538461446762085,
0.04999999701976776,
0.060606054961681366,
0.09756097197532654,
0.3529411852359772,
0.3461538553237915,
0.05405404791235924,
0.14999999105930328,
0.29999998211860657
] | rkgARFTUjB | true | [
"Neural Architecture Search for a series of Natural Language Understanding tasks. Design the search space for NLU tasks. And Apply differentiable architecture search to discover new models"
] |
[
"Network embedding (NE) methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space.",
"These representations are then used for a variety of downstream prediction tasks.",
"Link prediction is one of the most popular choices for assessing the performance of NE methods.",
"However, the complexity of link prediction requires a carefully designed evaluation pipeline to provide consistent, reproducible and comparable results.",
"We argue this has not been considered sufficiently in recent works.",
"The main goal of this paper is to overcome difficulties associated with evaluation pipelines and reproducibility of results.",
"We introduce EvalNE, an evaluation framework to transparently assess and compare the performance of NE methods on link prediction.",
"EvalNE provides automation and abstraction for tasks such as hyper-parameter tuning, model validation, edge sampling, computation of edge embeddings and model validation.",
"The framework integrates efficient procedures for edge and non-edge sampling and can be used to easily evaluate any off-the-shelf embedding method.",
"The framework is freely available as a Python toolbox.",
"Finally, demonstrating the usefulness of EvalNE in practice, we conduct an empirical study in which we try to replicate and analyse experimental sections of several influential papers.",
"Link prediction is an important task with applications in a wide range of fields such as computer science, social sciences, biology, and medicine BID6 BID14 BID15 BID22 .",
"It amounts to estimating the likelihood for the existence of edges, between pairs of nodes that do not form an edge in the input graph.",
"Many Network Embedding (NE) methods (e.g., BID0 BID2 BID5 BID8 BID10 BID12 BID17 BID18 BID19 have recently been applied to solving link prediction problems, showing promising results. These methods map nodes in the network to vectors in IR d . This embedding is then used for a variety of tasks such as visualization, multi-label classification, clustering or link prediction.The challenges of evaluating NE methods for link prediction We argue that the practical performance of most NE methods is poorly understood and that experiments in many papers are difficult to compare due to variation in experimental setup and evaluation procedures. In this paper, we focus on a number of difficulties specific to the evaluation of NE methods for link prediction. Link prediction is a particularly challenging task to evaluate as it involve a number design choices, which can confound the results and are prone to errors.1) Train-test splitting of graphs For example, a typical implicit assumption is that the input graph is not complete, and the purpose is to accurately predict the missing edges.",
"To evaluate the performance of an NE method for link prediction, one thus needs an (incomplete) training graph along with a (more) complete version of that graph for testing.",
"Much research has been devoted to determining the best approach to generate these training graphs BID6 BID14 BID22 .",
"Strong theoretical and empirical evidence suggest that in order to fairly evaluate link prediction methods, snapshots of the network at different points in time should be used for training and testing.",
"In this way, the link prediction methods are tested on the natural evolutions of the networks.",
"However, the availability of such snapshots is uncommon and raises additional questions, such as how to choose the time intervals for splitting the network.For these reasons, authors typically resort to sampling sets of edges from the input graphs and using the resulting sub-graphs for training BID5 BID8 BID10 BID12 .",
"The remaining edges are used as positive test examples.",
"The process of sampling edges is not standardized and varies between scientific works.",
"The relative sizes of the train and test sets, for example, is a user-defined parameter which varies significantly.",
"In BID8 ; BID10 the authors use a 50-50 train-test split, in BID5 ) a 60-40, in Lai et al. (2017 an 80-20 and in BID20 values ranging from 30-70 up to 80-20.A related problem is that, in addition to the 'positive' train and test edges, often also 'negative' edges (or non-edges) are required.",
"Sometimes these are used to derive the embedding, while in other cases they are used only to train the classifier that predicts links.",
"These sets of non-edges can be selected according to different strategies (Kotnis & Nastase) and can be of various sizes.2) From node embeddings to edge predictions Furthermore, most NE methods simply provide node embeddings.",
"From these, edge embeddings need to be derived prior to performing predictions.",
"There are several approaches for deriving edge embeddings which also seem to have a strong impact on the performance of different methods BID8 .3) Evaluation measures Also the metrics used to evaluate the accuracy varies, e.g., from AUC-ROC BID10 , to precision-recall BID21 , to precision@k BID20 .",
"The recent surge of research in the area of network embeddings has resulted in a wide variety of data sets, metrics, and setups for evaluating and comparing the utility of embedding methods.",
"Comparability across studies is lacking and not all evaluations are equally sound.",
"This highlights the need for specific tools and pipelines to ensure the correct evaluation of these methods.",
"Particularly, the use of representation learning for link prediction tasks requires train and test sampling, non-edge sampling, and in many cases selection of edge embedding methods and binary classifiers.",
"The evaluation procedure, thus, becomes an ensemble of tasks which allow for many errors or inconsistencies.In this work we have proposed EvalNE, a novel framework that can be used to evaluate any network embedding method for link prediction.",
"Our pipeline automates the selection of train and test edge sets, simplifies the process of tuning model parameters and reports the accuracy of the methods according to many criteria.",
"Our experiments highlight the importance of the edge sampling strategy and parameter tuning for evaluating NE methods.",
"We have also introduced a scalable procedure to select edge sets from given networks and showed empirically that is orders or magnitude faster than the naive approaches used in recent literature."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1904761791229248,
0.2222222238779068,
0.2631579041481018,
0.3720930218696594,
0.05714285373687744,
0.3414634168148041,
0.4651162624359131,
0.1395348757505417,
0.13636362552642822,
0.1818181723356247,
0.1666666567325592,
0.15686273574829102,
0.1304347813129425,
0.22068965435028076,
0.20408162474632263,
0.04878048226237297,
0.2641509473323822,
0.42105263471603394,
0.1538461446762085,
0,
0.10810810327529907,
0.2380952388048172,
0.11267605423927307,
0.04651162400841713,
0.11320754140615463,
0,
0.1818181723356247,
0.3199999928474426,
0.0555555522441864,
0.29999998211860657,
0.3265306055545807,
0.3870967626571655,
0.1702127605676651,
0.25,
0.1090909019112587
] | H1eJH3IaLN | true | [
"In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results."
] |
Dataset Card for SciTLDR
Dataset Summary
SciTLDR: Extreme Summarization of Scientific Documents
SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.
Supported Tasks and Leaderboards
summarization
Languages
English
Dataset Structure
SciTLDR is split into a 60/20/20 train/dev/test split. For each file, each line is a JSON object, formatted as follows:
{
"source":[
"sent0",
"sent1",
"sent2",
...
],
"source_labels":[binary list in which 1 is the oracle sentence],
"rouge_scores":[precomputed rouge-1 scores],
"paper_id":"PAPER-ID",
"target":[
"author-tldr",
"pr-tldr0",
"pr-tldr1",
...
],
"title":"TITLE"
}
The keys rouge_scores and source_labels are not necessary for any code to run; precomputed ROUGE scores are provided for future research.
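Since each split is stored as JSON Lines in the schema above, a split can be read with nothing more than the standard library. The sketch below is only an illustration: the file name train.jsonl is an assumed placeholder, not an official file name.

```python
import json

# Minimal sketch: iterate over one split stored as JSON Lines, where each
# line follows the schema shown above. "train.jsonl" is an assumed file name.
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        sentences = example["source"]   # list of source sentences
        tldrs = example["target"]       # one or more reference TLDRs
        print(example["paper_id"], len(sentences), "sentences,", len(tldrs), "TLDRs")
```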
Data Instances
{ "source": [ "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.", "MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.", "Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.", "We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.", "We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.", "We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point." ], "source_labels": [ 0, 0, 0, 1, 0, 0 ], "rouge_scores": [ 0.2399999958000001, 0.26086956082230633, 0.19999999531250012, 0.38095237636054424, 0.2051282003944774, 0.2978723360796741 ], "paper_id": "rJlnfaNYvB", "target": [ "We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.", "Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.", "The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically." ], "title": "Adaptive Loss Scaling for Mixed Precision Training" }
Data Fields
- source: The Abstract, Introduction and Conclusion (AIC) or Full text of the paper, with one sentence per line.
- source_labels: Binary 0 or 1, where 1 denotes the oracle sentence.
- rouge_scores: Precomputed ROUGE baseline scores for each sentence (see the sketch after this list for one way to use them).
- paper_id: Arxiv Paper ID.
- target: Multiple reference summaries (TLDRs) for the paper, one per line.
- title: Title of the paper.
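To illustrate how source_labels and rouge_scores line up with source, the sketch below recovers the oracle sentence from an example dict shaped like the instance above. It assumes the three fields are parallel lists of equal length, which is how they appear in the data; the helper name is ours, not part of the dataset.

```python
def oracle_sentence(example):
    """Return the oracle sentence and whether it also has the highest ROUGE score."""
    # Index of the sentence flagged as the extractive oracle (value 1).
    oracle_idx = example["source_labels"].index(1)
    # Index of the sentence with the highest precomputed ROUGE score.
    best_rouge_idx = max(range(len(example["rouge_scores"])),
                         key=lambda i: example["rouge_scores"][i])
    return example["source"][oracle_idx], oracle_idx == best_rouge_idx
```

On the instance shown above, the oracle index is 3 and it coincides with the highest precomputed ROUGE score (0.38095...).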
Data Splits
|                  | train | valid | test |
|------------------|-------|-------|------|
| SciTLDR-A        | 1992  | 618   | 619  |
| SciTLDR-AIC      | 1992  | 618   | 619  |
| SciTLDR-FullText | 1992  | 618   | 619  |
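If the dataset is consumed through the Hugging Face datasets library rather than raw files, the three configurations above would typically be selected by name. The repository id allenai/scitldr and the config names Abstract, AIC, and FullText used below are assumptions based on common naming; check the dataset page for the exact identifiers.

```python
from datasets import load_dataset

# Hypothetical loading call; the dataset id and config name are assumptions.
scitldr_aic = load_dataset("allenai/scitldr", "AIC")

print(scitldr_aic)                         # DatasetDict with train/validation/test splits
print(scitldr_aic["train"][0]["target"])   # reference TLDRs of the first paper
```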
Dataset Creation
[More Information Needed]
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
Given the title and first 128 words of a reviewer comment about a paper, re-write the summary (if it exists) into a single sentence or an incomplete phrase. Summaries must be no more than one sentence. Most summaries are between 15 and 25 words. The average rewritten summary is 20 words long.
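The word-count guidance above (most summaries between 15 and 25 words, about 20 on average) can be sanity-checked directly against the target field. Below is a small sketch under the assumption that examples are dicts shaped like the instance shown earlier; the function name is ours.

```python
def average_tldr_length(examples):
    """Average whitespace-token length of all reference TLDRs in a list of examples."""
    lengths = [len(tldr.split()) for ex in examples for tldr in ex["target"]]
    return sum(lengths) / len(lengths) if lengths else 0.0

# Usage: average_tldr_length(list_of_examples)
```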
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
To encourage further research in the area of extreme summarization of scientific documents.
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
Apache License 2.0
Citation Information
@article{cachola2020tldr,
  title={{TLDR}: Extreme Summarization of Scientific Documents},
  author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
  journal={arXiv:2004.15011},
  year={2020},
}
Contributions
Thanks to @Bharat123rox for adding this dataset.