Column schema (dtype and min–max value/length ranges, as reported by the dataset viewer):

uid                  int64     4 – 318k
paper_url            string    lengths 39 – 81
arxiv_id             string    lengths 9 – 16
title                string    lengths 6 – 365
abstract             string    lengths 0 – 7.27k
url_abs              string    lengths 17 – 601
url_pdf              string    lengths 21 – 819
proceeding           string    lengths 7 – 1.03k
authors              sequence
tasks                sequence
date                 float64   422B – 1,672B
methods              list
__index_level_0__    int64     1 – 197k
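Each record in this dump flattens the thirteen fields above, one per line, in schema order. Below is a minimal loading sketch, assuming the dump is hosted as a Hugging Face dataset; the dataset ID `user/pwc-papers` is a hypothetical placeholder, not the real repository name. The `date` column holds Unix epoch milliseconds, which the sketch converts to a calendar date.

```python
# Minimal loading sketch. "user/pwc-papers" is a hypothetical dataset ID,
# used here only as a placeholder for wherever this dump is hosted.
from datetime import datetime, timezone

from datasets import load_dataset

ds = load_dataset("user/pwc-papers", split="train")

row = ds[0]
# `date` stores Unix epoch milliseconds as float64, e.g.
# 1,603,929,600,000 -> 2020-10-29 00:00:00 UTC.
published = datetime.fromtimestamp(row["date"] / 1000, tz=timezone.utc)
print(row["uid"], row["arxiv_id"], published.date(), row["title"])
```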
172,339
https://paperswithcode.com/paper/maximum-a-posteriori-signal-recovery-for
2010.15682
Maximum a posteriori signal recovery for optical coherence tomography angiography image generation and denoising
Optical coherence tomography angiography (OCTA) is a novel and clinically promising imaging modality to image retinal and sub-retinal vasculature. Based on repeated optical coherence tomography (OCT) scans, intensity changes are observed over time and used to compute OCTA image data. OCTA data are prone to noise and artifacts caused by variations in flow speed and patient movement. We propose a novel iterative maximum a posteriori signal recovery algorithm in order to generate OCTA volumes with reduced noise and increased image quality. This algorithm is based on previous work on probabilistic OCTA signal models and maximum likelihood estimates. Reconstruction results using total variation minimization and wavelet shrinkage for regularization were compared against an OCTA ground truth volume, merged from six co-registered single OCTA volumes. The results show a significant improvement in peak signal-to-noise ratio and structural similarity. The presented algorithm brings together OCTA image generation and Bayesian statistics and can be developed into new OCTA image generation and denoising algorithms.
https://arxiv.org/abs/2010.15682v1
https://arxiv.org/pdf/2010.15682v1.pdf
null
[ "Lennart Husvogt", "Stefan B. Ploner", "Siyu Chen", "Daniel Stromer", "Julia Schottenhamml", "A. Yasin Alibhai", "Eric Moult", "Nadia K. Waheed", "James G. Fujimoto", "Andreas Maier" ]
[ "Denoising", "Image Generation" ]
1,603,929,600,000
[]
25,324
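For orientation, the first row above maps onto the schema as follows (a hand-written sketch; the abstract and author list are truncated for brevity):

```python
# The first record of the dump, written out as a Python dict in schema order.
record = {
    "uid": 172_339,
    "paper_url": "https://paperswithcode.com/paper/maximum-a-posteriori-signal-recovery-for",
    "arxiv_id": "2010.15682",
    "title": "Maximum a posteriori signal recovery for optical coherence "
             "tomography angiography image generation and denoising",
    "abstract": "Optical coherence tomography angiography (OCTA) is ...",  # truncated
    "url_abs": "https://arxiv.org/abs/2010.15682v1",
    "url_pdf": "https://arxiv.org/pdf/2010.15682v1.pdf",
    "proceeding": None,  # "null" in the dump
    "authors": ["Lennart Husvogt", "Stefan B. Ploner", "Siyu Chen"],  # first 3 of 10
    "tasks": ["Denoising", "Image Generation"],
    "date": 1_603_929_600_000.0,  # Unix epoch ms: 2020-10-29 UTC
    "methods": [],
    "__index_level_0__": 25_324,
}
```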
3,841
https://paperswithcode.com/paper/code-completion-with-neural-attention-and
1711.09573
Code Completion with Neural Attention and Pointer Networks
Intelligent code completion has become an essential research task to accelerate modern software development. To facilitate effective code completion for dynamically-typed programming languages, we apply neural language models by learning from large codebases, and develop a tailored attention mechanism for code completion. However, standard neural language models even with attention mechanism cannot correctly predict the out-of-vocabulary (OoV) words that restrict the code completion performance. In this paper, inspired by the prevalence of locally repeated terms in program source code, and the recently proposed pointer copy mechanism, we propose a pointer mixture network for better predicting OoV words in code completion. Based on the context, the pointer mixture network learns to either generate a within-vocabulary word through an RNN component, or regenerate an OoV word from local context through a pointer component. Experiments on two benchmarked datasets demonstrate the effectiveness of our attention mechanism and pointer mixture network on the code completion task.
http://arxiv.org/abs/1711.09573v2
http://arxiv.org/pdf/1711.09573v2.pdf
null
[ "Jian Li", "Yue Wang", "Michael R. Lyu", "Irwin King" ]
[ "Code Completion" ]
1,511,740,800,000
[]
140,067
151,672
https://paperswithcode.com/paper/naist-s-machine-translation-systems-for-iwslt
null
NAIST's Machine Translation Systems for IWSLT 2020 Conversational Speech Translation Task
This paper describes NAIST's NMT system submitted to the IWSLT 2020 conversational speech translation task. We focus on the translation of disfluent speech transcripts that include ASR errors and non-grammatical utterances. We tried a domain adaptation method by transferring the styles of out-of-domain data (United Nations Parallel Corpus) to be like in-domain data (Fisher transcripts). Our results showed that the NMT model with domain adaptation outperformed a baseline. In addition, a slight improvement from the style transfer was observed.
https://aclanthology.org/2020.iwslt-1.21
https://aclanthology.org/2020.iwslt-1.21.pdf
WS 2020 7
[ "Ryo Fukuda", "Katsuhito Sudoh", "Satoshi Nakamura" ]
[ "Domain Adaptation", "Machine Translation", "Style Transfer" ]
1,593,561,600,000
[]
124,264
124,349
https://paperswithcode.com/paper/influence-aware-memory-for-deep-reinforcement-1
1911.07643
Influence-aware Memory Architectures for Deep Reinforcement Learning
Due to its perceptual limitations, an agent may have too little information about the state of the environment to act optimally. In such cases, it is important to keep track of the observation history to uncover hidden state. Recent deep reinforcement learning methods use recurrent neural networks (RNN) to memorize past observations. However, these models are expensive to train and have convergence difficulties, especially when dealing with high dimensional input spaces. In this paper, we propose influence-aware memory (IAM), a theoretically inspired memory architecture that tries to alleviate the training difficulties by restricting the input of the recurrent layers to those variables that influence the hidden state information. Moreover, as opposed to standard RNNs, in which every piece of information used for estimating Q values is inevitably fed back into the network for the next prediction, our model allows information to flow without being necessarily stored in the RNN's internal memory. Results indicate that, by letting the recurrent layers focus on a small fraction of the observation variables while processing the rest of the information with a feedforward neural network, we can outperform standard recurrent architectures both in training speed and policy performance. This approach also reduces runtime and obtains better scores than methods that stack multiple observations to remove partial observability.
https://arxiv.org/abs/1911.07643v4
https://arxiv.org/pdf/1911.07643v4.pdf
null
[ "Miguel Suau", "Jinke He", "Elena Congeduti", "Rolf A. N. Starre", "Aleksander Czechowski", "Frans A. Oliehoek" ]
[ "reinforcement-learning" ]
1,574,035,200,000
[]
166,238
101,001
https://paperswithcode.com/paper/deep-unified-multimodal-embeddings-for
1905.07075
Deep Unified Multimodal Embeddings for Understanding both Content and Users in Social Media Networks
There has been an explosion of multimodal content generated on social media networks in the last few years, which has necessitated a deeper understanding of social media content and user behavior. We present a novel content-independent content-user-reaction model for social multimedia content analysis. Compared to prior works that generally tackle semantic content understanding and user behavior modeling in isolation, we propose a generalized solution to these problems within a unified framework. We embed users, images and text drawn from open social media in a common multimodal geometric space, using a novel loss function designed to cope with distant and disparate modalities, and thereby enable seamless three-way retrieval. Our model not only outperforms unimodal embedding based methods on cross-modal retrieval tasks but also shows improvements stemming from jointly solving the two tasks on Twitter data. We also show that the user embeddings learned within our joint multimodal embedding model are better at predicting user interests compared to those learned with unimodal content on Instagram data. Our framework thus goes beyond the prior practice of using explicit leader-follower link information to establish affiliations by extracting implicit content-centric affiliations from isolated users. We provide qualitative results to show that the user clusters emerging from learned embeddings have consistent semantics and the ability of our model to discover fine-grained semantics from noisy and unstructured data. Our work reveals that social multimodal content is inherently multimodal and possesses a consistent structure because in social networks meaning is created through interactions between users and content.
https://arxiv.org/abs/1905.07075v3
https://arxiv.org/pdf/1905.07075v3.pdf
null
[ "Karan Sikka", "Lucas Van Bramer", "Ajay Divakaran" ]
[ "Cross-Modal Retrieval" ]
1,558,051,200,000
[]
108,730
105,815
https://paperswithcode.com/paper/few-shot-learning-with-per-sample-rich
1906.03859
Few-Shot Learning with Per-Sample Rich Supervision
Learning with few samples is a major challenge for parameter-rich models like deep networks. In contrast, people learn complex new concepts even from very few examples, suggesting that the sample complexity of learning can often be reduced. Many approaches to few-shot learning build on transferring a representation from well-sampled classes, or on using meta-learning to favor architectures that can learn with few samples. Unfortunately, such approaches often struggle when learning in an online way or with non-stationary data streams. Here we describe a new approach to learning with fewer samples, by using additional information that is provided per sample. Specifically, we show how the sample complexity can be reduced by providing semantic information about the relevance of features per sample, like information about the presence of objects in a scene or the confidence of detecting attributes in an image. We provide an improved generalization error bound for this case. We cast the problem of using per-sample feature relevance in terms of a new ellipsoid-margin loss, and develop an online algorithm that minimizes this loss effectively. Empirical evaluation on two machine vision benchmarks for scene classification and fine-grained bird classification demonstrates the benefits of this approach for few-shot learning.
https://arxiv.org/abs/1906.03859v1
https://arxiv.org/pdf/1906.03859v1.pdf
null
[ "Roman Visotsky", "Yuval Atzmon", "Gal Chechik" ]
[ "Few-Shot Learning", "Classification", "Meta-Learning", "Scene Classification" ]
1,560,124,800,000
[]
81,212
9,528
https://paperswithcode.com/paper/constrained-image-generation-using-binarized
1802.08795
Constrained Image Generation Using Binarized Neural Networks with Decision Procedures
We consider the problem of binary image generation with given properties. This problem arises in a number of practical applications, including the generation of artificial porous media for electrodes of lithium-ion batteries, for composite materials, etc. A generated image represents a porous medium and, as such, it is subject to two sets of constraints: topological constraints on the structure and process constraints on the physical process over this structure. To perform image generation we need to define a mapping from a porous medium to its physical process parameters. For a given geometry of a porous medium, this mapping can be done by solving a partial differential equation (PDE). However, embedding a PDE solver into the search procedure is computationally expensive. We use a binarized neural network to approximate a PDE solver. This allows us to encode the entire problem as a logical formula. Our main contribution is that, for the first time, we show that this problem can be tackled using decision procedures. Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints.
http://arxiv.org/abs/1802.08795v1
http://arxiv.org/pdf/1802.08795v1.pdf
null
[ "Svyatoslav Korneev", "Nina Narodytska", "Luca Pulina", "Armando Tacchella", "Nikolaj Bjorner", "Mooly Sagiv" ]
[ "Image Generation" ]
1,519,430,400,000
[]
121,913
63,916
https://paperswithcode.com/paper/learning-to-predict-denotational
null
Learning to Predict Denotational Probabilities For Modeling Entailment
We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.
https://aclanthology.org/E17-1068
https://aclanthology.org/E17-1068.pdf
EACL 2017 4
[ "Alice Lai", "Julia Hockenmaier" ]
[ "Coreference Resolution", "Natural Language Inference" ]
1,491,004,800,000
[]
74,007
201,003
https://paperswithcode.com/paper/adversarially-guided-actor-critic-1
2102.04376
Adversarially Guided Actor-Critic
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
https://arxiv.org/abs/2102.04376v1
https://arxiv.org/pdf/2102.04376v1.pdf
ICLR 2021 1
[ "Yannis Flet-Berliac", "Johan Ferret", "Olivier Pietquin", "Philippe Preux", "Matthieu Geist" ]
[ "Efficient Exploration" ]
1,612,742,400,000
[]
50,348
75,241
https://paperswithcode.com/paper/generative-entity-networks-disentangling
null
Generative Entity Networks: Disentangling Entities and Attributes in Visual Scenes using Partial Natural Language Descriptions
Generative image models have made significant progress in the last few years, and are now able to generate low-resolution images which sometimes look realistic. However, the state-of-the-art models utilize fully entangled latent representations where small changes to a single neuron can affect every output pixel in relatively arbitrary ways, and different neurons have possibly arbitrary relationships with each other. This limits the ability of such models to generalize to new combinations or orientations of objects, as well as their ability to connect with more structured representations such as natural language, without explicit strong supervision. In this work we explore the synergistic effect of using partial natural language scene descriptions to help disentangle the latent entities visible in an image. We present a novel neural network architecture called Generative Entity Networks, which jointly generates both the natural language descriptions and the images from a set of latent entities. Our model is based on the variational autoencoder framework and makes use of visual attention to identify and characterise the visual attributes of each entity. Using the Shapeworld dataset, we show that our representation both enables a better generative model of images, leading to higher-quality image samples, and creates more semantically useful representations that improve performance over purely discriminative models on a simple natural language yes/no question answering task.
https://openreview.net/forum?id=BJInMmWC-
https://openreview.net/pdf?id=BJInMmWC-
ICLR 2018 1
[ "Charlie Nash", "Sebastian Nowozin", "Nate Kushman" ]
[ "Question Answering" ]
1,514,764,800,000
[ { "code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38", "description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)", "full_name": "AutoEncoder", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "AutoEncoder", "source_title": "Reducing the Dimensionality of Data with Neural Networks", "source_url": "https://science.sciencemag.org/content/313/5786/504" } ]
5,299
298,219
https://paperswithcode.com/paper/where-are-my-neighbors-exploiting-patches
2206.00481
Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer
Vision Transformers (ViTs) enabled the use of the transformer architecture on vision tasks, showing impressive performance when trained on big datasets. However, on relatively small datasets, ViTs are less accurate given their lack of inductive bias. To this end, we propose a simple but still effective self-supervised learning (SSL) strategy to train ViTs that, without any external annotation, can significantly improve the results. Specifically, we define a set of SSL tasks based on relations of image patches that the model has to solve before or jointly during the downstream training. Differently from ViT, our RelViT model optimizes all the output tokens of the transformer encoder that are related to the image patches, thus exploiting more training signal at each training step. We investigated our proposed methods on several image benchmarks, finding that RelViT improves the SSL state-of-the-art methods by a large margin, especially on small datasets.
https://arxiv.org/abs/2206.00481v1
https://arxiv.org/pdf/2206.00481v1.pdf
null
[ "Guglielmo Camporese", "Elena Izzo", "Lamberto Ballan" ]
[ "Inductive Bias", "Self-Supervised Learning" ]
1,654,041,600,000
[]
192,503
197,581
https://paperswithcode.com/paper/fakebuster-a-deepfakes-detection-tool-for
2101.03321
FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios
This paper proposes a new DeepFake detector FakeBuster for detecting impostors during video conferencing and manipulated faces on social media. FakeBuster is a standalone deep learning based solution, which enables a user to detect if another person's video is manipulated or spoofed during a video conferencing based meeting. This tool is independent of video conferencing solutions and has been tested with Zoom and Skype applications. It uses a 3D convolutional neural network for predicting video segment-wise fakeness scores. The network is trained on a combination of datasets such as Deeperforensics, DFDC, VoxCeleb, and deepfake videos created using locally captured (for video conferencing scenarios) images. This leads to different environments and perturbations in the dataset, which improves the generalization of the deepfake network.
https://arxiv.org/abs/2101.03321v1
https://arxiv.org/pdf/2101.03321v1.pdf
null
[ "Vineet Mehta", "Parul Gupta", "Ramanathan Subramanian", "Abhinav Dhall" ]
[ "Face Swapping" ]
1,610,150,400,000
[]
5,388
168,778
https://paperswithcode.com/paper/a-deep-learning-based-interactive-sketching
2010.04413
A deep learning based interactive sketching system for fashion images design
In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and the texture information. The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information. Prior works mainly use the texture patch representation and try to map a small texture patch to a whole garment image, hence unable to generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that the details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.
https://arxiv.org/abs/2010.04413v1
https://arxiv.org/pdf/2010.04413v1.pdf
null
[ "Yao Li", "Xianggang Yu", "Xiaoguang Han", "Nianjuan Jiang", "Kui Jia", "Jiangbo Lu" ]
[ "Intrinsic Image Decomposition", "Texture Synthesis" ]
1,602,201,600,000
[]
17,119
227,557
https://paperswithcode.com/paper/reinforcement-learning-based-dialogue-guided
2106.12384
Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations
Event extraction is a fundamental task for natural language processing. Finding the roles of event arguments like event participants is essential for event extraction. However, doing so for real-life event descriptions is challenging because an argument's role often varies in different contexts. While the relationships and interactions between multiple arguments are useful for settling the argument roles, such information is largely ignored by existing approaches. This paper presents a better approach for event extraction by explicitly utilizing the relationships of event arguments. We achieve this through a carefully designed task-oriented dialogue system. To model the argument relations, we employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turn, iterative process. Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually. It then uses the newly obtained information to improve the decisions of previously extracted arguments. This two-way feedback process allows us to exploit the argument relations to effectively settle argument roles, leading to better sentence understanding and event extraction. Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods on event classification, argument role classification, and argument identification.
https://arxiv.org/abs/2106.12384v2
https://arxiv.org/pdf/2106.12384v2.pdf
null
[ "Qian Li", "Hao Peng", "JianXin Li", "Jia Wu", "Yuanxing Ning", "Lihong Wang", "Philip S. Yu", "Zheng Wang" ]
[ "Event Extraction", "Incremental Learning", "reinforcement-learning" ]
1,624,406,400,000
[]
134,800
26,039
https://paperswithcode.com/paper/adversarial-examples-for-generative-models
1702.06832
Adversarial examples for generative models
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
http://arxiv.org/abs/1702.06832v1
http://arxiv.org/pdf/1702.06832v1.pdf
null
[ "Jernej Kos", "Ian Fischer", "Dawn Song" ]
[ "Classification", "Classification" ]
1,487,721,600,000
[ { "code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38", "description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)", "full_name": "AutoEncoder", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "AutoEncoder", "source_title": "Reducing the Dimensionality of Data with Neural Networks", "source_url": "https://science.sciencemag.org/content/313/5786/504" }, { "code_snippet_url": "https://github.com/AntixK/PyTorch-VAE/blob/8700d245a9735640dda458db4cf40708caf2e77f/models/vanilla_vae.py#L8", "description": "A **Variational Autoencoder** is a type of likelihood-based generative model. It consists of an encoder, that takes in data $x$ as input and transforms this into a latent representation $z$, and a decoder, that takes a latent representation $z$ and returns a reconstruction $\\hat{x}$. Inference is performed via variational inference to approximate the posterior of the model.", "full_name": "Variational Autoencoder", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "VAE", "source_title": "Auto-Encoding Variational Bayes", "source_url": "http://arxiv.org/abs/1312.6114v10" } ]
153,759
279,975
https://paperswithcode.com/paper/cake-a-scalable-commonsense-aware-framework
2202.13785
CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion
Knowledge graphs store a large number of factual triples, yet they inevitably remain incomplete. Previous knowledge graph completion (KGC) models predict missing links between entities merely by relying on fact-view data, ignoring valuable commonsense knowledge. Previous knowledge graph embedding (KGE) techniques suffer from invalid negative sampling and the uncertainty of fact-view link prediction, limiting KGC's performance. To address the above challenges, we propose a novel and scalable Commonsense-Aware Knowledge Embedding (CAKE) framework to automatically extract commonsense from factual triples with entity concepts. The generated commonsense augments effective self-supervision to facilitate both high-quality negative sampling (NS) and joint commonsense and fact-view link prediction. Experimental results on the KGC task demonstrate that assembling our framework can enhance the performance of the original KGE models, and the proposed commonsense-aware NS module is superior to other NS techniques. Besides, our proposed framework can be easily adapted to various KGE models and can explain the predicted results.
https://arxiv.org/abs/2202.13785v3
https://arxiv.org/pdf/2202.13785v3.pdf
ACL 2022 5
[ "Guanglin Niu", "Bo Li", "Yongfei Zhang", "ShiLiang Pu" ]
[ "Graph Embedding", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction" ]
1,645,747,200,000
[]
53,744
184,651
https://paperswithcode.com/paper/mufold-betaturn-a-deep-dense-inception
1808.04322
MUFold-BetaTurn: A Deep Dense Inception Network for Protein Beta-Turn Prediction
Beta-turn prediction is useful in protein function studies and experimental design. Although recent approaches using machine-learning techniques such as SVM, neural networks, and K-NN have achieved good results for beta-turn prediction, there is still significant room for improvement. As previous predictors utilized features in a sliding window of 4-20 residues to capture interactions among sequentially neighboring residues, such feature engineering may result in incomplete or biased features, and neglect interactions among long-range residues. Deep neural networks provide a new opportunity to address these issues. Here, we propose a deep dense inception network (DeepDIN) for beta-turn prediction, which takes advantage of the state-of-the-art deep neural network design of the DenseNet and the inception network. A test on the recent BT6376 benchmark shows that DeepDIN significantly outperformed the previous best BetaTPred3 in both overall prediction accuracy and nine-type beta-turn classification. A tool, called MUFold-BetaTurn, was developed, which is the first beta-turn prediction tool utilizing deep neural networks. The tool can be downloaded at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldBetaTurn/download.html.
http://arxiv.org/abs/1808.04322v1
http://arxiv.org/pdf/1808.04322v1.pdf
null
[]
[ "Experimental Design", "Feature Engineering" ]
1,534,118,400,000
[]
97,061
137,241
https://paperswithcode.com/paper/pool-based-unsupervised-active-learning-for
2003.07658
Pool-Based Unsupervised Active Learning for Regression Using Iterative Representativeness-Diversity Maximization (iRDM)
Active learning (AL) selects the most beneficial unlabeled samples to label, and hence a better machine learning model can be trained from the same number of labeled samples. Most existing active learning for regression (ALR) approaches are supervised, which means the sampling process must use some label information, or an existing regression model. This paper considers completely unsupervised ALR, i.e., how to select the samples to label without knowing any true label information. We propose a novel unsupervised ALR approach, iterative representativeness-diversity maximization (iRDM), to optimally balance the representativeness and the diversity of the selected samples. Experiments on 12 datasets from various domains demonstrated its effectiveness. Our iRDM can be applied to both linear regression and kernel regression, and it even significantly outperforms supervised ALR when the number of labeled samples is small.
https://arxiv.org/abs/2003.07658v2
https://arxiv.org/pdf/2003.07658v2.pdf
null
[ "Ziang Liu", "Xue Jiang", "Hanbin Luo", "Weili Fang", "Jiajing Liu", "Dongrui Wu" ]
[ "Active Learning" ]
1,584,403,200,000
[ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)", "full_name": "Linear Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Linear Regression", "source_title": null, "source_url": null } ]
120,211
293,867
https://paperswithcode.com/paper/cross-modal-cloze-task-a-new-task-to-brain-to
null
Cross-Modal Cloze Task: A New Task to Brain-to-Word Decoding
Decoding language from non-invasive brain activity has attracted increasing attention from researchers in both neuroscience and natural language processing. Due to the noisy nature of brain recordings, existing work has simplified brain-to-word decoding to a binary classification task, i.e., discriminating whether a brain signal corresponds to a given word or a wrong one. This pairwise classification task, however, cannot promote the development of practical neural decoders for two reasons. First, it has to enumerate all pairwise combinations in the test set, so it is inefficient for predicting a word in a large vocabulary. Second, a perfect pairwise decoder cannot guarantee performance on direct classification. To overcome these issues and go a step further toward a realistic neural decoder, we propose a novel Cross-Modal Cloze (CMC) task, which is to predict the target word encoded in the neural image with a context as prompt. Furthermore, to address this task, we propose a general approach that leverages a pre-trained language model to predict the target word. To validate our method, we perform experiments on more than 20 participants from two brain imaging datasets. Our method achieves 28.91% top-1 accuracy and 54.19% top-5 accuracy on average across all participants, significantly outperforming several baselines. This result indicates that our model can serve as a state-of-the-art baseline for the CMC task. More importantly, it demonstrates that it is feasible to decode a certain word within a large vocabulary from its neural brain activity.
https://aclanthology.org/2022.findings-acl.54
https://aclanthology.org/2022.findings-acl.54.pdf
Findings (ACL) 2022 5
[ "Shuxian Zou", "Shaonan Wang", "Jiajun Zhang", "Chengqing Zong" ]
[ "Language Modelling" ]
1,651,363,200,000
[]
154,832
227,847
https://paperswithcode.com/paper/bayesian-inference-in-high-dimensional-time-1
2106.13379
Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model
Many modern time-series datasets contain large numbers of output response variables sampled for prolonged periods of time. For example, in neuroscience, the activities of hundreds to thousands of neurons are recorded during behaviors and in response to sensory stimuli. Multi-output Gaussian process models leverage the nonparametric nature of Gaussian processes to capture structure across multiple outputs. However, this class of models typically assumes that the correlations between the output response variables are invariant in the input space. Stochastic linear mixing models (SLMMs) assume the mixture coefficients depend on the input, making them more flexible and effective at capturing complex output dependence. However, the inference for SLMMs is currently intractable for large datasets, making them inapplicable to several modern time-series problems. In this paper, we propose a new regression framework, the orthogonal stochastic linear mixing model (OSLMM), that introduces an orthogonal constraint amongst the mixing coefficients. This constraint reduces the computational burden of inference while retaining the capability to handle complex output dependence. We provide Markov chain Monte Carlo inference procedures for both SLMM and OSLMM and demonstrate superior model scalability and reduced prediction error of OSLMM compared with state-of-the-art methods on several real-world applications. In neurophysiology recordings, we use the inferred latent functions for compact visualization of population responses to auditory stimuli, and demonstrate superior results compared to a competing method (GPFA). Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale time-series datasets.
https://arxiv.org/abs/2106.13379v2
https://arxiv.org/pdf/2106.13379v2.pdf
null
[ "Rui Meng", "Kristofer Bouchard" ]
[ "Bayesian Inference", "Gaussian Processes", "Time Series" ]
1,624,579,200,000
[ { "code_snippet_url": null, "description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams", "full_name": "Gaussian Process", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.", "name": "Non-Parametric Classification", "parent": null }, "name": "Gaussian Process", "source_title": null, "source_url": null } ]
102,352
236,184
https://paperswithcode.com/paper/modulating-language-models-with-emotions
2108.07886
Modulating Language Models with Emotions
Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of modulated layer normalization -- a technique inspired by computer vision -- that allows us to use large-scale language models for emotional response generation. In automatic and human evaluation on the MojiTalk dataset, our proposed modulated layer normalization method outperforms prior baseline methods while maintaining diversity, fluency, and coherence. Our method also obtains competitive performance even when using only 10% of the available training data.
https://arxiv.org/abs/2108.07886v1
https://arxiv.org/pdf/2108.07886v1.pdf
Findings (ACL) 2021 8
[ "Ruibo Liu", "Jason Wei", "Chenyan Jia", "Soroush Vosoughi" ]
[ "Response Generation" ]
1,629,158,400,000
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" } ]
97,900
290,977
https://paperswithcode.com/paper/defending-against-person-hiding-adversarial
2204.13004
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame
Object detection has attracted great attention in the computer vision area and has emerged as an indispensable component in many vision systems. In the era of deep learning, many high-performance object detection networks have been proposed. Although these detection networks show high performance, they are vulnerable to adversarial patch attacks. Changing the pixels in a restricted region can easily fool the detection network in the physical world. In particular, person-hiding attacks are emerging as a serious problem in many safety-critical applications such as autonomous driving and surveillance systems. Although it is necessary to defend against an adversarial patch attack, very few efforts have been dedicated to defending against person-hiding attacks. To tackle the problem, in this paper, we propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns, whereas previous methods optimize the model. In the proposed method, a frame-shaped pattern called a 'universal white frame' (UWF) is optimized and placed on the outside of the image. To defend against adversarial patch attacks, UWF should have three properties: (i) suppressing the effect of the adversarial patch, (ii) maintaining the original prediction, and (iii) being applicable regardless of the image. To satisfy the aforementioned properties, we propose a novel pattern optimization algorithm that can defend against the adversarial patch. Through comprehensive experiments, we demonstrate that the proposed method effectively defends against the adversarial patch attack.
https://arxiv.org/abs/2204.13004v1
https://arxiv.org/pdf/2204.13004v1.pdf
null
[ "Youngjoon Yu", "Hong Joo Lee", "Hakmin Lee", "Yong Man Ro" ]
[ "Autonomous Driving", "Object Detection", "Object Detection" ]
1,651,017,600,000
[]
191,602
290,047
https://paperswithcode.com/paper/towards-fewer-labels-support-pair-active
2204.10008
Towards Fewer Labels: Support Pair Active Learning for Person Re-identification
Supervised-learning-based person re-identification (re-id) requires a large amount of manually labeled data, which is not applicable in practical re-id deployment. In this work, we propose a Support Pair Active Learning (SPAL) framework to lower the manual labeling cost for large-scale person re-identification. The support pairs can provide the most informative relationships and support discriminative feature learning. Specifically, we first design a dual uncertainty selection strategy to iteratively discover support pairs and require human annotations. Afterwards, we introduce a constrained clustering algorithm to propagate the relationships of labeled support pairs to other unlabeled samples. Moreover, a hybrid learning strategy consisting of an unsupervised contrastive loss and a supervised support pair loss is proposed to learn the discriminative re-id feature representation. The proposed overall framework can effectively lower the labeling cost by mining and leveraging the critical support pairs. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art active learning methods on large-scale person re-id benchmarks.
https://arxiv.org/abs/2204.10008v1
https://arxiv.org/pdf/2204.10008v1.pdf
null
[ "Dapeng Jin", "Minxian Li" ]
[ "Active Learning", "Person Re-Identification" ]
1,650,499,200,000
[]
22,530
822
https://paperswithcode.com/paper/addition-of-code-mixed-features-to-enhance
1806.03821
Addition of Code Mixed Features to Enhance the Sentiment Prediction of Song Lyrics
Sentiment analysis, also called opinion mining, is the field of study that analyzes people's opinions, sentiments, attitudes and emotions. Songs are important to sentiment analysis since songs and mood are mutually dependent on each other. Based on the selected song it becomes easy to find the mood of the listener; in the future this can be used for recommendation. The song lyric is a rich source of datasets containing words that are helpful in analysis and classification of the sentiments generated from it. Nowadays we observe a lot of inter-sentential and intra-sentential code-mixing in songs, which has a varying impact on the audience. To study this impact we created a Telugu songs dataset which contains both Telugu-English code-mixed and pure Telugu songs. In this paper, we classify the songs based on their arousal as exciting or non-exciting. We develop a language identification tool and introduce code-mixing features obtained from it as additional features. Our system with these additional features attains 4-5% higher accuracy than traditional approaches on our dataset.
http://arxiv.org/abs/1806.03821v1
http://arxiv.org/pdf/1806.03821v1.pdf
null
[ "Gangula Rama Rohit Reddy", "Radhika Mamidi" ]
[ "Language Identification", "Opinion Mining", "Sentiment Analysis" ]
1,528,675,200,000
[]
174,454
6,803
https://paperswithcode.com/paper/multi-lingual-neural-title-generation-for-e
1804.01041
Multi-lingual neural title generation for e-Commerce browse pages
To provide buyers better access to the inventory and better search engine optimization, e-Commerce websites automatically generate millions of easily searchable browse pages. A browse page consists of a set of slot name/value pairs within a given category, grouping multiple items which share some characteristics. These browse pages require a title describing the content of the page. Since the number of browse pages is huge, manual creation of these titles is infeasible. Previous statistical and neural approaches depend heavily on the availability of large amounts of data in a language. In this research, we apply sequence-to-sequence models to generate titles for high- and low-resourced languages by leveraging transfer learning. We train these models on multi-lingual data, thereby creating one joint model which can generate titles in various different languages. Performance of the title generation system is evaluated on three different languages: English, German, and French, with a particular focus on the low-resourced French language.
http://arxiv.org/abs/1804.01041v1
http://arxiv.org/pdf/1804.01041v1.pdf
NAACL 2018 6
[ "Prashant Mathur", "Nicola Ueffing", "Gregor Leusch" ]
[ "Transfer Learning" ]
1,522,713,600,000
[]
185,413
193,153
https://paperswithcode.com/paper/understanding-interpretability-by-generalized
2012.03089
Understanding Interpretability by generalized distillation in Supervised Classification
The ability to interpret decisions taken by Machine Learning (ML) models is fundamental to encourage trust and reliability in different practical applications. Recent interpretation strategies focus on human understanding of the underlying decision mechanisms of the complex ML models. However, these strategies are restricted by the subjective biases of humans. To dissociate from such human biases, we propose an interpretation-by-distillation formulation that is defined relative to other ML models. We generalize the distillation technique for quantifying interpretability, using an information-theoretic perspective, removing the role of ground-truth from the definition of interpretability. Our work defines the entropy of supervised classification models, providing bounds on the entropy of Piece-Wise Linear Neural Networks (PWLNs), along with the first theoretical bounds on the interpretability of PWLNs. We evaluate our proposed framework on the MNIST, Fashion-MNIST and Stanford40 datasets and demonstrate the applicability of the proposed theoretical framework in different supervised classification scenarios.
https://arxiv.org/abs/2012.03089v1
https://arxiv.org/pdf/2012.03089v1.pdf
null
[ "Adit Agarwal", "Dr. K. K. Shukla", "Arjan Kuijper", "Anirban Mukhopadhyay" ]
[ "Classification", "Classification" ]
1,607,126,400,000
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Interpretability", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.", "name": "Image Models", "parent": null }, "name": "Interpretability", "source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression", "source_url": "http://arxiv.org/abs/1310.1533v2" } ]
60,649
313,207
https://paperswithcode.com/paper/improving-multilayer-perceptron-mlp-based
2208.09711
Improving Multilayer-Perceptron (MLP)-based Network Anomaly Detection with Birch Clustering on CICIDS-2017 Dataset
Machine learning algorithms have been widely used in intrusion detection systems, including the Multi-layer Perceptron (MLP). In this study, we propose a two-stage model that combines the Birch clustering algorithm and an MLP classifier to improve the performance of network anomaly multi-classification. In our proposed method, we first apply Birch or K-Means as an unsupervised clustering algorithm to the CICIDS-2017 dataset to pre-group the data. The generated pseudo-label is then added as an additional feature to the training of the MLP-based classifier. The experimental results show that using Birch and K-Means clustering for data pre-grouping can improve intrusion detection system performance. Our method achieves 99.73% accuracy in multi-classification using Birch clustering, which is better than similar research using a stand-alone MLP model.
https://arxiv.org/abs/2208.09711v1
https://arxiv.org/pdf/2208.09711v1.pdf
null
[ "Yuhua Yin", "Julian Jang-Jaccard", "Fariza Sabrina", "Jin Kwak" ]
[ "Anomaly Detection", "Intrusion Detection", "pseudo label" ]
1,660,953,600,000
[ { "code_snippet_url": "https://cryptoabout.info", "description": "**k-Means Clustering** is a clustering algorithm that divides a training set into $k$ different clusters of examples that are near each other. It works by initializing $k$ different centroids {$\\mu\\left(1\\right),\\ldots,\\mu\\left(k\\right)$} to different values, then alternating between two steps until convergence:\r\n\r\n(i) each training example is assigned to cluster $i$ where $i$ is the index of the nearest centroid $\\mu^{(i)}$\r\n\r\n(ii) each centroid $\\mu^{(i)}$ is updated to the mean of all training examples $x^{(j)}$ assigned to cluster $i$.\r\n\r\nText Source: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [scikit-learn](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html)", "full_name": "k-Means Clustering", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.", "name": "Clustering", "parent": null }, "name": "k-Means Clustering", "source_title": null, "source_url": null } ]
92,023
52,195
https://paperswithcode.com/paper/twitter-sentiment-analysis-via-bi-sense-emoji
1807.07961
Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM
Sentiment analysis on large-scale social media data is important to bridge the gaps between social media content and real-world activities, including political election prediction, individual and public emotional status monitoring and analysis, and so on. Although textual sentiment analysis has been well studied on platforms such as Twitter and Instagram, analysis of the role of extensive emoji use in sentiment analysis remains limited. In this paper, we propose a novel scheme for Twitter sentiment analysis with extra attention on emojis. We first learn bi-sense emoji embeddings under positive and negative sentimental tweets individually, and then train a sentiment classifier by attending on these bi-sense emoji embeddings with an attention-based long short-term memory network (LSTM). Our experiments show that the bi-sense embedding is effective for extracting sentiment-aware embeddings of emojis and outperforms the state-of-the-art models. We also visualize the attentions to show that the bi-sense emoji embedding provides better guidance on the attention mechanism to obtain a more robust understanding of the semantics and sentiments.
http://arxiv.org/abs/1807.07961v2
http://arxiv.org/pdf/1807.07961v2.pdf
null
[ "Yuxiao Chen", "Jianbo Yuan", "Quanzeng You", "Jiebo Luo" ]
[ "Sentiment Analysis", "Twitter Sentiment Analysis" ]
1,532,044,800,000
[ { "code_snippet_url": "https://github.com/aykutaaykut/Memory-Networks", "description": "A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and their existing memory component encoded by states and weights is too small and not compartmentalized enough to accurately remember facts from the past (RNNs for example, have difficult memorizing and doing tasks like copying). \r\n\r\nA memory network consists of a memory $\\textbf{m}$ (an array of objects indexed by $\\textbf{m}\\_{i}$ and four potentially learned components:\r\n\r\n- Input feature map $I$ - feature representation of the data input.\r\n- Generalization $G$ - updates old memories given the new input.\r\n- Output feature map $O$ - produces new feature map given $I$ and $G$.\r\n- Response $R$ - converts output into the desired response. \r\n\r\nGiven an input $x$ (e.g., an input character, word or sentence depending on the granularity chosen, an image or an audio signal) the flow of the model is as follows:\r\n\r\n1. Convert $x$ to an internal feature representation $I\\left(x\\right)$.\r\n2. Update memories $m\\_{i}$ given the new input: $m\\_{i} = G\\left(m\\_{i}, I\\left(x\\right), m\\right)$, $\\forall{i}$.\r\n3. Compute output features $o$ given the new input and the memory: $o = O\\left(I\\left(x\\right), m\\right)$.\r\n4. Finally, decode output features $o$ to give the final response: $r = R\\left(o\\right)$.\r\n\r\nThis process is applied at both train and test time, if there is a distinction between such phases, that\r\nis, memories are also stored at test time, but the model parameters of $I$, $G$, $O$ and $R$ are not updated. Memory networks cover a wide class of possible implementations. The components $I$, $G$, $O$ and $R$ can potentially use any existing ideas from the machine learning literature.\r\n\r\nImage Source: [Adrian Colyer](https://blog.acolyer.org/2016/03/10/memory-networks/)", "full_name": "Memory Network", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Working Memory Models** aim to supplement neural networks with a memory module to increase their capability for memorization and allowing them to more easily perform tasks such as retrieving and copying information. Below you can find a continuously updating list of working memory models.", "name": "Working Memory Models", "parent": null }, "name": "Memory Network", "source_title": "Memory Networks", "source_url": "http://arxiv.org/abs/1410.3916v11" } ]
87,823
164,737
https://paperswithcode.com/paper/an-incentive-mechanism-for-federated-learning
2009.10269
An Incentive Mechanism for Federated Learning in Wireless Cellular network: An Auction Approach
Federated Learning (FL) is a distributed learning framework that can deal with the distributed issue in machine learning and still guarantee high learning performance. However, it is impractical to expect that all users will sacrifice their resources to join the FL algorithm. This motivates us to study incentive mechanism design for FL. In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users. The mobile users use their own data to train the local machine learning model, and then send the trained models to the BS, which generates the initial model, collects local models and constructs the global model. Then, we formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers. In the proposed game, each mobile user submits its bid according to the minimal energy cost that it experiences in participating in FL. To decide winners in the auction and maximize social welfare, we propose the primal-dual greedy auction mechanism. The proposed mechanism can guarantee three economic properties, namely, truthfulness, individual rationality and efficiency. Finally, numerical results are shown to demonstrate the effectiveness of our proposed mechanism.
https://arxiv.org/abs/2009.10269v1
https://arxiv.org/pdf/2009.10269v1.pdf
null
[ "Tra Huong Thi Le", "Nguyen H. Tran", "Yan Kyaw Tun", "Minh N. H. Nguyen", "Shashi Raj Pandey", "Zhu Han", "Choong Seon Hong" ]
[ "Federated Learning" ]
1,600,732,800,000
[]
25,683
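The auction flow in the abstract above lends itself to a compact illustration. Below is a minimal Python sketch of greedy winner selection that ranks users by requested payment per unit of energy cost under a base-station budget; the `Bid` structure, the budget rule, and all values are hypothetical, and the paper's actual primal-dual mechanism with its truthful payment rule is not reproduced.

```python
# A minimal, illustrative greedy auction: rank users by requested payment
# per unit of energy cost and select winners under a BS budget. The Bid
# fields, the budget rule, and all values are hypothetical.
from dataclasses import dataclass

@dataclass
class Bid:
    user_id: int
    bid: float          # payment the user asks for
    energy_cost: float  # minimal energy cost of joining FL

def greedy_winner_selection(bids, budget):
    # Most cost-effective bids first (lowest payment per unit energy cost).
    ranked = sorted(bids, key=lambda b: b.bid / b.energy_cost)
    winners, spent = [], 0.0
    for b in ranked:
        if spent + b.bid <= budget:
            winners.append(b.user_id)
            spent += b.bid
    return winners, spent

bids = [Bid(0, 3.0, 2.0), Bid(1, 1.5, 1.0), Bid(2, 4.0, 5.0)]
print(greedy_winner_selection(bids, budget=5.0))  # ([2], 4.0)
```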
314,754
https://paperswithcode.com/paper/spoofing-aware-attention-based-asv-back-end
2209.00423
Spoofing-Aware Attention based ASV Back-end with Multiple Enrollment Utterances and a Sampling Strategy for the SASV Challenge 2022
Current state-of-the-art automatic speaker verification (ASV) systems are vulnerable to presentation attacks, and several countermeasures (CMs), which distinguish bona fide trials from spoofing ones, have been explored to protect ASV. However, ASV systems and CMs are generally developed and optimized independently without considering their inter-relationship. In this paper, we propose a new spoofing-aware ASV back-end module that efficiently computes a combined ASV score based on speaker similarity and CM score. In addition to the learnable fusion function of the two scores, the proposed back-end module has two types of attention components, scaled-dot and feed-forward self-attention, so that intra-relationship information of multiple enrollment utterances can also be learned at the same time. Moreover, a new effective trials-sampling strategy is designed for simulating new spoofing-aware verification scenarios introduced in the Spoof-Aware Speaker Verification (SASV) challenge 2022.
https://arxiv.org/abs/2209.00423v1
https://arxiv.org/pdf/2209.00423v1.pdf
null
[ "Chang Zeng", "Lin Zhang", "Meng Liu", "Junichi Yamagishi" ]
[ "Speaker Verification" ]
1,661,990,400,000
[]
186,256
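As a rough sketch of the learnable fusion of speaker-similarity and countermeasure (CM) scores described above (without the paper's attention components over multiple enrollment utterances), the following PyTorch snippet combines the two scores with a small MLP; the architecture, hidden size, and score values are illustrative assumptions.

```python
# Minimal sketch of a learnable back-end that fuses an ASV similarity score
# with a CM score into a single spoofing-aware verification score.
import torch
import torch.nn as nn

class ScoreFusion(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, asv_score, cm_score):
        # Each score is a (batch,) tensor; output is a combined SASV score.
        x = torch.stack([asv_score, cm_score], dim=-1)
        return self.mlp(x).squeeze(-1)

fusion = ScoreFusion()
asv = torch.tensor([0.8, 0.2])  # cosine similarity to enrollment (toy)
cm = torch.tensor([0.9, 0.1])   # bona fide probability from the CM (toy)
print(fusion(asv, cm).shape)    # torch.Size([2])
```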
256,745
https://paperswithcode.com/paper/parbleu-augmenting-metrics-with-automatic
null
ParBLEU: Augmenting Metrics with Automatic Paraphrases for the WMT’20 Metrics Shared Task
We describe parBLEU, parCHRF++, and parESIM, which augment baseline metrics with automatically generated paraphrases produced by PRISM (Thompson and Post, 2020a), a multilingual neural machine translation system. We build on recent work studying how to improve BLEU by using diverse automatically paraphrased references (Bawden et al., 2020), extending experiments to the multilingual setting for the WMT2020 metrics shared task and for three base metrics. We compare their capacity to exploit up to 100 additional synthetic references. We find that gains are possible when using additional, automatically paraphrased references, although they are not systematic. However, segment-level correlations, particularly into English, are improved for all three metrics and even with higher numbers of paraphrased references.
https://aclanthology.org/2020.wmt-1.98
https://aclanthology.org/2020.wmt-1.98.pdf
WMT (EMNLP) 2020 11
[ "Rachel Bawden", "Biao Zhang", "Andre Tättar", "Matt Post" ]
[ "Machine Translation" ]
1,604,188,800,000
[]
32,834
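The core mechanic, scoring hypotheses against human references plus automatically paraphrased ones, can be tried with sacrebleu's multi-reference support. A minimal sketch, with toy sentences standing in for PRISM paraphrases:

```python
# Sketch of BLEU with an extra (paraphrased) reference stream via sacrebleu.
# The paraphrase here is a toy stand-in for a PRISM output.
import sacrebleu

hyps = ["the cat sat on the mat"]
refs = ["the cat sat on the mat"]               # human reference
paraphrased = ["a cat was sitting on the mat"]  # synthetic reference

# sacrebleu takes a list of reference streams, one per reference set.
score = sacrebleu.corpus_bleu(hyps, [refs, paraphrased])
print(score.score)
```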
207,192
https://paperswithcode.com/paper/learning-to-simulate-on-sparse-trajectory
2103.11845
Learning to Simulate on Sparse Trajectory Data
Simulation of real-world traffic can be used to help validate transportation policies. A good simulator means the simulated traffic is similar to real-world traffic, which often requires dense traffic trajectories (i.e., with a high sampling rate) to cover dynamic situations in the real world. However, in most cases, the real-world trajectories are sparse, which makes simulation challenging. In this paper, we present a novel framework ImInGAIL to address the problem of learning to simulate driving behavior from sparse real-world data. The proposed architecture incorporates data interpolation with the behavior learning process of imitation learning. To the best of our knowledge, we are the first to tackle the data sparsity issue for behavior learning problems. We investigate our framework on both synthetic and real-world trajectory datasets of driving vehicles, showing that our method outperforms various baselines and state-of-the-art methods.
https://arxiv.org/abs/2103.11845v1
https://arxiv.org/pdf/2103.11845v1.pdf
null
[ "Hua Wei", "Chacha Chen", "Chang Liu", "Guanjie Zheng", "Zhenhui Li" ]
[ "Imitation Learning" ]
1,616,371,200,000
[]
148,197
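Since the framework couples data interpolation with behavior learning, a helpful mental model is densifying a sparse trajectory before imitation. A minimal NumPy sketch (timestamps and positions are made up, and the paper's interpolation is learned rather than linear):

```python
# Sketch: densify a sparse vehicle trajectory by linear interpolation, then
# recover a speed profile; all numbers are illustrative.
import numpy as np

t_sparse = np.array([0.0, 5.0, 12.0])     # sparse observation times (s)
x_sparse = np.array([0.0, 40.0, 130.0])   # positions along the road (m)

t_dense = np.arange(0.0, 12.5, 0.5)       # 0.5 s resolution
x_dense = np.interp(t_dense, t_sparse, x_sparse)
speed = np.gradient(x_dense, t_dense)     # recovered speed profile
print(x_dense[:5], speed[:3])
```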
13,588
https://paperswithcode.com/paper/a-variational-approach-to-shape-from-shading
1709.10354
A Variational Approach to Shape-from-shading Under Natural Illumination
A numerical solution to shape-from-shading under natural illumination is presented. It builds upon an augmented Lagrangian approach for solving a generic PDE-based shape-from-shading model which handles directional or spherical harmonic lighting, orthographic or perspective projection, and greylevel or multi-channel images. Real-world applications to shading-aware depth map denoising, refinement and completion are presented.
http://arxiv.org/abs/1709.10354v2
http://arxiv.org/pdf/1709.10354v2.pdf
null
[ "Yvain Quéau", "Jean Mélou", "Fabien Castan", "Daniel Cremers", "Jean-Denis Durou" ]
[ "Denoising" ]
1,506,643,200,000
[]
131,612
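For orientation, the PDE-based model builds on an image irradiance equation; under a single directional light and Lambertian reflectance it reduces to $I = \rho \max(0, \langle n, l \rangle)$. A tiny NumPy sketch of that forward shading model (all values illustrative; the paper's augmented Lagrangian solver is not reproduced):

```python
# Sketch of Lambertian shading under a directional light:
# I = albedo * max(0, <normal, light>).
import numpy as np

def shade(normals, albedo, light):
    # normals: (H, W, 3) unit normals; albedo: (H, W); light: (3,) unit vector
    return albedo * np.clip(normals @ light, 0.0, None)

h, w = 4, 4
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                  # fronto-parallel plane
albedo = np.full((h, w), 0.7)
light = np.array([0.0, 0.0, 1.0])
print(shade(normals, albedo, light))   # constant 0.7 image
```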
212,741
https://paperswithcode.com/paper/unsupervised-learning-of-explainable-parse
2104.04998
Unsupervised Learning of Explainable Parse Trees for Improved Generalisation
Recursive neural networks (RvNN) have been shown useful for learning sentence representations and helped achieve competitive performance on several natural language inference tasks. However, recent RvNN-based models fail to learn simple grammar and meaningful semantics in their intermediate tree representation. In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures. We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks and compare them with other state-of-the-art RvNN based methods. Further, we present a detailed qualitative and quantitative analysis of the learned parse trees and show that the discovered linguistic structures are more explainable, semantically meaningful, and grammatically correct than recent approaches. The source code of the paper is available at https://github.com/atul04/Explainable-Latent-Structures-Using-Attention.
https://arxiv.org/abs/2104.04998v1
https://arxiv.org/pdf/2104.04998v1.pdf
null
[ "Atul Sahay", "Ayush Maheshwari", "Ritesh Kumar", "Ganesh Ramakrishnan", "Manjesh Kumar Hanawal", "Kavi Arya" ]
[ "Natural Language Inference", "Sentiment Analysis" ]
1,618,099,200,000
[]
137,812
277,335
https://paperswithcode.com/paper/towards-weakly-supervised-text-spotting-using
2202.05508
Towards Weakly-Supervised Text Spotting using a Multi-Task Transformer
Text spotting end-to-end methods have recently gained attention in the literature due to the benefits of jointly optimizing the text detection and recognition components. Existing methods usually have a distinct separation between the detection and recognition branches, requiring exact annotations for the two tasks. We introduce TextTranSpotter (TTS), a transformer-based approach for text spotting and the first text spotting framework which may be trained with both fully- and weakly-supervised settings. By learning a single latent representation per word detection, and using a novel loss function based on the Hungarian loss, our method alleviates the need for expensive localization annotations. Trained with only text transcription annotations on real data, our weakly-supervised method achieves competitive performance with previous state-of-the-art fully-supervised methods. When trained in a fully-supervised manner, TextTranSpotter shows state-of-the-art results on multiple benchmarks.
https://arxiv.org/abs/2202.05508v2
https://arxiv.org/pdf/2202.05508v2.pdf
CVPR 2022 1
[ "Yair Kittenplon", "Inbal Lavi", "Sharon Fogel", "Yarin Bar", "R. Manmatha", "Pietro Perona" ]
[ "Text Spotting" ]
1,644,537,600,000
[]
6,532
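The Hungarian-style loss mentioned above rests on optimal bipartite matching between predictions and targets. A minimal sketch of that matching step with SciPy (random cost matrix for illustration; the paper's actual cost terms are not reproduced):

```python
# Sketch of Hungarian matching between predicted and ground-truth words:
# find the assignment that minimizes total matching cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.random((3, 3))  # cost[i, j]: prediction i vs target j (toy)
row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(row_ind, col_ind)), cost[row_ind, col_ind].sum())
```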
168,919
https://paperswithcode.com/paper/a-novel-strategy-for-covid-19-classification
2010.05690
COVID-19 Classification Using Stacked Ensembles: A Comprehensive Analysis
The incidence of COVID-19 is increasing with a massive mortality rate, which led the WHO to declare it a pandemic. In this situation, it is crucial to perform efficient and fast diagnosis. The reverse transcription polymerase chain reaction (RT-PCR) test is conducted to detect the presence of SARS-CoV-2. This test is time-consuming; instead, chest CT (or chest X-ray) can be used for a fast and accurate diagnosis. Automated diagnosis is considered to be important as it reduces human effort and provides accurate and low-cost tests. The contributions of our research are three-fold. First, we analyse the behaviour and performance of various vision models, ranging from Inception to NAS networks, with an appropriate fine-tuning procedure. Second, the behaviour of these models is visually analysed by plotting CAMs for individual networks and determining classification performance with AUCROC curves. Third, stacked ensemble techniques are applied to provide higher generalisation by combining the fine-tuned models; six ensemble neural networks are designed by combining the existing fine-tuned networks. Applying these stacked ensembles provides strong generalisation to the models. The ensemble model designed by combining all the fine-tuned networks obtained a state-of-the-art accuracy score of 99.17%. The precision and recall for the COVID-19 class are 99.99% and 89.79% respectively, which reflects the robustness of the stacked ensembles.
https://arxiv.org/abs/2010.05690v3
https://arxiv.org/pdf/2010.05690v3.pdf
null
[ "Lalith Bharadwaj B", "Rohit Boddeda", "Sai Vardhan K", "Madhu G" ]
[ "Classification" ]
1,602,028,800,000
[]
2,990
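As a generic illustration of the stacking recipe (base-model predictions feeding a meta-learner), here is a scikit-learn sketch; the base estimators and data are simple stand-ins, not the fine-tuned CNNs used in the paper:

```python
# Sketch of a stacked ensemble: base-model predictions become features for
# a logistic-regression meta-learner. Data and estimators are toy stand-ins.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 10)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression())
stack.fit(X, y)
print(stack.score(X, y))
```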
264,422
https://paperswithcode.com/paper/multilingual-pre-training-with-language-and
null
Multilingual pre-training with Language and Task Adaptation for Multilingual Text Style Transfer
We exploit the pre-trained seq2seq model mBART for multilingual text style transfer. Using machine translated data as well as gold aligned English sentences yields state-of-the-art results in the three target languages we consider. Besides, in view of the general scarcity of parallel data, we propose a modular approach for multilingual formality transfer, which consists of two training strategies that target adaptation to both language and task. Our approach achieves competitive performance without monolingual task-specific parallel data and can be applied to other style transfer tasks as well as to other languages.
https://openreview.net/forum?id=rWPLdCIiY6g
https://openreview.net/pdf?id=rWPLdCIiY6g
ACL ARR November 2021 11
[ "Anonymous" ]
[ "Style Transfer", "Text Style Transfer" ]
1,637,020,800,000
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329", "description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)", "full_name": "Tanh Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.", "name": "Activation Functions", "parent": null }, "name": "Tanh Activation", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.", "full_name": "Sigmoid Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.", "name": "Activation Functions", "parent": null }, "name": "Sigmoid Activation", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. 
Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**mBART** is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the [BART objective](https://paperswithcode.com/method/bart). The input texts are noised by masking phrases and permuting sentences, and a single [Transformer model](https://paperswithcode.com/method/transformer) is learned to recover the texts. Different from other pre-training approaches for machine translation, mBART pre-trains a complete autoregressive [Seq2Seq](https://paperswithcode.com/method/seq2seq) model. mBART is trained once for all languages, providing a set of parameters that can be fine-tuned for any of the language pairs in both supervised and unsupervised settings, without any task-specific or language-specific modifications or initialization schemes.", "full_name": "mBART", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "mBART", "source_title": "Multilingual Denoising Pre-training for Neural Machine Translation", "source_url": "https://arxiv.org/abs/2001.08210v2" }, { "code_snippet_url": null, "description": "**Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence\r\nfrom that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.\r\n\r\n(Note that this page refers to the original seq2seq not general sequence-to-sequence models)", "full_name": "Sequence to Sequence", "introduced_year": 2000, "main_collection": { "area": "Sequential", "description": "", "name": "Sequence To Sequence Models", "parent": null }, "name": "Seq2Seq", "source_title": "Sequence to Sequence Learning with Neural Networks", "source_url": "http://arxiv.org/abs/1409.3215v3" } ]
2,148
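For readers unfamiliar with mBART, a minimal generation example using the Hugging Face transformers API is sketched below. This shows generic mBART-50 many-to-many usage, not the paper's language- and task-adaptation recipe; the informal English input is a made-up stand-in for a formality-transfer source.

```python
# Sketch of generic mBART-50 generation with Hugging Face transformers.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en_XX"            # source language code
batch = tokenizer("gimme the report asap", return_tensors="pt")
out = model.generate(
    **batch, forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"])
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```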
215,525
https://paperswithcode.com/paper/discovering-an-aid-policy-to-minimize-student
2104.10258
Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning
High dropout rates in tertiary education expose a lack of efficiency that causes frustration of expectations and financial waste. Predicting students at risk is not enough to avoid student dropout. Usually, an appropriate aid action must be discovered and applied at the proper time for each student. To tackle this sequential decision-making problem, we propose a decision support method for the selection of aid actions for students, using offline reinforcement learning to help decision-makers effectively avoid student dropout. Additionally, a discretization of the student state space using two different clustering methods is evaluated. Our experiments using logged data of real students show, through off-policy evaluation, that the method should achieve roughly 1.0 to 1.5 times as much cumulative reward as the logged policy. So, it is feasible to help decision-makers apply appropriate aid actions and, possibly, reduce student dropout.
https://arxiv.org/abs/2104.10258v1
https://arxiv.org/pdf/2104.10258v1.pdf
null
[ "Leandro M. de Lima", "Renato A. Krohling" ]
[ "reinforcement-learning" ]
1,618,876,800,000
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" } ]
77,804
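The "roughly 1.0 to 1.5 times as much cumulative reward" claim comes from off-policy evaluation. A minimal sketch of ordinary importance-sampling OPE on toy logged data (the paper's actual estimator and policies are not reproduced):

```python
# Sketch of ordinary importance-sampling off-policy evaluation:
# reweight logged returns by the ratio of new to logged action probabilities.
import numpy as np

def is_estimate(trajectories, pi_new, pi_logged, gamma=1.0):
    # Each trajectory is a list of (state, action, reward) tuples.
    values = []
    for traj in trajectories:
        rho, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= pi_new[s][a] / pi_logged[s][a]
            ret += (gamma ** t) * r
        values.append(rho * ret)
    return float(np.mean(values))

pi_logged = {0: {0: 0.5, 1: 0.5}}
pi_new = {0: {0: 0.8, 1: 0.2}}          # favors the "aid" action 0
trajs = [[(0, 0, 1.0)], [(0, 1, 0.0)]]  # toy logged trajectories
print(is_estimate(trajs, pi_new, pi_logged))  # 0.8
```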
8,616
https://paperswithcode.com/paper/learning-approximate-inference-networks-for
1803.03376
Learning Approximate Inference Networks for Structured Prediction
Structured prediction energy networks (SPENs; Belanger & McCallum 2016) use neural network architectures to define energy functions that can capture arbitrary dependencies among parts of structured outputs. Prior work used gradient descent for inference, relaxing the structured output to a set of continuous variables and then optimizing the energy with respect to them. We replace this use of gradient descent with a neural network trained to approximate structured argmax inference. This "inference network" outputs continuous values that we treat as the output structure. We develop large-margin training criteria for joint training of the structured energy function and inference network. On multi-label classification we report speed-ups of 10-60x compared to (Belanger et al, 2017) while also improving accuracy. For sequence labeling with simple structured energies, our approach performs comparably to exact inference while being much faster at test time. We then demonstrate improved accuracy by augmenting the energy with a "label language model" that scores entire output label sequences, showing it can improve handling of long-distance dependencies in part-of-speech tagging. Finally, we show how inference networks can replace dynamic programming for test-time inference in conditional random fields, suggestive for their general use for fast inference in structured settings.
http://arxiv.org/abs/1803.03376v1
http://arxiv.org/pdf/1803.03376v1.pdf
ICLR 2018 1
[ "Lifu Tu", "Kevin Gimpel" ]
[ "Language Modelling", "Multi-Label Classification", "Part-Of-Speech Tagging", "Structured Prediction" ]
1,520,553,600,000
[]
56,649
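A compact sketch of the central idea, jointly training a structured energy function and an inference network that approximates argmax inference, is given below with toy shapes and a simplified margin loss; the paper's exact large-margin criteria and architectures are not reproduced.

```python
# Toy sketch: an energy network scores (x, y) pairs; an inference network
# maps x to relaxed labels y in [0, 1]^L, replacing gradient-descent inference.
import torch
import torch.nn as nn

D, L = 10, 5  # input dim, number of binary labels (relaxed)
energy = nn.Sequential(nn.Linear(D + L, 32), nn.ReLU(), nn.Linear(32, 1))
infnet = nn.Sequential(nn.Linear(D, L), nn.Sigmoid())
opt_e = torch.optim.Adam(energy.parameters(), lr=1e-3)
opt_i = torch.optim.Adam(infnet.parameters(), lr=1e-3)

x = torch.randn(8, D)
y_gold = torch.randint(0, 2, (8, L)).float()

def E(y):
    return energy(torch.cat([x, y], dim=-1))

for _ in range(100):
    # (1) Energy update: push gold outputs below the inference net's outputs
    #     by a margin proportional to their disagreement.
    y_hat = infnet(x).detach()
    delta = (y_hat - y_gold).abs().sum(-1, keepdim=True)
    opt_e.zero_grad()
    torch.relu(delta + E(y_gold) - E(y_hat)).mean().backward()
    opt_e.step()
    # (2) Inference-net update: produce low-energy outputs for each input.
    opt_i.zero_grad()
    E(infnet(x)).mean().backward()
    opt_i.step()
```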
221,481
https://paperswithcode.com/paper/stytr-2-unbiased-image-style-transfer-with
2105.14576
StyTr$^2$: Image Style Transfer with Transformers
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content. Owing to the locality in convolutional neural networks (CNNs), extracting and maintaining the global information of input images is difficult. Therefore, traditional neural style transfer methods face biased content representation. To address this critical issue, we take long-range dependencies of input images into account for image style transfer by proposing a transformer-based approach called StyTr$^2$. In contrast with visual transformers for other vision tasks, StyTr$^2$ contains two different transformer encoders to generate domain-specific sequences for content and style, respectively. Following the encoders, a multi-layer transformer decoder is adopted to stylize the content sequence according to the style sequence. We also analyze the deficiency of existing positional encoding methods and propose the content-aware positional encoding (CAPE), which is scale-invariant and more suitable for image style transfer tasks. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed StyTr$^2$ compared with state-of-the-art CNN-based and flow-based approaches. Code and models are available at https://github.com/diyiiyiii/StyTR-2.
https://arxiv.org/abs/2105.14576v3
https://arxiv.org/pdf/2105.14576v3.pdf
null
[ "Yingying Deng", "Fan Tang", "WeiMing Dong", "Chongyang Ma", "Xingjia Pan", "Lei Wang", "Changsheng Xu" ]
[ "Style Transfer" ]
1,622,332,800,000
[]
130,489
206,830
https://paperswithcode.com/paper/consistency-based-active-learning-for-object
2103.10374
Consistency-based Active Learning for Object Detection
Active learning aims to improve the performance of a task model by selecting the most informative samples with a limited budget. Unlike most recent works that focused on applying active learning for image classification, we propose an effective Consistency-based Active Learning method for object Detection (CALD), which fully explores the consistency between original and augmented data. CALD has three appealing benefits. (i) CALD is systematically designed by investigating the weaknesses of existing active learning methods, which do not take the unique challenges of object detection into account. (ii) CALD unifies box regression and classification with a single metric, which is not addressed by active learning methods for classification. CALD also focuses on the most informative local region rather than the whole image, which is beneficial for object detection. (iii) CALD not only gauges individual information for sample selection, but also leverages mutual information to encourage a balanced data distribution. Extensive experiments show that CALD significantly outperforms existing state-of-the-art task-agnostic and detection-specific active learning methods on general object detection datasets. Based on the Faster R-CNN detector, CALD consistently surpasses the baseline method (random selection) by 2.9/2.8/0.8 mAP on average on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. Code is available at \url{https://github.com/we1pingyu/CALD}
https://arxiv.org/abs/2103.10374v3
https://arxiv.org/pdf/2103.10374v3.pdf
null
[ "Weiping Yu", "Sijie Zhu", "Taojiannan Yang", "Chen Chen" ]
[ "Active Learning", "Classification", "Classification", "Image Classification", "Object Detection", "Object Detection" ]
1,616,025,600,000
[ { "code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10", "description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)", "full_name": "RoIPool", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.", "name": "RoI Feature Extractors", "parent": null }, "name": "RoIPool", "source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "source_url": "http://arxiv.org/abs/1311.2524v5" }, { "code_snippet_url": null, "description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. 
The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.", "full_name": "Region Proposal Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Region Proposal", "parent": null }, "name": "RPN", "source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "source_url": "http://arxiv.org/abs/1506.01497v3" }, { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22", "description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. 
RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.", "full_name": "Faster R-CNN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.", "name": "Object Detection Models", "parent": null }, "name": "Faster R-CNN", "source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "source_url": "http://arxiv.org/abs/1506.01497v3" } ]
62,718
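One way to picture the consistency cue is to compare boxes predicted on an image and on its horizontal flip, mapped back to the original frame. A minimal NumPy sketch (boxes as x1, y1, x2, y2; the image width, the boxes, and the use of plain IoU are illustrative, not CALD's actual metric):

```python
# Sketch: map a box predicted on the flipped image back to the original
# frame and measure its IoU with the original prediction.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

W = 640                                   # image width (toy)
box_orig = np.array([100, 50, 200, 150])
box_flip = np.array([430, 52, 538, 151])  # predicted on the flipped image
box_back = np.array([W - box_flip[2], box_flip[1],
                     W - box_flip[0], box_flip[3]])
# High IoU -> consistent; a low IoU would flag an informative sample.
print(iou(box_orig, box_back))
```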
52,784
https://paperswithcode.com/paper/news-session-based-recommendations-using-deep
1808.00076
News Session-Based Recommendations using Deep Neural Networks
News recommender systems aim to personalize users' experiences and help them discover relevant articles from a large and dynamic search space. The news domain is therefore a challenging scenario for recommendation, due to its sparse user profiling, fast-growing number of items, accelerated decay of item value, and dynamic shifts in user preferences. Some promising results have been recently achieved by the usage of Deep Learning techniques on Recommender Systems, especially for item feature extraction and for session-based recommendations with Recurrent Neural Networks. In this paper, we propose an instantiation of CHAMELEON -- a Deep Learning Meta-Architecture for News Recommender Systems. This architecture is composed of two modules: the first learns news article representations based on their text and metadata, and the second provides session-based recommendations using Recurrent Neural Networks. The recommendation task addressed in this work is next-item prediction for user sessions: "what is the next most likely article a user might read in a session?" Users' session context is leveraged by the architecture to provide additional information in such an extreme cold-start scenario of news recommendation. Users' behavior and item features are both merged in a hybrid recommendation approach. A temporal offline evaluation method is also proposed as a complementary contribution, for a more realistic evaluation of such task, considering dynamic factors that affect global readership interests like popularity, recency, and seasonality. Experiments with an extensive number of session-based recommendation methods were performed, and the proposed instantiation of the CHAMELEON meta-architecture obtained a significant relative improvement in top-n accuracy and ranking metrics (10% on Hit Rate and 13% on MRR) over the best benchmark methods.
http://arxiv.org/abs/1808.00076v3
http://arxiv.org/pdf/1808.00076v3.pdf
null
[ "Gabriel de Souza P. Moreira", "Felipe Ferreira", "Adilson Marques da Cunha" ]
[ "News Recommendation", "Recommendation Systems", "Session-Based Recommendations" ]
1,532,995,200,000
[]
166,734
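The session-based module can be pictured as a next-item predictor over clicked-article sequences. A toy PyTorch sketch (random ids; CHAMELEON's article content encoder, metadata features, and training objective are all omitted):

```python
# Toy GRU next-item model in the spirit of session-based recommendation:
# embed clicked article ids, run a GRU, score all items for the next click.
import torch
import torch.nn as nn

n_items, emb_dim = 1000, 32
model = nn.ModuleDict({
    "emb": nn.Embedding(n_items, emb_dim),
    "gru": nn.GRU(emb_dim, 64, batch_first=True),
    "out": nn.Linear(64, n_items)})

session = torch.randint(0, n_items, (1, 5))  # clicked article ids (toy)
h, _ = model["gru"](model["emb"](session))
scores = model["out"](h[:, -1])              # scores for the next article
print(scores.topk(3).indices)                # top-3 recommendations
```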
254,403
https://paperswithcode.com/paper/are-factuality-checkers-reliable-adversarial
null
Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization
As summarization systems driven by deep neural networks are continuously upgraded, researchers have higher requirements on the quality of the generated summaries, which should be not only fluent and informative but also factually correct. As a result, the field of factual evaluation has developed rapidly recently. Despite its initial progress in evaluating generated summaries, the meta-evaluation methodologies of factuality metrics are limited by their opacity, leading to insufficient understanding of factuality metrics’ relative advantages and their applicability. In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, (ii) search for directions for further improvement by data augmentation. Our observations from this work motivate us to propose several calls for future research. We make all code, diagnostic test datasets, and trained factuality models available: https://github.com/zide05/AdvFact.
https://aclanthology.org/2021.findings-emnlp.179
https://aclanthology.org/2021.findings-emnlp.179.pdf
Findings (EMNLP) 2021 11
[ "Yiran Chen", "PengFei Liu", "Xipeng Qiu" ]
[ "Data Augmentation" ]
1,635,724,800,000
[]
110,904
169,201
https://paperswithcode.com/paper/block-term-tensor-neural-networks
2010.04963
Block-term Tensor Neural Networks
Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification, natural language processing, etc. Despite the good performance, the huge number of parameters in DNNs brings challenges to efficient training of DNNs and also their deployment in low-end devices with limited computing resources. In this paper, we explore the correlations in the weight matrices, and approximate the weight matrices with the low-rank block-term tensors. We name the new corresponding structure as block-term tensor layers (BT-layers), which can be easily adapted to neural network models, such as CNNs and RNNs. In particular, the inputs and the outputs in BT-layers are reshaped into low-dimensional high-order tensors with a similar or improved representation power. Sufficient experiments have demonstrated that BT-layers in CNNs and RNNs can achieve a very large compression ratio on the number of parameters while preserving or improving the representation power of the original DNNs.
https://arxiv.org/abs/2010.04963v2
https://arxiv.org/pdf/2010.04963v2.pdf
null
[ "Jinmian Ye", "Guangxi Li", "Di Chen", "Haiqin Yang", "Shandian Zhe", "Zenglin Xu" ]
[ "Image Classification" ]
1,602,288,000,000
[]
150,066
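A simplified cousin of BT-layers is replacing a dense weight matrix with a low-rank factorization; the full block-term decomposition additionally reshapes inputs and outputs into high-order tensors, which is omitted in this PyTorch sketch:

```python
# Sketch: approximate a dense d_in x d_out weight with two thin factors,
# cutting parameters roughly from d_in*d_out to rank*(d_in + d_out).
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.u = nn.Linear(d_in, rank, bias=False)  # d_in -> rank
        self.v = nn.Linear(rank, d_out)             # rank -> d_out

    def forward(self, x):
        return self.v(self.u(x))

dense_params = 1024 * 1024
layer = LowRankLinear(1024, 1024, rank=32)
lr_params = sum(p.numel() for p in layer.parameters())
print(f"compression: {dense_params / lr_params:.1f}x")  # ~16x
```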
244,768
https://paperswithcode.com/paper/aggregation-with-feature-detection
null
Aggregation With Feature Detection
Aggregating features from different depths of a network is widely adopted to improve the network capability. Many modern architectures are equipped with skip connections, which actually makes feature aggregation happen in all these networks. Since different features carry different semantic meanings, there are inconsistencies and incompatibilities to be solved. However, existing works naively blend deep features via element-wise summation or concatenation with a convolution behind. Better feature aggregation methods beyond summation or concatenation are rarely explored. In this paper, given two layers of features to be aggregated together, we first detect and identify where and what needs to be updated in one layer, then replace the feature at the identified location with the information of the other layer. This process, which we call DEtect-rePLAce (DEPLA), enables us to avoid inconsistent patterns while keeping useful information in the merged outputs. Experimental results demonstrate that our method largely boosts multiple baselines, e.g. ResNet, FishNet and FPN, on three major vision tasks including ImageNet classification, MS COCO object detection and instance segmentation.
http://openaccess.thecvf.com//content/ICCV2021/html/Sun_Aggregation_With_Feature_Detection_ICCV_2021_paper.html
http://openaccess.thecvf.com//content/ICCV2021/papers/Sun_Aggregation_With_Feature_Detection_ICCV_2021_paper.pdf
ICCV 2021 10
[ "Shuyang Sun", "Xiaoyu Yue", "Xiaojuan Qi", "Wanli Ouyang", "Victor Adrian Prisacariu", "Philip H.S. Torr" ]
[ "Instance Segmentation", "Object Detection", "Object Detection", "Semantic Segmentation" ]
1,609,459,200,000
[ { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)", "full_name": "Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Average Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13", "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$", "full_name": "Rectified Linear Units", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118", "description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. 
To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.", "full_name": "Residual Connection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.", "name": "Skip Connections", "parent": null }, "name": "Residual Connection", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116", "description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.", "full_name": "Batch Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Batch Normalization", "source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "source_url": "http://arxiv.org/abs/1502.03167v3" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157", "description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. 
Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.", "full_name": "Global Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Global Average Pooling", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75", "description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.", "full_name": "Bottleneck Residual Block", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:", "name": "Skip Connection Blocks", "parent": null }, "name": "Bottleneck Residual Block", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389", "description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.", "full_name": "Kaiming Initialization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Initialization** methods are used to initialize the weights in a neural network. 
Below can you find a continuously updating list of initialization methods.", "name": "Initialization", "parent": null }, "name": "Kaiming Initialization", "source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", "source_url": "http://arxiv.org/abs/1502.01852v1" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35", "description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.", "full_name": "Residual Block", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:", "name": "Skip Connection Blocks", "parent": null }, "name": "Residual Block", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": null, "description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)", "full_name": "Max Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). 
", "name": "Pooling Operations", "parent": null }, "name": "Max Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "https://www.healthnutra.org/es/maxup/", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)", "full_name": "1x1 Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "1x1 Convolution", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/resnet.py#L124", "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) ontop of each other to form network: e.g. a ResNet-50 has fifty layers using these blocks. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}(x):=\\mathcal{H}(x)-x$. The original mapping is recast into $\\mathcal{F}(x)+x$.\r\n\r\nThere is empirical evidence that these types of network are easier to optimize, and can gain accuracy from considerably increased depth.", "full_name": "Residual Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. 
Below you can find a continuously updating list of convolutional neural networks.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "ResNet", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
39,332
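The detect-replace idea can be sketched as a learned gate that decides, per location, whether to keep one layer's feature or substitute the other's. A minimal PyTorch illustration (the actual DEPLA module is more elaborate; this only captures the core gating-and-replacement mechanic):

```python
# Sketch of a detect-replace style aggregation: a 1x1-conv "detector" over
# the concatenated features produces a gate in [0, 1] per location, which
# decides whether to replace the deep feature with the shallow one.
import torch
import torch.nn as nn

class DetectReplace(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.detect = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, deep, shallow):
        gate = self.detect(torch.cat([deep, shallow], dim=1))
        return gate * shallow + (1 - gate) * deep  # replace where gate ~ 1

agg = DetectReplace(64)
deep, shallow = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(agg(deep, shallow).shape)  # torch.Size([2, 64, 32, 32])
```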
186,426
https://paperswithcode.com/paper/towards-adversarial-learning-of-speaker
1903.09606
Towards adversarial learning of speaker-invariant representation for speech emotion recognition
Speech emotion recognition (SER) has attracted great attention in recent years due to the high demand for emotionally intelligent speech interfaces. Deriving speaker-invariant representations for speech emotion recognition is crucial. In this paper, we propose to apply adversarial training to SER to learn speaker-invariant representations. Our model consists of three parts: a representation learning sub-network with time-delay neural network (TDNN) and LSTM with statistical pooling, an emotion classification network and a speaker classification network. Both the emotion and speaker classification networks take the output of the representation learning network as input. Two training strategies are employed: one based on domain adversarial training (DAT) and the other based on cross-gradient training (CGT). Besides the conventional data set, we also evaluate our proposed models on a much larger publicly available emotion data set with 250 speakers. Evaluation results show that on IEMOCAP, DAT and CGT provide 5.6% and 7.4% improvement respectively, over a baseline system without speaker-invariant representation learning on 5-fold cross validation. On the larger emotion data set, while CGT fails to yield better results than baseline, DAT can still provide 9.8% relative improvement on a standalone test set.
http://arxiv.org/abs/1903.09606v1
http://arxiv.org/pdf/1903.09606v1.pdf
null
[]
[ "Classification", "Emotion Classification", "Emotion Recognition", "Representation Learning", "Speech Emotion Recognition" ]
1,553,212,800,000
[]
91,257
110,612
https://paperswithcode.com/paper/chinese-relation-extraction-with-multi
null
Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge
Chinese relation extraction is conducted using neural networks with either character-based or word-based inputs, and most existing methods typically suffer from segmentation errors and ambiguity of polysemy. To address the issues, we propose a multi-grained lattice framework (MG lattice) for Chinese relation extraction to take advantage of multi-grained language information and external linguistic knowledge. In this framework, (1) we incorporate word-level information into character sequence inputs so that segmentation errors can be avoided. (2) We also model multiple senses of polysemous words with the help of external linguistic knowledge, so as to alleviate polysemy ambiguity. Experiments on three real-world datasets in distinct domains show consistent and significant superiority and robustness of our model, as compared with other baselines. We will release the source code of this paper in the future.
https://aclanthology.org/P19-1430
https://aclanthology.org/P19-1430.pdf
ACL 2019 7
[ "Ziran Li", "Ning Ding", "Zhiyuan Liu", "Hai-Tao Zheng", "Ying Shen" ]
[ "Relation Extraction" ]
1,561,939,200,000
[]
122,862
98,124
https://paperswithcode.com/paper/transformable-bottleneck-networks
1904.06458
Transformable Bottleneck Networks
We propose a novel approach to performing fine-grained 3D manipulation of image content via a convolutional neural network, which we call the Transformable Bottleneck Network (TBN). It applies given spatial transformations directly to a volumetric bottleneck within our encoder-bottleneck-decoder architecture. Multi-view supervision encourages the network to learn to spatially disentangle the feature space within the bottleneck. The resulting spatial structure can be manipulated with arbitrary spatial transformations. We demonstrate the efficacy of TBNs for novel view synthesis, achieving state-of-the-art results on a challenging benchmark. We demonstrate that the bottlenecks produced by networks trained for this task contain meaningful spatial structure that allows us to intuitively perform a variety of image manipulations in 3D, well beyond the rigid transformations seen during training. These manipulations include non-uniform scaling, non-rigid warping, and combining content from different images. Finally, we extract explicit 3D structure from the bottleneck, performing impressive 3D reconstruction from a single input image.
https://arxiv.org/abs/1904.06458v5
https://arxiv.org/pdf/1904.06458v5.pdf
ICCV 2019 10
[ "Kyle Olszewski", "Sergey Tulyakov", "Oliver Woodford", "Hao Li", "Linjie Luo" ]
[ "3D Reconstruction", "Novel View Synthesis" ]
1,555,113,600,000
[]
120,802
107,961
https://paperswithcode.com/paper/volmap-a-real-time-model-for-semantic
1906.11873
VolMap: A Real-time Model for Semantic Segmentation of a LiDAR surrounding view
This paper introduces VolMap, a real-time approach for the semantic segmentation of a 3D LiDAR surrounding view system in autonomous vehicles. We designed an optimized deep convolution neural network that can accurately segment the point cloud produced by a 360° LiDAR setup, where the input consists of a volumetric bird-eye view with LiDAR height layers used as input channels. We further investigated the usage of multi-LiDAR setup and its effect on the performance of the semantic segmentation task. Our evaluations are carried out on a large scale 3D object detection benchmark containing a LiDAR cocoon setup, along with KITTI dataset, where the per-point segmentation labels are derived from 3D bounding boxes. We show that VolMap achieved an excellent balance between high accuracy and real-time running on CPU.
https://arxiv.org/abs/1906.11873v1
https://arxiv.org/pdf/1906.11873v1.pdf
null
[ "Hager Radi", "Waleed Ali" ]
[ "3D Object Detection", "Autonomous Vehicles", "Object Detection", "Object Detection", "Semantic Segmentation" ]
1,560,297,600,000
[ { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
71,378
123,059
https://paperswithcode.com/paper/using-dynamic-embeddings-to-improve-static
1911.02929
How Can BERT Help Lexical Semantics Tasks?
Contextualized embeddings such as BERT can serve as strong input representations to NLP tasks, outperforming their static embeddings counterparts such as skip-gram, CBOW and GloVe. However, such embeddings are dynamic, calculated according to a sentence-level context, which limits their use in lexical semantics tasks. We address this issue by making use of dynamic embeddings as word representations in training static embeddings, thereby leveraging their strong representation power for disambiguating context information. Results show that this method leads to improvements over traditional static embeddings on a range of lexical semantics tasks, obtaining the best reported results on seven datasets.
https://arxiv.org/abs/1911.02929v2
https://arxiv.org/pdf/1911.02929v2.pdf
null
[ "Yile Wang", "Leyang Cui", "Yue Zhang" ]
[ "Word Embeddings" ]
1,573,084,800,000
[ { "code_snippet_url": "", "description": "**GloVe Embeddings** are a type of word embedding that encode the co-occurrence probability ratio between two words as vector differences. GloVe uses a weighted least squares objective $J$ that minimizes the difference between the dot product of the vectors of two words and the logarithm of their number of co-occurrences:\r\n\r\n$$ J=\\sum\\_{i, j=1}^{V}f\\left(𝑋\\_{i j}\\right)(w^{T}\\_{i}\\tilde{w}_{j} + b\\_{i} + \\tilde{b}\\_{j} - \\log{𝑋}\\_{ij})^{2} $$\r\n\r\nwhere $w\\_{i}$ and $b\\_{i}$ are the word vector and bias respectively of word $i$, $\\tilde{w}_{j}$ and $b\\_{j}$ are the context word vector and bias respectively of word $j$, $X\\_{ij}$ is the number of times word $i$ occurs in the context of word $j$, and $f$ is a weighting function that assigns lower weights to rare and frequent co-occurrences.", "full_name": "GloVe Embeddings", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Word Embeddings", "parent": null }, "name": "GloVe", "source_title": "GloVe: Global Vectors for Word Representation", "source_url": "https://aclanthology.org/D14-1162" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118", "description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.", "full_name": "Residual Connection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.", "name": "Skip Connections", "parent": null }, "name": "Residual Connection", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271", "description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$", "full_name": "Attention Dropout", "introduced_year": 2018, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Attention Dropout", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.", "full_name": "Linear Warmup With Linear Decay", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.", "name": "Learning Rate Schedules", "parent": null }, "name": "Linear Warmup With Linear Decay", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al", "full_name": "Weight Decay", "introduced_year": 1943, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Weight Decay", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584", "description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. 
(See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.", "full_name": "Gaussian Error Linear Units", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.", "name": "Activation Functions", "parent": null }, "name": "GELU", "source_title": "Gaussian Error Linear Units (GELUs)", "source_url": "https://arxiv.org/abs/1606.08415v4" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6", "description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.", "full_name": "Adam", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. 
We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "Adam", "source_title": "Adam: A Method for Stochastic Optimization", "source_url": "http://arxiv.org/abs/1412.6980v9" }, { "code_snippet_url": "", "description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)", "full_name": "WordPiece", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "WordPiece", "source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", "source_url": "http://arxiv.org/abs/1609.08144v2" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. 
Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9", "description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)", "full_name": "Multi-Head Attention", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.", "name": "Attention Modules", "parent": "Attention" }, "name": "Multi-Head Attention", "source_title": "Attention Is All You Need", "source_url": "http://arxiv.org/abs/1706.03762v5" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. 
More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7", "description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.", "full_name": "Scaled Dot-Product Attention", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.", "name": "Attention Mechanisms", "parent": "Attention" }, "name": "Scaled Dot-Product Attention", "source_title": "Attention Is All You Need", "source_url": "http://arxiv.org/abs/1706.03762v5" }, { "code_snippet_url": "https://github.com/google-research/bert", "description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. 
In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.", "full_name": "BERT", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "BERT", "source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "source_url": "https://arxiv.org/abs/1810.04805v2" } ]
135,696
307,643
https://paperswithcode.com/paper/funqg-molecular-representation-learning-via
2207.08597
FunQG: Molecular Representation Learning Via Quotient Graphs
Learning expressive molecular representations is crucial to facilitate the accurate prediction of molecular properties. Despite the significant advancement of graph neural networks (GNNs) in molecular representation learning, they generally face limitations such as neighbors-explosion, under-reaching, over-smoothing, and over-squashing. Also, GNNs usually have high computational complexity because of the large-scale number of parameters. Typically, such limitations emerge or increase when facing relatively large-size graphs or using a deeper GNN model architecture. An idea to overcome these problems is to simplify a molecular graph into a small, rich, and informative one, which is more efficient and less challenging to train GNNs. To this end, we propose a novel molecular graph coarsening framework named FunQG utilizing Functional groups, as influential building blocks of a molecule to determine its properties, based on a graph-theoretic concept called Quotient Graph. By experiments, we show that the resulting informative graphs are much smaller than the molecular graphs and thus are good candidates for training GNNs. We apply the FunQG on popular molecular property prediction benchmarks and then compare the performance of a GNN architecture on the obtained datasets with several state-of-the-art baselines on the original datasets. By experiments, this method significantly outperforms previous baselines on various datasets, besides its dramatic reduction in the number of parameters and low computational complexity. Therefore, the FunQG can be used as a simple, cost-effective, and robust method for solving the molecular representation learning problem.
https://arxiv.org/abs/2207.08597v1
https://arxiv.org/pdf/2207.08597v1.pdf
null
[ "Hossein Hajiabolhassan", "Zahra Taheri", "Ali Hojatnia", "Yavar Taheri Yeganeh" ]
[ "Molecular Property Prediction", "Representation Learning" ]
1,658,102,400,000
[]
54,202
182,790
https://paperswithcode.com/paper/mosaicked-multispectral-image-compression
1801.03577
Mosaicked multispectral image compression based on inter- and intra-band correlation
Multispectral imaging has been utilized in many fields, but the cost of capturing and storing image data is still high. Single-sensor cameras with multispectral filter arrays can reduce the cost of capturing images at the expense of slightly lower image quality. When multispectral filter arrays are used, conventional multispectral image compression methods can be applied after interpolation, but the compressed image data after interpolation has some redundancy because the interpolated data are computed from the captured raw data. In this paper, we propose an efficient image compression method for single-sensor multispectral cameras. The proposed method encodes the captured multispectral data before interpolation. We also propose a new spectral transform method for the compression of mosaicked multispectral images. This transform is designed by considering the filter arrangement and the spectral sensitivities of a multispectral filter array. The experimental results show that the proposed method achieves a higher peak signal-to-noise ratio at higher bit rates than a conventional compression method that encodes a multispectral image after interpolation, e.g., 3-dB gain over conventional compression when coding at rates of over 0.1 bit/pixel/bands.
http://arxiv.org/abs/1801.03577v1
http://arxiv.org/pdf/1801.03577v1.pdf
null
[]
[ "Image Compression" ]
1,515,542,400,000
[]
149,774
98,226
https://paperswithcode.com/paper/swtvm-exploring-the-automated-compilation-for
1904.07404
swTVM: Towards Optimized Tensor Code Generation for Deep Learning on Sunway Many-Core Processor
The flourish of deep learning frameworks and hardware platforms has been demanding an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. Among the existing deep learning compilers, TVM is well known for its efficiency in code generation and optimization across diverse hardware devices. In the meanwhile, the Sunway many-core processor renders itself as a competitive candidate for its attractive computational power in both scientific computing and deep learning workloads. This paper combines the trends in these two directions. Specifically, we propose swTVM that extends the original TVM to support ahead-of-time compilation for architecture requiring cross-compilation such as Sunway. In addition, we leverage the architecture features during the compilation such as core group for massive parallelism, DMA for high bandwidth memory transfer and local device memory for data locality, in order to generate efficient codes for deep learning workloads on Sunway. The experiment results show that the codes generated by swTVM achieves 1.79x on average compared to the state-of-the-art deep learning framework on Sunway, across six representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap of deep learning and Sunway processor particularly with productivity and efficiency in mind. We believe this work will encourage more people to embrace the power of deep learning and Sunway many-core processor.
https://arxiv.org/abs/1904.07404v3
https://arxiv.org/pdf/1904.07404v3.pdf
null
[ "Mingzhen Li", "Changxi Liu", "Jianjin Liao", "Xuegui Zheng", "Hailong Yang", "Rujun Sun", "Jun Xu", "Lin Gan", "Guangwen Yang", "Zhongzhi Luan", "Depei Qian" ]
[ "Code Generation" ]
1,555,372,800,000
[ { "code_snippet_url": "https://www.healthnutra.org/es/maxup/", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)", "full_name": "1x1 Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "1x1 Convolution", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13", "description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor", "full_name": "Local Response Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Local Response Normalization", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42", "description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. 
But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.", "full_name": "Grouped Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Grouped Convolution", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13", "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$", "full_name": "Rectified Linear Units", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)", "full_name": "Max Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Max Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40", "description": "**AlexNet** is a classic convolutional neural network architecture. 
It consists of convolutions, [max pooling](https://paperswithcode.com/method/max-pooling) and dense layers as the basic building blocks. Grouped convolutions are used in order to fit the model across two GPUs.", "full_name": "AlexNet", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "AlexNet", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
62,359
197,959
https://paperswithcode.com/paper/instantaneous-psd-estimation-for-speech
2007.00542
Instantaneous PSD Estimation for Speech Enhancement based on Generalized Principal Components
Power spectral density (PSD) estimates of various microphone signal components are essential to many speech enhancement procedures. As speech is highly non-stationary, performance improvements may be gained by maintaining time-variations in PSD estimates. In this paper, we propose an instantaneous PSD estimation approach based on generalized principal components. Similarly to other eigenspace-based PSD estimation approaches, we rely on recursive averaging in order to obtain a microphone signal correlation matrix estimate to be decomposed. However, instead of estimating the PSDs directly from the temporally smooth generalized eigenvalues of this matrix, yielding temporally smooth PSD estimates, we propose to estimate the PSDs from newly defined instantaneous generalized eigenvalues, yielding instantaneous PSD estimates. The instantaneous generalized eigenvalues are defined from the generalized principal components, i.e. a generalized eigenvector-based transform of the microphone signals. We further show that the smooth generalized eigenvalues can be understood as a recursive average of the instantaneous generalized eigenvalues. Simulation results comparing the multi-channel Wiener filter (MWF) with smooth and instantaneous PSD estimates indicate better speech enhancement performance for the latter. A MATLAB implementation is available online.
https://arxiv.org/abs/2007.00542v1
https://arxiv.org/pdf/2007.00542v1.pdf
null
[]
[ "Speech Enhancement" ]
1,593,561,600,000
[]
166,124
300,148
https://paperswithcode.com/paper/transformer-based-urdu-handwritten-text
2206.04575
Transformer based Urdu Handwritten Text Optical Character Reader
Extracting Handwritten text is one of the most important components of digitizing information and making it available for large scale setting. Handwriting Optical Character Reader (OCR) is a research problem in computer vision and natural language processing computing, and a lot of work has been done for English, but unfortunately, very little work has been done for low resourced languages such as Urdu. Urdu language script is very difficult because of its cursive nature and change of shape of characters based on its relative position, therefore, a need arises to propose a model which can understand complex features and generalize it for every kind of handwriting style. In this work, we propose a transformer based Urdu Handwritten text extraction model. As transformers have been very successful in Natural Language Understanding task, we explore them further to understand complex Urdu Handwriting.
https://arxiv.org/abs/2206.04575v1
https://arxiv.org/pdf/2206.04575v1.pdf
null
[ "Mohammad Daniyal Shaiq", "Musa Dildar Ahmed Cheema", "Ali Kamal" ]
[ "Natural Language Understanding", "Optical Character Recognition" ]
1,654,732,800,000
[]
884

Dataset Card for "PwC"

More Information needed
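
Since the card carries no usage information yet, the following is a minimal loading sketch. It assumes the dataset is published on the Hugging Face Hub; the repo id `namespace/PwC` and the column names `title` and `tasks` used below are placeholders inferred from the records shown above, not confirmed identifiers.

```python
# Minimal usage sketch. Assumptions: "namespace/PwC" is a hypothetical Hub
# repo id, and the "title"/"tasks" column names are inferred from the record
# fields visible in this dump rather than taken from an official schema.
from datasets import load_dataset

ds = load_dataset("namespace/PwC", split="train")  # hypothetical repo id

# Each row is one paper record; filter to papers tagged with a given task.
subset = ds.filter(lambda row: "Relation Extraction" in row["tasks"])

print(len(subset), "papers tagged with Relation Extraction")
print(subset[0]["title"])
```

Filtering on the per-paper task list, as above, is typically the first step for carving task-specific corpora out of a dump like this one.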
