arxiv:1803.08375

Deep Learning using Rectified Linear Units (ReLU)

Published on Mar 22, 2018

Abstract

We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN). Conventionally, ReLU is used as an activation function in DNNs, with the Softmax function as the classification function. However, there have been several studies on using classification functions other than Softmax, and this study is an addition to those. We accomplish this by taking the activation of the penultimate layer h_{n-1} in a neural network and multiplying it by weight parameters theta to get the raw scores o_i. Afterwards, we threshold the raw scores o_i at 0, i.e. f(o) = max(0, o_i), where f(o) is the ReLU function. We provide class predictions y through the argmax function, i.e. argmax f(o).
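The classification step described in the abstract can be sketched in a few lines. The following NumPy snippet is a minimal, illustrative reconstruction, not the authors' code: the penultimate activation h_{n-1} is multiplied by weight parameters theta to obtain raw scores, the scores are thresholded at zero with ReLU, and the class prediction is taken with argmax. The shapes, variable names, and random inputs are assumptions made for the example.

```python
import numpy as np

def relu(o):
    # f(o) = max(0, o), applied elementwise to the raw scores
    return np.maximum(0.0, o)

def relu_classify(h_penultimate, theta):
    """ReLU used as the classification function (illustrative sketch).

    h_penultimate : (batch, d) activations of the penultimate layer h_{n-1}
    theta         : (d, num_classes) weight parameters of the final layer
    """
    o = h_penultimate @ theta          # raw scores o_i
    scores = relu(o)                   # threshold the raw scores at 0
    return np.argmax(scores, axis=1)   # class predictions via argmax

# Example usage with random values (shapes are assumptions)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 128))       # batch of 4 penultimate activations
theta = rng.normal(size=(128, 10))  # weights for a 10-class output
print(relu_classify(h, theta))      # 4 predicted class indices
```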
