arxiv:2110.06482

Parallel Deep Neural Networks Have Zero Duality Gap

Published on Oct 13, 2021

Abstract

Training deep neural networks is a challenging non-convex optimization problem. Recent work has shown that strong duality holds (i.e., the duality gap is zero) for regularized finite-width two-layer ReLU networks and consequently provides an equivalent convex training problem. However, extending this result to deeper networks remains an open problem. In this paper, we prove that the duality gap for deeper linear networks with vector outputs is non-zero. In contrast, we show that a zero duality gap can be obtained by stacking standard deep networks in parallel, which we call a parallel architecture, and modifying the regularization. We therefore establish strong duality and the existence of equivalent convex problems that enable globally optimal training of deep networks. As a by-product of our analysis, we demonstrate via closed-form expressions that weight decay regularization on the network parameters explicitly encourages low-rank solutions. In addition, we show that strong duality holds for three-layer standard ReLU networks given rank-1 data matrices.
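As a schematic illustration of the terminology only (a generic sketch, not the paper's exact formulation; \(L\), \(R\), \(\beta\), \(g\), \(\lambda\), \(f_k\), \(\theta_k\), and \(K\) are placeholder symbols): for a regularized training objective and its Lagrange dual,
\[
p^\star \;=\; \min_{\theta}\; L(\theta) + \beta R(\theta),
\qquad
d^\star \;=\; \max_{\lambda}\; g(\lambda),
\qquad
0 \;\le\; p^\star - d^\star,
\]
the difference \(p^\star - d^\star\) is the duality gap, and strong duality means \(p^\star = d^\star\). A parallel architecture in the sense above can be pictured as a sum of \(K\) standard deep subnetworks,
\[
f(x) \;=\; \sum_{k=1}^{K} f_k(x;\theta_k).
\]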

