arXiv:2204.01800

The Fast Johnson-Lindenstrauss Transform is Even Faster

Published on Apr 4, 2022
Authors: Ora Nova Fandina, Mikael Møller Høgsgaard, Kasper Green Larsen

Abstract

The seminal Fast Johnson-Lindenstrauss (Fast JL) transform by Ailon and Chazelle (SICOMP'09) embeds a set of $n$ points in $d$-dimensional Euclidean space into optimal $k = O(\varepsilon^{-2} \ln n)$ dimensions, while preserving all pairwise distances to within a factor $(1 \pm \varepsilon)$. The Fast JL transform supports computing the embedding of a data point in $O(d \ln d + k \ln^2 n)$ time, where the $d \ln d$ term comes from multiplication with a $d \times d$ Hadamard matrix and the $k \ln^2 n$ term comes from multiplication with a sparse $k \times d$ matrix. Despite the Fast JL transform being more than a decade old, it is one of the fastest dimensionality reduction techniques for many tradeoffs between $\varepsilon$, $d$ and $n$. In this work, we give a surprising new analysis of the Fast JL transform, showing that the $k \ln^2 n$ term in the embedding time can be improved to $(k \ln^2 n)/\alpha$ for an $\alpha = \Omega(\min\{\varepsilon^{-1}\ln(1/\varepsilon), \ln n\})$. The improvement follows by using an even sparser matrix. We also complement our improved analysis with a lower bound showing that our new analysis is in fact tight.
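
For intuition, here is a minimal NumPy sketch of the classic Ailon-Chazelle pipeline $x \mapsto PHDx$: a random sign diagonal $D$, a fast Walsh-Hadamard multiply for $H$ (the $d \ln d$ term), and a sparse Gaussian projection $P$ (the $k \ln^2 n$ term). The sparsity parameter `q` and the helper names are illustrative assumptions, and $P$ is built as a dense masked matrix for clarity rather than with a genuinely sparse multiply; this is the baseline construction the paper improves, not the authors' even sparser variant.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(d log d) time; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    d = len(x)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(d)  # normalize so the Hadamard matrix is orthonormal

def fast_jl_embed(x, k, q, rng):
    """Sketch of the Fast JL pipeline x -> P H D x.

    D: random +/-1 diagonal; H: normalized Hadamard (via fwht);
    P: k x d matrix, each entry nonzero with probability q and drawn
       N(0, 1/q), so every entry of P has unit variance.
    """
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)   # the diagonal of D
    y = fwht(x * signs)                       # H D x spreads the mass of x across coordinates
    mask = rng.random((k, d)) < q             # sparsity pattern of P
    P = np.where(mask, rng.normal(0.0, 1.0, (k, d)) / np.sqrt(q), 0.0)
    return (P @ y) / np.sqrt(k)               # scale so norms are preserved in expectation

# Example: embed a random point and compare norms (they should be close).
rng = np.random.default_rng(0)
d, k, q = 1024, 64, 0.1
x = rng.normal(size=d)
z = fast_jl_embed(x, k, q, rng)
print(np.linalg.norm(x), np.linalg.norm(z))
```

Shrinking `q` makes $P$ sparser and the second multiplication cheaper; the paper's contribution is showing that this sparsity can be pushed a further factor $\alpha$ lower, cutting the $k \ln^2 n$ term to $(k \ln^2 n)/\alpha$ while the distance-preservation guarantee still holds.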
