Jul 24, 2023
The Johnson-Lindenstrauss lemma <cit.> is a cornerstone result in dimensionality reduction, stating that it is possible to embed a set of n points in d-dimensional Euclidean space into an optimal k = O(ε^-2 ln n) dimensions while preserving all pairwise distances to within a factor (1 ± ε). The seminal Fast Johnson-Lindenstrauss (Fast JL) transform by Ailon and Chazelle (SICOMP'09) supports computing the embedding of a data point in O(d ln d + k ln^2 n) time, where the d ln d term comes from multiplication with a d × d Hadamard matrix and the k ln^2 n term comes from multiplication with a sparse k × d matrix. Despite being more than a decade old, the Fast JL transform remains one of the fastest dimensionality reduction techniques for many tradeoffs between ε, d and n. In this work, we give a surprising new analysis of the Fast JL transform, showing that the k ln^2 n term in the embedding time can be improved to (k ln^2 n)/α for an α = Ω(min{ε^-1 ln(1/ε), ln n}). The improvement follows by using an even sparser matrix. We complement our improved analysis with a lower bound showing that our new analysis is in fact tight.
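To make the structure of the transform concrete, here is a minimal sketch of a Fast JL-style embedding in NumPy. It applies a random sign diagonal D, a normalized Hadamard transform H via the O(d ln d) fast Walsh-Hadamard recursion, and then projects to k dimensions. Note one simplification: for brevity the final projection here subsamples k coordinates (the subsampled randomized Hadamard transform of later work) rather than multiplying by the sparse k × d matrix of Ailon and Chazelle that the abstract refers to; the function names `fwht` and `fast_jl_embed` are illustrative, not from the paper.

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform in O(d log d) time.
    The length d of x must be a power of two."""
    d = len(x)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def fast_jl_embed(x, k, rng):
    """Embed x in R^d (d a power of two) into R^k.
    Simplified sketch: P . H . D . x with coordinate subsampling as P."""
    d = len(x)
    # D: random sign flips; the Hadamard rotation H.D spreads the mass
    # of x roughly evenly over all d coordinates with high probability.
    signs = rng.choice([-1.0, 1.0], size=d)
    y = fwht((signs * x).astype(float))
    y /= np.sqrt(d)  # scale so H/sqrt(d) is orthonormal, preserving norms
    # P: sample k coordinates uniformly with replacement, rescaled so the
    # squared norm of the output is an unbiased estimate of ||x||^2.
    idx = rng.integers(0, d, size=k)
    return np.sqrt(d / k) * y[idx]
```

Because H/sqrt(d) is orthonormal and the subsampling is rescaled by sqrt(d/k), the embedding preserves squared norms in expectation; the sign flips ensure no single coordinate of H·D·x carries too much mass, which is what makes the concentration (and hence the (1 ± ε) distance guarantee) go through.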