Nov 28, 2022
Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution 𝒟 over an alphabet of size k up to ±ϵ additive error by streaming over (k/ϵ^3)·polylog(1/ϵ) i.i.d. samples and using only O(1) words of memory. In this work, we give a new constant-memory scheme that reduces the sample complexity to (k/ϵ^2)·polylog(1/ϵ). We conjecture that this is optimal up to polylog(1/ϵ) factors.
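To give a concrete sense of the constant-memory setting, here is a minimal Python sketch of one standard way to estimate H(𝒟) = E_{x∼𝒟}[log(1/p(x))] using only a few words of memory: repeatedly store a single sampled symbol and a counter, estimate that symbol's probability from a block of subsequent i.i.d. samples, and average the log-estimates. This is an illustrative sketch only, not the scheme from the talk nor the exact estimator of Acharya et al.; the bias correction and the sample-complexity analysis referred to above are omitted, and the function names and parameters (entropy_estimate_constant_memory, num_blocks, block_size) are hypothetical.

```python
import math
import random

def entropy_estimate_constant_memory(sample_stream, num_blocks, block_size):
    """Illustrative sketch: estimate H(D) = E_{x~D}[log(1/p(x))] in O(1) words.

    Per block we store only one sampled symbol and one counter, so the
    working memory is constant regardless of the alphabet size k.
    Bias correction is omitted, so this is not the estimator analyzed
    in the talk or in Acharya et al. (NeurIPS 2019).
    """
    running_sum = 0.0
    for _ in range(num_blocks):
        x = next(sample_stream)               # one stored symbol
        count = 0                             # one stored counter
        for _ in range(block_size):
            if next(sample_stream) == x:
                count += 1
        p_hat = max(count, 1) / block_size    # crude estimate of p(x)
        running_sum += math.log(1.0 / p_hat)
    return running_sum / num_blocks

def iid_stream(probs):
    """Toy i.i.d. sample stream over the alphabet {0, ..., k-1}."""
    symbols = list(range(len(probs)))
    while True:
        yield random.choices(symbols, weights=probs)[0]

probs = [0.5, 0.25, 0.125, 0.125]
true_H = sum(p * math.log(1 / p) for p in probs)
est_H = entropy_estimate_constant_memory(iid_stream(probs),
                                          num_blocks=2000, block_size=200)
print(f"true entropy ≈ {true_H:.3f} nats, estimate ≈ {est_H:.3f} nats")
```

The point of the sketch is only the memory footprint: each block touches block_size fresh samples but retains just one symbol and one counter, which is what "O(1) words of memory" means in the abstract above.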