Dec 6, 2021
Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that in the statistical setting, the offset complexity upper bound can be generalized to any loss satisfying a certain uniform convexity condition. Surprisingly, this condition is shown to also capture exponential concavity and self-concordance, uniting several apparently disparate results. By a unified geometric argument, these bounds translate to improper learning in a non-convex class using Audibert's star algorithm. Thus, we provide a sharp analytic tool that covers both convex empirical risk minimization and improper learning under general entropy conditions. As applications, we recover the optimal rates for proper and improper learning with the p-loss for 1 < p < ∞, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models.
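For readers unfamiliar with the two objects named above, here is a minimal sketch in the standard notation of the offset-complexity literature; the normalization and the symbols (the class \(\mathcal{F}\), offset parameter \(c\), Rademacher signs \(\epsilon_i\), and ERM \(\hat{g}\)) are conventional choices, not taken from the talk itself. The offset Rademacher complexity of \(\mathcal{F}\) with offset \(c > 0\) over a sample \(x_1, \dots, x_n\) is

\[
\mathfrak{R}^{\mathrm{off}}_n(\mathcal{F}, c)
  \;=\; \mathbb{E}_{\epsilon}\, \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n}
    \bigl[\, \epsilon_i f(x_i) \;-\; c\, f(x_i)^2 \,\bigr],
\]

where the negative quadratic term penalizes functions of large empirical norm, which is what yields localized, data-dependent bounds. Audibert's star algorithm is a two-step improper procedure: first fit an empirical risk minimizer \(\hat{g}\) over \(\mathcal{F}\), then re-fit over the star hull

\[
\mathrm{star}(\mathcal{F}, \hat{g})
  \;=\; \bigl\{\, \lambda f + (1 - \lambda)\,\hat{g} \;:\; f \in \mathcal{F},\ \lambda \in [0, 1] \,\bigr\},
\]

so the final predictor may fall outside \(\mathcal{F}\) itself, which is why the procedure is improper even when the class is non-convex.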
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.