Jul 24, 2023
We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^n \times \{\pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
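To make the learning task concrete, the following sketch (illustrative only; the vector `w_star`, the noise rate, and all parameter values are assumptions, not from the paper) draws Gaussian examples, corrupts a fraction of the labels so that the best halfspace has nonzero error, and evaluates the empirical 0-1 loss that the quantity $\mathrm{OPT}$ refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 10_000  # dimension and sample count (arbitrary toy choices)

# Examples x ~ N(0, I_n); labels from a hypothetical "true" halfspace w_star,
# with 10% of labels flipped so labels are not perfectly consistent (agnostic setting).
w_star = rng.normal(size=n)
X = rng.normal(size=(m, n))
y = np.sign(X @ w_star)
flip = rng.random(m) < 0.1
y[flip] = -y[flip]

def zero_one_loss(w, X, y):
    """Empirical 0-1 loss of the halfspace x -> sign(<w, x>)."""
    return float(np.mean(np.sign(X @ w) != y))

# For this toy data, w_star is (close to) the best-fitting halfspace,
# so its 0-1 loss is roughly the 10% corruption rate.
print(zero_one_loss(w_star, X, y))
```

An agnostic learner must output some hypothesis whose 0-1 loss is at most $\mathrm{OPT} + \epsilon$; the paper's hardness result says that achieving this guarantee efficiently is essentially as hard as sub-exponential LWE.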