A Competitive Algorithm for Agnostic Active Learning

Dec 10, 2023

About

For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs. We take a different approach to agnostic active learning, getting an algorithm that is competitive with the optimal algorithm for any binary hypothesis class H and distribution 𝒟_X over X. In particular, if any algorithm can use m^* queries to get O(η) error, then our algorithm uses O(m^* log |H|) queries to get O(η) error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable (η = 0) setting. We also show that it is NP-hard to do better than our algorithm's O(log |H|) overhead in general.
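To make the splitting idea concrete, below is a minimal Python sketch of a greedy splitting-style active learner in the spirit of Dasgupta [2004], for the realizable (η = 0) setting over a finite class H. It is an illustration of the general splitting approach, not the paper's agnostic algorithm; the function name, the pool-based setup, and the threshold-class example are all hypothetical choices made for this sketch.

```python
def splitting_active_learner(hypotheses, pool, query_label, max_queries=100):
    """Greedy splitting-style active learner for the realizable setting.

    Maintains the version space of hypotheses consistent with every label
    seen so far, and repeatedly queries the pool point whose label most
    evenly splits that space (the spirit of Dasgupta [2004]).

    hypotheses: finite class H, given as callables h(x) -> 0 or 1.
    pool: unlabeled points drawn from D_X.
    query_label: label oracle, x -> 0 or 1.
    """
    version_space = list(hypotheses)
    for _ in range(max_queries):
        if len(version_space) <= 1:
            break

        # Worst-case most informative query: maximize the smaller side
        # of the split that the point induces on the version space.
        def split_size(x):
            ones = sum(h(x) for h in version_space)
            return min(ones, len(version_space) - ones)

        x = max(pool, key=split_size)
        if split_size(x) == 0:
            break  # no pool point distinguishes the surviving hypotheses

        y = query_label(x)
        # Keep only hypotheses consistent with the new label.
        version_space = [h for h in version_space if h(x) == y]
    return version_space


# Usage: threshold classifiers on [0, 1]. Each query roughly halves the
# version space, so ~log2(|H|) queries suffice, versus ~|H| passive samples.
if __name__ == "__main__":
    H = [lambda x, t=t: int(x >= t) for t in (i / 20 for i in range(21))]
    pool = [i / 100 for i in range(101)]
    target = lambda x: int(x >= 0.35)
    survivors = splitting_active_learner(H, pool, target)
    print(len(survivors))  # -> 1: only the t = 0.35 threshold survives
```

Each query cuts the version space roughly in half, which is where the logarithmic query count comes from in the realizable case; the paper's contribution is achieving a comparable O(log |H|)-competitive guarantee when labels can be noisy (η > 0).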

Organizer

NeurIPS 2023
