Dec 13, 2019
Information theory is deeply connected to two key tasks in machine learning: prediction and representation learning. Due to these connections, information theory has found wide applications in machine learning, such as proving generalization bounds, certifying fairness and privacy, optimizing the information content of unsupervised/supervised representations, and proving limitations on prediction performance. Conversely, progress in machine learning has driven advances in classical information theory problems such as compression and transmission. This workshop aims to bring together researchers from different disciplines, identify common ground, and spur discussion on how information theory can apply to and benefit from modern machine learning tools. Topics include, but are not limited to:
-> Controlling information quantities for performance guarantees, such as PAC-Bayes, interactive data analysis, the information bottleneck, fairness, and privacy.
-> Information-theoretic performance limitations of learning algorithms.
-> Information theory for representation learning and unsupervised learning, such as applications to generative models, learning latent representations, and domain adaptation.
-> Methods to estimate information-theoretic quantities for high-dimensional observations, such as variational methods and sampling methods.
-> Quantification of usable/useful information, e.g. the information an algorithm can use for prediction.
-> Machine learning applied to information theory, such as designing better error-correcting codes and compression optimized for human perception.
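As a concrete illustration of the estimation theme above (not part of the workshop materials), here is a minimal sketch of the simplest such estimator: a plug-in (histogram) estimate of mutual information for discrete samples. The function name and interface are illustrative; variational methods generalize this idea to high-dimensional, continuous observations where counting is infeasible.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in (histogram) estimate of I(X; Y) in nats for discrete samples."""
    x = np.asarray(x)
    y = np.asarray(y)
    # Map each observed symbol to an index, then count joint occurrences.
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)
    joint /= joint.sum()
    # Marginals from the empirical joint distribution.
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # avoid log(0) on empty cells
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# Perfectly correlated bits: I(X; Y) = H(X) = log 2 nats.
x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))  # ≈ 0.6931 (log 2)
```

The plug-in estimator is biased upward for small samples, which is one reason variational and sampling-based estimators are an active research topic for high-dimensional data.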
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.