Bayesian neural network unit priors and generalized Weibull-tail property

Nov 17, 2021

About

The connection between Bayesian neural networks and Gaussian processes has attracted considerable attention in recent years. Hidden units have been shown to converge to a Gaussian process in the limit where the layer width tends to infinity. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they can flexibly adapt their internal representations. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of the tails of hidden units, which shows that unit priors become heavier-tailed with depth. This finding sheds light on the behavior of hidden units in finite Bayesian neural networks.
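For context, the abstract does not define the notion in the title, so the following is the standard Weibull-tail definition, stated here as an assumption about the paper's setting: a random variable X is Weibull-tail with parameter β > 0 when its upper tail decays like a stretched exponential,

\[
\Pr(X \ge x) \;=\; \exp\bigl(-x^{\beta}\,\ell(x)\bigr), \qquad x \to \infty,
\]

where \(\ell\) is a slowly varying function. A smaller β means a heavier tail, so "heavier-tailed going deeper" corresponds to the Weibull-tail parameter of the unit priors decreasing with depth.

Since no code accompanies the talk, the Python sketch below (NumPy only; all sizes and names are illustrative choices, not the authors') is one minimal way to observe the phenomenon empirically. It draws fully connected ReLU networks from independent Gaussian weight priors, feeds them a fixed input, and tracks how heavy-tailed the pre-activation of a single unit is at each layer, using excess kurtosis as a crude tail-heaviness proxy.

    import numpy as np

    rng = np.random.default_rng(0)

    def excess_kurtosis(z):
        """Excess kurtosis: 0 for a Gaussian, larger for heavier tails."""
        z = (z - z.mean()) / z.std()
        return float((z ** 4).mean() - 3.0)

    n_samples, width, depth = 30_000, 25, 4   # illustrative sizes, kept small
    x = np.ones(width) / np.sqrt(width)       # fixed input of unit norm

    h = np.tile(x, (n_samples, 1))            # one row per prior draw of the network
    for layer in range(1, depth + 1):
        # Fresh independent N(0, 1/fan_in) weights for every Monte Carlo sample.
        W = rng.normal(size=(n_samples, width, width)) / np.sqrt(width)
        g = np.einsum("nij,nj->ni", W, h)     # pre-activations, shape (n_samples, width)
        print(f"layer {layer}: excess kurtosis of one unit = {excess_kurtosis(g[:, 0]):.2f}")
        h = np.maximum(g, 0.0)                # ReLU before the next layer

Under a Gaussian weight prior the layer-1 pre-activation is exactly Gaussian (excess kurtosis near 0), while deeper layers show growing kurtosis, consistent with the heavier-tailed unit priors described in the abstract.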

About ACML 2021

The 13th Asian Conference on Machine Learning (ACML 2021) aims to provide a leading international forum for researchers in machine learning and related fields to share their new ideas, progress, and achievements.
