Tutorial on normalizing flows

Jun 15, 2019

Invertible neural networks have been a significant thread of research in the ICML community for several years. Such transformations offer several unique benefits:

1. They preserve information, allowing perfect reconstruction (up to numerical limits) and obviating the need to store hidden activations in memory for backpropagation.
2. They can be designed to track how the transformation changes probability density, which is the core idea behind normalizing flows.
3. Like autoregressive models, normalizing flows can be powerful generative models that allow exact likelihood computation; with the right architecture, they can also offer much cheaper sampling than autoregressive models.

While many researchers are aware of these topics and intrigued by several high-profile papers, few are familiar enough with the technical details to easily follow new developments and contribute. Many may also be unaware of the wide range of applications of invertible neural networks beyond generative modelling and variational inference.
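To make the density-tracking and exact-likelihood points concrete, here is a minimal sketch (not from the tutorial itself) of the change-of-variables computation behind normalizing flows, using a hypothetical one-dimensional affine flow with a standard normal base density. The class name `AffineFlow` and the helper `log_likelihood` are illustrative choices, not an established API.

```python
import numpy as np

class AffineFlow:
    """Invertible elementwise map f(x) = a * x + b, with a != 0."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def forward(self, x):
        # Returns z = f(x) and log |det df/dx| at each point.
        z = self.a * x + self.b
        log_det = np.log(np.abs(self.a)) * np.ones_like(x)
        return z, log_det

    def inverse(self, z):
        # Exact reconstruction: x = f^{-1}(z), nothing is lost.
        return (z - self.b) / self.a


def log_likelihood(flow, x):
    # Change of variables: log p_X(x) = log p_Z(f(x)) + log |det df/dx|,
    # with a standard normal base density p_Z.
    z, log_det = flow.forward(x)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return log_pz + log_det


flow = AffineFlow(a=2.0, b=1.0)
x = np.array([0.0, 0.5, 1.0])
print(log_likelihood(flow, x))            # exact log-densities under the flow
print(flow.inverse(flow.forward(x)[0]))   # recovers x exactly
```

Deep normalizing flows stack many such invertible layers; the log-determinant terms simply add up across layers, so the exact likelihood remains tractable as long as each layer's Jacobian determinant is cheap to compute.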