In this session, we will have an invited talk by Maja Rudolph of the Bosch Center for Artificial Intelligence, as well as two contributed talks by Wessel Bruinsma (The Gaussian Neural Process) and Tomas Geffner (Empirical Evaluation of Biased Methods for Alpha Divergence Minimization). See http://approximateinference.org/schedule/ for details.
Maja Rudolph: Variational Dynamic Mixtures
Abstract: Many deep probabilistic time series models struggle to capture sequences with multi-modal dynamics. While powerful generative models have been developed, we show evidence that the associated approximate inference methods are usually too restrictive and can lead to mode averaging. Mode averaging is problematic for highly multi-modal real-world sequences, as it can result in unphysical predictions (e.g., predicted taxi trajectories might run through buildings on the street map if they average over the options of turning right or left). This talk is about variational dynamic mixtures (VDM): a new variational family for inferring sequential latent variables with multi-modal dynamics. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This yields an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
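To make the core idea concrete, here is a minimal NumPy sketch of one inference step in the style the abstract describes: K hidden-state samples are propagated through a recurrent cell, and each propagated sample parameterizes one Gaussian component of a mixture posterior. All names, dimensions, and the uniform mixture weights are illustrative assumptions, not the actual VDM implementation (which learns these quantities and couples steps differently).

```python
import numpy as np

rng = np.random.default_rng(0)

K, H, D_x, D_z = 4, 8, 3, 2  # mixture components, hidden size, input dim, latent dim

# Random matrices standing in for learned parameters (hypothetical names).
W_h = rng.standard_normal((H, H)) * 0.1
W_x = rng.standard_normal((D_x, H)) * 0.1
W_mu = rng.standard_normal((H, D_z)) * 0.1
W_logvar = rng.standard_normal((H, D_z)) * 0.1

def rnn_step(h, x):
    """Simple tanh recurrent cell (a stand-in for the recurrent architecture)."""
    return np.tanh(h @ W_h + x @ W_x)

def mixture_posterior_step(h_samples, x):
    """Propagate K hidden-state samples and read off mixture parameters.

    Each propagated sample parameterizes one Gaussian component of the
    approximate posterior over the latent at this time step, so the
    posterior is a mixture density rather than a single Gaussian.
    """
    new_h = np.stack([rnn_step(h, x) for h in h_samples])  # (K, H)
    mus = new_h @ W_mu                                     # (K, D_z) component means
    logvars = new_h @ W_logvar                             # (K, D_z) component log-variances
    weights = np.full(K, 1.0 / K)                          # uniform weights for simplicity
    # Sample one latent per component, carrying multi-modality to the next step.
    z_samples = mus + np.exp(0.5 * logvars) * rng.standard_normal(mus.shape)
    return new_h, (weights, mus, logvars), z_samples

# Run over a short synthetic sequence.
h = np.zeros((K, H))
for t in range(5):
    x_t = rng.standard_normal(D_x)
    h, (w, mu, logvar), z = mixture_posterior_step(h, x_t)

print(w.shape, mu.shape, z.shape)  # (4,) (4, 2) (4, 2)
```

Because a separate sample is kept per component, two components can drift toward different modes (e.g., the "turn left" and "turn right" taxi trajectories) instead of averaging into one unphysical prediction.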