Highly structured sequential data, such as speech and handwriting, often contain complex relationships between the underlying factors of variation and the observed data. This paper presents different ways of using high-level latent random variables in recurrent neural networks (RNNs) to model the variability in sequential data. We develop a two-step training algorithm for such RNN models under the variational autoencoder (VAE) principle, and we propose a novel approach that uses adversarial training to regularize the latent variable distributions in the variational RNN model. In contrast to competing approaches, our approach has a theoretical optimum in model training and provides better training stability. It also improves the posterior approximation in the variational inference network through a separate adversarial training step. Experimental results on TIMIT speech data show that the reconstruction loss and the evidence lower bound converge to the same level and that the adversarial training loss converges to 0.
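To make the two-step scheme concrete, the following is a minimal PyTorch sketch under illustrative assumptions: the module names (VRNNCell, disc), the network sizes, the MSE reconstruction term, and the standard-normal prior are placeholders for exposition, not the paper's actual implementation. The discriminator is first updated to separate prior samples from posterior samples; the variational RNN is then updated with the reconstruction loss plus an adversarial term that pushes the posterior toward the prior.

# A minimal sketch of the two-step adversarial training scheme described
# above; names, sizes, and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

input_dim, hidden_dim, latent_dim = 8, 32, 16  # hypothetical sizes

class VRNNCell(nn.Module):
    # One step of a variational RNN: encode q(z_t | x_t, h_t), decode
    # p(x_t | z_t, h_t), then advance the recurrent state.
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(input_dim + hidden_dim, 2 * latent_dim)
        self.dec = nn.Linear(latent_dim + hidden_dim, input_dim)
        self.rnn = nn.GRUCell(input_dim + latent_dim, hidden_dim)

    def forward(self, x_t, h):
        mu, logvar = self.enc(torch.cat([x_t, h], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = self.dec(torch.cat([z, h], dim=-1))
        h_next = self.rnn(torch.cat([x_t, z], dim=-1), h)
        return z, x_hat, h_next

model = VRNNCell()
disc = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.randn(64, 10, input_dim)  # toy batch: (batch, time, features)
h = torch.zeros(64, hidden_dim)
zs, recon_loss = [], 0.0
for t in range(x.size(1)):
    z, x_hat, h = model(x[:, t], h)
    zs.append(z)
    recon_loss = recon_loss + F.mse_loss(x_hat, x[:, t])
z_q = torch.cat(zs, dim=0)   # samples from the aggregate posterior
z_p = torch.randn_like(z_q)  # samples from the prior

# Step 1: train the discriminator to tell prior samples from posterior
# samples (posterior samples are detached so only disc is updated).
d_real, d_fake = disc(z_p), disc(z_q.detach())
d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
       + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
opt_disc.zero_grad()
d_loss.backward()
opt_disc.step()

# Step 2: train the variational RNN with the reconstruction loss plus an
# adversarial term that pushes the posterior toward the prior, standing in
# for the analytic KL regularizer of a plain VAE objective.
g_fake = disc(z_q)
g_loss = recon_loss + F.binary_cross_entropy_with_logits(
    g_fake, torch.ones_like(g_fake))
opt_model.zero_grad()
g_loss.backward()
opt_model.step()

At the adversarial optimum the discriminator can no longer distinguish prior from posterior samples, which is one way to read the theoretical optimum and the converging adversarial loss the abstract reports.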