When: Oct 21, 2021 @ 1:30 PM

Title: The Riemannian Geometry of Deep Neural Networks

Abstract: In this talk I will present several ways in which Riemannian geometry plays a role in deep neural networks. The first is in deep generative models, which learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. We develop efficient algorithms for computing geodesic curves and distances on such manifolds. Next, we develop an algorithm for parallel translation of tangent vectors. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change in another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, have curvature surprisingly close to zero. I will discuss the practical impact of this fact and hypothesize why it might be the case. Second, I will show that classification models also naturally involve manifold geometry. This has implications for adversarial examples, where small perturbations of an input can surprisingly fool an otherwise accurate classifier. Using concepts from information geometry, we show how to find directions that fool a classifier with minimal perturbation. In the final application of manifold geometry, I will present recent work on making deep classifiers more interpretable by relating their geometry to gradients of interpretable features.
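For readers who want a concrete picture of the geodesic computation mentioned in the abstract, below is a minimal sketch (not the speaker's implementation) of the basic idea: a decoder g from latent space to data space induces a pullback metric G(z) = J(z)^T J(z), and an approximate geodesic between two latent points can be found by minimizing the discrete energy of the decoded curve. The toy MLP decoder, the latent/data dimensions, and the optimization settings are illustrative assumptions, not details from the talk.

```python
# Minimal sketch (not the speaker's code): geodesics under the metric that a
# decoder pulls back from Euclidean data space. The decoder here is an
# untrained toy MLP standing in for a trained deep generative model.
import torch

torch.manual_seed(0)
d, D = 2, 10                                   # latent / data dimensions (assumed)
g = torch.nn.Sequential(                       # hypothetical decoder g: R^d -> R^D
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, D)
)

def pullback_metric(z):
    """Riemannian metric G(z) = J(z)^T J(z), with J the decoder Jacobian at z."""
    J = torch.autograd.functional.jacobian(g, z)   # shape (D, d)
    return J.T @ J

def curve_energy(waypoints, z0, z1):
    """Discrete energy of the latent curve z0 -> waypoints -> z1, measured by
    squared lengths of the decoded (data-space) segments."""
    pts = torch.cat([z0[None], waypoints, z1[None]], dim=0)
    seg = g(pts[1:]) - g(pts[:-1])
    return (seg ** 2).sum()

# Two latent endpoints and free intermediate waypoints, initialized on the
# straight latent-space line between them.
z0, z1 = torch.randn(d), torch.randn(d)
T = 8
init = torch.stack([z0 + (t / (T + 1)) * (z1 - z0) for t in range(1, T + 1)])
waypoints = init.clone().requires_grad_(True)

opt = torch.optim.Adam([waypoints], lr=1e-2)
for _ in range(500):                           # minimizing the energy bends the
    opt.zero_grad()                            # curve toward an approximate geodesic
    curve_energy(waypoints, z0, z1).backward()
    opt.step()

with torch.no_grad():
    pts = torch.cat([z0[None], waypoints, z1[None]], dim=0)
    length = (g(pts[1:]) - g(pts[:-1])).norm(dim=1).sum()
print("metric eigenvalues at z0:", torch.linalg.eigvalsh(pullback_metric(z0)))
print("approximate geodesic length in data space:", float(length))
```

In an actual experiment, a trained decoder would replace the toy MLP, and the decoded waypoints would trace a path that stays on the learned data manifold rather than cutting across it.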

Here are the new Zoom link, meeting ID, and passcode:
https://wse.zoom.us/j/91467375713?pwd=VjN3ekZTRFZIWS80NnpwZUFRUzRWUT09

Meeting ID: 914 6737 5713
Passcode: 272254