BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Applied Mathematics and Statistics - ECPv6.0.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Applied Mathematics and Statistics
X-ORIGINAL-URL:https://engineering.jhu.edu/ams
X-WR-CALDESC:Events for Department of Applied Mathematics and Statistics
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20211021T133000
DTEND;TZID=America/New_York:20211021T143000
DTSTAMP:20230531T072355Z
CREATED:20210907T200516Z
LAST-MODIFIED:20211230T135054Z
UID:36398-1634823000-1634826600@engineering.jhu.edu
SUMMARY:AMS Seminar w/ Tom Fletcher (UVA) @ Remsen 101 or on Zoom
DESCRIPTION:Title: The Riemannian Geometry of Deep Neural Networks \nAbstract: In this talk I will present several ways in which Riemannian geometry plays a role in deep neural networks. The first is in deep generative models\, which learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions\, these models parameterize nonlinear manifolds in the data space. We develop efficient algorithms for computing geodesic curves and distances on such manifolds. Next\, we develop an algorithm for parallel translation of tangent vectors. We show how parallel translation can be used to generate analogies\, i.e.\, to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models\, while nonlinear\, are surprisingly close to zero curvature. I will discuss the practical impact of this fact and hypothesize why it might be the case. Second\, I show that classification models also naturally involve manifold geometry. This has implications for adversarial examples\, where small perturbations of an input can surprisingly fool an otherwise accurate classifier. Using concepts from information geometry\, we show how to find directions that fool a classifier with minimal perturbation. In the final application of manifold geometry\, I will present recent work on making deep classifiers more interpretable by relating their geometry to gradients of interpretable features. \nHere is the new link and meeting ID+passcode:\nhttps://wse.zoom.us/j/91467375713?pwd=VjN3ekZTRFZIWS80NnpwZUFRUzRWUT09 \nMeeting ID: 914 6737 5713\nPasscode: 272254
URL:https://engineering.jhu.edu/ams/event/ams-seminar-w-tom-fletcher-uva-remsen-101-or-on-zoom/
CATEGORIES:Seminars and Lectures
END:VEVENT
END:VCALENDAR