Title: Learning with entropy-regularized optimal transport
Abstract: Entropy-regularized OT (EOT) was first introduced by Cuturi in 2013 to alleviate the computational burden of optimal transport in machine learning problems. In this talk, after studying the properties of EOT, we will introduce a new family of losses between probability measures called Sinkhorn divergences. Built on EOT, this family of losses interpolates between OT (no regularization) and maximum mean discrepancy, MMD (infinite regularization). We will illustrate these theoretical claims on a set of learning problems formulated as minimizations over the space of measures.
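As background for the talk, the entropic OT problem mentioned in the abstract is typically solved with Sinkhorn's matrix-scaling iterations, which is the source of its computational appeal. Below is a minimal NumPy sketch of these iterations for discrete measures; the function name `sinkhorn`, the fixed iteration count, and the uniform initialization are illustrative choices, not the speaker's implementation.

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=200):
    """Entropy-regularized OT between discrete measures a, b
    with cost matrix C and regularization strength eps."""
    # Gibbs kernel associated with the cost and regularization
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    # Alternate scaling updates so the plan matches both marginals
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    # Entropic transport plan and its (unregularized) transport cost
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)

# Toy example: uniform marginals, random cost
rng = np.random.default_rng(0)
a = np.ones(5) / 5
b = np.ones(4) / 4
C = rng.random((5, 4))
P, cost = sinkhorn(a, b, C, eps=0.1)
```

As eps decreases, the plan P approaches an optimal (unregularized) transport plan, while large eps blurs P toward the product measure, which is the regime where the loss behaves like an MMD, matching the interpolation described in the abstract.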
Topic: AMS Department Seminar (Fall 2020)
Date: Oct 1, 2020 01:21 PM Eastern Time (US and Canada)
Share recording with viewers:
https://wse.zoom.us/rec/share/cuYXVU99jAdaLuq4FfIew8x7dxjZ40hORkqQyQpfPCAB_B69q1XeDJmLFw5yuZrb.QIj2wn6azpc4V96E Passcode: *$xMJcX6