Calendar

Sep 17 (Thu)
AMS Seminar w/ Fabio Mercurio (Bloomberg) on Zoom
Sep 17 @ 1:30 pm – 2:30 pm

Title: Looking Forward to Backward-Looking Rates: A Modeling Framework for Term Rates Replacing LIBOR

 

Abstract: LIBOR and other similar IBOR rates represent the cost of short-term funding among large global banks, and are the reference rates in millions of financial contracts with a total market exposure worldwide of 400 trillion dollars. Lack of liquidity in the unsecured short-term lending market, as well as evidence of LIBOR manipulation during the 2007-09 credit crisis, led regulators to identify new rate benchmarks. In this talk, we introduce and model the new interest-rate benchmarks and their compounded setting-in-arrears term rates, which will be replacing IBORs globally. We show that the classic interest-rate modeling framework can be naturally extended to describe the evolution of both the forward-looking (IBOR-like) and backward-looking (setting-in-arrears) term rates using the same stochastic process. We then introduce an extension of the LIBOR Market Model to backward-looking rates. Applications will be presented and numerical examples showcased.
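For orientation, the "compounded setting-in-arrears term rate" in the abstract is usually defined as follows (a standard market convention, not a formula taken from the talk itself): if the accrual period \([T_0, T_1]\) is split into business days \(T_0 = t_0 < t_1 < \dots < t_n = T_1\), with daily benchmark fixings \(r_i\) (e.g., SOFR) and day-count fractions \(\tau_i\), then

\[
R(T_0, T_1) \;=\; \frac{1}{\delta}\left[\,\prod_{i=0}^{n-1}\bigl(1 + \tau_i\, r_i\bigr) - 1\right],
\qquad \delta = \sum_{i=0}^{n-1}\tau_i .
\]

Because the fixings \(r_i\) are only revealed as the period elapses, \(R(T_0, T_1)\) is known only at \(T_1\) (backward-looking), whereas an IBOR-style term rate is fixed at \(T_0\) (forward-looking); this is the distinction the modeling framework has to accommodate.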

 

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Sep 17, 2020 01:18 PM Eastern Time (US and Canada)


Share recording with viewers:
https://wse.zoom.us/rec/share/CBAf80Hb_1ZlYLpz8DoKhdOwx7k9F1zOsmr4EUdXV9LTgmF5TNou-ugp9RkERWlP.bTMc0SwGWnbz4dqY
Passcode: uL5&[email protected]!1

Sep 24 (Thu)
AMS Seminar w/ Dustin Mixon (Ohio State University) on Zoom
Sep 24 @ 1:30 pm – 2:30 pm

Title: Ingredients matter: Quick and easy recipes for estimating clusters, manifolds, and epidemics

Abstract: Data science resembles the culinary arts in the sense that better ingredients allow for better results. We consider three instances of this phenomenon. First, we estimate clusters in graphs, and we find that more signal allows for faster estimation. Here, “signal” refers to having more edges within planted communities than across communities. Next, in the context of manifolds, we find that an informative prior allows for estimates of lower error. In particular, we apply the prior that the unknown manifold enjoys a large, unknown symmetry group. Finally, we consider the problem of estimating parameters in epidemiological models, where we find that a certain diversity of data allows one to design estimation algorithms with provable guarantees. In this case, data diversity refers to certain combinatorial features of the social network. Joint work with Jameson Cahill, Charles Clum, Hans Parshall, and Kaiying Xie.
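To make the "planted communities" setting concrete, here is a minimal NumPy sketch (an illustration of the standard planted-partition model, not code from the talk): two communities are planted with within-community edge probability p and across-community probability q, and the gap p - q plays the role of the signal that the second adjacency eigenvector picks up.

import numpy as np

def planted_partition(n, p, q, rng):
    # Two equal communities: edge probability p within a community, q across.
    labels = np.repeat([0, 1], n // 2)
    same = labels[:, None] == labels[None, :]
    upper = np.triu(rng.random((n, n)) < np.where(same, p, q), k=1)
    return (upper + upper.T).astype(float), labels

def spectral_communities(A):
    # The sign of the eigenvector for the second-largest eigenvalue splits the graph.
    vals, vecs = np.linalg.eigh(A)
    return (vecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(0)
A, truth = planted_partition(200, p=0.30, q=0.05, rng=rng)
guess = spectral_communities(A)
accuracy = max(np.mean(guess == truth), np.mean(guess != truth))  # labels defined up to swap
print(f"recovered {accuracy:.0%} of the planted labels")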

 

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Sep 24, 2020 12:59 PM Eastern Time (US and Canada)


Share recording with viewers:
https://wse.zoom.us/rec/share/fChPLSraWeF5AhXKbY0jkOOfv0zAhnX4d6qWeWVa9_Goyup0aLcKi0VETt7T2Wan.xDWyUYFDujlhPvqt
Passcode: 79W*[email protected]

Oct 1 (Thu)
AMS Seminar w/ Aude Genevay (Massachusetts Institute of Technology) on Zoom
Oct 1 @ 1:30 pm – 2:30 pm

Title: Learning with entropy-regularized optimal transport

Abstract: Entropy-regularized OT (EOT) was first introduced by Cuturi in 2013 as a solution to the computational burden of OT for machine learning problems. In this talk, after studying the properties of EOT, we will introduce a new family of losses between probability measures called Sinkhorn Divergences. Based on EOT, this family of losses actually interpolates between OT (no regularization) and MMD (infinite regularization). We will illustrate these theoretical claims on a set of learning problems formulated as minimizations over the space of measures.
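As a concrete reference point (a minimal sketch of the standard Sinkhorn iteration, not the speaker's code), entropy-regularized OT between two discrete measures reduces to matrix scaling; the regularization strength eps is the parameter that interpolates between OT (eps -> 0) and an MMD-like regime (eps -> infinity).

import numpy as np

def sinkhorn(a, b, C, eps, n_iters=500):
    # Entropy-regularized OT between discrete measures a, b with cost matrix C.
    K = np.exp(-C / eps)              # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)               # rescale rows to match marginal a
        v = b / (K.T @ u)             # rescale columns to match marginal b
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return P, float(np.sum(P * C))    # plan and <P, C>

rng = np.random.default_rng(0)
x, y = rng.normal(size=(50, 2)), rng.normal(size=(60, 2)) + 2.0
C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # squared-distance cost
a, b = np.full(50, 1 / 50), np.full(60, 1 / 60)
P, cost = sinkhorn(a, b, C, eps=0.5)
print(cost, P.sum())    # regularized cost; the plan's mass sums to 1

The Sinkhorn divergence mentioned in the abstract then removes the entropic bias: S_eps(alpha, beta) = OT_eps(alpha, beta) - (OT_eps(alpha, alpha) + OT_eps(beta, beta)) / 2.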

 

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Oct 1, 2020 01:21 PM Eastern Time (US and Canada)


Share recording with viewers:
https://wse.zoom.us/rec/share/cuYXVU99jAdaLuq4FfIew8x7dxjZ40hORkqQyQpfPCAB_B69q1XeDJmLFw5yuZrb.QIj2wn6azpc4V96E
Passcode: *$xMJcX6

Oct 8 (Thu)
AMS Seminar w/ Kevin Pratt (Carnegie Mellon University) on Zoom
Oct 8 @ 1:30 pm – 2:30 pm

Title: Subgraph isomorphism via partial differentiation

Abstract: In this talk I will discuss a recent approach to the algorithmic problem of subgraph isomorphism: given a host graph G and target graph H, decide whether G contains a subgraph isomorphic to H. For simplicity, I will illustrate the approach in the case when H is a path. I will describe an algorithm whose runtime comes close to that of the state of the art, while using a new approach based on identifying polynomials that have prescribed combinatorial supports (i.e., monomials appearing with nonzero coefficient) and whose partial derivatives (of all orders) span a vector space of small dimension. Connections to previous approaches and avenues for further improvement will also be discussed.

Part of this talk is based on joint work with Cornelius Brand.
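For context, the standard algebraic route to path detection (stated generically here; the talk's construction refines this idea) associates to the host graph G the walk polynomial

\[
P_k(x) \;=\; \sum_{\text{walks } v_1 \to v_2 \to \cdots \to v_k \text{ in } G} x_{v_1} x_{v_2} \cdots x_{v_k},
\]

and observes that G contains a path on k vertices if and only if \(P_k\) has a multilinear (square-free) monomial, since a walk is a path exactly when it repeats no vertex. The approach described in the abstract works with polynomials of this flavor whose partial derivatives of all orders span a low-dimensional space.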

 

Here is the link and the meeting info:

https://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU3dudzhvdz09

Meeting ID: 982 0043 8645

Passcode: 374212

 

Oct 15 (Thu)
AMS Seminar w/ David Gu (Stony Brook University) on Zoom
Oct 15 @ 1:30 pm – 2:30 pm

Title: A Geometric Understanding of Deep Learning

Abstract: This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns, which can be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective, the generator computes the OT map, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution; both can be reduced to a convex geometric optimization process. Furthermore, OT theory discovers the intrinsic collaborative—instead of competitive—relation between the generator and the discriminator, and the fundamental reason for mode collapse. We also propose a novel generative model, which uses an autoencoder (AE) for manifold learning and OT map for probability distribution transformation. This AE–OT model improves theoretical rigor and transparency, as well as computational stability and efficiency; in particular, it eliminates mode collapse. The experimental results validate our hypothesis, and demonstrate the advantages of our proposed model.
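To pin down the Kantorovich problem behind the Wasserstein distance mentioned above, here is a minimal Python sketch (an illustration using scipy's LP solver, not the authors' AE–OT implementation) that computes an optimal transport plan between two discrete distributions.

import numpy as np
from scipy.optimize import linprog

def discrete_ot(a, b, C):
    # Kantorovich problem: minimize <P, C> over plans P >= 0 with row sums a, column sums b.
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0    # row-sum constraint for source point i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0             # column-sum constraint for target point j
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n), res.fun

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P, cost = discrete_ot(a, b, C)
print(P, cost)    # optimal plan; cost 0.25 for this toy example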

 

Meeting Recording:

https://wse.zoom.us/rec/share/NmcAgaDnXT0YgkEVAa5vX2TaEDXq28gpdwBxve9QXRXfoi9vlqG_9IyqV8d337Fq.4piGXPQfnZi1oDCI

Access Passcode: Fc1=nKmE

Oct 29 (Thu)
AMS Seminar w/ Vince Lyzinski (University of Maryland, College Park) on Zoom
Oct 29 @ 1:30 pm – 2:30 pm

Title: The Importance of Being Correlated: Implications of Dependence in Joint Spectral Inference across Multiple Networks

Abstract: Spectral inference on multiple networks is a rapidly-developing subfield of graph statistics. Recent work has demonstrated that joint, or simultaneous, spectral embedding of multiple independent network realizations can deliver more accurate estimation than individual spectral decompositions of those same networks. Little attention has been paid, however, to the network correlation that such joint embedding procedures necessarily induce. In this paper, we present a detailed analysis of induced correlation in a generalized omnibus embedding for multiple networks. We show that our embedding procedure is flexible and robust, and, moreover, we prove a central limit theorem for this embedding and explicitly compute the limiting covariance. We examine how this covariance can impact inference in a network time series, and we construct an appropriately calibrated omnibus embedding that can detect changes in real biological networks that previous embedding procedures could not discern. Our analysis confirms that the effect of induced correlation can be both subtle and transformative, with import in theory and practice.
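For readers who have not seen the construction, here is a minimal NumPy sketch of the classical omnibus embedding that the "generalized omnibus" embedding of the abstract extends (the standard construction, not the paper's code): the m graphs are stacked into one mn x mn matrix whose off-diagonal blocks are pairwise averages, and a single spectral embedding of that matrix yields one n x d embedding per graph.

import numpy as np

def omnibus_embedding(adjs, d):
    # Classical omnibus embedding of m symmetric adjacency matrices on the same n nodes.
    m, n = len(adjs), adjs[0].shape[0]
    M = np.block([[(adjs[i] + adjs[j]) / 2 for j in range(m)] for i in range(m)])
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(np.abs(vals))[::-1][:d]        # top-d eigenvalues by magnitude
    X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))   # scaled eigenvectors
    return X.reshape(m, n, d)                       # one n x d embedding per graph

rng = np.random.default_rng(0)
A = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1)
adjs = [A + A.T] * 3                      # three copies of one toy graph
X = omnibus_embedding(adjs, d=2)          # shape (3, 50, 2)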

 

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Oct 29, 2020 01:18 PM Eastern Time (US and Canada)


Share recording with viewers:
https://wse.zoom.us/rec/share/1fETcswYJGsGY6HgXmvs4Xd1EaAI1ThzZceI3AhmxD6c1g-0dkyxc1QLJ5BJFUd4.bXWPzF3wH4zpoKm6
Passcode: ?a9s6%xR

Nov 5 (Thu)
AMS Seminar w/ Eric Vanden-Eijnden (New York University) on Zoom
Nov 5 @ 1:30 pm – 2:30 pm

Title: Trainability and accuracy of artificial neural networks

Abstract: The methods and models of machine learning (ML) are rapidly becoming de facto tools for the analysis and interpretation of large data sets. Complex classification tasks such as speech and image recognition, automatic translation, decision making, etc. that were out of reach a decade ago are now routinely performed by computers with a high degree of reliability using deep neural networks (DNNs). These successes suggest that DNNs may approximate high-dimensional functions with controllably small errors, potentially outperforming standard interpolation methods based, e.g., on Galerkin truncation or finite elements that have been the workhorses of scientific computing. In support of this prospect, in this talk I will present results about the trainability and accuracy of neural networks, obtained by mapping the parameters of the network to a system of interacting particles relaxing on a potential determined by the loss function. This mapping can be used to prove a dynamical variant of the universal approximation theorem, showing that the optimal neural network representation can be attained by (stochastic) gradient descent, with an approximation error scaling as the inverse of the network size. I will also show how these findings can be used to accelerate the training of networks and optimize their architecture, using, e.g., nonlocal transport involving birth/death processes in parameter space.
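Schematically (a standard mean-field formulation included for orientation, not a verbatim statement from the talk), a single-hidden-layer network with n units is written as an average over "particles" \(\theta_i = (c_i, a_i, b_i)\):

\[
f_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} c_i\,\sigma(a_i \cdot x + b_i)
\;\;\xrightarrow[\;n\to\infty\;]{}\;\;
f(x) \;=\; \int c\,\sigma(a\cdot x + b)\,\mu(dc,da,db).
\]

Training by (stochastic) gradient descent then corresponds to the particles relaxing on a potential set by the loss, i.e. to a gradient flow of the measure \(\mu\), and the fluctuations around the limit give an approximation error that scales like 1/n, the "inverse of the network size" quoted above.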

 

Meeting recording link:
https://wse.zoom.us/rec/share/WO_nf9zgmnKfPniZsBSzECdAdNBp5wiyMP34tsMNAbb1jgVtgqQAV4YtrJjCGPY7.S1xykYLxepdibbZQ

Passcode: MH!7JDN2

Nov 12 (Thu)
AMS Seminar w/ Mete Soner (Princeton University) on Zoom
Nov 12 @ 1:30 pm – 2:30 pm

Title: Monte-Carlo methods for high-dimensional problems in quantitative finance

Abstract: Stochastic optimal control has been an effective tool for many problems in quantitative finance and financial economics. Although it provides much needed quantitative modeling for such problems, until recently it has been intractable in high-dimensional settings. However, several recent studies report impressive numerical results: Cheridito et al. studied the optimal stopping problem (a problem closely connected to pricing American-type options in quantitative finance), providing tight error bounds and an efficient algorithm for problems in up to 100 dimensions. Buehler et al., on the other hand, consider the problem of hedging and again report results for high-dimensional problems that were previously intractable. These papers use a Monte Carlo type algorithm combined with deep neural networks, as proposed by E, Han, and Jentzen. In this talk I will outline this approach and discuss its properties. Numerical results, while validating the power of the method in high dimensions, also show the dependence on the dimension and the size of the training data. This is joint work with Max Reppen of Boston University.
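Schematically (the generic shape of the deep-learning formulation, not the specifics of the cited algorithms), the optimal stopping problems referred to above take the form

\[
V \;=\; \sup_{\tau}\; \mathbb{E}\bigl[g(\tau, X_\tau)\bigr],
\]

and the Monte Carlo approach replaces the stopping time by parametrized exercise decisions \(f_k^{\theta}: \mathbb{R}^d \to \{0,1\}\), one per exercise date, so that \(\tau^{\theta} = \min\{k : f_k^{\theta}(X_k) = 1\}\); the parameters \(\theta\) are trained by stochastic gradient methods on simulated paths of \(X\), which is what makes dimensions of order 100 feasible.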

 

Here is the link and the meeting info:

https://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU3dudzhvdz09

Meeting ID: 982 0043 8645

Passcode: 374212


Nov 19 (Thu)
The Goldman Distinguished Lecture Series: Rekha Thomas (University of Washington, Seattle) on Zoom
Nov 19 @ 1:30 pm – 2:30 pm

Goldman Lecture 11-19-2020 (pdf)

Title: Lifting for Simplicity: Concise Descriptions of Convex Sets

Abstract: A common theme in many areas of mathematics is to find a simpler representation of an object indirectly by expressing it as the projection of an object in some higher-dimensional space.  In 1991 Yannakakis proved a remarkable connection between a lifted representation of a polytope and the nonnegative rank of a matrix associated with the polytope. In recent years, this idea has been generalized to cone lifts of convex sets, with applications in, and tools coming from, many areas of mathematics and theoretical computer science. This talk will survey the central ideas, results, and questions in this field.
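Yannakakis's result can be stated in one line (a standard statement, added here for context): if \(P = \{x : Ax \le b\}\) is a polytope with facet inequalities \(a_i^\top x \le b_i\) and vertices \(v_1,\dots,v_N\), its slack matrix has entries \(S_{ij} = b_i - a_i^\top v_j \ge 0\), and the smallest \(k\) for which \(P\) is the projection of an affine slice of the nonnegative orthant \(\mathbb{R}^k_{+}\) equals the nonnegative rank of \(S\), i.e. the least \(k\) admitting a factorization \(S = FW\) with \(F, W \ge 0\) entrywise.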

Bio: Rekha Thomas is the Walker Family Endowed Professor of Mathematics at the University of Washington. She received her Ph.D. in Operations Research from Cornell University in 1994, followed by postdoctoral work at Yale and in Berlin. Her research interests are in Optimization and Applied Algebraic Geometry.

Cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Nov 19, 2020 01:09 PM Eastern Time (US and Canada)

https://wse.zoom.us/rec/share/TOtVoSQbpp6QITONuy0Mqg6bVsfxYrN6BGJxfX2tw_Dho0NPzqBzcMRhmmM4V0hu.mTMckbQU6k0nBYFv

Passcode: +$0sH0iT

 

Dec 3 (Thu)
The Acheson J. Duncan Lecture Series: AMS Seminar: Kavita Ramanan (Brown University) on Zoom
Dec 3 @ 1:30 pm – 2:30 pm

Title: Beyond Mean-field Limits for Large-scale Stochastic Systems

Abstract: Many large-scale stochastic systems that arise as models in a variety of fields including neuroscience, epidemiology, physics, engineering and computer science, can be described in terms of a large collection of “locally” interacting Markov chains, where each particle’s transition rates depend only on the states of neighboring particles with respect to an underlying (possibly random) graph. Since these dynamics are typically not amenable to exact analysis, a common paradigm is to instead study a more tractable approximation that is asymptotically exact as the number of particles goes to infinity in order to gain qualitative insight into the system. A frequently used approximation is the mean-field approximation, which works provably well when the interaction graph is sufficiently dense. However, it performs quite poorly when the interaction graph is sparse, which is the case in many applications. We describe new asymptotically accurate approximations that can be developed in the latter setting, and show how they perform in various applications.   This is joint work with A. Ganguly.
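Schematically (a generic formulation of the setting, not a statement from the talk), each vertex \(v\) of the interaction graph \(G\) carries a state \(X_v(t)\) that jumps at a rate \(\lambda\bigl(X_v(t), \{X_u(t) : u \sim v\}\bigr)\) depending only on the states of its neighbors. The mean-field approximation replaces this neighborhood by the law of a typical particle, which works well when \(G\) is dense; the asymptotically exact approximations described above instead retain the local neighborhood structure, which is what is needed when \(G\) is sparse.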

Bio: Kavita Ramanan is the Roland George Dwight Richardson University Professor and Associate Chair at the Division of Applied Mathematics, Brown University. Her field of research is probability theory, stochastic processes and their applications. She has received several honors in recognition of her research, including a Guggenheim Fellowship, a Distinguished Alumni Award from IIT-Bombay, and the Newton Award from the Department of Defense (DoD), all in 2020, a Simons Fellowship in 2018, an IMS Medallion in 2015 and the Erlang Prize from the INFORMS Applied Probability Society in 2006 for “outstanding contributions to applied probability.”   She serves on multiple editorial boards and is an elected fellow of several societies, including AAAS, AMS, INFORMS, IMS and SIAM.

More information about her can be found at her website:
https://www.brown.edu/academics/applied-mathematics/faculty/kavita-ramanan/home

 

Your cloud recording is now available.

https://wse.zoom.us/rec/share/Sm8YAbi3gBRLub6b3VD189QIkJRpo3LCrjCpoF0U-IGJ-jj2qatcKEtlwybSftiQ.MolOusPltsRpYQR8

Passcode: b#[email protected]
