Title: Merchant Options of Energy Trading Network
Title: “Statistical network modeling via exchangeable interaction processes”
Many modern network datasets arise from processes of interactions in a population, such as phone calls, e-mail exchanges, co-authorships, and professional collaborations. In such interaction networks, the interactions comprise the fundamental statistical units, making a framework for interaction-labeled networks more appropriate for statistical analysis. In this talk, we present exchangeable interaction network models and explore their basic statistical properties. These models allow for sparsity and power law degree distributions, both of which are widely observed empirical network properties. I will start by presenting the Hollywood model, which is computationally tractable, admits a clear interpretation, exhibits good theoretical properties, and performs reasonably well in estimation and prediction.
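To give a feel for how such models generate sparse, power-law networks, here is a minimal simulation sketch of a sequential "rich get richer" vertex-selection scheme of the kind that underlies the Hollywood model. The parameters `alpha`, `theta`, and the exact weight-update rule below are illustrative assumptions, not the talk's specification:

```python
import random

def sample_interactions(num_interactions, arity, alpha=0.5, theta=1.0, seed=0):
    """Sketch of a two-parameter sequential sampling scheme: each slot of each
    interaction reuses an existing vertex with weight (degree - alpha) or
    introduces a new vertex with weight (theta + alpha * num_vertices)."""
    rng = random.Random(seed)
    degrees = []          # degrees[v] = number of times vertex v has appeared
    interactions = []
    for _ in range(num_interactions):
        edge = []
        for _ in range(arity):
            V = len(degrees)
            weights = [d - alpha for d in degrees] + [theta + alpha * V]
            r = rng.random() * sum(weights)
            acc = 0.0
            for v, w in enumerate(weights):
                acc += w
                if r <= acc:
                    break
            if v == V:        # last weight chosen: a new vertex enters
                degrees.append(1)
            else:             # an existing vertex is reused
                degrees[v] += 1
            edge.append(v)
        interactions.append(tuple(edge))
    return interactions, degrees
```

With `0 < alpha < 1`, high-degree vertices are preferentially reused, which is the mechanism behind the heavy-tailed degree distributions the abstract mentions.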
In many settings, the interactions themselves are structured. E-mail exchanges, for example, have a single sender and potentially multiple receivers. I will introduce hierarchical exchangeable interaction models for the study of structured interaction networks. In particular, I will introduce the Enron model as a canonical example, which partially pools information via a latent, shared population-level distribution. A detailed simulation study and supporting theoretical analysis provide a clear model interpretation and establish global power-law degree distributions. A computationally tractable Gibbs sampling algorithm is derived, and inference will be demonstrated on the Enron e-mail dataset. I will end with a discussion of how to perform posterior predictive checks on interaction data; using these proposed checks, I will show that the model fits the data well.
Title: Computational Anatomy: Structuring and Searching Shape Spaces.
Abstract: A century after D’Arcy Thompson’s celebrated masterpiece “On Growth and Form”, modeling and understanding both the variability and the dynamics of related biological shapes remain particularly challenging, from both a modeling and a computational point of view. The luminous idea of his “Theory of Transformations” has been turned, in the digital era, into a versatile mathematical and computational framework coined diffeomorphometry, living in the vicinity of Riemannian geometry, fluid dynamics, optimal control, and statistics. We will discuss the mathematical side of this framework as well as some of the challenges that still need to be faced.
Title: Growing Graceful Trees
In my talk I will describe and motivate the Graceful Labeling Conjecture. I will also describe constructions based on Gaussian elimination for listing and enumerating special induced edge label sequences of graphs. Our enumeration construction settles in the affirmative a conjecture raised by Whitty on the existence of matrix constructions whose determinants enumerate gracefully labeled trees. I will also describe an algorithm for obtaining all graceful labelings of a given graph. If time permits, I will conclude the talk with a conjugation algorithm which determines the set of graphs on n vertices, having no isolated vertices, that admit no graceful labeling.
The talk is based on joint work with Isaac Wass.
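For concreteness: a graceful labeling of a graph with m edges assigns distinct vertex labels from {0, …, m} so that the induced edge labels |f(u) − f(v)| are exactly {1, …, m}. A brute-force sketch for small graphs follows; the Gaussian-elimination constructions of the talk are, of course, far more efficient:

```python
from itertools import permutations

def is_graceful(edges, labeling):
    """Check a vertex labeling of a graph with m edges: labels must be
    distinct values in {0,...,m}, and the induced edge labels |f(u)-f(v)|
    must be exactly {1,...,m}."""
    m = len(edges)
    values = set(labeling.values())
    if len(values) != len(labeling) or not values <= set(range(m + 1)):
        return False
    edge_labels = {abs(labeling[u] - labeling[v]) for u, v in edges}
    return edge_labels == set(range(1, m + 1))

def graceful_labelings(vertices, edges):
    """Enumerate all graceful labelings of a small graph by brute force."""
    m = len(edges)
    found = []
    for combo in permutations(range(m + 1), len(vertices)):
        labeling = dict(zip(vertices, combo))
        if is_graceful(edges, labeling):
            found.append(labeling)
    return found
```

For example, the path on four vertices with edges (0,1), (1,2), (2,3) admits the graceful labeling 0, 3, 1, 2, whose edge labels are 3, 2, 1.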
The Fall 2018 AMS Picnic will be held on Friday, September 14th from 12-2pm in Great Hall which is located in the Levering Building.
See you there.
Title: Learning Enabled Optimization
Abstract: Traditionally, Stochastic Optimization deals with optimization models in which some of the data are modeled using random variables. In contrast, Learning Models are intended to capture the relationship between covariates, where the goal is to characterize the behavior of the response (a random variable) given the predictors (random variables). The field of Statistical (or Machine) Learning focuses on understanding these relationships. The goal of this talk is to present a new class of composite optimization models in which the learning and optimization models live symbiotically. We will discuss several examples of such problems and the rich class of models to which they give rise. (This talk is based on the work of several Ph.D. students, in particular Yunxiao Deng, Junyi Liu and Shuotao Diao.)
Bio: Suvrajeet Sen is Professor at the Daniel J. Epstein Department of Industrial and Systems Engineering at the University of Southern California. Prior to joining USC, he was a Professor at Ohio State University and University of Arizona. He has also served as the Program Director of OR as well as Service Enterprise Systems at the National Science Foundation. Professor Sen’s research is devoted to many categories of optimization models, and he has published over a hundred papers, with the vast majority of them dealing with models, algorithms and applications of Stochastic Programming problems. He has served on several editorial boards, including Operations Research as Area Editor for Optimization and as Associate Editor for INFORMS Journal on Computing, Journal of Telecommunications Systems, Mathematical Programming B, and Operations Research. He also serves as an Advisory Editor for several newer journals and an Associate Editor of INFORMS J. on Optimization. Professor Sen was instrumental in founding the INFORMS Optimization Society in 1995, and has also served as its Chair (2015-16). Except for his years at NSF, he has received continuous extramural research funding from NSF and other basic research agencies, totaling over ten million dollars as PI over the past 25 years. In 2015, this research and his group’s contributions were recognized by the INFORMS Computing Society for seminal work on Stochastic Mixed-Integer Programming. Professor Sen is a Fellow of INFORMS.
Title: Incorporating Confidence into Systemic Risk
Abstract: In a crisis when faced with insolvency, banks can issue shares/sell their treasury stock in the stock market and borrow money in order to raise funds. We propose a simple model to find the maximum amount of new funds the banks can raise in this way. To do this we incorporate market confidence of the bank together with market confidence of all the other banks into the overnight borrowing rate.
Additionally, for a given shortfall, we find the optimal mix of borrowing and stock selling. We show the existence and uniqueness of the Nash equilibrium strategy for all of these problems. We then calibrate this model to market data and conduct an empirical study to assess whether the current financial system is safer than it was before the last financial crisis.
In a related model of financial contagion in a network subject to fire sales and price impacts, we also allow firms to borrow to cover their shortfalls. We consider both uncollateralized and collateralized loans. The main results of this work provide sufficient conditions for the existence and uniqueness of the clearing solutions (i.e., payments, liquidations, and borrowing); in such a setting, any clearing solution is the Nash equilibrium of an aggregation game.
Title: Consistent Inter-Model Specification for Stochastic Volatility and VIX Market Models
Abstract: This talk addresses the following question: if a stochastic model is specified for the curve of VIX futures, what restrictions must it satisfy in order to be consistent with a stochastic volatility model? In other words, assuming that a stochastic volatility model is in place, a so-called market model needs to satisfy certain conditions so that there is no inter-model arbitrage and no mispriced derivatives. The present work gives such a condition, and also shows how to recover the correctly specified stochastic volatility function from the market model.
Title: Multivariate Records
Abstract: Given a vector-valued time series, a multivariate record is said to occur at some time if no previous observation dominates it in every coordinate. This notion of a record generalizes the usual notion in one dimension, and gives rise to some interesting phenomena, some of which will be presented. I will describe an efficient algorithm for sampling the multivariate records process, which enables one to study the process empirically and discover new phenomena related to record growth in time, and I will present theoretical results illuminated by these simulations. (This is joint work with Fred Torcaso and Vincent Lyzinski.)
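The definition translates directly into a naive quadratic-time check, sketched below (using weak coordinatewise domination as an assumed convention; the talk's efficient sampling algorithm is considerably more sophisticated than this):

```python
def dominates(x, y):
    """x dominates y if x is at least as large in every coordinate
    (the weak-inequality convention here is an illustrative assumption)."""
    return all(xi >= yi for xi, yi in zip(x, y))

def record_times(observations):
    """Indices t at which a multivariate record occurs: no earlier
    observation dominates observation t in every coordinate."""
    records = []
    for t, obs in enumerate(observations):
        if not any(dominates(prev, obs) for prev in observations[:t]):
            records.append(t)
    return records
```

For instance, in the sequence (1,1), (2,0), (0,2), (0,0), (3,3), records occur at times 0, 1, 2, and 4: the observation (0,0) is dominated by (1,1), while each of the others is undominated when it arrives.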
Title: Hilbert’s Nullstellensatz and Linear Algebra: An Algorithm for Determining Combinatorial Infeasibility
Unlike systems of linear equations, systems of multivariate polynomial equations over the complex numbers or finite fields can be used to compactly model combinatorial problems. In this way, a problem is feasible (e.g. a graph is 3-colorable, Hamiltonian, etc.) if and only if a given system of polynomial equations has a solution. Via Hilbert’s Nullstellensatz, we generate a sequence of large-scale, sparse linear algebra computations from these non-linear models, yielding an algorithm for solving the underlying combinatorial problem. As a byproduct of this algorithm, we produce algebraic certificates of the non-existence of a solution (i.e., non-3-colorability, non-Hamiltonicity, or non-existence of an independent set of size k).
In this talk, we present theoretical and experimental results on the size of these sequences and the complexity of the Hilbert’s Nullstellensatz algebraic certificates. For non-3-colorability over a finite field, we utilize this method to successfully solve graph problem instances having thousands of nodes and tens of thousands of edges. We also describe ways of optimizing this method, such as finding alternative forms of the Nullstellensatz, adding carefully constructed polynomials to the system, branching, and exploiting symmetry.
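To make the polynomial encoding concrete, here is a small sketch of the standard encoding of 3-colorability over the cube roots of unity; feasibility is checked below by brute-force evaluation for tiny graphs, rather than by the Nullstellensatz linear algebra the talk describes:

```python
import itertools
import cmath

# The three cube roots of unity serve as the three "colors".
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def coloring_system(n, edges):
    """Polynomial system whose common roots are proper 3-colorings:
    x_i^3 - 1 forces each variable to a cube root of unity, and
    x_i^2 + x_i*x_j + x_j^2 vanishes exactly when x_i != x_j
    (since x^3 - y^3 = (x - y)(x^2 + x*y + y^2))."""
    vertex_polys = [lambda x, i=i: x[i] ** 3 - 1 for i in range(n)]
    edge_polys = [lambda x, i=i, j=j: x[i] ** 2 + x[i] * x[j] + x[j] ** 2
                  for i, j in edges]
    return vertex_polys + edge_polys

def is_3colorable(n, edges):
    """The graph is 3-colorable iff the system has a common root.
    This brute-force check is for illustration only; the talk's method
    instead certifies infeasibility algebraically."""
    system = coloring_system(n, edges)
    for assignment in itertools.product(ROOTS, repeat=n):
        if all(abs(p(assignment)) < 1e-9 for p in system):
            return True
    return False
```

By the Nullstellensatz, when no such common root exists (e.g. for the complete graph K4), the constant 1 lies in the ideal generated by the system, and the coefficients witnessing that membership, found via the sparse linear algebra described above, form the algebraic certificate of non-3-colorability.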
Graduate students are happily advised that no background in algebraic geometry or familiarity with Hilbert’s Nullstellensatz is assumed for this talk. All theorems and terms are clearly explained with friendly pictures and examples. 🙂