Title: Distributed Synchronization in Engineering Networks
This talk presents a systematic study of synchronization in distributed (networked) systems, spanning theoretical modeling and stability analysis through distributed controller design, implementation, and verification. We first develop a theoretical foundation for the synchronization of networked oscillators, studying how the interaction type (coupling) and network configuration (topology) affect the behavior of a population of heterogeneous coupled oscillators. Unlike existing literature, which is restricted to specific scenarios, we show that phase consensus (a common phase value) can be achieved for arbitrary network topologies under very general conditions on the oscillator models.
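As a toy illustration of phase consensus (a generic Kuramoto-style sketch under the simplifying assumption of identical natural frequencies, not the talk's more general heterogeneous model), oscillators diffusively coupled over a connected graph drive their phase differences to zero:

```python
import math
import random

# Toy Kuramoto-style simulation (illustration only): identical-frequency
# oscillators coupled over a connected graph reach phase consensus.

random.seed(3)
n, steps, dt, K = 6, 4000, 0.01, 1.0
omega = 2.0                                   # common natural frequency
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]  # ring + chord
theta = [random.uniform(-1.0, 1.0) for _ in range(n)]  # initial phases

for _ in range(steps):
    dtheta = [omega] * n
    for i, j in edges:                        # diffusive sinusoidal coupling
        dtheta[i] += K * math.sin(theta[j] - theta[i])
        dtheta[j] += K * math.sin(theta[i] - theta[j])
    theta = [t + dt * d for t, d in zip(theta, dtheta)]  # Euler step

spread = max(theta) - min(theta)              # phase disagreement
```

After the transient, the spread of phases is numerically zero; the common drift omega affects all oscillators equally and does not influence consensus.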
We then turn to more practical aspects of synchronization in computer networks. Unlike existing solutions, which tend to rely on expensive hardware to improve accuracy, we provide a novel algorithm that reduces jitter by synchronizing networked computers without estimating the frequency difference between clocks (skew) or introducing offset corrections. We show that a necessary and sufficient condition on the network topology for synchronization in the presence of noise is the existence of a unique leader in the communication graph. A Linux-based implementation on a cluster of IBM BladeCenter servers experimentally verifies that the proposed algorithm outperforms well-established solutions and that loops in the topology can help reduce jitter.
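The role of a unique leader can be illustrated with a toy consensus simulation (a sketch under simplifying assumptions, not the algorithm from the talk): followers nudge their clock readings toward those of their in-neighbors, while the single leader never adjusts, so the whole network settles on the leader's timescale.

```python
import random

# Toy leader-follower clock consensus. Node 0 is the unique leader and
# never corrects its clock; every other node averages toward its
# in-neighbors on a directed path leading back to the leader.

def simulate(num_nodes=5, steps=200, gain=0.5, seed=0):
    rng = random.Random(seed)
    clocks = [rng.uniform(0.0, 10.0) for _ in range(num_nodes)]
    for _ in range(steps):
        clocks = [c + 1.0 for c in clocks]          # one tick of true time
        new = clocks[:]
        for i in range(1, num_nodes):               # followers only
            neighbors = [0, i - 1]                  # directed links toward the leader
            avg = sum(clocks[j] for j in neighbors) / len(neighbors)
            new[i] = clocks[i] + gain * (avg - clocks[i])
        clocks = new
    return clocks

final = simulate()
spread = max(final) - min(final)                    # residual disagreement
```

The disagreement contracts geometrically; without a unique root from which every node is reachable, the same update would leave the network split across timescales.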
The Graduate Career Advisor for Financial Math and Applied Math & Statistics will share strategies for making the most of the winter after fall classes end.
Learn how to best kick off or revamp your job or internship search!
*Food will be served; grab it 15 minutes before the event!
RSVP on Handshake @ https://app.joinhandshake.com/events/108603
Title: The Growing Importance of Satellite Data for Health and Air Quality Applications
Satellite data are growing in importance for health and air quality end users in the U.S. and around the world. From their “God's-eye” view, satellites provide a level of spatial coverage unobtainable by surface monitoring networks. Satellite observations of various pollutants, such as nitrogen dioxide and sulfur dioxide, vividly demonstrate the steady improvement of air quality in the U.S. over the last several decades thanks to environmental regulations, such as the Clean Air Act. However, while improved, U.S. air quality is still not at healthy levels, and there are occasional extreme events (e.g., wildfires, toxic spills in Houston after Hurricane Harvey) that expose Americans to high levels of pollution. Satellite data also show that air quality in many parts of the world is rapidly degrading, and it is likely to continue to do so as the global population is expected to increase by 2 billion by 2050. In this presentation, I will discuss the strengths and limitations of current satellite data for health and air quality applications, as well as the potential that upcoming satellites offer. I will present examples of successful uses of satellite data, discuss potential uses, and highlight ongoing challenges (e.g., data processing and visualization) for satellite data end users.
Dr. Bryan Duncan is an Earth scientist at NASA’s Goddard Space Flight Center and has a keen interest in using NASA satellite data for societal benefit, including for health and air quality applications. He frequently speaks to representatives of various U.S. and international agencies (e.g., World Bank, UNICEF) about how satellite data may benefit their objectives and is a member of the NASA Health and Air Quality Applied Sciences Team (HAQAST). He is also the Project Scientist of the NASA Aura satellite mission, one of whose objectives is observing air quality from space.
Title: Principled non-convex optimization for deep learning and phase retrieval
Abstract: This talk looks at two classes of non-convex problems. First, we discuss phase retrieval problems and present a new formulation, called PhaseMax, that reduces this class of non-convex problems to a convex linear program. Then, we turn our attention to more complex non-convex problems that arise in deep learning. We explore the non-convex structure of deep networks using a range of visualization methods. Finally, we discuss a class of principled algorithms for training “binarized” neural networks and show that these algorithms have theoretical properties that enable them to overcome the non-convexities present in neural loss functions.
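For intuition, here is a minimal real-valued 2-D sketch of the PhaseMax idea on hypothetical data: given magnitudes b_i = |⟨a_i, x⟩| and an anchor vector assumed correlated with the true signal, maximize ⟨anchor, x⟩ subject to |⟨a_i, x⟩| ≤ b_i, which is a linear program once each magnitude bound is split into two half-planes. The brute-force vertex enumeration below stands in for a real LP solver and is for illustration only.

```python
import itertools
import random

# PhaseMax sketch (real 2-D toy): recover x from magnitude measurements
# b_i = |<a_i, x>| by solving
#     maximize <anchor, x>   subject to   |<a_i, x>| <= b_i.
# The anchor is a noisy copy of the truth, assumed correlated with it.

random.seed(1)
x_true = (1.0, 2.0)
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20)]
b = [abs(a[0] * x_true[0] + a[1] * x_true[1]) for a in A]
anchor = (x_true[0] + 0.05, x_true[1] - 0.05)

# |a.x| <= b  becomes two linear constraints:  a.x <= b  and  (-a).x <= b
halves = [(a, bi) for a, bi in zip(A, b)]
halves += [((-a[0], -a[1]), bi) for a, bi in zip(A, b)]

def feasible(p, tol=1e-9):
    return all(n[0] * p[0] + n[1] * p[1] <= c + tol for n, c in halves)

# In 2-D the LP optimum sits at a vertex: the intersection of two
# constraint boundary lines that satisfies all remaining constraints.
best, best_val = None, float("-inf")
for (n1, c1), (n2, c2) in itertools.combinations(halves, 2):
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-12:
        continue                                    # parallel boundaries
    p = ((c1 * n2[1] - n1[1] * c2) / det, (n1[0] * c2 - c1 * n2[0]) / det)
    if feasible(p):
        val = anchor[0] * p[0] + anchor[1] * p[1]
        if val > best_val:
            best, best_val = p, val

x_rec = best   # with enough measurements this should coincide with x_true
```

The true signal is a vertex of the feasible polytope where all magnitude constraints are active; a correlated anchor makes it the LP maximizer.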
Title: Approximating Minimal Cut-Generating Functions by Extreme Functions
With applications in scheduling, networks, and generalized assignment problems, integer programs are ubiquitous in a variety of engineering disciplines. Often, integer programming algorithms make use of strategically chosen cutting planes to trim the region bounded by the linear constraints without removing any feasible points. Recently, there has been a resurgence of interest in the theory of (minimal) cut-generating functions, as such functions can be used to produce quality cuts. Moreover, the family of minimal functions forms a convex set; to better understand this class of functions, we wish to study the extreme functions of this set. In this talk, we shall see that the set of continuous minimal cut-generating functions contains a dense subset of extreme functions.
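A classical concrete example (standard background, not a result from this talk): for the one-row pure-integer relaxation whose right-hand side has fractional part $f \in (0,1)$, the Gomory function

```latex
\pi(r) =
\begin{cases}
  \dfrac{\{r\}}{f},     & \{r\} \le f, \\[4pt]
  \dfrac{1-\{r\}}{1-f}, & \{r\} > f,
\end{cases}
```

where $\{r\}$ denotes the fractional part of $r$, is a minimal cut-generating function that is moreover extreme.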
Title: Limit theorems for eigenvectors of the normalized Laplacian for random graphs
We prove a central limit theorem for the components of the eigenvectors corresponding to the d largest eigenvalues of the normalized Laplacian matrix of a finite dimensional random dot product graph. As a corollary, we show that for stochastic blockmodel graphs, the rows of the spectral embedding of the normalized Laplacian converge to multivariate normals and furthermore the mean and the covariance matrix of each row are functions of the associated vertex’s block membership. Together with prior results for the eigenvectors of the adjacency matrix, we then compare, via the Chernoff information between multivariate normal distributions, how the choice of embedding method impacts subsequent inference. We demonstrate that neither embedding method dominates with respect to the inference task of recovering the latent block assignments.
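A small self-contained sketch of the spectral embedding in question, on illustrative assumptions (a hypothetical 2-block stochastic blockmodel; power iteration with deflation in place of a proper eigensolver): the eigenvectors of $D^{-1/2} A D^{-1/2}$ are those of the normalized Laplacian $I - D^{-1/2} A D^{-1/2}$, in reverse eigenvalue order.

```python
import math
import random

# Spectral embedding sketch on a 2-block stochastic blockmodel:
# embed vertices with the top-2 eigenvectors of D^{-1/2} A D^{-1/2}
# and read off block membership from the second coordinate's sign.

random.seed(7)
n, half = 40, 20
p_in, p_out = 0.8, 0.1
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        p = p_in if (i < half) == (j < half) else p_out
        if random.random() < p:
            A[i][j] = A[j][i] = 1.0

deg = [sum(row) for row in A]
M = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]

def top_eigvec(M, deflate=None, iters=500):
    v = [random.gauss(0, 1) for _ in range(len(M))]
    for _ in range(iters):
        if deflate is not None:                    # project out earlier eigenvector
            dot = sum(a * b for a, b in zip(v, deflate))
            v = [a - dot * b for a, b in zip(v, deflate)]
        v = [sum(row[j] * v[j] for j in range(len(v))) for row in M]
        norm = math.sqrt(sum(a * a for a in v))
        v = [a / norm for a in v]
    return v

v1 = top_eigvec(M)
v2 = top_eigvec(M, deflate=v1)
# the sign pattern of the second embedding coordinate estimates the blocks
labels = [1 if x > 0 else 0 for x in v2]
truth = [0] * half + [1] * half
agree = sum(l == t for l, t in zip(labels, truth))
accuracy = max(agree, n - agree) / n               # accuracy up to label swap
```

With this strongly assortative toy model, the sign cut recovers the planted blocks almost perfectly.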
Title: Monotonicity of optimal contracts without the first-order approach
We develop a simple sufficient condition for an optimal contract of a moral hazard problem to be monotone in the output signal. Existing monotonicity results require conditions on the output distribution (namely, the monotone likelihood ratio property (MLRP)) together with additional conditions guaranteeing that the agent's problem is amenable to the first-order approach of replacing that problem with its first-order conditions. We know of no positive monotonicity results in settings where the first-order approach does not apply. Indeed, it is well documented that when there are finitely many possible outputs and the first-order approach does not apply, the MLRP alone is insufficient to guarantee monotonicity. However, we show that when there is an interval of possible output signals, the MLRP does suffice to establish monotonicity under additional technical assumptions that do not guarantee the validity of the first-order approach.
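For reference, the monotone likelihood ratio property mentioned above is the standard condition on the family of output densities $f(y \mid a)$ indexed by the agent's action $a$:

```latex
\frac{f(y \mid a')}{f(y \mid a)} \ \text{is nondecreasing in } y
\qquad \text{whenever } a' > a,
```

so that higher output signals are relatively more likely under higher actions.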
This is joint work with Rongzhu Ke (Hong Kong Baptist University).
Title: The Learning Premium
Abstract: We find equilibrium stock prices and interest rates in a representative-agent model with uncertain dividend growth, gradually revealed by dividends themselves, in which asset prices are rational: they reflect current information and anticipate the impact of future knowledge on future prices. In addition to the usual premium for risk, stock returns include a learning premium, which reflects the expected change in prices from new information. In the long run, the learning premium vanishes, as prices and interest rates converge to their counterparts in the standard setting with known growth. The model explains the increase in price-dividend ratios of the past century if both relative risk aversion and the elasticity of intertemporal substitution are above one. This is joint work with Paolo Guasoni.
Title: Maximum Likelihood Density Estimation under Total Positivity
Abstract: Nonparametric density estimation is a challenging problem in theoretical statistics: in general, the maximum likelihood estimate (MLE) does not even exist! Introducing shape constraints allows a path forward. This talk offers an invitation to nonparametric density estimation under total positivity (i.e., log-supermodularity) and log-concavity. Totally positive random variables are ubiquitous in real-world data and possess appealing mathematical properties. Given i.i.d. samples from such a distribution, we prove that the maximum likelihood estimator under these shape constraints exists with probability one. We characterize the domain of the MLE and show that it is in general larger than the convex hull of the observations. If the observations are 2-dimensional or binary, we show that the logarithm of the MLE is a tent function (i.e., a piecewise linear function) with “poles” at the observations, and we show that a certain convex program can find it. In the general case the MLE is more complicated. We give necessary and sufficient conditions for a tent function to be concave and supermodular, which characterizes all the possible candidates for the MLE in the general case.
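To make the “tent function” terminology concrete, here is a 1-D analogue (an illustration only; the talk's results concern 2-D and binary observations): given poles (x_i, y_i), the tent function is the smallest concave piecewise linear function lying above every pole, i.e., the upper concave envelope of the points.

```python
# 1-D tent-function sketch: compute the upper concave envelope of the
# "poles" via a monotone-chain scan, then evaluate it between poles.

def upper_envelope(points):
    """Upper concave envelope of 2-D points, sorted by x-coordinate."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord to p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def tent(hull, x):
    """Evaluate the tent function; -infinity outside the data's hull."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            t = (x - x1) / (x2 - x1)
            return (1 - t) * y1 + t * y2
    return float("-inf")

poles = [(0.0, 0.0), (1.0, 0.2), (2.0, 1.0), (3.0, 0.0)]
hull = upper_envelope(poles)   # the pole at x = 1 lies below the envelope
```

Here the envelope passes above the pole at x = 1, whose value 0.2 is dominated by the chord from (0, 0) to (2, 1); in the talk's setting the log-MLE is such an envelope in higher dimensions, with poles at the observations.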